Quantum state engineering using weak measurement

State preparation via postselected weak measurement in a three-wave-mixing process is studied. The signal input mode is assumed to be prepared in a vacuum state, a coherent state, or a squeezed vacuum state, while the idler input is prepared in a weak coherent state; both pass through a medium characterized by the second-order nonlinear susceptibility. It is shown that when a single photon is detected at one of the output channels of the idler beam's path, the signal output channel is prepared in a single-photon Fock state, a single-photon-added coherent state, or a single-photon-added squeezed vacuum state with very high fidelity, depending on the input signal state and the related controllable parameters. The properties of the weak-measurement-based output states, including squeezing, signal amplification, second-order correlation, and Wigner functions, are also investigated. Our scheme promises to provide an alternative, effective method for producing useful nonclassical states for quantum information processing.

Proposing feasible schemes to generate specific quantum states and implementing them in the laboratory are exciting and challenging tasks for researchers. Conditional measurement is commonly used in quantum state generation, since it allows control over the parameters needed to produce the desired states [47-54]. The weak measurement proposed in 1988 [55] by Aharonov, Albert, and Vaidman is a typical conditional measurement characterized by postselection and the weak value. Weak measurement theory has various applications (see [56] and references therein) and has recently been widely applied to state-optimization problems [57-59]. One of the authors of this work studied state optimization using weak measurement [60-62] and showed that postselected weak measurement can indeed change the inherent properties of given states. Furthermore, a recent work [63] proposed a theoretical scheme to amplify single-photon nonlinearity using weak measurements implemented in a cross-Kerr medium characterized by the third-order nonlinear susceptibility $\chi^{(3)}$; its experimental realization is given in [64]. On the other hand, Shikano and collaborators [65] studied the generation of phase-squeezed optical pulses with large coherent amplitudes by postselection of a single photon, based on the same setup as Ref. [63]. Those results also indicate the potential usefulness of postselected weak measurement in quantum state engineering. However, to our knowledge, specific quantum state generation via weak measurement has not been investigated in detail in the literature, and it is worth studying. In this paper, we introduce a new scheme to generate typical nonclassical states, such as single-photon Fock states, the single-photon-added coherent (SPAC) state, and the single-photon-added squeezed vacuum (SPASV) state, in a three-wave-mixing process via postselected weak measurement [55]. To achieve this goal, we take the signal and idler beams as the pointer (measuring system) and the measured system, respectively. We assume that the measured system is initially prepared in a very weak coherent state, while the pointer (signal) state is prepared in a coherent or squeezed vacuum state.
The strong pump field is treated as classical, and the weak coupling between the pointer and the measured system is realized by a BBO nonlinear crystal, which generates entanglement between them. By properly choosing the pre- and postselection states of the measured system and detecting one photon at one of the idler-mode outputs, the output channel of the pointer is prepared in the desired state with high purity for controllable parameters. We find that if the input pointer state is prepared in a coherent (squeezed vacuum) state, then we can generate the SPAC (SPASV) state with very high fidelity, accompanied by a small success rate. Our results indicate that the scheme can also generate a single-photon Fock state if the initial pointer state is the vacuum. To further confirm the identities of the generated states, we also investigate their related properties, such as squeezing, second-order correlations, and Wigner functions. Interestingly, we find that the SPAC state generated in our scheme has the advantage of increasing the signal-to-noise ratio (SNR) in postselected weak measurement over the nonpostselected case.

This paper is organized as follows. Section II presents the basic scheme for the generation of new nonclassical states in a three-wave-mixing process via the postselected weak measurement technique. The generation of the SPAC and SPASV states and their inherent properties are discussed in Sections III and IV, respectively. In Section III, we also investigate the advantages of postselected weak measurement in the signal amplification process over the nonpostselected case for the SPAC state by adjusting the weak value of the measured-system observables. Finally, a summary and concluding remarks are given in Section V.

II. MODEL SETUP FOR THE NEW STATE GENERATION VIA POSTSELECTED WEAK MEASUREMENT

The Hamiltonian of a three-wave-mixing device [66], under the rotating-wave approximation (RWA) and neglecting external drive and signal fields, is
$$H = \hbar\omega_s a^\dagger a + \hbar\omega_i b^\dagger b + \hbar\omega_p c^\dagger c + i\hbar\chi^{(2)}\left(a^\dagger b^\dagger c - a b c^\dagger\right),$$
where a, b, and c are the annihilation operators of the signal, idler, and pump with frequencies $\omega_s$, $\omega_i$, and $\omega_p$, and $\chi^{(2)}$ is the coupling strength characterized by the second-order nonlinear susceptibility of the BBO crystal. This Hamiltonian describes the process of nondegenerate parametric down-conversion, whereby a photon of the pump field is converted into two photons, one in each of the modes a and b [66]. Using the parametric approximation, and assuming the pump field to be a strong coherent state of the form $|\gamma e^{-i\omega_p t}\rangle$, we can rewrite the above Hamiltonian in the interaction picture with $\omega_p = \omega_i + \omega_s$ as
$$H_I = i\hbar\eta\left(a^\dagger b^\dagger - a b\right),$$
where $\eta = \gamma\chi^{(2)}$. Further, this Hamiltonian is equivalent to
$$H_I = \hbar\eta\,(pA + qB)$$
if we introduce $a = (q + ip)/\sqrt{2}$ and $b = (A + iB)/\sqrt{2}$, with $[q, p] = i$ and $[A, B] = i$, respectively. The two terms in the Hamiltonian, Eq. (3), have the forms usually used in weak measurement problems [55]. In this work we take the signal beam, with variables q and p, as the pointer, and the idler beam, with variables A and B, as the measured system.

FIG. 1. The signal and idler beams act as the pointer and the measured system, respectively. The preselection state is prepared by passing a weak coherent state $|\alpha\rangle$ through an unbalanced beam splitter (BS) with deviation $\epsilon$, while the signal beam is initially prepared in a specific state. The BBO crystal realizes the weak interaction between the pointer and the measured system. The 50:50 BS in the upper Mach-Zehnder interferometer performs the postselection, and the desired conditional quantum state is generated in the output mode of the signal beam once one photon is detected by the second photon detector (D2) in the idler beam's path.
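To make the interaction concrete, here is a minimal QuTiP sketch of the two-mode squeezing evolution generated by $H_I$; the Fock-space cutoff is an assumption that must be taken large enough for convergence, and g = 0.105 is the value quoted later in the text.

```python
import qutip as qt

N = 15                                        # Fock cutoff per mode (assumption)
a = qt.tensor(qt.destroy(N), qt.qeye(N))      # signal-mode annihilation operator
b = qt.tensor(qt.qeye(N), qt.destroy(N))      # idler-mode annihilation operator

g = 0.105                                     # effective coupling g = eta * t
U = (g * (a.dag() * b.dag() - a * b)).expm()  # U = exp[g(a†b† - ab)]

# Acting on the two-mode vacuum gives the two-mode squeezed vacuum state.
vac = qt.tensor(qt.basis(N, 0), qt.basis(N, 0))
psi = U * vac
```

The weak-coupling regime of the text corresponds to g much less than 1, where a first-order expansion of U is accurate.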
The schematic setup of our state-generation model is shown in Fig. 1. As can be seen from Fig. 1, there are two Mach-Zehnder interferometers in our setup, and the beam splitters play very important roles in the implementation of our scheme. Beam splitters are basic elements in classical and quantum optics for splitting and mixing optical beams. The input-output relations of a beam splitter can be described by Lie algebra [67]. In the Heisenberg picture, the photon annihilation operators of the output beams, $b_k$ (k = 1, 2), are connected to the input beams' annihilation operators, $a_k$, as
$$b_k = \sum_{j=1}^{2} U_{kj}\, a_j,$$
where $U_{kj}$ is an element of the scattering matrix
$$U = \begin{pmatrix} \cos\vartheta\, e^{i\varphi_t} & \sin\vartheta\, e^{i\varphi_r} \\ -\sin\vartheta\, e^{-i\varphi_r} & \cos\vartheta\, e^{-i\varphi_t} \end{pmatrix}.$$
Here, $T = \cos\vartheta\, e^{i\varphi_t}$ and $R = \sin\vartheta\, e^{i\varphi_r}$ are the transmittance and reflectance of the beam splitter, respectively. If $\varphi_r = \varphi_t = 0$ and $\vartheta = \pi/4$, it becomes a 50:50 beam splitter. We assume that the measured system (idler beam) is initially prepared in a weak coherent state with small amplitude ($\alpha \ll 1$), and that the signal beam is prepared in a specific state, a squeezed vacuum state or a coherent state, separately. In the upper optical path of our scheme, we assume that the first beam splitter is slightly imbalanced, with a small deviation $\epsilon$ from 50:50, so that the preselection state of the measured system can be written as in [68], where the subscripts t and r indicate the transmitted and reflected beams from the beam splitter. The three-wave mixing is then realized by the nonlinear BBO crystal, which implements the weak measurement process. In this process, the input photon annihilates and produces two new, mutually entangled photons. The unitary evolution operator corresponding to the interaction is
$$U = \exp\!\left[g\left(a^\dagger b^\dagger - a b\right)\right],$$
where $g = \eta t$. This is in fact the squeezing operator that generates the two-mode squeezed vacuum state [66]. Here, g can be considered a squeezing parameter, which depends on the pump intensity, the crystal length, and its nonlinear coefficients. Following the experimental work [45], we set g = 0.105 throughout this work. We can rewrite the above unitary evolution operator as
$$U = \exp\!\left[-ig\,(pA + qB)\right].$$
If we assume that the initial states of the system and pointer are $|\psi_i\rangle$ and $|\phi\rangle$, then after the unitary evolution the total state becomes
$$|\Psi\rangle = e^{-ig(pA + qB)}\,|\psi_i\rangle|\phi\rangle.$$
This is the total state before reaching the second beam splitters in our model (see Fig. 1). In our scheme the second splitters are 50:50, with 50% transmission and 50% reflection. We perform a postselection on the idler beam, accomplished by the detectors in the upper optical path. Assume that the second photon detector (D2) detects one photon while the first photon detector (D1) does not click, i.e., the state $|1\rangle_{2d}|0\rangle_{1d}$. This postselection process can be described by projecting onto $|1\rangle_{2d}|0\rangle_{1d}$, with the input and output modes related by the beam-splitter transformation. After taking the postselection with the postselected state $|\psi_f\rangle$ onto Eq. (11), we obtain, to first order in g, the non-normalized final state of the pointer (signal beam),
$$|\Phi\rangle \propto \left(1 - ig\,\langle A\rangle_w\, p - ig\,\langle B\rangle_w\, q\right)|\phi\rangle,$$
where
$$\langle A\rangle_w = \frac{\langle\psi_f|A|\psi_i\rangle}{\langle\psi_f|\psi_i\rangle}, \qquad \langle B\rangle_w = \frac{\langle\psi_f|B|\psi_i\rangle}{\langle\psi_f|\psi_i\rangle}$$
are the weak values of A and B, respectively. The probability of finding one photon at D2 and no photon at D1 is $P_s = |\alpha\epsilon|^2$. As we can see, the success probability of the postselection, $P_s$, depends on the imbalance $\epsilon$, caused by the small difference between the reflection and transmission coefficients of the beam splitter in the upper interferometer, and on the amplitude $\alpha$ of the weak coherent idler input state.
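Returning to the beam-splitter relations above, the short sketch below builds the scattering matrix U for given ϑ, φ_t, φ_r and checks its unitarity; it is a direct restatement of the formula, with the 50:50 case as the example.

```python
import numpy as np

def bs_matrix(theta, phi_t=0.0, phi_r=0.0):
    """Beam-splitter scattering matrix: b_k = sum_j U[k, j] a_j."""
    T = np.cos(theta) * np.exp(1j * phi_t)   # transmittance
    R = np.sin(theta) * np.exp(1j * phi_r)   # reflectance
    return np.array([[T, R], [-np.conj(R), np.conj(T)]])

U = bs_matrix(np.pi / 4)                       # 50:50 beam splitter
assert np.allclose(U @ U.conj().T, np.eye(2))  # unitarity check
```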
From Eqs. (15) and (16), it can be seen that the weak values are generally complex and can take large values when the preselected state $|\psi_i\rangle$ and postselected state $|\psi_f\rangle$ are almost orthogonal. The magnitudes of the weak idler input amplitude $\alpha$, the beam-splitter deviation $\epsilon$, and the coupling coefficient g are all controllable in optical experiments. Thus, we can manipulate and change the inherent properties of the output signal state $|\Phi\rangle$ by adjusting these parameters. In the remainder of this work, we study the new-state generation and verification processes by taking the initial signal input state $|\phi\rangle$ to be a coherent state and a squeezed vacuum state, respectively.

III. GENERATION OF SPAC STATE

In this section, we assume that the initial signal input state is prepared in a coherent state,
$$|\beta\rangle = e^{-|\beta|^2/2}\sum_{n=0}^{\infty}\frac{\beta^n}{\sqrt{n!}}\,|n\rangle,$$
where $\beta = |\beta|e^{i\theta}$ is a complex number. For this case, the output state of the signal, Eq. (14), reads
$$|\Theta\rangle = \frac{1}{\sqrt{N}}\left(\kappa_1|\beta\rangle + \kappa_2\, a^\dagger|\beta\rangle\right),$$
where N is the normalization constant, $\kappa_1 = 1 - g\beta\alpha/\sqrt{2}$, and $\kappa_2 = \sqrt{2}\,g\alpha/\epsilon$. It is clear from Eq. (19) that the output signal state is a superposition of the coherent state $|\beta\rangle$ and the SPAC state $a^\dagger|\beta\rangle$. As mentioned above, since the parameters g, $\alpha$, $\epsilon$, and $\beta$ are all adjustable, the relative weight of the coherent state $|\beta\rangle$ and the SPAC state $a^\dagger|\beta\rangle$ can be completely controlled. From Eq. (19) we see that if $\kappa_2 \gg \kappa_1$, the state $|\Theta\rangle$ reduces to the SPAC state $|1,\beta\rangle = a^\dagger|\beta\rangle/\sqrt{1+|\beta|^2}$. In the next subsections we discuss the properties of the conditional output state $|\Theta\rangle$ in more detail.

A. State Distance

In quantum information theory, the distance between two quantum states described by density operators ρ and σ can be characterized by the quantum fidelity (the so-called Uhlmann-Jozsa fidelity), defined as
$$F(\rho,\sigma) = \left(\mathrm{Tr}\sqrt{\sqrt{\rho}\,\sigma\sqrt{\rho}}\right)^{2}.$$
If both states are pure, i.e., $\rho = |\psi\rangle\langle\psi|$ and $\sigma = |\phi\rangle\langle\phi|$, then
$$F = |\langle\psi|\phi\rangle|^{2}.$$
This quantity is a natural candidate for the state distance, since it corresponds to the closeness of states in the natural geometry of Hilbert space. If F = 0, the states are orthogonal, i.e., perfectly distinguishable. If F = 1, the two states are identical, $|\psi\rangle = |\phi\rangle$. To study the similarity of the output signal state $|\Theta\rangle$ to the coherent state $|\beta\rangle$ and to the normalized SPAC state $|1,\beta\rangle$, we calculate the fidelity $F_1$ between $|\Theta\rangle$ and $|\beta\rangle$ and the fidelity $F_2$ between $|\Theta\rangle$ and $|1,\beta\rangle$. In Fig. 2, we plot $F_1$ and $F_2$ as functions of the coherent-state parameter $|\beta|$ for fixed values of the other system parameters. As shown in Fig. 2, the red dashed line shows the closeness between the output signal state and the SPAC state; the fidelity of these two states remains constant at F = 1 for all $|\beta|$. Figure 2 also indicates that $F_1$ increases from zero to unity as $|\beta|$ increases. When $\alpha$ and $\epsilon$ are much less than one and $|\beta|$ is small, we can deduce that $\kappa_2 \gg \kappa_1$; under this condition, the generated output signal state is exactly the SPAC state.

B. Second-order correlation and Mandel factor

Here we study the second-order correlation function $g^{(2)}(0)$ and the Mandel factor $Q_m$ of the generated signal state $|\Theta\rangle$. The second-order correlation function of a single-mode radiation field is defined as
$$g^{(2)}(0) = \frac{\langle a^{\dagger 2} a^{2}\rangle}{\langle a^\dagger a\rangle^{2}},$$
and its relation to the Mandel factor $Q_m$ is
$$Q_m = \langle a^\dagger a\rangle\left[g^{(2)}(0) - 1\right].$$
If $0 \le g^{(2)}(0) < 1$ and $-1 \le Q_m < 0$ simultaneously, the corresponding radiation field has sub-Poissonian statistics and is more nonclassical.
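The quantities defined in Secs. III A and III B can be evaluated numerically for a state of the form of Eq. (19). The following QuTiP sketch does so with hypothetical values of $\kappa_1$ and $\kappa_2$ chosen purely for illustration (the paper's actual coefficients depend on g, α, β, and ε); note that qt.fidelity returns the square root of the fidelity F as defined above.

```python
import qutip as qt

N = 40                   # Fock-space cutoff (assumption: ample for |beta| up to ~2)
beta = 1.0
k1, k2 = 0.1, 1.0        # illustrative kappa_1, kappa_2 (hypothetical values)

a = qt.destroy(N)
coh = qt.coherent(N, beta)
spac = (a.dag() * coh).unit()                   # |1,beta> = a†|β> / sqrt(1+|β|²)
theta = (k1 * coh + k2 * a.dag() * coh).unit()  # output state |Θ⟩

# Sec. III A: fidelities (squared, to match the pure-state definition above).
F1 = qt.fidelity(theta, coh) ** 2
F2 = qt.fidelity(theta, spac) ** 2

# Sec. III B: second-order correlation and Mandel factor.
nbar = qt.expect(a.dag() * a, theta)
g2 = qt.expect(a.dag() * a.dag() * a * a, theta) / nbar**2
Qm = nbar * (g2 - 1.0)   # sub-Poissonian (nonclassical) if -1 <= Qm < 0
```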
Recall that the Mandel factor $Q_m$ can never be smaller than −1 for any radiation field, and that negative $Q_m$ values, which are equivalent to sub-Poissonian statistics, cannot be produced by any classical field. The second-order correlation function $g^{(2)}(0)$ and Mandel factor $Q_m$ of the generated output signal state $|\Theta\rangle$ can be evaluated explicitly from the moments of $|\Theta\rangle$ [69]. In Fig. 3, we plot $g^{(2)}(0)$ and $Q_m$ as functions of the coherent-state parameter β, with the other parameters fixed to θ = 0, g = 0.105, α = 0.01, and ε = 0.1. As observed in Fig. 3, $0 \le g^{(2)}(0) < 1$ and $-1 \le Q_m < 0$ over the whole plotted region. This means that the generated output signal field has sub-Poissonian statistics, which only nonclassical states possess. The curves in Fig. 3 match well the corresponding curves for the SPAC state [35]. Thus, we further verify that our scheme can effectively generate the SPAC state when the initial signal input is a coherent state with moderate parameter β.

C. Wigner function

To further verify our claim, in this subsection we investigate the Wigner function of $|\Theta\rangle$. The state of a quantum mechanical system is completely described by its density matrix or, equivalently, by a phase-space distribution such as the Wigner function. Every state has its unique phase-space distribution, and the Wigner distribution function is the closest quantum analogue of the classical phase-space distribution function. By evaluating the Wigner function we can intuitively gauge the strength of the corresponding quantum features; most importantly, negative values of the Wigner function prove the nonclassicality of the state. In general, the Wigner function is defined as the two-dimensional Fourier transform of the symmetrically ordered characteristic function; for the state $\rho = |\Theta\rangle\langle\Theta|$ it can be written as [66]
$$W(z) = \frac{1}{\pi^{2}}\int d^{2}\lambda\; C_N(\lambda)\, e^{-|\lambda|^{2}/2}\, e^{\lambda^{*} z - \lambda z^{*}},$$
where $C_N(\lambda)$ is the normally ordered characteristic function, defined as
$$C_N(\lambda) = \mathrm{Tr}\!\left[\rho\, e^{\lambda a^\dagger} e^{-\lambda^{*} a}\right].$$
After some calculation, the explicit expression of the Wigner function of the state $|\Theta\rangle$ can be obtained. It consists of three parts: the first and second terms correspond to the Wigner functions of the coherent state $|\beta\rangle$ and the SPAC state $|1,\beta\rangle$, respectively, and the third term arises from their superposition. In Fig. 4, we plot the Wigner function of the state $|\Theta\rangle$ for different amplitudes β. From Fig. 4, we see that the negativity of W(z) gradually vanishes as the amplitude β increases. Every wave function has a unique phase-space distribution characterized by its Wigner function, and the behavior presented in Fig. 4 is exactly the phase-space distribution of the SPAC state $|1,\beta\rangle$ [35]. Thus, when $\kappa_2 \gg \kappa_1$, $|\Theta\rangle$ gives us the new type of nonclassical state $|1,\beta\rangle$.

D. Signal-to-noise ratio (SNR)

As shown in the schematic Fig. 1, the new output state $|\Theta\rangle$ of the signal beam is generated after we perform the postselection on the idler beam, accomplished by D1 and D2. If we did not take the postselection, the final state of the signal would be given by Eq. (11) after tracing out the idler beam in state $|\psi_i\rangle$. However, since in the nonpostselected case no weak values of the operators A and B occur, and these weak values carry the signal-amplification feature, postselected weak measurement may have an advantage over nonpostselected measurement in the signal amplification process.
To show the usefulness of the newly generated state $|\Theta\rangle$, we study the ratio of SNRs between the postselected and nonpostselected weak measurements [69],
$$\chi = \frac{R^{p}_{X}}{R^{p'}_{X}}.$$
Here, $R^{p}_{X}$ represents the SNR of the postselected weak measurement, defined as
$$R^{p}_{X} = \sqrt{N P_s}\;\frac{|\langle q\rangle_f|}{\Delta q_f}, \qquad \Delta q_f = \sqrt{\langle q^{2}\rangle_f - \langle q\rangle_f^{2}},$$
where N is the total number of measurements, $P_s$ is the probability of finding the postselected state for a given preselected state (for our scheme $P_s = |\alpha\epsilon|^{2}$), and $N P_s$ is the number of times the system is found in the postselected state $|\psi_f\rangle$. Here, $\langle q\rangle_f$ denotes the expectation value of the measured observable defined in Eq. (5) under the final pointer (signal beam) state $|\Theta\rangle$. In a nonpostselected measurement, there is no postselection after the interaction between the system and the pointer; the SNR of the nonpostselected weak measurement can therefore be defined as
$$R^{p'}_{X} = \sqrt{N}\;\frac{|\langle q\rangle_{f'}|}{\Delta q_{f'}},$$
where $\langle q\rangle_{f'}$ denotes the expectation value of the measured observable under the final pointer state without postselection, which can be derived from Eq. (11). To evaluate the ratio χ of the SNRs, the expectation values $\langle q\rangle_f$ and $\langle q^{2}\rangle_f$ must be calculated; the other required quantities can be obtained similarly, and we do not list all of the lengthy expressions here. The ratio of SNRs between the postselected and nonpostselected weak measurements is plotted as a function of the coherent-state parameter β, and the results are shown in Fig. 5. As observed in Fig. 5, the ratio χ increases, and can become larger than unity, as the imbalance parameter ε of the beam splitter increases, provided $|\beta|$ is not very large. We note that the magnitudes of the weak values of A and B, Eqs. (15) and (16), are inversely proportional to ε. Thus, the smaller the weak value, the better the postselected SNR compared with the nonpostselected one. In short, we conclude that postselected weak measurement can improve the SNR over the case without postselection for smaller weak values.

IV. GENERATION OF THE SINGLE-PHOTON-ADDED SQUEEZED VACUUM STATE

Assume now that the initial input state $|\phi\rangle$ of the signal beam is prepared in the squeezed vacuum (SV) state [70]
$$|\phi_1\rangle = S(\xi)|0\rangle, \qquad S(\xi) = \exp\!\left(\tfrac{1}{2}\xi a^{\dagger 2} - \tfrac{1}{2}\xi^{*} a^{2}\right), \qquad \xi = \eta e^{i\varphi}.$$
Then the output state of the signal beam, Eq. (14), becomes a superposition $|\Omega\rangle$ with coefficients $\lambda_1$ and $\lambda_2$ and a normalization constant. In the discussion below, we neglect the term associated with the coefficient $\lambda_1$, since it is too small compared with $\lambda_2$ for our allowed parameters. As we can see, the state $|\Omega\rangle$ prepared by our optical model is a superposition of the squeezed vacuum (SV) state and the single-photon-added squeezed vacuum (SPASV) state. The dominance of these two components depends on the coefficients $\lambda_1$ and $\lambda_2$, whose amplitudes can be controlled by the beam splitters and the BBO crystal in our scheme (see Fig. 1). In this section, by calculating the state distance, the squeezing parameter, and the Wigner function, we show that, in the allowed parameter region, the generated state $|\Omega\rangle$ is clearly distinguished from the initial input state $|\phi_1\rangle$.

A. State Distance

To investigate the similarities and differences between the generated state $|\Omega\rangle$ and the SV and SPASV states, we evaluate the state distances between them: (1) the state distance between $|\Omega\rangle$ and the SV state $|\phi_1\rangle$, and (2) the state distance between $|\Omega\rangle$ and the SPASV state $|\phi_2\rangle = a^\dagger S(\xi)|0\rangle/\cosh\eta$. In Fig. 6, we separately plot the state distances between $|\Omega\rangle$ and the two states versus the squeezing parameter η.
As indicated in Fig. 6(a), when the input idler coherent state is very weak, α = 0.01, the output signal state $|\Omega\rangle$ is very different from the initial input state, and the generated state coincides with the SPASV state. For α = 0.75 and a very weak squeezing parameter η, the output state $|\Omega\rangle$ is very similar to both the SV and SPASV states; as η increases, however, the similarity between $|\Omega\rangle$ and the SPASV (SV) state increases (decreases) significantly (see Fig. 6(b)). Although the SPASV state differs from the SV state by only one photon, its features are very different. Next we study the squeezing parameter and the Wigner function of the newly generated state $|\Omega\rangle$.

FIG. 6. (Color online) The state distance between $|\Omega\rangle$ and the SV and SPASV states as a function of the squeezing parameter η, (a) for α = 0.01 and (b) for α = 0.75. The other parameters are the same as in Fig. 2.

B. Squeezing parameter

As is well known, the SV state is an ideal state possessing a very strong squeezing effect. To investigate the quadrature squeezing of the generated state $|\Omega\rangle$, in this subsection we study its squeezing parameter. The squeezing parameter of a radiation field is defined as
$$S_\phi = \langle(\Delta X_\phi)^{2}\rangle - \tfrac{1}{2},$$
where
$$X_\phi = \frac{1}{\sqrt{2}}\left(a\, e^{-i\phi} + a^\dagger e^{i\phi}\right)$$
is the quadrature operator of the field and $(\Delta X_\phi)^{2} = \langle X_\phi^{2}\rangle - \langle X_\phi\rangle^{2}$ is the variance of $X_\phi$. The minimum value of $S_\phi$ is −0.5, and if $-0.5 \le S_\phi < 0$ the field is called nonclassical. The squeezing parameters of the SV state $|\phi_1\rangle$, the SPASV state $|\phi_2\rangle$, and the generated output state $|\Omega\rangle$ can be calculated straightforwardly, and the corresponding curves are shown in Fig. 7. We observe from Fig. 7(a) that the squeezing parameter of the newly generated output signal state $|\Omega\rangle$ is exactly the same as that of the SPASV state $|\phi_2\rangle$, and that $|\Omega\rangle$ exhibits squeezing as good as the initial input state $|\phi_1\rangle$ when the squeezing parameter η becomes large. Furthermore, as shown in Fig. 7(b), if α = 0.75, the squeezing parameter of $|\Omega\rangle$ coincides with that of the initial input state $|\phi_1\rangle$. We should mention, however, that our scheme requires the measured system to be initially prepared in a very weak coherent state; thus the case α = 0.75 is not our main focus.

C. Wigner function of the newly generated state

To further confirm the similarity between the SPASV state $|\phi_2\rangle$ and the newly generated state $|\Omega\rangle$, in this subsection we study the Wigner function of $|\Omega\rangle$. The Wigner function of the state $\rho = |\Omega\rangle\langle\Omega|$ can be written as [66]
$$W(z) = \frac{1}{\pi^{2}}\int d^{2}\lambda\; C_W(\lambda)\, e^{\lambda^{*} z - \lambda z^{*}},$$
where $C_W(\lambda)$ is the symmetrically ordered characteristic function,
$$C_W(\lambda) = \mathrm{Tr}\!\left[\rho\, e^{\lambda a^\dagger - \lambda^{*} a}\right],$$
and z = x + ip represents the normalized dimensionless position and momentum observables of the beam in phase space. After some algebra, the explicit expression of the Wigner function of the newly generated state $|\Omega\rangle$ can be obtained. This Wigner function is real, and its value is bounded, $-\frac{2}{\pi} \le W(z) \le \frac{2}{\pi}$, over the whole phase space. In the derivation we used identities such as
$$S(\xi)\, a\, S^\dagger(\xi) = a\cosh\eta - a^\dagger e^{i\varphi}\sinh\eta.$$
One term of the resulting Wigner function, $w_2(z)$, is the Wigner function of the SV state $|\phi_1\rangle$. Although the SV state $|\phi_1\rangle$ is nonclassical, its Wigner function is Gaussian and positive in phase space [36]. The explicit expression, however, also contains non-Gaussian terms; thus the Wigner function of our newly generated signal state is non-Gaussian in phase space. We present plots of the Wigner functions of the initial input signal state $|\phi_1\rangle$, the newly generated output signal state $|\Omega\rangle$, and the SPASV state $|\phi_2\rangle$ in Fig. 8, for squeezing parameters η = 0, 1, 2.
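The Wigner functions discussed in this subsection can be cross-checked numerically. Below is a minimal QuTiP sketch for a photon-added squeezed vacuum; the cutoff and squeezing strength are illustrative assumptions, and QuTiP's squeeze-operator phase convention may differ from the $S(\xi)$ defined above.

```python
import numpy as np
import qutip as qt

N = 60                                      # Fock cutoff (assumption)
eta = 0.8                                   # illustrative squeezing strength
a = qt.destroy(N)
sv = qt.squeeze(N, eta) * qt.basis(N, 0)    # squeezed vacuum, |phi_1>-like state
spasv = (a.dag() * sv).unit()               # photon-added squeezed vacuum

xvec = np.linspace(-5, 5, 201)
W = qt.wigner(spasv, xvec, xvec)            # Wigner function on a phase-space grid
print("min W =", W.min())                   # negative values signal nonclassicality
# In this convention |W| <= 2/pi everywhere, matching the bound quoted above.
```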
By comparing the curves of these Wigner functions, we observe that the state generated in our scheme is a typical nonclassical state. It is clear from Figs. 8(d)-(f) that, like the initial input state $|\phi_1\rangle$, the new state $|\Omega\rangle$ is squeezed in one of the quadratures, and its Wigner function also has negative regions in phase space. These two features demonstrate the nonclassicality of the new state $|\Omega\rangle$. Furthermore, the newly generated state $|\Omega\rangle$ has exactly the same phase-space distribution as the SPASV state $|\phi_2\rangle$ (see the second and third rows of Fig. 8). As indicated in Fig. 8(d), if the input state of the pointer is the vacuum, the output signal state is prepared in the single-photon Fock state.

V. CONCLUSION

In summary, we have designed a laboratory-feasible optical model to prepare nonclassical states, such as the single-photon Fock state, the SPAC state, and the SPASV state, using postselected weak measurement in a three-wave-mixing process. In our scheme, the signal and idler beams are taken as the pointer and the measured system, respectively, and the entanglement between them is realized by a BBO crystal, which implements the weak measurement. In other words, a nonlinear BBO crystal was chosen to introduce the weak interaction in the three-wave mixing of the pump, idler, and signal light. By performing pre- and postselection on the measured system, the final pointer state is prepared in the desired nonclassical state, which depends on the initial signal input (initial pointer) state. Further, we investigated the properties of the conditional output states, including squeezing, second-order correlations, and Wigner functions. We found that if the input signal (pointer) is the vacuum state, the output signal state is prepared in the single-photon Fock state, a typical quantum state widely used in quantum information processing. We also found that if the input signal state is a coherent (squeezed vacuum) state, the output signal state is prepared in the SPAC (SPASV) state, respectively, and that their purities can easily be controlled by the optical elements. Furthermore, we found that the postselected measurement, characterized by weak values and postselection, has a positive effect on the output SNR compared with the nonpostselected case for a coherent-state input. Our scheme for the preparation of nonclassical states can be implemented in an optics laboratory, and we anticipate that it could provide an effective route to the generation of other useful nonclassical states, such as the Schrödinger kitten state [71].

ACKNOWLEDGMENTS

This work was supported by the National Natural Science Foundation of China (Nos. 11865017, 11664041).
Convenient methodology for extraction and subsequent selective propagation of mouse melanocytes in culture from adult mouse skin tissue

Mouse melanoma B16-BL6 cells are useful cells for cancer metastasis studies. To understand the principles of metastasis at the molecular level, it is necessary to carry out experiments in which cancer cells and their normal counterparts are compared. However, unlike normal human melanocytes, normal mouse melanocytes are quite difficult to prepare, because they are not commercially available and there is insufficient information on an established protocol for their primary culture. In this study, we aimed to establish a convenient method for the primary culture of mouse melanocytes on the basis of the protocol for human melanocytes. The main obstacles to preparing pure mouse melanocytes are how to digest mouse skin tissue and how to reduce contamination by keratinocytes and fibroblasts. These obstacles were overcome by collagenase digestion of the skin specimens, short-time trypsinization to separate melanocytes from keratinocytes, and use of 12-O-tetradecanoylphorbol 13-acetate (TPA) and cholera toxin in the culture medium. These supplements act to prevent the proliferation of keratinocytes and fibroblasts, respectively. The convenient procedure enabled us to prepare a pure culture of normal mouse melanocytes. Using the enriched normal mouse melanocytes and cancerous B16-BL6 cells, we compared the expression levels of melanoma cell adhesion molecule (MCAM), an important membrane protein for melanoma metastasis. The results showed markedly higher expression of MCAM in B16-BL6 cells than in normal mouse melanocytes.

Introduction

Cultured normal cells are a crucial material for experimental studies in the life sciences and related fields, especially for comparisons with their abnormal counterparts, such as cancer cells, by which the causes of the alterations can be determined at both the cellular and molecular levels. We have been conducting mechanistic studies on lung-tropic melanoma metastasis [1,2], and we have found that S100A8/A9, a heterodimeric complex of the S100A8 and S100A9 proteins [3-5], which are small Ca2+-binding proteins of about 10 kDa in molecular mass belonging to the S100 family, and its novel receptor, melanoma cell adhesion molecule (MCAM), play important roles in this metastasis [6,7]. Owing to the intrinsically different character of cancer cells from their normal counterparts in the body, the lung, one of the tissues most sensitive to cancer cells as foreign substances, falls into a state of cancer-derived inflammation, resulting in the production and secretion of S100A8/A9 there at a significant level [8,9]. Distant melanoma cells, in turn, catch the S100A8/A9 signal from the inflamed lung through the MCAM sensor on the melanoma cell surface, resulting in accelerated lung-oriented metastasis. This metastatic event in vivo was observed in a well-established syngeneic model using mouse B16-BL6 melanoma cells and immunocompetent C57BL/6J mice [6]. Human melanoma cells cannot be used in this system because human cells would be immunologically excluded. To understand the metastatic role of MCAM in mouse B16-BL6 melanoma cells, it is essential to determine the expression level of MCAM in these cells in comparison with that in their normal counterparts. However, we faced a difficult problem in the preparation of normal mouse melanocytes at that time.
Surprisingly, unlike normal human melanocytes, normal mouse melanocytes are not widely marketed as a commercial product, and little is known about methods for their isolation and cultivation. This is probably due to the technical difficulty of efficiently isolating viable cells and subsequently selectively propagating a melanocyte population from adult mouse skin tissue, since the distributions of melanocytes in the skin of mice and humans are different. We confirmed that the expression level of MCAM was consistently and highly elevated in various human melanoma cell lines compared with normal human melanocytes from a commercial source (our unpublished data). At that time, however, we could not determine the expression level of MCAM protein in mouse melanoma cell lines relative to their normal counterparts. We therefore set out to establish a convenient method to readily extract and selectively propagate a normal mouse melanocyte population from adult mouse skin tissue. When the isolated melanocytes were eventually compared with B16-BL6 melanoma cells for intrinsic MCAM expression, we confirmed that MCAM shows markedly higher expression at the protein level in B16-BL6 melanoma cells than in normal mouse melanocytes.

Normal mouse melanocytes

Skin tissue was collected from an 8-week-old C57BL/6J mouse after epilation and chopped into pieces of about 3 mm in diameter (see Fig. 1). The collected tissues were then treated with either a serum-free D/F medium (Thermo Fisher Scientific) containing collagenase (WAKO, Osaka, Japan) at a final concentration of 1 mg/ml or a serum-free trypsin medium (TrypLE™ Express, Thermo Fisher Scientific), both supplemented with kanamycin (50 μg/ml) and amphotericin B (100 μg/ml), for 24 h at 4°C under gentle rotation. After incubation of the specimens, tissue debris was removed by passing the mixture through a cell strainer with 70-μm pores (Corning, Corning, NY). The collected cell suspensions were centrifuged at 1500 rpm for 10 min, and the clear supernatants were removed. A melanocyte culture medium (a medium modified from the DermaLife Ma Melanocyte Medium Complete Kit; Lifeline Cell Technology, Frederick, MD) supplemented with 12-O-tetradecanoylphorbol 13-acetate (TPA, 10 ng/ml, WAKO) and cholera toxin (10 nM, Sigma-Aldrich, St. Louis, MO) was then added. At this point, the pelleted epidermal cell mixtures were disaggregated mechanically by repeated pipetting up and down and were seeded on a culture dish (35 mm in diameter). The culture medium was changed after 48 h, and the culture was kept for another 3 days. When the cell density had reached about 70% confluency, the cells were subcultured by trypsinization with 0.05% trypsin/0.02% EDTA solution at room temperature. To collect as many melanocytes as possible, trypsinization was kept short, with microscopic monitoring of melanocyte detachment, which occurs separately from keratinocyte detachment. The cells were then cultivated continuously.

Western blot analysis

Western blot analysis was performed under conventional conditions.
The antibodies used were as follows: rabbit anti-MCAM antibody (Sigma-Aldrich, St. Louis, MO), mouse anti-TRP1 antibody (Santa Cruz Biotechnology, Santa Cruz, CA), rabbit anti-cyclin D1 antibody (Cell Signaling Technology, Beverly, MA), mouse anti-cyclin D3 antibody (Cell Signaling Technology), rabbit anti-cyclin E1 antibody (Cell Signaling Technology), mouse anti-p21/WAF1 antibody (Merck KGaA, Darmstadt, Germany), and mouse anti-tubulin antibody (Sigma-Aldrich). The secondary antibody was horseradish peroxidase-conjugated anti-mouse or anti-rabbit IgG antibody (Cell Signaling Technology). All primary antibodies used show cross-reactivity to their target proteins from both human and mouse sources.

Extraction of skin cells from adult mouse skin tissue

In human skin, simple enzymatic digestion with trypsin is sufficient to dissociate melanocytes from a skin specimen, since human dermal melanocytes are mainly located in the basal layer of the epidermis [10,11]. However, a trypsin method similar to that used for human melanocytes may not be applicable to the extraction of mouse melanocytes from adult mouse skin tissue, because most mouse melanocytes are distributed in hair follicles located in the dermis. Aiming at efficient digestion of the dermal area, we used collagenase, which may increase the rate of dissociation of the melanocyte population from mouse skin. First, we prepared mouse skin tissue and cut it with scissors into pieces of about 3 mm in diameter (Fig. 1). The chopped specimens were treated with either collagenase or trypsin. After removal of the digested skin debris from each treated sample, the dissociated cells were collected by centrifugation. At that point, we noticed that the number of extracted cells was much larger with collagenase treatment than with trypsin treatment, suggesting more efficient digestion of the skin specimen by collagenase. The cells were then cultivated in a medium specialized for normal human melanocytes. This specialized medium is good for the cultivation of melanocytes; however, it is not suited to the selective propagation of a melanocyte population from a mixed-cell preparation containing mainly keratinocytes and fibroblasts, which exhibit higher growth potential in culture. We therefore supplemented the medium with TPA and cholera toxin, which effectively suppress the growth of contaminating keratinocytes and fibroblasts, respectively, without harmful effects on melanocytes [10,12]. Using the modified medium, we started the primary culture. Interestingly, in the collagenase-treated sample, on culture days 4 and 5 there were many melanocyte-like cells with elongated, neuron-like protrusions that were clearly different in shape from fibroblasts and keratinocytes (Fig. 2a). The mixed culture also included keratinocyte-like cell populations but no fibroblast-like populations. In the trypsin-treated sample, on the other hand, only keratinocyte-like populations appeared, as clear colonies, on days 4 and 5 (Fig. 2b). A similar result was observed when we used a third digestion medium containing both collagenase and trypsin for the first step of skin digestion (data not shown), probably because trypsin cleaves and thereby inactivates collagenase. These results indicate that treatment with collagenase alone enables efficient extraction of melanocytes from adult mouse skin tissue.
Selective propagation of a melanocyte-like cell population in culture

To remove as many contaminating keratinocytes as possible from the collagenase-treated culture, we performed selective dissociation of melanocyte-like cells with trypsin on day 5, exploiting the time lag in detachment between melanocyte-like cells (weak attachment) and keratinocytes (tight attachment). To leave the keratinocyte population on the dish, we treated the cells for a short time under observation with a phase-contrast microscope. Using this time-lag-based trypsinization method, we succeeded in obtaining an enriched melanocyte-like population with only one subculturing step (day 15) (Fig. 2c). The population-doubling level (PDL) of the cells was monitored, and the resulting data are shown in Fig. 3a; it was possible to extend the primary culture to six passages, at which point cell longevity ceased.

Analysis of the characteristics of enriched mouse melanocyte-like cells

To determine whether the enriched melanocyte-like cells in culture were genuine melanocytes, cells at passage 1 (P1) were collected and subjected to Western blot analysis for a representative melanocyte marker, tyrosinase-related protein-1 (TRP-1). We found that the propagated cells express TRP-1 at a pronounced level (Fig. 3b). Using the validated melanocyte population at the indicated passage numbers (Fig. 3a), we next examined the expression levels of cell-cycle-related proteins. The cell cycle accelerators cyclin D1, D3, and E1 were detected at significant levels at younger passages (P0-P3) and were all downregulated with increasing passage number through P6, while the expression of a representative cell cycle inhibitor, p21/WAF1, exhibited an inverse pattern to that of the cyclins (Fig. 3c). We finally examined the expression level of MCAM in mouse B16-BL6 melanoma cells in comparison with that in normal mouse melanocytes at passage 1. As shown in Fig. 3d, we confirmed that the expression of MCAM is markedly higher in B16-BL6 melanoma cells than in normal cells. Interestingly, in normal cells, although MCAM was highly expressed at younger passages (P0-P3), it was markedly reduced in the older cells (P4-P6), like the cyclins (Fig. 3c). These results suggest that MCAM plays a significant role in the regulation of cellular growth or senescence in normal melanocytes. Thus, we succeeded in establishing a convenient protocol for the selective propagation of normal mouse melanocytes that is useful for several scientific purposes. This protocol relies mainly on three tricks: the use of collagenase to digest the adult mouse skin specimen, short trypsinization for subculturing, and the use of TPA and cholera toxin to overcome contamination by keratinocytes and fibroblasts [13]. TPA is known to support the proliferation of normal human melanocytes in culture, but it causes growth suppression and rapid differentiation of keratinocytes [14]. In addition, TPA acts to prevent the attachment of keratinocytes to the culture dish after trypsinization [15]. We therefore consider that TPA and short trypsinization cooperatively cause the disappearance of contaminating keratinocytes from the primary culture; this may be the main reason for the effective removal of keratinocytes. We also used cholera toxin, an adenylate cyclase activator, to prevent fibroblast contamination.
Although cholera toxin, like TPA, is useful for the optimal proliferation of normal human melanocytes [10,11,16], it prevents fibroblast proliferation because the intracellular increase in cyclic AMP produced by activated adenylate cyclase efficiently blocks DNA synthesis in fibroblasts [16,17]. Considering the disappearance of fibroblasts from the primary mixed culture at a very early stage, the use of cholera toxin likely contributes greatly to the removal of fibroblasts. When we searched for reports of similar methods, we found that Sviderskaya et al., leading researchers in the melanocyte field, had reported the beneficial role of TPA and cholera toxin in the primary culture of normal mouse melanocytes, which were obtained from trypsinized embryonic mouse skin tissue [18]. We therefore believe that our convenient protocol is a reliable experimental procedure for obtaining mouse melanocytes from adult mouse skin tissue. Lastly, for removal of the fibroblast population, the antibiotic geneticin (G418 sulfate) may be effective in addition to cholera toxin, since it was reported that treatment of a mixed primary culture from a human skin specimen with G418 at a concentration of 100 μg/ml for 2 days resulted in a pure culture of normal human melanocytes [15].

Conclusion

In this study, our convenient method enabled the preparation of a pure population of normal mouse melanocytes in a culture system, which is very useful for comparing cellular behaviors, alterations in the expression of genes and proteins, and metabolic alterations between mouse melanoma cells and their normal counterparts. The protocol may also be useful for young scientists doing research in melanocyte-related fields since, unlike for human melanocytes, there is little information on normal mouse melanocytes owing to the small number of reports on them.

Conflicts of interest

The authors declare that they have no conflicts of interest.
Arbitrary Microphone Array Optimization Method Based on TDOA for Specific Localization Scenarios

Various microphone array geometries (e.g., linear, circular, square, cubic, spherical, etc.) have been used to improve the positioning accuracy of sound source localization. However, whether these array structures are optimal for various specific localization scenarios is still a subject of debate. This paper addresses a microphone array optimization method for sound source localization based on TDOA (time difference of arrival). The geometric structure of the microphone array is established in parametric form. A triangulation method with TDOA is used to build the spatial sound source localization model, which consists of a group of nonlinear multivariate equations. Through a suitable transformation, the nonlinear multivariate equations can be converted into a group of linear equations that can be solved approximately by the weighted least-squares method. An optimization model based on the particle swarm optimization (PSO) algorithm is then constructed to optimize the geometric parameters of the microphone array under different localization scenarios, in combination with the spatial sound source localization model. In the optimization model, a fitness evaluation function is established that comprehensively accounts for the positioning accuracy and robustness of the microphone array. To verify the array optimization method, two specific localization scenarios, with two array optimization strategies each, were constructed. The optimal array structure parameters were obtained through iterative numerical simulation. The localization performance of the optimal array structures obtained by the proposed method was compared with that of the optimal structures proposed in the literature as well as with random array structures. The simulation results show that the optimized array structures give better positioning accuracy and robustness under both specific localization scenarios. The proposed optimization model can solve the problem of array geometric structure design based on TDOA and enables the customization of microphone array structures for different specific localization scenarios.

Introduction

In the past two decades, microphone array technology has consistently been a hot research field. Microphone arrays are mainly used for sound source localization and identification and have become an important practical technology with many valuable applications, such as noise source localization [1,2], target sound source tracking [3], teleconferencing systems [4,5], intelligent robots [6-8], and so on. In microphone array technology, there are three main methods for sound source localization, namely beamforming, acoustic holography, and time difference of arrival (TDOA). The beamforming method applies delay-and-sum to the signals from an array of microphones; a beam peak forms in the direction of the source, thereby locating it [9]. Acoustic holography locates sound sources by reconstructing the acoustic field, i.e., by solving the inverse propagation problem [10]. Beamforming and acoustic holography usually involve planar microphone arrays and calculation points located on a surface at a certain distance from the array, which provides poor resolution in the direction perpendicular to the array.
In recent years, beamforming with various deconvolution techniques [11] and inverse methods with additional constraints [12,13] have been proposed to construct volumetric sound source images, which can give the exact three-dimensional (3D) coordinates of sound sources. The TDOA-based method, essentially a triangulation method, locates the sound source using the geometric relationships between the microphones and the sound source, and can give the spatial position of sound sources with reasonable accuracy using a small number of sensors [14]. TDOA methods have been widely used for real-time sound source localization [15,16]. Moreover, in some localization scenarios, such as simple sound source tracking, TDOA methods show better application prospects. Different numbers of microphones and different kinds of array structures are used in these three methods. In general, the number of microphones used in beamforming and acoustic holography is much larger than in the TDOA method, because the number of microphones has a significant influence on the reconstruction accuracy of the sound source map [17]. The number of microphones required by the TDOA method is much smaller: in theory, only four microphones are needed to locate a sound source in three-dimensional space. For example, Wu and Zhu [15] used only four microphones to locate arbitrarily time-dependent acoustic sources in free three-dimensional space in real time. Furthermore, the number of microphones is not the decisive factor for localization accuracy; the array structure is another main factor affecting the accuracy of source localization in all three methods. Many kinds of microphone array structures are applied in sound source localization, and they can be divided into three main categories: 1-dimensional, 2-dimensional, and 3-dimensional. The 1-dimensional group mainly comprises binaural arrays [18] and linear arrays [19]. 2-dimensional array structures include square [20], cross [21], spiral [22,23], and circular geometries [24]. 3-dimensional arrays mainly include cubic [25], pyramidal [25], hemispherical [26], and spherical [27] geometries. Researchers have analyzed the performance of various arrays, indicating that each kind of array structure is suitable only for specific localization algorithms and scenarios; no single array structure achieves good localization performance under all scenarios and algorithms. For example, the two- or three-dimensional localization accuracy of a randomly distributed array varies widely with the relative position of the sound source [28]. Therefore, the optimization of the microphone array structure has become an important research topic. Wang and Bei [29] proposed an optimization method based on acoustic holography theory to optimize the microphone coordinates on a fixed cross (X-type) array structure, with the main-to-side-lobe ratio and the main-lobe area selected as the optimization objectives. Kodrasi et al. [30] adopted several heuristic optimization approaches and an exhaustive search to optimize the microphone positions of an arbitrary planar array based on the beamforming method, finding near-optimal configurations. Recently, Yan and Ma [31], Sarradj [32], Bjelić et al. [33], Teng and Lv [34], and Le Courtois et al.
[35] also proposed new methods for planar array optimization based on beamforming and compared array performance under different localization scenarios. In such optimization procedures, the main-lobe width and side-lobe level are generally selected as the optimization objectives. Padois et al. [36] proposed a spherical microphone array with polyhedral discretization and compared it with a spherical array of slightly different geometry based on the beamforming method; the results showed that the polyhedral-discretization array achieved better positioning accuracy. In 2019, Padois et al. [37] carried out further research on array geometry optimization based on time-domain beamforming, proposing an optimal spherical microphone array geometry obtained through nonlinear optimization. Numerical and experimental results showed that the optimized geometry improved the sound source maps. From the above, it can be seen that a great deal of research has been done on microphone array optimization for sound source localization. However, these optimization methods are mainly based on beamforming and acoustic holography, and the optimization usually starts from existing array structures, such as cross arrays [29], circular arrays [31], spiral arrays [32], irregular planar arrays [33,34], spherical arrays [38], and so on. As a result, certain constraints on the array structure are introduced into the optimization. A pre-constrained array structure may lead to a local optimum, which may not be suitable for certain specific localization scenarios. In addition, compared with array optimization based on beamforming and acoustic holography, research on array optimization based on TDOA is relatively rare. Zietlow et al. [39] established a simulation model based on TDOA to compare the source-positioning accuracy of different microphone arrangements, consisting of eight microphones in three different configurations: cube, twisted cube, and random. These arrangements were fixed, and no optimization of the array structures was performed. Hu et al. [40] proposed an analytical method based on TDOA to optimize the microphone array structure, which guarantees that sound source localization has the same performance in all directions for omnidirectional estimation. However, the optimization leads to a set of nonlinear equations that do not admit deterministic analytical solutions; with additional constraints, only particular solutions in regular-polyhedron form can be obtained. Only five kinds of array structures with specified numbers of microphones belong to this regular-polyhedron family: the tetrahedron (5 microphones), the hexahedron (9 microphones), the octahedron (7 microphones), the dodecahedron (21 microphones), and the icosahedron (13 microphones). This limited set of solutions restricts the practical application of Hu's method. Moreover, owing to constraints in the modeling and solution of the method, these five array structures may not give the best positioning results under some specific localization scenarios, such as scenarios with an asymmetric distribution of sound sources. Therefore, more in-depth research is needed on array structure optimization for sound source localization based on TDOA. This paper is devoted to an optimization method for arbitrary microphone arrays for sound source localization based on TDOA.
The proposed method is a numerical approach based on the particle swarm optimization (PSO) algorithm, which can optimize the array structure for an arbitrary number of microphones under any specific localization scenario without prior array-structure information. Example localization scenarios were constructed to obtain optimal array structures through the proposed method, and the optimal array structures were compared with the array structures proposed by Hu et al. [40] as well as with random array structures under the constructed scenarios. This article makes four main contributions. First, a numerical approach to microphone array optimization based on the PSO algorithm for the TDOA method is proposed. Second, the proposed model can optimize array structures with an arbitrary number of microphones, and no prior array-structure information is introduced into the optimization, which makes it more likely that better solutions will be found. Third, the array optimization model is generally applicable and can effectively solve the problem of microphone arrangement for sound source localization under different specific localization scenarios. Fourth, the fitness evaluation function constructed in the optimization model gives due consideration to both the accuracy and the robustness of TDOA-based sound source localization. The two specific localization scenarios established here verify the proposed optimization method. In the following sections, the optimization model is introduced in detail, and the localization performance is compared with the array structures proposed by Hu et al. as well as with random array structures. Section 2 introduces the construction of the TDOA-based sound source localization model for an arbitrary microphone array, together with the solution of the localization model. The numerical optimization model based on PSO for an arbitrary array structure is presented in Section 3, along with the optimization procedure. Simulations and their results are discussed in Section 4, followed by conclusions in Section 5.

Construction of the Localization Model Based on TDOA

The sound source localization model is the basis of the array structure optimization. The TDOA method is used to locate the sound source; therefore, the time differences and the spatial geometric relationship between the array and the sound source are used to establish the localization model.

Geometric Structure Parameterization for an Arbitrary Microphone Array

To optimize the microphone array, the geometric structure of the array must first be parameterized. Because the sound source localization method in this paper is based on TDOA, a reference microphone is needed in the array. For convenience, the coordinates of the reference microphone $M_0$ are set to (0, 0, 0). The positions of the other microphones can then be expressed by the radial distance $l_i$, the azimuth angle $\alpha_i$, and the elevation angle $\beta_i$ in three-dimensional space, as shown in Figure 1. The coordinates of the other microphones are
$$M_i\left(l_i\cos\beta_i\cos\alpha_i,\; l_i\cos\beta_i\sin\alpha_i,\; l_i\sin\beta_i\right),$$
where $i = 1, 2, 3, \dots, N_m$ and $N_m$ is the number of microphones excluding the reference microphone. Therefore, the optimization parameters of the microphone array are $(l_i, \alpha_i, \beta_i)$, $i = 1, \dots, N_m$. The constraint on the azimuth angle $\alpha_i$ is $[0°, 360°]$, and that on the elevation angle $\beta_i$ is $[-90°, 90°]$.
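A direct numerical transcription of this parameterization is given below; the function and its argument layout are our own illustrative choices, not code from the paper.

```python
import numpy as np

def array_coordinates(params):
    """Convert (l_i, alpha_i, beta_i) parameters to Cartesian microphone positions.

    params: array of shape (Nm, 3) holding radial distance l (m),
    azimuth alpha (rad), and elevation beta (rad) for each microphone.
    The reference microphone M0 sits at the origin.
    """
    l, alpha, beta = params[:, 0], params[:, 1], params[:, 2]
    xyz = np.stack([l * np.cos(beta) * np.cos(alpha),
                    l * np.cos(beta) * np.sin(alpha),
                    l * np.sin(beta)], axis=1)
    return np.vstack([np.zeros(3), xyz])   # prepend M0 = (0, 0, 0)
```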
Spatial Source Localization Model Based on TDOA

The spatial source localization model was constructed with the TDOA method, which is essentially a multilateration technique. Suppose the coordinate of the sound source is S = (x, y, z). The distances from the source to the microphones are

    r_0 = sqrt(x² + y² + z²),    r_i = sqrt((x − x_i)² + (y − y_i)² + (z − z_i)²),

where r_0 is the distance between the sound source and the reference microphone M_0, r_i is the distance between the sound source and microphone M_i, and i = 1, 2, ..., N_m. By forming the arrival-time differences from the sound source to the reference microphone and to the other microphones, the spatial source localization model is obtained (Equation (4)):

    r_{i,0} = r_i − r_0 = c · τ_{i,0},

where τ_{i,0} is the sound arrival-time difference between M_i and M_0. τ_{i,0} can be estimated by cross-correlation. Suppose that u_i(t) and u_0(t) are the acoustic signals acquired by microphones M_i and M_0, respectively. The cross-correlation function between the two signals is

    R_{i,0}(τ) = E[u_i(t) u_0(t − τ)],

and τ_{i,0} is taken as the lag at which R_{i,0}(τ) reaches its maximum.
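For discretely sampled signals, this peak-picking step can be sketched as follows (a minimal illustration assuming equal-length signals sampled at rate fs; the function name is ours, not from the paper):

    import numpy as np

    def estimate_tdoa(u_i, u_0, fs):
        """Estimate the arrival-time difference tau_{i,0} between two
        microphone signals by locating the peak of their cross-correlation.

        u_i, u_0: 1-D arrays of equal length, sampled at fs (Hz).
        Returns the delay in seconds (positive if u_i lags u_0).
        """
        corr = np.correlate(u_i, u_0, mode="full")
        lag = np.argmax(corr) - (len(u_0) - 1)   # peak lag in samples
        return lag / fs

In practice the delay resolution is limited to one sample period, so interpolation around the correlation peak is often added; the sketch above keeps only the basic principle described in the text.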
The spatial source localization model consists of a group of nonlinear multivariate equations, which are difficult to solve directly. An alternative is to transform the model into a set of linear equations. The spatial distances satisfy the relationship shown in Equation (7):

    r_i² = (r_{i,0} + r_0)² = r_{i,0}² + 2 r_{i,0} r_0 + r_0².

Writing K_i = x_i² + y_i² + z_i² and expanding r_i² about the source coordinates, Equation (4) can be rewritten as Equation (8):

    x_i x + y_i y + z_i z + r_{i,0} r_0 = (K_i − r_{i,0}²) / 2,

which is a group of equations linear in the unknowns and can be written in matrix form (Equation (9)) as

    G_a z_a = h,    with z_a = [x, y, z, r_0]^T.

Solution of the Spatial Source Localization Model

When the number of microphones is N_m = 3, the spatial source localization model (Equation (9)) can be solved directly, as shown in [41]. However, the direct solution may produce two different answers, which leads to localization ambiguity, and the accuracy and robustness of source localization are not very good with N_m = 3. Adding redundant sensors effectively improves localization performance. With at least four non-reference microphones (N_m ≥ 4), the system is overdetermined, as the number of measurements exceeds the number of unknowns, and the least-squares (LS) method can be used to solve the overdetermined linear equations. Chan and Ho [42] proposed a closed-form solution algorithm, valid for both distant and close sources, which uses a twice-weighted LS to obtain the localization result. Chan's method gives an explicit solution with reasonable accuracy and is non-iterative with low computational complexity, which makes it well suited to the optimization calculations on acoustic array structures in this paper. The equations below follow the formulation of [42].

The First Weighted Least-Squares Solution

To solve the source localization model by least squares, Equation (9) is rewritten to construct an error vector. Because of noise in the TDOA estimates, the error vector is

    ψ = h − G_a z_a^0,

where the superscript 0 denotes the noise-free (true) values. Suppose the noise of the TDOA estimate is n_i. In practice the condition c·n_{i,0} << r_i^0 is usually satisfied (the range error is small compared with the true range), so the term quadratic in the noise on the right-hand side of Equation (11) can be ignored. The covariance matrix of ψ is then (Equation (12))

    Ψ = E(ψ ψ^T) = c² R Q R,

where R = diag{r_1^0, ..., r_{N_m}^0} and Q = E(n n^T) = Cov(n). The first weighted LS solution of Equation (10) is (Equation (13))

    z_a = (G_a^T Ψ^{−1} G_a)^{−1} G_a^T Ψ^{−1} h.

When the source is far from the array, each r_i^0 is close to r_0, so R ≈ r_0 I, and an approximate solution of Equation (13) is (Equation (14))

    z_a ≈ (G_a^T Q^{−1} G_a)^{−1} G_a^T Q^{−1} h.

When the source is close to the array, Equation (14) can first be used to obtain an initial solution to estimate R, which is then substituted into Equations (12) and (13) to get a more accurate result.

The Second Weighted Least-Squares Solution

The above solution for z_a treats x, y, z, and r_0 as independent. However, r_0 is related to the source location. To incorporate this relationship and improve the estimate, z_a is expressed as z_{a,1} = x^0 + e_1, z_{a,2} = y^0 + e_2, z_{a,3} = z^0 + e_3, z_{a,4} = r_0^0 + e_4, where e_1, ..., e_4 are the estimation errors of z_a and (x^0, y^0, z^0) are the coordinates of the real source. A new error vector ψ′ is then constructed from the element-wise squares of z_a, using the constraint r_0² = x² + y² + z²; substituting Equation (15) into Equation (16) and computing the covariance matrix of ψ′ gives Ψ′. The second weighted LS solution of Equation (16) is (Equation (19))

    z_a′ = (G_a′^T Ψ′^{−1} G_a′)^{−1} G_a′^T Ψ′^{−1} h′,

with z_a′ = [x², y², z²]^T. The matrix Ψ′ is not known exactly, since it contains the true values, but it can be approximated using the entries of z_a. If the source is far away, the covariance matrix of z_a can be represented as

    Cov(z_a) ≈ (G_a^T Ψ^{−1} G_a)^{−1},

and Equation (19) then reduces accordingly. The final sound source position is estimated as

    S = ±sqrt(z_a′),

where the signs are chosen to agree with the first-stage estimate z_a.
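As an illustration of the first stage of this solver, the sketch below implements the far-source form of Equation (14) in Python (our own simplified rendering, not the authors' implementation; the second stage and the final sign selection are omitted):

    import numpy as np

    def chan_first_stage(mics, tdoas, c=343.0, Q=None):
        """First-stage weighted LS of Chan's TDOA solver (far-source form).

        mics:  (N_m, 3) coordinates of the non-reference microphones,
               with the reference microphone M_0 assumed at the origin.
        tdoas: (N_m,) arrival-time differences tau_{i,0} in seconds.
        c:     speed of sound (m/s).
        Q:     (N_m, N_m) covariance of the TDOA noise; identity if None.
        Returns z_a = [x, y, z, r_0].
        """
        mics = np.asarray(mics, dtype=float)
        r = c * np.asarray(tdoas, dtype=float)   # range differences r_{i,0}
        K = np.sum(mics**2, axis=1)              # K_i = x_i^2 + y_i^2 + z_i^2
        G = np.column_stack((mics, r))           # rows [x_i, y_i, z_i, r_{i,0}]
        h = 0.5 * (K - r**2)
        W = np.eye(len(r)) if Q is None else np.linalg.inv(Q)
        return np.linalg.solve(G.T @ W @ G, G.T @ W @ h)

With at least four non-reference microphones the normal matrix G^T W G is generically invertible, which mirrors the overdetermination requirement N_m ≥ 4 discussed above.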
Numerical Optimization Method for Array Structures

Given a certain number of microphones, there are infinitely many possible spatial geometries for the array. Nevertheless, in practical scenarios it is necessary to find the optimal array structure that effectively reduces the positioning error for sound sources in the target area. Because the array consists of multiple microphones and each microphone's position has three independent variables, the structure optimization of the microphone array in this paper is a multidimensional optimization problem. The evolutionary algorithm is a global optimization approach with high robustness and broad applicability. Unlike classic optimization methods such as gradient descent and quasi-Newton methods, evolutionary algorithms do not require the gradient of the problem being optimized. They also make few or no assumptions about the optimization problem and have great advantages in unsupervised, complex multidimensional problems that cannot be solved by traditional deterministic algorithms [43]. The genetic algorithm (GA) and the particle swarm optimization (PSO) algorithm are evolutionary algorithms. GA searches for the optimal solution by imitating the mechanisms of selection and inheritance in nature; however, the choice of crossover rate and mutation rate seriously affects the quality of the solution and mostly depends on experience, and GA converges slowly for high-dimensional problems. PSO is a metaheuristic global optimization algorithm whose inner workings make sufficient use of probabilistic transition rules to search very large spaces of candidate solutions in parallel [44]. Compared with GA, PSO has the advantages of simplicity, easy implementation, and few parameters to adjust. PSO has no genetic operations such as crossover and mutation; instead, it steers the search through particle velocities. Another essential feature of PSO is that particles have memories: the search and update process follows the current optimal solutions, so PSO may converge to an optimum more quickly than GA. For the array structure optimization problem in this paper, the gradient of the objective function is difficult to derive, and the optimization of an array with many microphones is high-dimensional. These factors make PSO an effective choice, and an optimization model based on PSO was therefore constructed to optimize the geometric parameters of the microphone array under different localization scenarios.

Optimization Model Based on PSO

The PSO algorithm employs a swarm of particles that traverse a multidimensional search space to find optima. Each particle is a potential solution and is influenced by the experience of other particles as well as its own. Let p_j be the position of the j-th particle in the search space, with the number of particles set to N_p. A swarm of particles can then be expressed as

    P = {p_1, p_2, ..., p_{N_p}},

where each particle is a parameter vector of the form p_j = (l_1, α_1, β_1, ..., l_{N_m}, α_{N_m}, β_{N_m}).

A new fitness evaluation function for the array structure optimization is constructed from the mean squared error (MSE) and the variance (VAR) of the localization results, which together account for localization accuracy and robustness (Equation (26)):

    f(p) = φ_w · MSE(z_p) + (1 − φ_w) · VAR(z_p),

where φ_w ∈ [0, 1] is the weight value and z_p is the final estimated sound source position. MSE(z_p) is the mean squared error of the localization results,

    MSE(z_p) = (1/N_s) Σ_{s=1}^{N_s} ||z_{p,s} − z_{p,s}^0||²,

where z_{p,s}^0 is the coordinate of the s-th real source and N_s is the number of sources involved in the optimization. VAR(z_p) is the variance of the localization errors,

    VAR(z_p) = (1/N_s) Σ_{s=1}^{N_s} (d_s − d̄)²,    with d_s = ||z_{p,s} − z_{p,s}^0||,

where d̄ is the mean of the d_s. In Equation (26), the mean squared error judges the accuracy of the localization results and the variance judges their robustness; the weight between them can be adjusted according to the requirements of the localization scenario. The optimization (minimization) problem is then

    min f(p),    p ∈ R^d,

where R^d is the d-dimensional real space. The PSO algorithm is used to solve this problem. To seek the optimal solution, each particle moves in the direction of its previous best position (p_best) and the global best position (g_best) in the swarm:

    p_best(j, k) = arg min over iterations κ ≤ k of f(p_j(κ)),    g_best(k) = arg min over j of f(p_best(j, k)),

where k denotes the current iteration number and I_t the maximum iteration number. The velocity V and position p of the particles are updated as

    V_j(k+1) = w·V_j(k) + c_1·rand(·)·(p_best(j, k) − p_j(k)) + c_2·rand(·)·(g_best(k) − p_j(k)),
    p_j(k+1) = p_j(k) + V_j(k+1),

where V is the migration velocity of the particles, which is usually bounded to keep particles from flying out of the search space; rand(·) are uniformly distributed random variables in [0, 1]; c_1 and c_2 are positive constant learning factors; and w is the inertia weight used to balance global exploration and local exploitation. Shi [45] suggested determining the inertia weight as

    w = w_max − (w_max − w_min) · k / I_t,

where w_max and w_min are the maximum and minimum weights, respectively.
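A compact Python sketch of the fitness evaluation and one velocity/position update follows (hedged: variable and function names are ours, and the authors' implementation ran in Matlab; this only illustrates the update rules and the weighted fitness above):

    import numpy as np

    rng = np.random.default_rng(0)

    def fitness(errors, phi_w=0.5):
        """Weighted fitness: phi_w * MSE + (1 - phi_w) * VAR,
        where `errors` holds the localization errors of the N_s sources."""
        errors = np.asarray(errors, dtype=float)
        return phi_w * np.mean(errors**2) + (1 - phi_w) * np.var(errors)

    def pso_step(p, v, p_best, g_best, k, I_t,
                 c1=1.5, c2=1.5, w_max=0.8, w_min=0.4, v_max=None):
        """One PSO velocity/position update for all particles.

        p, v:   (N_p, d) positions and velocities.
        p_best: (N_p, d) per-particle best positions.
        g_best: (d,) global best position.
        """
        w = w_max - (w_max - w_min) * k / I_t      # Shi's linear inertia weight
        v = (w * v
             + c1 * rng.random(p.shape) * (p_best - p)
             + c2 * rng.random(p.shape) * (g_best - p))
        if v_max is not None:                      # bound the migration velocity
            v = np.clip(v, -v_max, v_max)
        return p + v, v

The learning factors and inertia weights shown as defaults are the values used later in the simulations (c_1 = c_2 = 1.5, w_max = 0.8, w_min = 0.4).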
PSO Optimization Procedure

The optimization procedure for the acoustic array is summarized as follows:

Step 1. Initialize the PSO parameters, including the number of particles N_p, the learning factors c_1 and c_2, the inertia weights w_max and w_min, and the total iteration number I_t.
Step 2. Initialize the particles' positions p_j(0) (j = 1, 2, ..., N_p) with a random distribution, such that the parameters of each particle do not go beyond the boundaries of the search space.
Step 3. Evaluate the fitness f(p_j(0)) of each particle and initialize p_best(j, 0) and g_best(0) accordingly.
Step 4. Update the velocity and position of each particle according to the update equations above, and evaluate the fitness f(p_j(k)).
Step 5. If f(p_j(k)) < f(p_best(j, k)), update the best known particle position p_best(j, k) = p_j(k); if f(p_best(j, k)) < f(g_best(k)), update the global best position g_best(k) = p_best(j, k).
Step 6. Check the termination criteria: f(g_best(k)) ≤ δ (where δ is a preset threshold), or the iteration number reaches the maximum I_t with the fitness function converging steadily. If neither holds, repeat Steps 4 and 5; otherwise, go to Step 7.
Step 7. Output g_best(k), which is the best optimized result.

The flow chart of the optimization procedure is shown in Figure 2.

Simulation and Analysis

To verify the effectiveness of the proposed method, two sound source localization scenarios were constructed for microphone array optimization: a ring-shaped sound source distribution and a cuboid sound source distribution. These two scenarios represent specific localization scenarios in practical applications, such as surround-sound source localization and road traffic flow noise source tracking. For each scenario, two structure optimization strategies were adopted to generate two kinds of optimal structures. In addition, the regular polyhedron microphone array structures proposed by Hu et al. and random array structures were used for a comparative study of sound source localization performance. The models established in Sections 2 and 3 were implemented in code and run on the Matlab platform.

Scenario I: Ring-Shaped Sound Source Distribution

In scenario I, the sound sources were distributed in a cyclic annular band, referred to here as the ring-shaped sound source distribution. The distribution was controlled by Equation (35):

    C_SI = (R_SI cos(θ_SI), R_SI sin(θ_SI), h_SI),

where C_SI is the coordinate of a sound source, R_SI is the radius of the cyclic annular band, θ_SI is the azimuth angle of the source, and h_SI is the height of the source. In scenario I, R_SI ∈ [6 m, 6.5 m]. The source distribution used for the array structure optimization under scenario I is shown in Figure 3. The microphone array was located at the center of the ring, with the reference microphone at the origin of the coordinate system; the locations of the other positioning microphones were obtained by the optimization calculation.

For TDOA-based sound source localization, the time difference estimation error is the main factor affecting localization accuracy. To facilitate the optimization and verification of microphone array structures, the TDOAs were obtained directly from the relative positions of the sound sources and the array microphones, and a noise component was added to represent measurement noise in real applications:

    τ̃_{i,0} = τ_{i,0} + η_{i,0},

where η_{i,0} is the time delay estimation noise, assumed to be a mutually independent, zero-mean stationary Gaussian random process with standard deviation σ. In this simulation, σ was set to 0.01.
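In code, generating the noisy TDOAs used for fitness evaluation amounts to computing exact delays from the geometry and adding the Gaussian perturbation. A sketch under our assumptions (names are ours; σ is treated here as being in the same units as the delays, which the text leaves unstated):

    import numpy as np

    def simulate_tdoas(source, mics, c=343.0, sigma=0.01, rng=None):
        """Exact TDOAs from geometry plus zero-mean Gaussian noise eta_{i,0}.

        source: (3,) true source position.
        mics:   (N_m, 3) non-reference microphones, with the reference
                microphone M_0 at the origin.
        """
        rng = rng or np.random.default_rng()
        source = np.asarray(source, dtype=float)
        mics = np.asarray(mics, dtype=float)
        r0 = np.linalg.norm(source)                        # distance to M_0
        ri = np.linalg.norm(mics - source, axis=1)         # distances to M_i
        tau = (ri - r0) / c                                # exact tau_{i,0}
        return tau + sigma * rng.standard_normal(len(ri))  # add eta_{i,0}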
Five microphones were selected for the microphone array structure optimization, to be compared with the tetrahedral structure array proposed by Hu et al. as well as a random array. One microphone was chosen as the reference microphone, with coordinates (0, 0, 0). The other microphones' coordinates were set as M_i = (l_i cos(β_i) cos(α_i), l_i cos(β_i) sin(α_i), l_i sin(β_i)), and the array parameters (l_i, α_i, β_i) were set as the properties of each particle.

The parameters of the PSO model were initialized as follows. The learning factors c_1 and c_2 were both set to 1.5, the maximum weight w_max to 0.8, and the minimum weight w_min to 0.4. The weight value φ_w of the fitness function was set to 0.5, meaning that localization accuracy and robustness were weighted equally. Two optimization strategies were applied to search for the optimal array structure. For the first kind of array optimization (Opt-array I), the distances l_i between M_i and M_0 were all set to the same length, 0.7 m, which is comparable with the tetrahedral structure array proposed by Hu et al. and with the random array; the azimuth angles α_i and elevation angles β_i were then the main geometric parameters to be optimized. For Opt-array I, the dimension of the particles in the PSO algorithm was therefore eight, since four non-reference microphones were used in scenario I, and the constraints of the search space were α_i ∈ [0°, 360°] and β_i ∈ [−90°, 90°]. The number of particles N_p was set to 250. For the second kind of array optimization (Opt-array II), the radial distance l_i was not predefined: the azimuth angle α_i, the elevation angle β_i, and the radial distance l_i were all optimized. For convenience, l_i was limited to between 0.2 m and 0.8 m, considering typical array sizes for sound source localization. The particle dimension for Opt-array II was therefore twelve, with constraints α_i ∈ [0°, 360°], β_i ∈ [−90°, 90°], and l_i ∈ [0.2 m, 0.8 m]. Since the search space is much larger than for Opt-array I, N_p was set to 450 for Opt-array II.

The optimization model based on PSO was then run on the Matlab platform to obtain the optimal array structures under scenario I. The fitness evolution curves of Opt-array I and Opt-array II are shown in Figure 4; they become stable after about 200 iterations, indicating that the optimization process essentially converged. The results are taken as the optimal array structures under scenario I. For the manufacturing of a microphone array, an angular precision of 5° is achievable, so the angle values of the optimized arrays were rounded to the nearest 5°. The microphone coordinates and geometric parameters of the optimal arrays, the tetrahedral array proposed by Hu et al., and a random array are listed in Table 1. As Table 1 shows, the geometric parameters of Opt-array I and Opt-array II differ: the average radial distance of Opt-array II was larger than that of Opt-array I, while the four radial distances of Opt-array II differed only slightly from one another. The array structures of Opt-array I and Opt-array II are shown in Figure 5. To verify the performance of the optimal arrays, a scenario with 200 sound sources randomly distributed in the cyclic annular band of scenario I was constructed, as shown in Figure 6.
Opt-array I, Opt-array II, the tetrahedral structure array, and the random array were used to locate these sources. The distances between the located sources and the corresponding real sources were recorded to measure positioning accuracy and robustness. To analyze the influence of input noise on positioning performance, Gaussian random noise η_{i,0} with four different standard deviations was added to the time delay estimates, namely σ = 0.005, σ = 0.01, σ = 0.015, and σ = 0.018. The statistics are shown in Figure 7, where the height of each rectangular bar is the mean localization error and the length of the line bar is the standard deviation of the localization error.

Figure 7 shows that the mean values and standard deviations of the localization error grow as the input noise of the time delay estimation increases. At the same input noise amplitude, the mean value and standard deviation for Opt-array I and Opt-array II were much lower than for the tetrahedral array and the random array, and the larger the input noise, the more significant the gap. This means that arrays optimized by the proposed method improve the accuracy and robustness of TDOA-based sound source localization, illustrating the effectiveness of the proposed array structure optimization method. Under all four input noise levels, the mean value and standard deviation of the localization error for the random array were much larger than for Opt-array I, Opt-array II, and the tetrahedral array, which shows that array optimization, whether by the method of this paper or by Hu's method, has a positive effect; a random array may occasionally achieve excellent positioning performance, but the probability is tiny. Moreover, the mean value and standard deviation of the localization error for Opt-array II were lower than for Opt-array I. Considering that the four radial distances l_i of Opt-array II differed after optimization, optimizing the radial distance l_i contributes to the positioning performance of the microphone array in addition to optimizing the azimuth angle α_i and the elevation angle β_i.

Scenario II: Cuboid-Shaped Sound Source Distribution

In scenario II, the sound sources were distributed in a cuboid spatial band, referred to here as the cuboid-shaped sound source distribution. The cuboid measured 15 m × 6 m × 3 m. The microphone array was located on one side of the cuboid distribution, with the reference microphone at the origin of the coordinate system. The constructed scenario II is shown in Figure 8. In the simulation of scenario II, five microphones were again used for the array structure optimization and compared with the tetrahedral structure array proposed by Hu et al. and a random array. The noise component η_{i,0}, with a zero-mean Gaussian distribution, was introduced into the time delay estimates; its standard deviation σ was the same as in scenario I, namely 0.01. The parameters of the PSO model were also set as in scenario I. The fitness evolution curves of Opt-array I and Opt-array II under scenario II are shown in Figure 9; they become stable after about 150 iterations.
At the beginning of the iteration, the fitness value of Opt-array II was higher than that of Opt-array I. After a number of iterations, however, the fitness value of Opt-array II fell below that of Opt-array I as the iterations approached convergence, which suggests that the optimized structure of Opt-array II may have better localization performance than Opt-array I. The angle values of the optimized arrays were again rounded to the nearest 5°. The microphone coordinates of the optimal arrays, the tetrahedral array proposed by Hu et al., and the random array are listed in Table 2.

To verify the performance of the arrays, a scenario with 400 sound sources randomly distributed in the cuboid spatial band was constructed, as shown in Figure 11. Opt-array I, Opt-array II, the tetrahedral structure array, and the random array were used to locate these sources. Gaussian random noise η_{i,0} with five different standard deviations was added to the time delay estimates, namely σ = 0.002, σ = 0.005, σ = 0.008, σ = 0.01, and σ = 0.012. The statistics of the distances between the located sources and the corresponding real sources are shown in Figure 12.

Figure 12 shows that the mean values and standard deviations of the localization error grow as the input noise of the time delay estimation increases. At the same input noise amplitude, the mean value and standard deviation for Opt-array I and Opt-array II were much lower than for the tetrahedral array proposed by Hu et al. and the random array, and the gap widened rapidly with increasing input noise: the arrays optimized by the proposed method improve the accuracy and robustness of TDOA-based sound source localization. Under all five input noise levels, the mean value and standard deviation of the localization error for the random array were much larger than for Opt-array I, Opt-array II, and the tetrahedral array, especially at large input noise, again illustrating that array optimization has a positive effect; random arrays have little chance of achieving excellent positioning performance under specific localization scenarios. The mean value and standard deviation of the localization error for Opt-array II were also lower than for Opt-array I, indicating that optimizing the radial distance l_i contributes to the positioning performance in addition to optimizing the azimuth angle α_i and the elevation angle β_i. Moreover, the localization error reduction of Opt-array II under scenario II was more significant than under scenario I. Since the spread of the radial distances l_i of Opt-array II was much larger under scenario II than under scenario I, the radial distance was a more significant factor of the array structure optimization in scenario II. In addition, compared with scenario I, the mean values and standard deviations in scenario II were much larger, with the standard deviation rising sharply as the input noise increased. The main reason is that the localization area in scenario II was much larger than in scenario I and the sound sources were asymmetrically distributed.
For scenario II, increasing the number of array microphones may help to reduce positioning errors and improve robustness. Therefore, another optimization case was run for scenario II in which seven microphones were used for the array structure optimization. The octahedron structure array proposed by Hu et al. [40] and a random array with seven microphones were used for comparison. Two optimization strategies were again applied: in the first (Opt-array-7mic I), the radial distances l_i between M_i and M_0 were all set to 0.7 m; in the second (Opt-array-7mic II), the radial distances l_i, the azimuth angles α_i, and the elevation angles β_i were all optimized. The constraints and initial parameters of the optimization model were the same as in the five-microphone case. Since the search space is much larger than with five microphones, the numbers of particles N_p for Opt-array-7mic I and Opt-array-7mic II were set to 400 and 650, respectively.

The optimal array structures were obtained by running the optimization model on the Matlab platform under scenario II. The geometric parameters of the optimal arrays, the octahedron array, and the random array are listed in Table 3; the angles of the optimized arrays were rounded to the nearest 5°. The array structures of Opt-array-7mic I and Opt-array-7mic II are shown in Figure 13. Table 3 and Figure 13 show that the structures of Opt-array-7mic I and Opt-array-7mic II differ, and that the spread of the radial distances l_i of Opt-array-7mic II was much smaller than that of Opt-array II under scenario II.

To verify the performance of the optimal arrays, the scenario of randomly distributed sound sources in the cuboid spatial band was constructed as in Figure 11. Gaussian random noise η_{i,0} with five standard deviations was added to the time delay estimates, namely σ = 0.002, σ = 0.005, σ = 0.008, σ = 0.01, and σ = 0.012. The statistics of the distances between the located sources and the corresponding real sources are shown in Figure 14. The mean values and standard deviations of the localization error for Opt-array-7mic I and Opt-array-7mic II were lower than for the octahedron array proposed by Hu et al. and the random array, again illustrating the effectiveness of the proposed array optimization method. Comparing Figures 12 and 14, the mean values and standard deviations of the localization error for Opt-array-7mic I and Opt-array-7mic II were lower than for Opt-array I and Opt-array II under scenario II. Since the standard deviations of the optimal seven-microphone arrays were significantly lower than those of the optimal five-microphone arrays at large input noise, optimal array structures with more microphones can significantly improve the robustness of TDOA-based source localization. In addition, the mean values and standard deviations of the localization error for the octahedron array and the seven-microphone random array were also much lower than for the tetrahedral array and the five-microphone random array, which demonstrates that increasing the number of microphones can greatly improve the positioning accuracy and robustness of a TDOA-based array.
Conclusions

This paper proposed a method of microphone array optimization for sound source localization based on TDOA under specific localization scenarios, which can be applied to the optimization of arbitrary array structures without prior information: for any number of microphones, a better array structure can be obtained under any localization scenario. The proposed method is a numerical approach based on the particle swarm optimization algorithm. The mean squared error and the variance of the localization results, combined through a weight value, form the fitness function of the optimization model, which accounts for both positioning accuracy and robustness. The geometric structure of the microphone array was expressed in parametric form, assigned as particle attributes, and fed into the optimization model to obtain the optimized results.

Two specific localization scenarios were constructed for the array structure optimization, and for each, two optimization strategies were used to obtain two optimal array structures. The optimized structures were compared with the regular polyhedron arrays and random arrays under different input noise amplitudes. For scenario I, the mean value and standard deviation of the localization error for Opt-array I and Opt-array II were much lower than for the tetrahedral array and the random array, and the higher the input noise, the more significant the gap. Under four input noise levels, the localization error mean and standard deviation were largest for the random array and smallest for Opt-array II. The results indicate that the array optimization had a positive effect and that optimizing the radial distance l_i contributed to the positioning performance of the microphone array under scenario I. For scenario II, the mean value and standard deviation for Opt-array I and Opt-array II were also much lower than for the tetrahedral array and the random array. Under five input noise levels, the localization error mean and standard deviation were again largest for the random array and smallest for Opt-array II; both the array optimization and the optimization of the radial distance l_i showed a positive effect on the positioning performance under scenario II. Moreover, the localization error reduction of Opt-array II under scenario II was more significant than under scenario I; since the spread of the radial distances of Opt-array II was much larger under scenario II, the radial distance was a more significant optimization factor there. Under scenario II, the localization error mean and standard deviation of the optimal arrays were much higher than under scenario I. A seven-microphone array was therefore introduced into the optimization under scenario II and compared with the octahedron array and a random array. The results show that, under five input noise levels, the localization error mean and standard deviation were smallest for Opt-array-7mic II and largest for the random array. The mean value and standard deviation of the optimal seven-microphone arrays were lower than those of the optimal five-microphone arrays, with the standard deviation in particular being significantly lower.
This indicates that an optimal array structure with more microphones can significantly improve the robustness of source localization based on TDOA. For both specific localization scenarios, the comparison shows that the localization accuracy and robustness of the optimized array structures were better than those of the regular polyhedron arrays proposed by Hu et al. and of random array structures, which illustrates the effectiveness of the proposed array structure optimization method. Random arrays may occasionally achieve excellent positioning performance, but the likelihood is small. Optimizing the radial distance l_i contributed to the positioning performance of the microphone array in addition to optimizing the azimuth angle α_i and the elevation angle β_i, particularly for scenario II. In future work, the efficiency of the optimization algorithm can be studied further, as well as the correlation between the positioning performance of the array and the array geometric parameters.
Evaluation of D-loop hypervariable region I variations, haplogroups and copy number of mitochondrial DNA in Bangladeshi population with type 2 diabetes

The profound impact of the mitochondrion on cellular metabolism has been well documented. Since type 2 diabetes (T2D) is a metabolic disorder, mitochondrial dysfunction is intricately linked with the disease pathogenesis. Mitochondrial DNA (mtDNA) variants are involved in mitochondrial dysfunction and play a pivotal role in susceptibility to T2D. In this study, we sought to find the association of mtDNA variants within the D-loop hypervariable region I (HVI), haplogroups and mtDNA copy number with T2D in the Bangladeshi population. A total of 300 unrelated Bangladeshi individuals (150 healthy and 150 patients with T2D) were recruited; their HVI regions were amplified and sequenced using Sanger chemistry. The Haplogrep2 and Phylotree17 tools were employed to determine the haplogroups. MtDNA copy number was measured using primers for the mitochondrial tRNA-Leu(UUR) gene and the nuclear β2-microglobulin gene. Variants G16048A (OR: 0.12, p = 0.04) and G16129A (OR: 0.42, p = 0.007) were found to confer a protective role against T2D according to logistic regression analysis. Along with G16129A, two further variants, C16294T and T16325C, demonstrated a protective role against T2D when age and gender were adjusted. Of the 19 major haplogroups identified, haplogroups A and H showed significant association with the risk of T2D after adjustment. The mtDNA copy numbers were stratified into four groups according to the quartiles (groups with lower, medium, upper and higher mtDNA copy numbers, designated LCN, MCN, UCN and HCN, respectively). Patients with T2D had significantly lower mtDNA copy number than their healthy counterparts in the HCN group. Moreover, six mtDNA variants were significantly associated with mtDNA copy number in the participants. Our study thus suggests that certain haplogroups and novel variants of mtDNA are significantly associated with T2D, while a decreased (though not significant) mtDNA copy number was observed in patients with T2D. Large-scale studies are warranted to establish the association of these novel variants and haplogroups with type 2 diabetes.

Introduction

The mitochondrion is intricately involved in cellular metabolism, supplying the energy for cell function and growth. On the other hand, mitochondrial metabolism can also be harmful, as it generates free radicals that cause oxidative stress and mediate apoptosis. Free radicals can also damage DNA, leading to mutation. It is now well documented that mutations in mitochondrial DNA (mtDNA) contribute to the pathogenesis of type 2 diabetes (T2D) and insulin resistance (Gordon et al., 2015; Jiang et al., 2017; Liou et al., 2012; Martin and McGee, 2014; Szendroedi et al., 2012; Ye et al., 2013). T2D is a multifactorial polygenic disease and a global health burden. The prevalence of T2D is increasing gradually among the world population: about 500 million individuals with confirmed T2D were reported in 2018 (Kaiser et al., 2018), and by 2045 about 10.9% of the world population may suffer from diabetes (IDF Diabetes Atlas, 9th Edition, 2019). Early diagnosis and proper management, on the other hand, provide effective health benefits to patients with T2D.
Variations within mtDNA have been shown to augment the production of reactive oxygen species, which in turn further deteriorates the pathological condition of patients with T2D (Rösen et al., 2001). Many such variants have been identified within the coding regions of the mitochondrial genome, with inconsistent results (Jiang et al., 2021; Saha et al., 2019; Sun et al., 2019). On the other hand, polymorphisms in the non-coding D-loop region make important contributions to the proper functioning of mitochondria (Liou et al., 2010, 2012). The non-coding control region (D-loop), spanning nucleotide positions 16024 to 576 across the origin and comprising 1124 base pairs, contains three hypervariable regions (HV1: 16024-16383; HV2: 57-372; HV3: 438-574). The hypervariable regions are hotspots for mtDNA variation (Stoneking, 2000; Tipirisetti et al., 2014). In previous studies, different variants in hypervariable region I (HV1) were found to be associated with T2D. Variant G16390A was weakly associated with T2D in a Tunisian population (Hsouna et al., 2015). C16270T and C16320T were significantly associated with increased risk of T2D in a Moroccan population (Charoute et al., 2018). The poly-C tract (16184-16193) of HV1 has been the prime focus of many association studies (Liao et al., 2008; Meiloud et al., 2013; Mueller et al., 2011; Palmieri et al., 2011; Saldaña-Rivera et al., 2018). The T16189C polymorphism, one of the most widely studied D-loop variants, is reported to be associated with the regulation of reactive oxygen species production and mtDNA copy number (Lin et al., 2005; Liou et al., 2010). Both T16519C and T16189C were found to be associated with T2D in an Italian population (Navaglia et al., 2006), and a meta-analysis revealed an association of T16189C with T2D and cancer (Kumari et al., 2018). The D-loop region is also very important because it is essential for the replication of mtDNA and the regulation of its copy number. Alterations of mtDNA copy number, an indirect marker of mitochondrial dysfunction (Malik and Czajka, 2013), have been found in several diseases (Filograna et al., 2020), and reduced mtDNA copy number in patients with type 2 diabetes is now well evidenced (Al-Kafaji et al., 2018; Fazzini et al., 2021; Latini et al., 2020; Xu et al., 2012). Busnelli et al. (2019) demonstrated an indirect relation between reduced mtDNA copy number and oxidative stress along with inflammation. Variants within mtDNA are population specific and cluster in lineages that in turn define haplogroups. Haplogroup J has been reported to be significantly associated with increased risk of T2D (Crispim et al., 2006), while a European population did not show such an association (Chinnery et al., 2007); studies of the association between haplogroups and T2D have thus produced conflicting results. Our recent work demonstrated a protective role of the G10398A polymorphism (within the NADH dehydrogenase subunit 3, or ND3, gene), while the C5178A variant (within the NADH dehydrogenase subunit 2, or ND2, gene) was found to be associated with the risk of T2D (Saha et al., 2019). However, data regarding the association of the hypervariable region of mtDNA with T2D in the Bangladeshi population are completely lacking.
Thus, in the present study we analyzed the HVI segment within the D-loop region of mtDNA to i) investigate the frequency of the most frequent variants and their probable association with T2D, ii) determine and compare haplogroups between patients with T2D and healthy individuals, and iii) determine and compare mtDNA copy number between patients with T2D and healthy individuals in the Bangladeshi population.

Study participants

A total of 300 individuals participated in the current study: 150 healthy individuals and 150 patients with T2D. The study was approved by the ethical review committee of the Department of Biochemistry and Molecular Biology, University of Dhaka. Type 2 diabetic patients were diagnosed from the levels of fasting blood glucose (>6.9 mmol/L) and HbA1c (>6.5%) according to the criteria set by the World Health Organization (WHO). Healthy participants were those with no symptoms of infectious disease, liver or kidney disorders, or other noncommunicable diseases. Pregnant women and children were excluded from this study. Anthropometric and demographic data were also recorded. After obtaining consent from the participants, five mL of blood was collected from each and stored at −80 °C for further analysis. Note that the case and control participants in the present study were not age matched, as they were randomly selected; the statistical analyses performed by adjusting for age in the regression models account for the age difference between patients and controls.

Extraction of genomic and mitochondrial DNA

The cellular fraction of blood was used for the extraction of genomic and mitochondrial DNA. Genomic DNA was extracted using the organic phenol-chloroform extraction method as described in our previous studies (Huda et al., 2018; Goswami et al., 2021), while mtDNA was extracted and confirmed according to the protocol described by Saha et al. (2019).

Determination of mitochondrial DNA copy number

MtDNA copy number was determined using quantitative polymerase chain reaction (qPCR). Primers specific to the mitochondrial tRNA-Leu(UUR) gene were: forward 5′-tgctgtctccatgtttgatgtatct-3′ and reverse 5′-tctctgctccccacctctaagt-3′. Primers specific to the single-copy nuclear gene beta-2 microglobulin (β2M) were: forward 5′-cacccaagaacagggtttg-3′ and reverse 5′-tggccatgggtatgttgtta-3′. Each 10 μL reaction mixture contained 1 μL of 5 μM forward primer, 1 μL of 5 μM reverse primer, 5 μL of 2x SYBR Green SuperMix and 3 μL of 20 ng/μL template DNA. All reactions were run in triplicate. The following program was used on the StepOnePlus Real-Time PCR System (Applied Biosystems; Thermo Fisher Scientific, Inc., USA): 1 cycle of 50 °C for 2 minutes and 1 cycle of 95 °C for 20 seconds, followed by 40 cycles of denaturation at 95 °C for 15 seconds and annealing/extension at 56 °C for 30 seconds. The mtDNA copy number was calculated as 2 × 2^ΔCt, where ΔCt is the difference between the Ct values of the β2M and tRNA-Leu(UUR) genes.
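As a worked example of this formula, the following small Python sketch (function and variable names are ours) computes the copy number from a pair of Ct values; the analyses in the paper itself were carried out in R and SPSS:

    def mtdna_copy_number(ct_b2m, ct_trna_leu):
        """mtDNA copy number per diploid genome: 2 * 2**(Ct(B2M) - Ct(tRNA-Leu)).

        ct_b2m:      Ct of the nuclear single-copy B2M gene.
        ct_trna_leu: Ct of the mitochondrial tRNA-Leu(UUR) gene.
        """
        delta_ct = ct_b2m - ct_trna_leu
        return 2 * 2**delta_ct

    # Example: hypothetical Ct(B2M) = 24.0 and Ct(tRNA-Leu) = 17.0
    # give delta_ct = 7 and 2 * 2**7 = 256 mtDNA copies per cell.

Because mtDNA is far more abundant than the nuclear single-copy gene, its Ct is lower, so ΔCt is positive and the estimated copy number is well above 2.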
Amplification of mitochondrial hypervariable region I and Sanger sequencing

The hypervariable region I of mtDNA was amplified using forward primer 5′-accagtcttgtaaaccggag-3′ and reverse primer 5′-gtgggctatttaggctttat-3′, which amplify a stretch of mtDNA from nucleotide 15911 to 16540. The polymerase chain reaction was performed in a total volume of 30 μL with an initial denaturation step of 5 minutes at 95 °C, followed by 40 cycles of 30 seconds at 95 °C, 30 seconds at 48 °C and 40 seconds at 72 °C, and a final extension step of 5 minutes at 72 °C. All PCR amplicons were verified by agarose gel electrophoresis (2.5%) and visualized with ethidium bromide staining; bands of 630 bp confirmed amplification of the desired mtDNA segment. Each amplicon was purified with the Wizard® SV Gel and PCR Clean-Up System (Promega, USA) and then sequenced using Sanger chemistry. After sequencing, 11 chromatograms were excluded because they were noisy, with weak baseline signals; thus, a total of 289 chromatograms (145 from healthy individuals and 144 from patients with T2D) were analyzed in the present study.

Analysis of sequence data and allele frequencies

Unambiguous sequences were obtained between positions 16017 and 16519 and were aligned to the revised Cambridge Reference Sequence (rCRS; NCBI Reference Sequence NC_012920.1) with the help of the Geneious software (version 2021.0.3). Haplogrep2 was employed to determine the haplogroups. Allele frequencies were obtained using Mitomap (https://www.mitomap.org/MITOMAP). Comparisons of allele frequency distributions between healthy individuals and patients with T2D were performed using data from the Human Mitochondrial DNA Genome Polymorphism Database, available at http://mtsnp.tmig.or.jp/mtsnp/index_e.shtml.

Statistical analysis

Demographic data were obtained from a structured questionnaire, and quantitative data were compared between patients with T2D and healthy individuals using SPSS v21.0. Results are expressed as mean ± SD for continuous variables and as percentages for categorical variables. Odds ratios with p-values were calculated using the epitools package (Aragon, 2020) in R to assess the association of mtDNA variants as well as individual haplogroups with T2D. Haplogroup M was used as the reference in the odds ratio calculations, since it was the predominant haplogroup. Statistical analyses were also performed for the association of SNPs and haplogroups with T2D after adjusting for the confounding factors age and gender. The quartiles of mtDNA copy number in the whole study population were computed, and the copy numbers were stratified accordingly: the low copy number group (LCN) comprises individuals with mtDNA copy number below the lower quartile; the medium group (MCN), those at or above the lower quartile but below the median; the upper group (UCN), those at or above the median but below the upper quartile; and the higher group (HCN), those at or above the upper quartile. Normality of the mtDNA copy number was assessed with the Shapiro-Wilk test, and variables that were not normally distributed were analyzed with the Wilcoxon test; both tests were performed in R. Graphs were plotted in R using the ggplot2 package (Wickham, 2016). The association of mtDNA copy number with mtDNA variants was also analyzed in R, with the differences in mean mtDNA copy number calculated between the mutant allele and the rCRS allele. For these association analyses, disease condition (case or control) was treated as an additional confounding factor and adjusted along with age and gender; the rationale for adjusting for disease condition was the decreased mtDNA copy number in patients with T2D compared with healthy individuals.
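A sketch of the quartile stratification and the two-group comparison follows (hedged: the paper's analyses were done in R, so this Python version is only illustrative; the rank-sum form of the Wilcoxon test, i.e., the Mann-Whitney U test, is used here because the two groups are independent):

    import numpy as np
    from scipy.stats import mannwhitneyu

    def stratify_by_quartiles(cn):
        """Assign each mtDNA copy number to LCN/MCN/UCN/HCN using the
        quartiles of the whole cohort."""
        cn = np.asarray(cn, dtype=float)
        q1, q2, q3 = np.percentile(cn, [25, 50, 75])
        labels = np.empty(len(cn), dtype=object)
        labels[cn < q1] = "LCN"
        labels[(cn >= q1) & (cn < q2)] = "MCN"
        labels[(cn >= q2) & (cn < q3)] = "UCN"
        labels[cn >= q3] = "HCN"
        return labels

    def compare_groups(cn_healthy, cn_t2d):
        """Two-sided Wilcoxon rank-sum (Mann-Whitney U) comparison of the
        copy numbers of healthy individuals and patients with T2D."""
        return mannwhitneyu(cn_healthy, cn_t2d, alternative="two-sided")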
General characteristics of the study participants

Among the T2D patients, 73 (48.67%) were male and 77 (51.33%) were female. The average BMI of the T2D patients was 26.23 ± 3.37 kg/m² and the average age was 52.42 ± 9.77 years; their mean systolic blood pressure was 126.03 ± 6.61 mmHg and mean diastolic blood pressure 84.75 ± 6.80 mmHg. Among the healthy individuals, there were 87 males (58%) and 63 females (42%); the average BMI was 24.07 ± 2.80 kg/m² and the average age 38.17 ± 12.31 years, with mean systolic and diastolic blood pressures of 120.75 ± 8.20 mmHg and 80.64 ± 7.89 mmHg, respectively. The anthropometric and demographic data of male and female T2D patients and healthy individuals are shown in Table 1. In T2D patients, the estimated mean glycated hemoglobin (HbA1c, %) was 8.57 ± 1.50, whereas in controls it was 5.51 ± 0.35. The biochemical parameters are also presented in Table 1. Statistical analysis showed that all parameters (both demographic and biochemical) differed significantly between healthy controls and T2D patients.

Frequency distribution of variants within the D-loop region of the mtDNA sequence

A total of 147 variants were identified within the D-loop region of mtDNA located within positions 16017-16525, which also harbors hypervariable region I (HVI, 16024-16383). The Manhattan plot presented in Figure 1, constructed using the qqman package (Turner, 2018) in R, summarizes the variants in the HVI region of the mtDNA D-loop. The variants were found at 131 nucleotide positions. Among them, 32 were unique to T2D and 37 unique to the healthy controls, while the remaining 78 were found in both groups of participants. The variants were classified into three major groups on the basis of minor allele frequency: common variants (≥10%), variants with low frequency (≥5% and <10%) and rare variants (<5%). Supplementary Table 1 presents the frequency distribution and the association analysis of each variant with T2D. Of the 147 variants, 132 (89.80%) were rare, 6 (4.08%) were of low frequency and 9 (6.12%) were common. When the frequencies were analyzed separately in the two groups, of the 115 variants found in healthy controls, 4 (3.48%) were common, four were of low frequency and the remaining 107 (93.04%) were rare; of the 110 variants identified in patients with T2D, 2 (1.82%) were common, 7 (6.36%) were of low frequency and the remaining 101 (91.82%) were rare. Further investigation revealed that 15 positions harbored two or more nucleotide changes. Among them, three different types of variants were identified at positions 16093 and 16318, including transitions (T-C and A-G) and transversions (T-G, A-C, A-T and T-A), in a total of 41 individuals (Supplementary Table 1), while the remaining 13 positions contained two different types of variants.
Interestingly, further observation revealed that 5 (33.33%) of these positions lie within a continuous C-enriched stretch of 10 nucleotides from 16256 to 16265 (16256 CCACCCCTC 16265 according to the rCRS). This stretch harbors a total of 14 variants (8 unique) present in 36 individuals, of whom 21 were healthy individuals and 15 were patients with T2D. However, statistical analysis revealed that none of the variants at these positions had a significant association with T2D (Supplementary Table 1).

Association of various variants with type 2 diabetes

Of the total 147 variants identified within the HVI region of mtDNA, the highest frequency was observed at position 16519 (T-C, 69.55%), followed by positions 16223 (C-T, 59.52%), 16311 (T-C, 18.34%), 16129 (G-A, 17.99%), 16362 (T-C, 14.19%), 16126 (T-C, 12.46%), 16051 (A-G, 10.73%), 16319 (G-A, 10.38%) and 16189 (T-C, 10.03%). The distribution of allele frequencies in the present study participants was broadly similar to the frequencies reported in the Mitomap database, which comprises 51,836 full-length mtDNA sequences. However, the frequency of the A allele at position 16319 was two-fold, of the G allele at position 16051 five-fold, and of the T allele at position 16223 1.5-fold higher in the Bangladeshi population than reported in the Mitomap allele frequency database, while the frequency of the C allele at position 16189 was 2.5-fold lower in the present study population. Among the common variants, the presence of A alleles at positions 16129 and 16048 instead of G alleles showed a protective role against T2D (OR: 0.42, p = 0.007 and OR: 0.12, p = 0.04, respectively) without adjusting for the confounding factors age and gender. After adjusting for age and gender, of these two associated SNPs only G16129A retained a significant protective role against developing T2D (Table 2). Moreover, after age adjustment we identified two further variants (C16294T and T16325C) conferring a protective role against T2D; these three SNPs, G16129A, C16294T and T16325C, remained protective even after adjusting for both age and gender (Table 2).

Poly-C 16184-16193 tract

Analysis of the poly-cytosine (poly-C) 16184-16193 tract within the D-loop of the mtDNA sequence revealed that 29 (10.03%) of all individuals had a C nucleotide instead of T at position 16189; among them, 12 (4.15%) were healthy individuals and 17 (5.88%) were patients with T2D (Table 3).

Figure 1. Modified Manhattan plots of the variants in the hypervariable region I of the mtDNA D-loop. The y-axis shows −log10 of the p-values and the x-axis the nucleotide positions in the D-loop; the red line marks −log10(0.05), so points on or above the line are statistically significant (p ≤ 0.05). A) Without adjustment (crude); green points are statistically significant SNPs. B) After age adjustment; dark blue points are statistically significant SNPs. C) After age and gender adjustment; turquoise points are statistically significant SNPs. In all panels, plum-colored points are SNPs that are not statistically significant.
Further scanning of this region identified a C-to-T substitution at position 16184 in 2 individuals (1 healthy, 1 T2D), at position 16185 in ten individuals (6 healthy, 4 T2D), at position 16187 in 2 healthy individuals, and at position 16188 in 1 healthy individual; five individuals had T instead of C at position 16192 (3 healthy, 2 T2D), and 2 individuals (1 healthy, 1 T2D) had T instead of C at position 16193. The nucleotides at positions 16186, 16190 and 16191 were conserved in all participants. As is evident from Table 3, the poly-C tract variants were almost evenly distributed between the two groups of study participants, and no significant association of C16189T was observed. Even after adjusting for the confounding factors (age alone, and age together with gender), the frequencies of variants within the poly-C tract showed no significant association with T2D.

Haplogroup analysis

From the analysis of the D-loop region, a total of 19 haplogroups (A, B, D, E, F, H, J, L, M, N, O, P, R, S, T, U, X, W, Z) were identified. Among these, haplogroups B, E, J and O were detected only in healthy individuals, while haplogroups P, S and X were found only in patients with T2D. The frequencies of the haplogroups in the study participants are presented in Table 4: 40.48% of the participants belonged to macrohaplogroup M, followed by 14.18%, 10.73% and 7.2% belonging to haplogroups H, U and R, respectively. Apart from the haplogroups unique to one group, the frequencies of the common haplogroups were almost evenly distributed, although haplogroups N and T were three-fold more frequent in healthy individuals, while haplogroups A, D and L were respectively six-fold, three-fold and more than two-fold more frequent in patients with T2D than in their healthy counterparts. Logistic regression analysis demonstrated that haplogroup A is significantly associated with the risk of T2D in our population (OR: 5.61, CI: 0.89-148.49, p = 0.05). After adjusting for age and gender, both haplogroups A and H showed association with the risk of T2D (Table 4).

mtDNA copy number in healthy individuals and patients with type 2 diabetes

The median mtDNA copy number of healthy individuals was 334.37 (interquartile range, IQR = 506.24), which did not differ significantly from that of patients with T2D, 287.73 (IQR = 421.70), by the Wilcoxon test (p = 0.16). Greater variation of mtDNA copy number was observed in healthy individuals than in patients with T2D, as indicated by the IQR (Figure 2). The mtDNA copy number of the study participants was not normally distributed (p < 2.2 × 10⁻¹⁶ by the Shapiro-Wilk test); normality was also checked with a Q-Q plot (Supplementary Figure 1). The lower quartile, median and upper quartile of mtDNA copy number in the study population were 147.37, 314.24 and 603.65, respectively. The LCN group thus had X < 147.3732, the MCN group 147.3732 ≤ X < 314.2396, the UCN group 314.2396 ≤ X < 603.6459, and the HCN group X ≥ 603.6459, where X denotes the mtDNA copy number. The distributions of the mtDNA copy numbers of healthy individuals and patients with T2D in the different groups are portrayed in Figure 3.
Even when the median mtDNA copy numbers of healthy individuals and patients with T2D were compared within the LCN, MCN and UCN strata, no significant differences were observed according to the Wilcoxon test (p = 0.08, 0.80 and 0.55, respectively). However, the median mtDNA copy number between the two groups of study participants differed significantly in the HCN group (p = 0.03). The mtDNA copy number of healthy individuals in the HCN group (IQR = 380.01) varied more than that of patients with T2D in that group (IQR = 228.99). Variants G16048A and G16129A were associated with significantly increased mtDNA copy number without adjusting for age, gender and condition of the participants (Table 5). On the other hand, variants T16126C, C16234T, T16311C and T16519C were found to be associated with reduced mtDNA copy number before adjustment. After adjusting for age, gender and condition, variants G16048A, T16126C, C16234T, T16311C and T16519C remained significantly associated with mtDNA copy number, and a new variant, C16291T, was found to be associated with increased copy number after adjusting for the confounders, as shown in Table 5.
[Figure 2. Hybrid boxplot of mtDNA copy number after stratification according to quartiles. LCN: copy number below the lower quartile; MCN: copy number below the median but greater than or equal to the lower quartile; UCN: copy number greater than or equal to the median but below the upper quartile; HCN: copy number greater than or equal to the upper quartile. Cornflower blue and crimson represent healthy individuals (HI) and patients with type 2 diabetes (T2D), respectively. Statistical analyses revealed that mtDNA copy number was significantly lower (p = 0.03) in patients with T2D compared to HI only in the HCN group. The medians and interquartile ranges (IQRs) of each group are presented; the black circles in the HCN column represent outliers of that group.]
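As a schematic illustration of the stratified comparison above, the sketch below reproduces the quartile binning (LCN/MCN/UCN/HCN) and the per-stratum group test. The copy-number values are synthetic stand-ins for the real measurements; note that scipy's mannwhitneyu is the rank-sum form of the Wilcoxon test for two independent groups, which matches the comparison described here.

# Minimal sketch of the quartile stratification and per-stratum comparison;
# the copy-number data below are synthetic stand-ins, not the study's values.
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(1)
cn = pd.Series(rng.lognormal(mean=5.7, sigma=0.9, size=289))  # copy numbers
t2d = pd.Series(rng.integers(0, 2, size=289).astype(bool))    # True = T2D

print("Shapiro-Wilk p =", stats.shapiro(cn).pvalue)           # normality check

q1, med, q3 = cn.quantile([0.25, 0.50, 0.75])
group = pd.cut(cn, bins=[-np.inf, q1, med, q3, np.inf],
               labels=["LCN", "MCN", "UCN", "HCN"], right=False)

for g in ["LCN", "MCN", "UCN", "HCN"]:
    healthy = cn[(group == g) & ~t2d]
    patients = cn[(group == g) & t2d]
    res = stats.mannwhitneyu(healthy, patients)  # Wilcoxon rank-sum equivalent
    print(f"{g}: p = {res.pvalue:.3f}")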
Discussion Mitochondrial DNA is an excellent tool in forensic and genealogical studies owing to its high stability, high copy number, uniparental maternal inheritance and high mutation rates (particularly in the hypervariable regions). Many studies have confirmed the association of T16189C of HV1 with the risk of T2D in different populations (Bhat et al., 2007; Khogali et al., 2001; Kumari et al., 2018; Liao et al., 2008; Mueller et al., 2011; Palmieri et al., 2011; Park et al., 2008; Saldaña-Rivera et al., 2018; Tang et al., 2006). However, the T16189C variant was not significantly associated with T2D in our study, although the variant was more prevalent among the patients than among healthy individuals (OR: 1.48, Table 3). This finding is in concordance with other previous studies containing large numbers of cases (Chen et al., 2009; Chinnery et al., 2005; Hsouna et al., 2015; Meiloud et al., 2013; Mohlke et al., 2005; Saxena et al., 2006). A meta-analysis performed on Asian populations conferred an association of T16189C with increased risk of T2D, while such an association was not manifested in European Finnish and British populations (Mohlke et al., 2005; Soini et al., 2012) or in North African Tunisian and Mauritanian populations (Hsouna et al., 2015; Meiloud et al., 2013). Interestingly, the frequency distribution of T16189C was similar in Asian and North African populations (Hsouna et al., 2016). This clearly indicates that the genetic make-up of Asian, North African and European populations may contribute to such incompatible associations of the T16189C variant with T2D. Two variants, G16048A (OR: 0.12, p = 0.04) and G16129A (OR: 0.42, p = 0.007), were found to play a protective role in T2D in the Bangladeshi population. The variant G16048A (3.11%) was more frequent in the current study population compared to that reported in Mitomap (0.264%). G16129A is one of the ancestral SNPs ('RSRS50') as reported by Mitomap. Neither of these two variants was previously reported to confer risk or play a protective role in T2D. Certain variations were found to be more frequent in our population compared to those reported in Mitomap: variations G16319A, A16051G and C16223T were more prevalent in our population. In a previous study involving the Bangladeshi population, variants G16319A, G16129A and C16223T were also more frequent, which concords with our findings (Sultana et al., 2014).
[Figure 3. Hybrid boxplot of the mtDNA copy number in healthy individuals (HI) and patients with type 2 diabetes (T2D). Cornflower blue and crimson represent HI and patients with T2D, respectively. The median mtDNA copy number of healthy individuals, 334.37 (interquartile range, IQR = 506.24), did not differ significantly (p = 0.16) from that of T2D (median: 287.73; IQR = 421.70). More variation of mtDNA copy number was observed in healthy individuals than in T2D, as indicated by the IQR. The black circles indicate outliers of mtDNA copy number in the two groups of study participants.]
[Table 5 notes: a+c+g = after adjusting for age, condition (healthy individuals or T2D) and gender. *Difference = mean mtDNA copy number with respect to the mutant allele minus mean mtDNA copy number with respect to the rCRS allele.]
T16519C was the most frequent variant according to this study. This variant was found in 69.55% of the study population, which is close to the frequency reported in Mitomap (62.94%). This change was slightly more common in patients with T2D (OR: 1.2) than in healthy individuals, but the association was not significant (p = 0.47). The variation T16189C was less frequent in the Bangladeshi population (10.03%) compared to the frequency in the database (25.37%). A total of 147 variations were found in this study, of which 120 were transitions, 20 were transversions, 2 were deletions and 5 were insertions. According to Supplementary Table 1, 38.3% of the transitions were C-T (46 out of 120) and 40% of the transversions were A-C (8 out of 20). However, G-A and C-G were the most frequent transition and transversion, respectively, reported by Sultana et al. (2014). The frequency of a particular insertion, C16151CC (an insertion of C at 16151), was within the low-frequency group (5.88%), while insertion of C at 16083 (C16083CC) was among the rare variants (1.79%) found in our study population. No such insertions have been reported in the Mitomap database. Surprisingly, a stretch of 10 nucleotides (16256-16265 in rCRS) was found to harbor 14 variants, of which 8 were unique (Supplementary Table 1). This region seems to be quite flexible in accumulating mutations in the Bangladeshi population. The variants G16048A, G16129A, T16189C, C16294T and T16325C were further compared to the data in HmtVar (https://www.hmtvar.uniba.it/). The variant G16048A was found in only 0.31% of healthy Asians according to HmtVar, while in our population the prevalence is 3.11%. The prevalence of G16129A in Asians is 0.38% (0.21% for healthy individuals, 0.17% for diseased individuals) according to HmtVar. However, this particular variant was found to be more frequent (17.99%) in our study population (12.11% in healthy and 5.88% in T2D).
The highly studied variant T16189C has a frequency of 49.3% in Asians (17.6% for healthy and 31.7% for diseased) according to HmtVar. In our study, the frequency of this variant is much lower compared to HmtVar (10.03%; healthy = 4.15% and T2D = 5.88%). The frequency of the variant C16294T is 5.9% for Asians in HmtVar (4.6% for healthy and 1.3% for T2D), whereas in our study this variant had a frequency of 2.43% (healthy = 2.08% and T2D = 0.35%). The variant T16325C has a frequency of 6.1% for Asians according to HmtVar (healthy = 1.5%, T2D = 4.6%); the same variant had a frequency of 2.42% (healthy = 1.38%, T2D = 1.04%) among our study participants. Our results concord with HmtVar for the multiple-variant-bearing stretch 16256-16265, except for the variant C16261T: HmtVar reports a frequency of 13.7% for this particular variant in Asians (5.8% for healthy and 7.9% for diseased), while we found the variant in 3.11% of the study population (2.42% for healthy and 0.69% for T2D). The involvement of haplogroups in the pathogenesis of type 2 diabetes is rather controversial. No association between diabetes and European mtDNA haplogroups was reported (Chinnery et al., 2007; Hsouna et al., 2015), while haplogroup N9a was found to be associated with T2D in a Southern Chinese population (Fang et al., 2018). On the other hand, the same N9a haplogroup was reported to play a protective role in a Japanese population (Fuku et al., 2007). Haplogroups J and T indicated an association with T2D in the Caucasian-Brazilian population of South Brazil (Crispim et al., 2006), and haplogroup J was also found to confer risk of T2D in a Finnish population (Mohlke et al., 2005). In another study involving a Chinese population, haplogroup M9 was found to confer risk of T2D (Liao et al., 2008). We found a total of 19 haplogroups in our study, of which haplogroup M was the most frequent (40.48%), followed by H (14.18%), U (10.73%) and R (7.2%), respectively. The haplogroup frequency pattern is similar to that reported previously for the Bangladeshi population (Sultana et al., 2014). Haplogroup A was found to be marginally associated with the risk of T2D in this study (OR = 5.60, p = 0.05) before considering the confounding factors (age and gender), while after adjustment, along with haplogroup A, haplogroup H was also found to be associated with the risk of T2D, as shown in Table 4. The Africa-specific haplogroup L (all L3) was also found in 16 individuals (5.53%), of whom 11 were patients with T2D (OR: 2.27); however, the result was not significant (p = 0.13). Haplogroup L3 is the immediate ancestor of haplogroups M and N. Soares et al. (2011) and Cabrera et al. (2018) suggested that L3 most likely expanded from East Africa into Eurasia. The Eurasian-distributed M and N derivative clades are considered to have originated from L3 in the "Out of Africa" migration (Soares et al., 2011). On the other hand, Cabrera et al. (2018) suggested a back migration of females carrying L3 from Eurasia to East Africa. Thus, genetic admixture due to back-and-forth migration could explain the presence of haplogroup L in the Bangladeshi population, although Sultana et al. (2014) as well as Rishishwar and Jordan (2017) did not report haplogroup L when analyzing mtDNA sequences of 108 and 86 Bengali-speaking Bangladeshi individuals, respectively. Further investigation of the maternal lineage of those 16 individuals could reveal the reason behind the presence of this haplogroup.
Also, haplogroups B, E, J and O were only detected in healthy individuals, while haplogroups P, S and X were found only in patients with T2D (Table 4). The hypervariable regions are the non-coding control regions that constitute the D-loop. This region plays a crucial role in mitochondrial DNA replication, and certain mutations in the hypervariable regions can lead to an imbalance in the regulation of mtDNA replication, resulting in mtDNA copy number alterations. In the present study, mtDNA copy number was stratified into 4 groups based on the quartiles of mtDNA copy number. In all 4 groups, the median mtDNA copy number of the healthy individuals was greater than that of the patients with T2D; however, only in the group with the highest mtDNA copy number (HCN) did the medians differ significantly (p = 0.03) between T2D and healthy individuals. The variation of mtDNA copy number in patients with T2D (IQR = 421.70) was less than that in healthy individuals (IQR = 506.24). All the groups had similar numbers of T2D patients and healthy individuals. mtDNA copy number was also found to be decreased in Bahraini and Italian populations (Al-Kafaji et al., 2018; Latini et al., 2020), and a cohort study in a Korean population also reported decreased mtDNA copy number (Lee et al., 1998; Song et al., 2001). Decreased mtDNA copy number was found to be associated with metabolic syndrome and T2D in Italian and German populations (Fazzini et al., 2021), and mitochondrial dysfunctions in T2D have been reported to be linked with mtDNA copy number reduction (Rolo and Palmeira, 2006). On the other hand, a study of a Bangladeshi population found elevated mtDNA copy number in T2D patients with nephropathy (Malik et al., 2009). However, that study was conducted in only 65 Bangladeshi individuals, and when these individuals were stratified into diabetic nephropathy, healthy control and diabetics without nephropathy for comparative analysis with respect to mtDNA copy number, the statistical inference became rather weak. Also, increased mtDNA copy number was found in patients with T2D in a Mexican population (Cataño Cañizales et al., 2018). In our study, variants G16048A and C16291T were associated with significantly increased, while variants C16234T, T16311C and T16519C with significantly decreased, mtDNA copy number after adjusting for the confounders, i.e., age, gender and condition. The increased copy number associated with the variants G16048A and G16129A (Table 5) may be one of the reasons behind the protective role of these variants against the development of T2D, as demonstrated in Table 2. In conclusion, our study revealed a protective role of three novel variants against the development of T2D and an association of haplogroups A and H with the risk of T2D in the Bangladeshi population. Mitochondrial DNA copy number was found to be significantly lower in patients with T2D compared to healthy individuals in the HCN group, and six mtDNA variants were recognized to be significantly associated with mtDNA copy number in the participants. Also, unique insertions of C were observed at positions 16083 and 16151, not reported in any other population yet. Inclusion of the HV2 and HV3 regions in this study could have generated a more comprehensive variant landscape of the D-loop region in the Bangladeshi population. The controls and cases randomly included in this study were not age-matched, although adjustment for age in the regression models accounts for the age difference between the two groups.
Thus, replication of this study in the form of a larger cohort using age-matched controls and cases could give further insight into the variations in T2D and validate our findings. Whether inheritance of these three variants would confer or delay the onset of T2D will be of great interest for further research. Declarations Author contribution statement Sajoy Kanti Saha: Performed the experiments; Analyzed and interpreted the data; Contributed analysis tools or data; Wrote the paper. Abdullah Al Saba and Md. Hasib: Performed the experiments; Analyzed and interpreted the data; Wrote the paper. Razoan Al Rimon and Imrul Hasan: Performed the experiments. Md. Sohrab Alam and Ishtiaq Mahmud: Contributed reagents, materials, analysis tools or data. A.H.M. Nurun Nabi: Conceived and designed the experiments; Analyzed and interpreted the data; Contributed reagents, materials, analysis tools or data; Wrote the paper. Funding statement This work was supported by the Ministry of Education and the Ministry of Science and Technology, Government of the People's Republic of Bangladesh. Data availability statement Data included in article/supplementary material/referenced in article. Declaration of interests statement The authors declare no conflict of interest. Additional information Supplementary content related to this article has been published online at https://doi.org/10.1016/j.heliyon.2021.e07573.
2021-08-11T05:24:39.224Z
2021-07-01T00:00:00.000
{ "year": 2021, "sha1": "bc15f65926cac46207875f8807bbaced7b545d91", "oa_license": "CCBYNCND", "oa_url": "http://www.cell.com/article/S2405844021016765/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "bc15f65926cac46207875f8807bbaced7b545d91", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
267175147
pes2o/s2orc
v3-fos-license
Synthesis, Structural, Morphological Characterization, and Cytotoxicity Assays of Metal Complex-Decorated SiO2 Nanoparticles Against Breast Cancer Cell Lines (MDA-MB-231) This study examines the new synthesis of Pt(IV) and Au(III) Mannich base complexes derived from ciprofloxacin. The complexes were then used as precursors to prepare SiO2/PtO2 and SiO2/Au2O3 nanoparticles by depositing the synthesized complexes on porous silica nanoparticles. Elemental analysis, FT-IR, UV-vis, molar conductivity measurements and melting points were used to characterize this ligand and its metal complexes. The elemental analysis data show that the general formula of the metal complexes formed is [M(L)2Cl2]·nCl·H2O, where L = Mannich base ligand, M = Au(III) or Pt(IV), and n = 1 or 2, respectively, with an octahedral structure. The chemical structure and morphology of the metal oxide nanoparticles are investigated using FT-IR, XRD, AFM, TEM and SEM. In the next step, the ligand and its complexes, as SiO2/PtO2 and SiO2/Au2O3 nanoparticles, were examined to investigate their in vitro toxicity as anticancer agents toward MDA-MB-231 cell lines using different concentrations (50, 100, 200 and 400 µg/mL). Based on the results obtained from the cytotoxic activity, it can be concluded that the synthesized compounds are promising as new anticancer candidates in the future, especially at high concentrations. Introduction The field of nanomedicine is a constantly evolving area of nanotechnology that has numerous applications in the biomedical field 1,2. Nanotherapeutics of the nanoparticle class have been shown to have a higher desired effect compared to conventional medications. This is due to surface functionalization, which can enhance the solubility, biocompatibility and specific targeting capacity of nanoparticles. Metal and metal oxide nanoparticles can be synthesized and modified with a variety of chemical functional groups, allowing for a wide range of applications. By employing the necessary functionalization techniques, nanoparticles can be linked with biological molecules such as antibodies, nucleic acids, peptides, targeting ligands, DNA-binding agents and even anticancer drugs 3-5. Inorganic nanoparticles, such as mesoporous silica nanoparticles (MSN), have been extensively researched for their potential use in the delivery of drugs and other biomolecules, including proteins, peptides and nucleic acids. While antibiotics have traditionally been used to treat infectious diseases, the need for high in vivo drug dosages and the associated links to resistance have become a primary concern. As a result, there has been growing interest in the use of antimicrobial peptides (AMPs) as an alternative class of antimicrobials.
Antibiotic delivery is particularly prone to proteolytic degradation at infection sites, which greatly impairs its activity. Additionally, direct delivery of antibiotics can lead to unwarranted toxic effects. However, nanotechnology can effectively address these issues by providing high loading capacity, site-directed delivery and, in some cases, triggered medication release. One promising approach is the use of drug delivery systems with a silica-gold core nanoshell, which offers numerous benefits over traditional dosage forms 6. The use of silica-gold core nanoshells and PtNP-based platforms for targeted drug delivery represents a promising area of research for the development of more effective cancer treatments. With further research and development, these technologies have the potential to revolutionize cancer treatment and improve patient outcomes 7,8. Additionally, the potential for regulated transport of drugs may further reduce systemic exposure by controlling the release of drugs at the target site 9,10. According to several studies, using nanoparticles as part of a synergistic therapy for the treatment of cancer not only enables cellular targeting but also lowers the risk of side effects, improves therapeutic effectiveness and enhances the patient's long-term prognosis 11,12. Instrumentation Ciprofloxacin (99.5%), 2-mercaptobenzimidazole, formaldehyde, solvents and metal chlorides (analytical grade) were obtained from Merck (Schnelldorf, Germany). The metal content was measured using an AA-6880 Shimadzu atomic absorption flame spectrophotometer (Shimadzu Corporation; Tokyo, Japan). NMR spectra were recorded on a Bruker Avance 300 spectrometer (Bruker BioSpin GmbH, Rheinstetten, Germany). To measure the ultraviolet-visible (UV-vis) spectra in ethanol, a Shimadzu UV-1601 spectrophotometer (Shimadzu Company; Tokyo, Japan) was used. An FT-IR 8300 Shimadzu spectrophotometer (Shimadzu Corporation; Tokyo, Japan) was used to record the Fourier transform infrared (FTIR) spectra. Mass spectra were captured by direct probe. Melting points were examined in open glass capillaries. Elemental analyses (C, H, N, S) were obtained using an EA-034 analyzer. Measurements of conductivity were performed using a Corning conductivity meter 220 in ethanol at a concentration of 10−3 M. Field emission scanning electron microscopy (FESEM) images were recorded using a Tescan MIRA3 LMU instrument (Tescan Orsay Holding; Brno-Kohoutovice, Czech Republic). FT-IR was recorded with a PerkinElmer BX spectrometer (4000−400 cm−1) in KBr pellets, whereas the powder XRD data were recorded on a diffractometer (X-ray tube target: Cu Kα, λ = 1.5406 Å). AFM measurements were recorded with a Veeco atomic force microscope, and TEM images with a JEOL JEM-1230 transmission electron microscope. The surface area of the nanoparticles was evaluated using the Brunauer-Emmett-Teller (BET) method: accurately weighed nanoparticles were degassed at room temperature for 24 h to obtain a pressure of 2 μmHg, and the surface area was determined via the multipoint nitrogen adsorption method (ASAP 2000, Micromeritics, Norcross, GA, USA).
Preparation of metal complexes The synthesis of the metal complexes used a Mannich base ligand (L) and two different metal ions, Pt(IV) and Au(III). The procedure involves dissolving the Mannich base ligand (0.493 g, 2 mmol) in 10 mL of absolute ethanol, followed by the addition of 5 mL of the metal ion solution (0.409 g of H2PtCl6·6H2O or 0.354 g of HAuCl4·6H2O; 1 mmol). The resulting mixture is refluxed for 2 h, during which time the color of the solution changes; this change in color is likely due to the formation of the metal complex. After refluxing, the solvent is evaporated to yield a precipitate, which is then recrystallized from ethanol to purify the complex and finally dried to give the pure metal complex. [Scheme 1. Synthesis of ligand (L) and its complexes.] Synthesis of silica/metal oxide nanoparticles The silica nanoparticles were subsequently functionalized: 0.03 g of particles was dispersed in ethanol, followed by the addition of 0.05 g of the Pt(IV) or Au(III) complex. The mixture was vigorously stirred at room temperature for 24 h to promote the covalent binding of Pt and Au onto the silica particles. The functionalized silica particles were centrifuged on an R-24 refrigerated centrifuge (REMI) at 2000 rpm for 1 h and dried in a hot air oven at 60 °C. The precipitates were then calcined in a furnace at 600 °C. Biological activity The cytotoxicity of the ligand and its complexes, SiO2/PtO2 and SiO2/Au2O3 nanoparticles, was studied against MDA-MB-231 cell lines by an in vitro MTT cytotoxicity assay 14. Cell lines were evaluated 24 h after being exposed to the compounds at various concentrations. Results from the MTT testing, utilizing a desiccator, are shown for the ligand and its complexes, SiO2/PtO2 and SiO2/Au2O3 nanoparticles. All the compounds produced were characterized using spectroscopic, analytical and physical methods, as shown in Table 6. Various concentrations (400, 200, 100 and 50 µg/mL) were compared to the untreated negative control culture medium. Statistical analysis Data were analyzed using IBM SPSS software and SAS (SAS Institute Inc., Cary, NC, USA). The Statistical Analysis System-SAS (2018) program was used to detect the effect of different factors on the study parameters, and the least significant difference (LSD) test (analysis of variance, ANOVA) was used to compare means for significance in this study. Characterization of the ligand and its complexes The data in Table 1 show that the elemental compositions of ligand (L) and its metal ion complexes are in agreement with the calculated values. The suggested molecular structures are formulated and characterized by the subsequent spectral data as well as the magnetic moments. [Table 1. Color, melting point, yield, and elemental composition of the ligand and its metal complexes.] Spectral analysis The FT-IR spectrum of the ligand (L), a Mannich base (2-mercaptobenzimidazole) derivative of ciprofloxacin (Fig. 1), is complex because there are several groups with overlapping regions, but a few of the bands are chosen to demonstrate the complexation. Table 2 lists the principal IR bands of the free ligand and its metal complexes. The spectrum of the ligand shows stretching frequencies of ν(CH2-N) and ν(C=N) at 2964-2839 and 1552 cm−1, respectively; the other bands appearing at 3531, 1708, 1627, 1051, 1361, 1271, 1137 and 738 cm−1 are assigned to the stretching frequencies ν(OH) of the COOH group, ν(C=O), ν(NCS), ν(NCN), ν(CNC), ν(CSC) and ν(CS), respectively.
The FT-IR spectrum of the Au complex (Fig. 2) shows frequencies at 1707 cm−1 and 1627 cm−1, ascribed to ν(C=O) of the carboxylic and carbonyl groups, respectively. In comparison to the free ligand, these vibration bands occur at the same frequencies (1708 cm−1 and 1627 cm−1) 15. These results show that the oxygen atoms of the carboxylic and carbonyl groups did not participate in the coordination of the metal ions. In the infrared spectra of the complexes, the ν(NH) bands did not change in intensity or position compared with the same bands of the ligand, which proves that the amine does not coordinate. The bands at 2964-2839 cm−1, attributed to ν(CH2-N) of the ligand as mentioned previously, were shifted to higher wavenumbers in both complexes by about 6-15 and 1-13 cm−1, while the band at 1552 cm−1, due to ν(C=N) of the imidazole ring, shifted to lower wavenumbers in both complexes by about 9-44 cm−1, as shown in Table 2. This indicates that the ligand acts as a neutral bidentate through the N atom of the Mannich base and the N atom of the imidazole ring. The weak absorption bands present at frequencies below 500 cm−1 are assigned to the coordination bonds ν(M-N) 16 between the metal ion and the nitrogen atom of the Mannich base derivative of ciprofloxacin. The complex spectra exhibited new weak bands in the frequency range 347-368 cm−1, assigned to the stretching frequency ν(M-Cl) 17 for the Pt(IV) and Au(III) complexes. Both complexes showed bands between 3383 and 3487 cm−1, which refer to the stretching band of coordinated water. Electronic spectra Electronic spectral studies of the ligand and both complexes were carried out in ethanol. The electronic spectrum of the Mannich base ligand, shown in Table 3 and Fig. 3, generally exhibited four main bands. The first three absorption bands appeared at 30211, 32573 and 35460 cm−1, due to intraligand (n → π*) transitions of the carbonyl and -N=C- groups of imidazole in addition to the benzene ring. The fourth absorption band, located at 40322 cm−1, is attributed to the (π → π*) electronic transition of the aromatic rings 18. [PtL] The electronic spectrum of the prepared dark yellow Pt(IV) complex (Fig. 4) showed four bands at 98135, 2777, 32879 and 45248 cm−1, which are assigned to the transitions 1A1g → 3T1g, 1A1g → 1T1g, 1A1g → 1T2g and L → Pt (CT), respectively 19. The magnetic moment of the present Pt(IV) complex is 0.0 B.M.; the (d6) configuration agrees with the octahedral configuration, and this result indicates a diamagnetic complex. The conductivity measurement in ethanol showed that the complex is conducting; therefore, the two Cl− ions are located outside the coordination sphere. From the analysis of the data and spectroscopic techniques, and from all results, an octahedral geometry can be suggested for this complex.
[AuL] The UV-vis spectrum of the orange Au(III) complex (Fig. 5) showed two bands at 24570 and 35333 cm−1, assigned to the 3A2g → 3T2g and 3A2g → 3T1g transitions, respectively, and other bands at 41152 cm−1 and 46296 cm−1, which could be due to L → Au CT. The value of the Racah parameter B' has been calculated by fitting the ratio ν2/ν1 to the Tanabe-Sugano diagram for the octahedral d8 system. Dq/B' = 2.80; therefore, B' will be 842. The value of the crystal field splitting is Dq = 2392.3 cm−1, and 10Dq will be 23923 cm−1, which is in agreement with the octahedral environment reported 20. The third transition was therefore calculated theoretically from the equation 15B' = ν3 + ν2 − 3ν1 and found to be 51613 cm−1, attributed to the 3A2g → 3T1g(P) transition. The conductivity measurement for this complex shows it to be ionic in nature. From the analysis of the data and spectroscopic techniques, and from all results, a distorted octahedral geometry can be suggested for this complex. The mass spectrum is a technique used to determine the molecular weight of the prepared compounds and the fragmentation belonging to the compounds under study. The mass spectrum of the prepared ligand (Fig. 6) was consistent with the proposed structural formula C25H24FN5O3S. Among the peaks recorded for the ligand in its spectrum, one was related to the molecular ion and was observed at 494.4 m/z. Additional distinct peaks revealed in the mass spectrum resulted from successive fragmentation. X-ray diffraction (XRD) pattern In this study, XRD data were utilized not only to confirm the formation of the different phases but also to calculate the particle size of each specimen. By analyzing the main peaks of each sample, the Debye-Scherrer equation, D = Kλ/(β cos θ), was employed to determine the average particle size 21, where D is the crystallite size, K the shape factor (≈0.9), λ the X-ray wavelength, β the full width at half maximum of the peak and θ the Bragg angle. The X-ray diffraction analysis of the Au complex revealed interesting peaks that were compared to the standard d-values. The graph in Fig. 7 displays the indexed 2θ values for each peak, and it can be observed that there is good agreement between the 2θ and d values. The diffraction peaks at 2θ values of 24.30, 27.733, 31.266 and 39.768° were indexed as (101), (111), (002) and (211), respectively, in accordance with the Joint Committee on Powder Diffraction Standards requirements (JCPDS no. 04-0784) 22. Table 4 presents the X-ray diffraction data for the Au complex, indicating the powder's moderate crystallinity. Meanwhile, Fig. 8 displays the X-ray diffraction pattern of the synthesized Pt complex, which exhibited distinct peaks at 2θ values of 11.7287, 13.845, 16.666, 17.713, 19.029, 24.587 and 27.845°. The XRD patterns induced by metallic platinum were compared to those of the JCPDS PDF card no. 04-0802 standard 23, which showed similarities with the (111), (200) and (220) planes, respectively. Based on the highest distinguishable peaks, the Au(III) and Pt(IV) complex grain sizes were estimated to be about 42 and 28.81 nm, respectively.
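As a worked illustration of the Debye-Scherrer estimate above, the sketch below computes a crystallite size from a single peak. The FWHM value is an assumption chosen purely for illustration (the paper reports the resulting grain sizes, not the measured peak widths); with β ≈ 0.20° at the 39.768° Au-complex peak, the estimate lands near the reported ~42 nm.

# Minimal sketch of the Debye-Scherrer estimate D = K*lambda/(beta*cos(theta)).
# The FWHM (beta) below is an assumed, illustrative value; the paper reports
# the resulting grain sizes, not the measured peak widths.
import numpy as np

K = 0.9             # shape factor
lam = 1.5406        # Cu K-alpha wavelength in angstroms
two_theta = 39.768  # degrees; one of the Au-complex peaks reported above
beta_deg = 0.20     # FWHM in degrees -- assumption for illustration

theta = np.radians(two_theta / 2.0)
beta = np.radians(beta_deg)
D = K * lam / (beta * np.cos(theta))          # crystallite size in angstroms
print(f"estimated crystallite size: {D / 10:.1f} nm")   # ~42 nm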
Characterization of SiO2/metal oxide nanoparticles FT-IR spectra The FT-IR spectrum of SiO2 nanoparticles showed peaks at around 3442 cm−1 and 1620 cm−1, which were attributed to molecular water and -OH bonding vibrations, respectively 24. Peaks appearing at nearly 1105 cm−1 are attributed to stretching and out-of-plane vibrations of Si-O-Si bonds 25. The band at 796.5 cm−1 is due to vibrations of SiO4, and the band at 474 cm−1 is due to the out-of-plane deformation of Si-O 26. In the spectra of SiO2/PtO2 and SiO2/Au2O3 (Figs. 9 and 10), the intensity of the Si-O-Si and Si-OH peaks has been reduced significantly. This indicates the presence of PtO2 and Au2O3 in the silica particles. The weak absorption bands that appeared below 500 cm−1 are assigned to the coordination bonds ν(Pt-O) and ν(Au-O) 27. X-ray diffraction (XRD) pattern Fig. 11a shows the X-ray pattern of the SiO2 nanoparticles, which, due to their amorphous nature, show only a broad band centered at 22°, typical for amorphous SiO2; no other distinguishable peaks can be seen in the diffraction pattern. The results show no impurity peak for SiO2 when compared to JCPDS Card No. 850335 for SiO2 28. Atomic force microscopy (AFM) The surface morphology and roughness of the produced nanoparticles were characterized using AFM. BET surface area determination The surface area and pore structure of the SiO2/PtO2 and SiO2/Au2O3 nanoparticles were determined using the nitrogen isothermal adsorption method, as depicted in Figs. 18 and 19. The isotherm profiles of both SiO2/PtO2 and SiO2/Au2O3 nanoparticles displayed a modest hysteresis loop that might be categorized as type IV. The surface area of the nanoparticles was calculated using the BET method from nitrogen adsorption/desorption measurements; the nitrogen adsorption isotherms at P/P0 = 0.9 were used to calculate the BET surface area and pore volume. The Barrett-Joyner-Halenda (BJH) method was used to measure the size and volume of the pores, as shown in Table 5. Cytotoxic activity To assess the anticancer impact, we tested the cytotoxic activity of the synthesized free ligand and its complexes, SiO2/PtO2 and SiO2/Au2O3 nanoparticles, against MDA-MB-231 cell lines using the MTT assay after incubating the samples for 24 h at 37 °C with doses of 50, 100, 200 and 400 µg/mL. The chosen compounds inhibited MDA-MB-231 cell line growth to various extents, and Table 6 compares the percent inhibition of cell growth to the control, which determines the extent of the toxic effect. According to the cytotoxicity results, all tested compounds showed strong cytotoxicity against MDA-MB-231 cancer cells. The gold(III) complex showed the highest cytotoxic effect, with an LSD value of 9.53, followed by the platinum(IV) complex with an LSD value of 8.13; the ligand showed the lowest cytotoxic effect, with a value of 7.16. As the concentration of the compounds increased, cell viability decreased for the MDA-MB-231 cancer cell lines, as demonstrated in Fig. 20. The SiO2/Au2O3 inhibited tumor cell growth with a cytotoxic efficacy of 87%, while the MDA-MB-231 cell lines were suppressed by SiO2/PtO2 to an extent of 82% at a dose of 400 µg/mL. This finding demonstrated that the significant cytotoxic activity of the SiO2/Au2O3 nanoparticles was caused by an increase in Au2O3 surface area following homogeneous deposition on porous SiO2. These results suggest that these compounds have potential as anticancer agents and warrant further investigation at a concentration of 400 µg/mL.
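For clarity, the sketch below shows how percent viability and percent inhibition in an MTT assay are typically derived from absorbance (OD) readings against an untreated control; the OD values are illustrative placeholders, not the study's measurements.

# Minimal sketch of how percent viability/inhibition is typically computed
# from MTT absorbance (OD) readings; these OD values are illustrative
# placeholders, not the study's measurements.
import numpy as np

od_control = np.array([0.82, 0.79, 0.85])   # untreated control wells
od_treated = {                              # dose (ug/mL) -> replicate ODs
    50:  [0.70, 0.68, 0.72],
    100: [0.55, 0.58, 0.53],
    200: [0.38, 0.40, 0.36],
    400: [0.14, 0.16, 0.13],
}

ctrl_mean = od_control.mean()
for dose, ods in sorted(od_treated.items()):
    viability = 100.0 * np.mean(ods) / ctrl_mean
    print(f"{dose:>3} ug/mL: viability {viability:5.1f}%, "
          f"inhibition {100.0 - viability:5.1f}%")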
Conclusion The new Mannich base ligand and its complexes, as SiO2/PtO2 and SiO2/Au2O3 nanoparticles, have been synthesized and deposited on porous SiO2; the structures of the ligand and its complexes were characterized by analytical and spectroscopic techniques, and the morphology was determined using FT-IR, XRD, AFM, TEM and SEM. It can be concluded from the biological activity that the ligand and its complexes, SiO2/PtO2 and SiO2/Au2O3 nanoparticles, have good cytotoxic properties and selectivity against MDA-MB-231 cell lines. Cell viability and cytotoxicity assays were performed on the nanoparticles using MDA-MB-231 cell lines. It was discovered that SiO2 plays a crucial role in dispersing the Au2O3 nanoparticles across a large portion of its surface area and in preventing metal oxide nanoparticle aggregation. Table 5 provides information about the SiO2/PtO2 and SiO2/Au2O3 nanoparticles' surface area, average pore diameter and total pore volume. It is believed that the high surface energy of the SiO2/Au2O3 nanoparticles is what causes nanoparticle aggregation or the formation of larger nanoparticles. The SBET of SiO2/PtO2 and SiO2/Au2O3
2024-01-24T16:51:45.777Z
2024-01-19T00:00:00.000
{ "year": 2024, "sha1": "5f5c648b1e0fe4e2296ad48c9c5d792409d5dc21", "oa_license": "CCBY", "oa_url": "https://bsj.uobaghdad.edu.iq/index.php/BSJ/article/download/8834/4651", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "126a64a48b7647431a25f71bdab0d350be206b51", "s2fieldsofstudy": [ "Chemistry", "Medicine" ], "extfieldsofstudy": [] }
235650985
pes2o/s2orc
v3-fos-license
Overcoming the business model transformation dilemma: exploring market shaping and stabilizing strategies in incumbent firms Purpose – The purpose of this paper is to extend the knowledge on business model transformation (BMT) by developing an integrative framework for BMT dilemmas, including strategies for shaping and stabilizing market structures. Design/methodology/approach – The study uses a case-based approach, with data from the Swedish electric utility industry. Findings – The findings uncover practices related to both shaping and stabilizing market structure. The study contributes with insights for firms to overcome the BMT dilemma. Shaping strategies involve disruptive innovations, while stabilizing strategies concern incremental improvements in existing structures; by balancing these efforts, firms can find ways toward successful BMT. Originality/value – With a focus on incumbent firms and the balancing act of BMT in a network, the study covers areas that have scarcely been addressed in the existing literature. Even though most business model literature has focused on shaping consumer markets, the need to consider BMT as a dual-directional process in an industrial context is emphasized in this study. Introduction In top strategic marketing and innovation-centric management journals, there is increasing interest in business model transformation (BMT), that is, moving from one business model (BM) to another, as an imperative for achieving a competitive edge (Foss and Saebi, 2018). In the industrial marketing literature, the re-conceptualization of BMs is seen as a key driver of competitive advantage (Matthyssens, 2019). Simultaneously, the BM's network aspects are increasingly acknowledged as essential aspects of this transformation (Jocevski et al., 2020; Klimanov and Tretyak, 2019; Palo and Tähtinen, 2011). While market structures are disrupted, incumbents frequently experience situations where their BM, once well fitted to the old market structure, loses its relevance (Hacklin et al., 2018; Pateli and Giaglis, 2005). Recent research has found that common responses to lost relevance include either a complete replacement of the firm's old BM or an endeavor to incrementally add extra layers to the old BM while continuing "business as usual" in parallel (Hacklin et al., 2018). Whichever strategy is chosen, BMT always entails significant risk, especially for industrial firms, as it means disruption to the firm's normalized practice, as well as organizational tensions from ambidexterity, that is, the ability of an organization to manage the present market situation while at the same time exploring new opportunities and responding to changes in the environment (Eltantawy, 2016; Koryak et al., 2018; O'Reilly and Tushman, 2008). Both the complete leap and the incremental strategy could, nevertheless, be successful to some extent, and prior work has generated important insights about sound BMT strategies, both for developing existing businesses and for radically inventing new ones (Aspara et al., 2013). Recent works have also recognized how proactive BMT can change and disrupt the dominant market structure in different industries (Bidmon and Knab, 2018). Hence, it has been shown that BMT can be executed proactively, to induce structural market change, as well as reactively, in response to a need to stabilize the existing market structure.
However, while much research emphasizes BM innovation (Chesbrough, 2010; Sosna et al., 2010), less is known about the BMT dilemma of balancing the paradoxical tension between shaping a new market and, often in parallel, attempting to stabilize already captured market positions (Kjellberg et al., 2015). For incumbents, this strategic dilemma appears in how to accelerate the transformation toward something new and potentially more viable for the future while remaining profitable throughout the transformation process and without ruining the investments made. Consequently, a framework that expands and integrates the knowledge of the BMT dilemma is still missing. More critical, from a practical standpoint, is that the balancing dilemma in BMT has not yet been explicitly linked to the overall strategy of the firm. In other words, facing different starting points, strengths and firm goals, executives have little strategic guidance from a literature that focuses either on accelerating transformation processes or on squeezing profits from existing assets, without tackling the dilemma of doing both in parallel. Hence, this paper aims to extend the knowledge of BMT by developing an integrative framework for BMT dilemmas, including strategies for shaping and stabilizing market structures. The framework is developed through a case study that describes three attempts to move from one BM to another in the electric utility market. This in-depth, multi-source case study is based on both primary sources, including observations, field studies and interviews, and secondary data such as annual reports, sales presentations and newspaper articles. By doing this, we contribute to a better understanding of how the two transformative movements are mutually important and dependent, especially for incumbents in established but changing markets. This study makes at least two contributions to the ongoing discussion on BMT. First, it develops a framework for balancing the dilemma by adopting market-shaping strategies and market-stabilizing strategies. Second, it provides managers with practical knowledge on how to coordinate resources in times of structural change. The article continues as follows: first, the theoretical background, framework and methodology are presented; then, the analysis and conclusions are discussed. Frame of reference Even if the theoretical conceptualization of the BM is fragmented (e.g. a vision, a management idea, a strategy and the blueprint of the firm), the BM is a widespread concept, both theoretically and practically, as a metaphor of the firm's modus operandi (Osterwalder and Pigneur, 2010), containing its resources and competencies, the organizational structure and the value proposition to the market (Demil and Lecocq, 2010). A BM: [. . .] describes the design or architecture of the value creation, delivery, and capture mechanisms [a firm] employs. The essence of a business model is in defining the manner by which the enterprise delivers value to customers, entices customers to pay for value, and converts those payments to profit (Teece, 2010, p. 172). Traditionally, the BM has been viewed as a rather static concept, like a blueprint of the firm (Demil and Lecocq, 2010), and as belonging to a single firm (Mason and Spring, 2011). Recently, however, the research interest has shifted toward a view of the BM as dynamic (Bohnsack et al., 2014; Foss and Saebi, 2018; Nyström and Mustonen, 2017) and interconnected (Jocevski et al., 2020).
Hence, the BM is in this work characterized by "a meso-level value architecture that describes the value flow and dynamics of value creation, delivery and capture mechanisms at a network level" (Jocevski et al., 2020, p. 1062). The set of network actors includes customers, partners and other stakeholders in the system. However, it is also possible to understand BMs in business networks as subsets of larger systems. In nested or encapsulated networks, actors can be limited to a set of multi-lateral actors engaged in the utilization of complementary resources (Prenkert, 2017). A dynamic view of business models A dynamic approach to the BM accounts for the evolutionary characteristics of the BM, as well as adding a variety of notions to the concept such as BM innovation (Bolton and Hannon, 2016; Chesbrough, 2010; Foss and Saebi, 2018), BM learning (Teece, 2010), BM evolution (Demil and Lecocq, 2010), BM erosion (McGrath, 2010), BM lifecycle (Morris et al., 2005) and BMT (Aspara et al., 2013). A dynamic approach also considers the BM in relation to the network (Jocevski et al., 2020). Regardless of the various notions added, the core of the dynamic approach is that a firm needs constant transformation of the BM to stay competitive (Teece, 2010), driven by technological shifts (Tongur and Engwall, 2014) and changing market behavior. BM innovation is concerned with novel ideas for performing business, whereas BMT changes an existing business. To illustrate the slight and subtle difference, Markides (2006, p. 20) defines BM innovation as "the discovery of a fundamentally different BM in an existing business," while Aspara et al. (2013, p. 460) address how BMT is characterized by "a change in the perceived logic of how value is created by the corporation [. . .] from one point of time to another." Thus, the latter indicates how incumbent businesses manage changes in BMs over time and the value that migrates between different units and initiatives. Several studies focus on developing new market structures through BM innovation or open innovation (Chesbrough, 2010). Radically new BMs have been seen to have the potential to disrupt market structures, as these are based upon the connections between different actors and interrelate with both the production and consumption sides of business (Matthyssens et al., 2006; Sabatier et al., 2012). In a mature market experiencing technological uncertainty, new entrants' BMs initially seem to align with the dominating logic of the market; at later stages, however, they can reshape established market foundations (Sabatier et al., 2012). Nevertheless, more integrative approaches, considering BMT that aims to both shape and stabilize existing market structures, do exist (Koryak et al., 2018; O'Reilly and Tushman, 2008). This balance concerns the management of increasing productivity and incremental improvements in the existing business alongside the entrepreneurial, novel and often more long-term way of thinking. Business model transformation As incumbent actors have invested heavily in existing technology and infrastructure, a BMT strategy aiming at consolidating the existing market structure typically coexists with disruptive strategies, where a firm uses a more proactive approach to actively reshape the market structure (Ottosson and Kindström, 2016). Storbacka et al. (2013) argue that BMT takes place gradually instead of in radical leaps, as is often the case in BM innovation. Different mechanisms drive the incumbents' BMT.
BMs mutate, often from an existing shape, as an effect of coevolutionary relationships between the firm and the market (Tikkanen et al., 2005). With its actors and roles, the market network is increasingly seen as a key element of the BM (Shafer et al., 2005), where generated wealth and revenues should be geared toward the owner as well as a broader range of stakeholders. Still, the literature that examines the actors' roles surrounding the focal firm is sparse (Palo and Tähtinen, 2013). Theoretically, a networked BM approach geared toward a broader range of actors is addressed by several scholars (Bankvall et al., 2017; Palo and Tähtinen, 2013). BMs of incumbents often become outdated in markets characterized by rapid change and changing value landscapes (Hacklin et al., 2018). One might, hence, argue that the BM is always in a transformation process. However, there is always a significant short-term economic risk related to abandoning the existing BM. To stay competitive in the long run, organizations can instead consider dual structures (O'Reilly and Tushman, 2008). Still, balancing efficiency and innovation is a difficult management task. O'Reilly and Tushman (2008) suggest separating aligned organizational architectures (e.g. BMs), hence making it possible to use the resources needed for explorative businesses without being overtaken by the mature businesses. Overall, BMT can be seen as an evolutionary process over time, combining strategies aiming at disrupting and dissolving the existing industrial market structures through market-shaping activities with activities aiming at stabilizing and incrementally developing the existing market structure. Firms need to balance the ambidexterity in the BMT processes. The core of the dilemma is the strategic intent of stabilizing the existing market structure while simultaneously shaping new market structures and practices in a favorable direction. Methodology The paper adopts an explorative, qualitative approach to provide theoretical and managerial insights on BMT mechanisms. We have performed a case study of a business network composed of five firms within the electric utility industry. Case studies are frequently used by scholars interested in business networks (Dubois and Gadde, 2002) to develop an in-depth understanding of the phenomenon (Eisenhardt, 1989) and to reveal the complex phenomena embedded in the contextual setting (Eisenhardt and Graebner, 2007). The chosen approach enabled us to capture the contextual aspects and the empirical richness required. Theorizing from case data is presumed to generate accurate, interesting and testable ideas (Eisenhardt and Graebner, 2007), which is in line with this paper's aim. Within the case, we have identified three illustrative examples of recent attempts to initiate BMT; these are used to substantiate our conceptual claims in this paper. The selection of the three illustrative attempts is based on unique access to a network of firms within the electric utility market (Table 1). These are hereafter named J-Energy, Trading, K-Energy, L-Energy and Development. The three illustrative examples discussed in this paper were purposely selected from a larger empirical investigation (Eisenhardt and Graebner, 2007). The selected examples were particularly interesting to investigate, as they illustrate both the shaping strategy and the stabilizing strategy of BMT in a changing market context. Data collection and analysis Case studies often rely on multiple sources of evidence (Yin, 2003).
The exploratory fieldwork was informed by key concepts discussed in previous research, and a range of different data collection methods was used to gain a rich understanding from multiple perspectives. Many marketing researchers interested in networks tend to view business phenomena as complex and multifaceted, with the ambition of capturing the contents of interactions and relationships in thick, rich descriptions (Dubois and Gadde, 2002). The fieldwork consisted of a mix of face-to-face interviews, observations (firm-internal meetings, round table discussions and workshops) and collection of secondary data, where the different data collected complemented each other and gave us a better understanding of both the challenging mechanisms and the defending mechanisms of BMT in a changing market context. Managers from the five cooperating firms served as the interviewees (Table 2). In total, 25 interviews were performed between 2017 and 2020. The interviewees had qualified knowledge of the overall, strategic, market and operational parts of the businesses. An interview guide containing different themes, such as industry change, strategy, market trends, offerings, current BMs, transformation processes, strengths, weaknesses, opportunities and threats as perceived by the firms, and ongoing collaboration projects, was prepared in advance. The interview guide has a clear anchoring in the literature discussed. The questions followed an open-ended approach to ensure that the interviewees could speak freely about each theme, depending on their positions. The interviews lasted between 30 and 100 min. All interviews (except one) were recorded using a digital voice recorder and transcribed verbatim. Most interviews were performed with J-Energy, as this firm has a more prominent role than the other firms in all three examples. During the majority of the interviews, two researchers were present. A significant amount of time was dedicated to observing and participating in firm-internal meetings and site visits (36 h, during 2018), round table discussions (16 h, during 2016-2019) and workshops (16 h, during 2019). Detailed field notes were made during all meetings and, when possible, recordings were also made. Participation in the meetings gave the authors a better understanding of all actors in the network, which was very important when observing the interactions between the firms, and the interviews made it possible to ask questions about things that were discussed during the meetings. The purpose of the collected secondary data was mainly to increase our understanding of contextual factors and changes in the surrounding ecosystem. We collected and studied documents such as strategic plans and reports provided by the firms, annual reports, press releases, press articles and industry reports. Hence, the secondary data included internal and official records of the focal firms. All secondary data were collected in a document management system to ensure better structure and accessibility for the researchers involved. The parts of the collected data with relevance to the present paper were coded and related to the various points of the analysis. The analysis was guided by three different dimensions, that is, technology, customer and offering. These dimensions were particularly interesting to investigate in the changing market context. The dimensions were linked to the different strategies, categorized as either shaping or stabilizing.
The transcribed interviews and field notes from observations that could be sorted into one of the three dimensions were copied into one document. Initially, each author conducted this step individually; this was followed by several meetings to discuss potential findings. The findings were then synthesized in the light of previous research to ensure the theoretical contribution of this paper and to narrow down the findings. Thus, the development of the framework (Figure 1) was largely empirically driven. The results of this analysis are presented in the upcoming section. Business model transformation in the electric utility industry Looking at a market that is undergoing dramatic changes, the 2020s are expected to be the most disruptive period in the history of electric utilities (PA Consulting, 2016; PwC Reports, 2016; Sioshansi, 2014). The conventional BMs are based on large-scale distant generation (e.g. nuclear power and hydropower) and grids that distribute electricity over great distances to serve ratepayers attached to meters. However, with a set of recent and approaching innovations, the dominant modus operandi is, in the near term, challenged by a more decentralized, networked, self-supporting way of operating. This impending shift toward a multifaceted market is expected to overturn the electric utilities' traditional roles and drive them to transform their positions and BMs (Brown et al., 2015). The anticipated challenge is made possible by megatrends including distributed electric generation, smart microgrids and new energy storage methods (Overholm, 2015; Saba, 2014). Transformations can be seen along three main dimensions (technology, customer and offering) that represent the BM's central dimensions. The rest of this section covers three different recent endeavors by incumbent utilities to create new value propositions while, at the same time, not losing or throwing away what they already have. These three are hosted in a business-to-business network revolving around a middle-sized utility (here referred to as J-Energy) that has a subsidiary (Trading) and smaller partners (e.g. K-Energy, L-Energy and Development) (Table 1). The charging network Still today, electricity subscriptions are linked to a particular residence rather than to a person or a family; families who own multiple residences need multiple subscriptions. This has long been the case but may change as part of the disruption. With the introduction of electric vehicles (EVs), the need for considerable electricity consumption away from home increases. In response, several competing players, in Sweden and in many other countries, are building public charging network (CN) stations, such as at rest areas along highways, at gas stations, at supermarkets, at car dealerships and at restaurants. Typically, CN stations are owned by a car manufacturer (e.g. Tesla), an electric utility or independents (e.g. a supermarket that offers free charging to customers), or are co-owned. While car manufacturers typically exclude other brands of cars from their CNs, public CN stations owned, or co-owned, by electric utilities are typically open to everyone who is ready to pay for electricity. To this end, a set of local and regional electric utility firms in Sweden cooperate in a national CN, in this paper referred to as Charging (Table 1 and Figure 1).
While it is a joint venture by three regional utilities, with J-Energy as one of the founding owners, the network includes a large set of smaller, local partnering utilities (and restaurants, supermarkets, etc.) that together form the largest public CN in Sweden, offering charging facilities throughout the country. In other words, by cooperating and pooling local resources for national coverage, Charging's value proposition to EV drivers is convenient charging all over Sweden with a single access card. K-Energy's marketing manager laid out his thoughts: [Pooling local resources] is increasingly important. Because we cannot solve everything in K. But our customers want a simple and complete service to buy. So, to be able to solve the customer's future needs, you have to be more flexible to collaborate in building your own offerings [. . .] Charging takes care of the customer interface with payment system and such [. . .] It has something similar to a fuel card that you use to recharge the car at charging stations. Besides selling and installing charging posts to firms and homes, Charging is building a nation-wide public network at strategic positions (e.g. restaurants, shopping malls and railway stations). However, Charging itself does not own the charging stations after selling and helping out with the installation; the many local partners, who also take title to the electricity from the charging stations, do. For local electric utilities, being part of the CN is an opportunity to compete on equal ground with the largest competitors, which have their own nation-wide networks. By building, owning and caring for a local subset of charging posts in the municipalities of L and K (23,000 and 30,000 inhabitants, respectively), the local utilities L-Energy and K-Energy are typical partners of the CN. L-Energy and K-Energy might compete in the electricity retail market but work closely together in other aspects. Their regional character and smallness (each owns a distribution grid that covers less than 1% of Sweden's area) make it impossible for them to offer nation-wide charging facilities on their own. So, while each charging pole is owned and maintained locally, they are also co-branded with Charging and part of the larger network. Despite doing some traditional marketing and cooperating with a car leasing provider, Charging reaches new customers through the already established customer relationships of each such local partner.
Figure 1 The BMT model
Upon charging, the EV driver who needs a refill uses a special card to access the charging post and for the firm to track the consumption. While not actually owning the infrastructure, the co-owned Charging handles end-user relations, several different subscription plans and invoicing. Hence, customers of, for instance, K-Energy, that drive an EV and charge through the CN get (at least [1]) two separate electricity bills: one for the household electricity (from K-Energy) and one for the EV charging away from home (from Charging). Still, the electricity subscription with an identification card, which enables charging an EV anywhere in the country, as well as in partners' networks in other countries, opens a possible future where the electricity subscription is not limited to one particular building but rather linked to a person who has a consumption plan (similar to a cell phone plan) that allows access to electricity wherever and whenever the person needs it. The interviewees outlined future ideas to integrate the bills to also include charging away from home.
However, for the time being, customers have dual relationships, which to date favors the promotion of Charging more than Charging helps bring in new customers to the local utilities. The aforementioned marketing manager described: It is not a dealbreaker. For us right now, it is primarily about helping the city move into the future [of clean energy] by getting the right infrastructure in place. [At least] we get the electricity retail deal on the charging pole, but besides that, we are mostly a middleman [in the supply chain]. And, so, we got some kind of co-brand: like a sticker on the post saying, 'Powered by K-Energy'. Shaping strategies, in this case, relate to building new technology and infrastructure to form new sustainable alternatives to the existing fossil-fueled market. In the new market structure, the collaboration pattern between electric utilities is changed: the interactions between them are deeper, and the whole idea of the CN is built on a trustful national network. Besides collaboration, network orchestration is an essential shaping strategy mechanism, where the focal firm influences the network roles. Finding new ways of interacting and building alliances provides a basis for market disruption and paves the way for new market configurations. Moreover, the illustration of Charging indicates a shaping strategy toward a platform-based business where focus shifts from the single firm's creation of value toward the value-creating network, and more specifically toward resource integration and value-in-use for the customer. The shaping strategy is not limited to the firm's boundaries; instead, it is the network that the firm is situated within that defines the boundary. Stabilizing strategies, in this case, relate to the firm's ambition to keep the local energy customers by providing something that can offer a nation-wide solution within a local market offer, and hence, make use of local resources by combining such resources in a network of local utilities. This strategy reuses the old structure of the existing market (the transmission and distribution grids) and only strives for incremental improvement. Another stabilizing strategy is illustrated by the re-bundling of existing market offerings instead of completely new ones. Coming from a history of selling a commodity (Storbacka et al., 2013), the provider now integrates more services in addition to the basic commodity. As it is not a matter of a new offering typology but an extension of an already market-accepted offering, it can be seen as a natural and modest way of developing the existing market logic without being very radical, as it is still based on the same fundamental premises as the long-existing business.
The surplus storage
Traditionally, electricity consumers are characterized by passivity; electricity is produced in power plants and transmitted over the grid to a household attached to a meter that tracks consumption. A traditional customer rarely has any contact with the electricity firm other than via the invoice. This is, however, starting to change. Recently, residential solar cell prices have been dropping, and growing numbers of Swedish households and firms have, spurred by authorities and subsidies, become 'prosumers', meaning actors that both produce and consume electricity. The more solar cell panels produced, the lower the price of production; 'Swanson's law' (Carr, 2012) states that with every doubling of solar cell deployment, there has been a 20% reduction in cost since the 1970s.
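As a hedged illustration of the learning curve cited above (a minimal sketch, assuming the usual reading of Swanson's law in which each doubling of cumulative deployment cuts module cost by roughly 20%; the symbols c_0, c_k and D below are illustrative and not taken from the source), the relation can be written as:
% Illustrative learning-curve arithmetic for Swanson's law (assumed 20% per doubling).
% c_0 is an arbitrary baseline module cost; k counts doublings of cumulative deployment D.
\[
  c_k \;=\; c_0\,(1-0.2)^k \;=\; c_0\,0.8^{k},
  \qquad
  k \;=\; \log_2\!\bigl(D_k / D_0\bigr).
\]
% Worked example: three doublings (an eightfold increase in deployment) give
% 0.8^3 = 0.512, i.e. costs fall to roughly half of the baseline.
This is only a back-of-the-envelope reading of the cited trend, not a result from the study itself.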
Hence, the home production of solar electricity is seemingly becoming a better business every year. Nevertheless, to handle the time asymmetry between when the electricity is generated and consumed, it must be accompanied by a large and still rather expensive battery and/or access to the main power grid. From the electric utilities' and grid owners' point of view, the growing number of 'prosumers' might have a large impact on their core BM, because if a large share of customers only request grid access to cope with peaks in consumption and dips in production, the grid owner still has to uphold the same grid capacity and high maintenance costs as today for just the peak hours, but will not be able to transmit as large volumes of electricity in total over the year. This threatens the grid owners' traditional fixed per-kilowatt-hour price model. Being an owner of hydropower plants and an electricity supplier to more than 150,000 households, J-Energy (Table 1) recently launched an attempt to answer this challenge. It offers property owners help to install solar cells and sets up deals so that customers can swap all that they overproduce in the summer (when there are many hours of sunshine and little need for heating of houses) for kilowatt-hours out of the main grid in return whenever they need them in the winter months (when it is dark and houses require much heating). For J-Energy, every kilowatt-hour its customers' solar cells deliver to the common power grid during the summer months means one less it has to produce in its hydropower plants. That means, in turn, higher levels in its water reservoirs and more potential energy stored until the winter for production. In other words, the water-sharing solution offered stores the prosumers' production for later use just like a battery can do, but instead of each household investing in its own battery (with all its limitations and costs), J-Energy makes use of the large infrastructure (water reservoirs) it has already invested millions into; no batteries of limited capacity are needed at home. In short, this case shows that an incumbent (J-Energy) has found a new use (storage) of an old resource (water reservoirs) to deliver value (answering season-based supply-demand asymmetry) to a growing customer segment (prosumers). The Head of New Businesses at J-Energy discussed the rationales for why it launched the new service: [Customers] might be more attracted to the sharing economy, and with the products we have today, we are starting to move [our BM] in that direction [. . .] The value for the customer is that electricity prices [in Sweden] are generally low in the summer. So that's partly about not having to sell electricity cheaply in the summer and buy it expensively in the winter, without being able to [as with storage] profit from that value difference. Then there is the nice feeling of being able to use one's own production to a greater extent. In providing this service, the prosumer's local grid owner must also be involved. Thus, in comparison to a local storage solution with batteries close to the production site, the prosumer cannot go off-grid but must stay connected to the main transmission grid; moreover, a customer with this service needs to buy and sell electricity through a retail contract with J-Energy, which helps raise exit barriers. Surplus storage (SP) is a complementary service that must be combined with a two-way subscription with J-Energy.
Thus, to reach beyond the few prosumers currently in contract with J-Energy, others must either first become prosumers or be recruited from competing utilities. Currently, J-Energy customers are reached through its own channels, while other potential clients are harder to reach, but J-Energy works with a set of direct and indirect channels to create awareness. For those households that become prosumers by investing in a rooftop solar package from J-Energy, the firm runs campaigns where the SP deal is included. While the service is offered partly as a way to build stronger ties and add value to prosumers that choose J-Energy for their electricity subscription plan, the offering itself is not profitable on its own merits. Receiving kilowatt-hours 'for free' in the summer, when marginal prices normally are relatively low, under the condition of returning them in the winter, when marginal prices often are higher, does not, analyzed in isolation from the total BM, add positively to the revenues. However, J-Energy adds a subscription to the service, covering its cost of hedging the prices and avoiding losses from the offering. Reflections from the Head of New Businesses at J-Energy: It started primarily as a PR thing. We didn't even think it would be as big as it turned out. We have actually won a lot of new customers from it, and it has added a positive image to the brand within this special target group. Previously, we were allowed to offer net charge [meaning you charged prosumers only for the net between consumption and production], but it became forbidden for tax reasons. So, this was an attempt to get as close to net charge as possible, but on the right side of the regulations. Overall, if the offering helps attract new prosumers, gets current prosumers to shift to J-Energy for their subscription plan or keeps prosumers attached to its distribution grid to share the costs for grid maintenance, then J-Energy's BM can benefit. Moreover, this service can also act as an exit barrier. Whereas the old-fashioned subscription plan is all about low cost, and there is almost nothing that holds the customer, prosumers that also have a grid-connected storage plan like this cannot as easily change to another supplier. The Head of Customer Relations at J-Energy explained that, so far, the firm mostly sells the storage service to those that have already installed rooftop solar or as an add-on service to new customers: When we sell the solar panels, the rooftop hardware, we offer the storage service for free the first year to sell a package and [. . .] build relationships. Shaping strategies, in this case, are related to the fostering of new customer behavior (the prosumer, who is simultaneously both producer and consumer). Previously, the customer has been a passive consumer; meanwhile, J-Energy both changes and challenges the existing value offering in the market (related to instantly consumed electricity). Instead, the consumer can now store energy in large reservoirs as long as they are connected to the grid. J-Energy showed how the transformation toward a dynamic and iterative BM had taken place: replacing the passive receiving customer with an active prosumer. The prosumer's entrance into the BM of J-Energy forces the BM to become more dynamic, as it is not only a matter of describing the firm's business but also a formula for continually supporting the customer in a collaborative manner. A stabilizing strategy can also be seen here, as it is the existing infrastructure that enables the new business idea.
Therefore, incremental improvements need to be made in existing technology (grid upgrades). Furthermore, no new infrastructural investments need to be made by the firm, as the large reservoirs already exist; instead, it is about a new offering bundling and fostering a new mindset of the customer.
The solar farm
The third example of BMT covers network aspects. A BMT initiated by the incumbent electric utility firm offers consumers who lack possibilities to have a photovoltaic system (PVS) on their own building an alternative: shares in a cooperatively owned solar farm (SF). The incumbent utility can in this BMT use its accumulated expertise in energy production and power plant maintenance, while the consumers receive the equivalent of their own solar cells. Moreover, building large-scale PVS outside the cities, on flat ground and adjusted to maximize the insolation from the sun, is more cost-efficient than small residential PVS on top of already existing buildings. For consumers who rent a flat and consumers who need very little electricity, smaller shares in a cooperatively owned SF can still be profitable, as many shareholders split the fixed cost of installation work. Both L-Energy and J-Energy have each recently built an SF in which consumers can buy shares to generate 'their own' electricity cooperatively. First, J-Energy partnered with a local real estate firm that bought 50% of the farm shares to support apartment tenants with locally produced, clean electricity, while the other 50% was intended directly for the consumer market. As the Head of Partner Relations at J-Energy recalled: [The value proposition of the SF shares] addresses those who cannot, do not want, do not have the finances to install a facility on their own roof, or do not have the right kind of roof. Then you can still own solar power, but in a simpler way and with a smaller initial investment [. . .] What we want to deliver is a positive vibe of belonging to something, a movement, a feeling of being a self-producer without having to put it on the roof; that you participate in real production. To market the shares, J-Energy primarily used its established communication channels to reach existing customers. The Head of New Businesses at J-Energy stated: To raise awareness, it was primarily communicated via our customer newsletter to current customers and through our website. I think 90% [of those that bought it] are local residents in the region, and probably 8 or 9 out of 10 were already grid or retailing customers. The L-Energy setup is slightly different. The customers do not own the solar cells in that case but rent 300 W panels for about EUR 67 (SEK 680) each for three years. Once a customer has signed up for a rental agreement, it becomes an exit barrier, meaning that a consumer who rents panels in the L-Energy park also uses L-Energy as its electricity retailer for as long as the rental lasts. In practice, a renter pays for their total consumption, but L-Energy reduces the bill by the value the renter's panels have produced. The communication manager at L-Energy introduced the new offer as follows: We have chosen not to offer a cooperatively owned SF. Instead, we offer solar power subscriptions: it means that you can rent a small section of the SF [. . .] and you get a refund every month that represents the weighted market value produced by your panels. When asked about how the consumer price of the rental agreement compares to per-kWh deals, the interviewees at J-Energy and L-Energy wanted to get around a head-to-head price comparison by introducing other values.
The aforementioned communication manager at L-Energy stated: There is, of course, a calculation. But you will never break even if you only care about costs. So, we highlight other values. What we really are pushing for is that you make a good effort for sustainable production if you choose locally produced solar; that clean, carbon-free energy from the sun is for everyone to enjoy. Not only those who can afford to install it on their own roof. This is also for apartment owners to participate in. As solar power is prioritized when Sweden is transforming to net-zero carbon emissions, both these SFs received around 20% installation cost subsidies (based on installed kWs) from the state, about the same subsidies as rooftop PV systems get. However, while rooftop installations only pay tax and grid transmission fees for the net trading of kWhs, consumers that own production away from where they consume must pay tax and transmission costs for every kWh; they then get a deduction on the bill for the value of what is produced by their share in an SF. In other words, while the installation cost per kW in a large-scale SF is substantially cheaper than on a rooftop, the overall cost comparison is also dependent on the current tax legislation, which might change. The Head of Partner Relations at J-Energy discussed the uncertain calculations of payback time: At a customer's first glance, it does not look good, but it can be very different over the coming years. We don't know about regulations and policies. To date, for comparison, you avoid some costs if you put it on your own roof instead, such as network fees and taxes. Currently, the SFs of J-Energy and L-Energy are more experimental and promotional ventures to try out the potential and show that they want to be frontrunners in the green transformation. Both firms can accept that the SFs are not yet cash cows. From the firms' perspective, experience, environmental benefits and the ability to lock in retail customers in long-lasting relations are currently the most important reasons to build and maintain cooperatively owned SFs, not least to show direction intra-organizationally to personnel and stakeholders, as addressed by the Head of New Businesses at J-Energy: We worked with 'ambassadorship'. So, everyone employed at J-Energy was involved in building the park as a one-day event. It was to get everyone to know the fundamentals of how it works, and it generated quite a lot of attention in social media from what all of us [the employees] shared. You need to know that solar power is still the 'new' technology internally. It is still more profitable [in the short term] to build wind power. But in this case, we built it on the premise that it was a high-interest thing to do: an opportunity to showcase us and the technology. However, if (or when) consumer-cooperative SFs become big business, electric utilities might transform their BMs to become more like real estate firms that build, manage and deliver services to their shareholders instead of owning the means of production. In that way, it would be a 'game-changer' for their BMs if utilities project and deploy production means but do not lock in millions of dollars for the entire life of the facility. Moreover, instead of per-kWh deals, ownership and tenancy arrangements transfer the risks related to, for instance, loss of production (such as lack of sunshine).
The Head of Partner Relations at J-Energy explained: The risk for the customer is, like this year [2020], with extremely low electricity prices in the summer, if the price calculation breaks, it's the shareholder, not us, who bears the risk. Also, of course, if the solar farm produces less than calculated. The SF example also illustrates shaping strategies in terms of changing the relationship between the firm and the customer: from the customer as the passive receiver of electricity to a partner who owns the means of production. In terms of technology, the large-scale PVS indicates a technology leap, moving away from traditional electricity sources such as hydropower and biopower plants. Stabilizing strategies can be seen, as the electricity provider still wants to be central in the existing market structure: to be 'the spider in the grid', as this provides possibilities to sell add-on services, such as maintenance services. Meanwhile, keeping the focal market position is necessary to protect the grid operation business and the already-existing customer relationships, as it adds high exit barriers that keep the customer in the relationship. Incremental improvement of an existing technology, already accepted in the market, is a BMT that builds on the shared understanding of what the market is and how it should be treated. Through maximizing material and energy efficiency, the BM is transformed without any radically new technology (Bocken et al., 2014). In the electric utility sector, firms fine-tune their already-existing technology (for example, improve capacity in the electricity distribution network). This is also done to secure the mature business in the electricity utility firms.
Analyzing shaping and stabilizing strategy
Based on our study, BMT can be seen as a process that contains both a proactive, strategic intent of shaping a new market structure and a defensive strategy of preserving an existing market structure. We have seen how both these strategies co-exist and benefit each other (Table 3 and Figure 1). A more proactive BMT strategy is to challenge existing market logic, with the potential to disrupt and dissolve the existing market structure, shape radically new types of market practices and innovate the way value is perceived (Matthyssens, 2020). This strategy aims to change the dominating modus operandi of the market and involves several mechanisms: the development of radically new technology (Pateli and Giaglis, 2005), the creation of a new philosophy and mindset (Ringberg et al., 2018), the development of new collaboration patterns in the network (Mustak, 2014) and the questioning of norms and the very foundation of the market in the form it is shaped today. The new market configuration in the electric utility market implies several advanced technical innovations (i.e. battery capacity), a shift from large-scale to small-scale revenue (i.e. a new mindset among the market actors), more flexible structures for electricity production (i.e. new collaboration patterns) and new ways of configuring value co-creation among the actors in the ecosystem (i.e. new norms and practices for the creation of value). Stabilizing the existing market structure implies a stabilization and normalization of market activities (Teece, 2010), that is, trying to keep the market's status quo with the given order of market practice among the different actors in the network.
This can be seen in the following mechanisms, namely, improving existing technology incrementally, embedding already-existing customers and integrating services in the existing market offering. These types of BMTs aim to keep the same level of revenues in the already-installed base of grid, machinery and customers. Another reason is to reach standardization that can reduce operational costs (improving existing technology), while a third goal is to integrate the value chain with more advanced service offerings and higher customer loyalty (integrating services).
Discussion and conclusions
We recognize a pattern of BMT containing two parallel strategies dealing with stabilizing existing market structures and shaping new market structures. This contrasts with the conventional BMT literature, which mainly concerns how new markets are being shaped through radical innovation (Chesbrough, 2010). Instead, it follows the analysis by Storbacka et al. (2013), suggesting a more gradual transformation process. By advancing research on BMT (Aspara et al., 2013), we have identified the mechanisms behind the strategies and discussed the balance between the parallel processes of stabilizing and shaping markets. The findings emphasize BMT as embedded in the network, and hence, part of an evolutionary pressure previously recognized by Tikkanen et al. (2005). The BMT dilemma deals with the duality of stabilizing and shaping, and this study has identified two major strategies and described them and the involved mechanisms based on empirical insights. A balance between a shaping strategy of disintegrating the existing market structure and a stabilizing strategy of consolidating the existing market structure incrementally through BMT is pivotal for incumbents. By acknowledging the dual aspect of BMT, the firm can allocate resources for both to co-exist (Figure 1). However, to date, many previous conceptualizations of innovation and transformation of extant BMs in the current body of literature do not seem to address the need of incumbent network actors to balance strategies for stabilizing (i.e. gradual and incremental transformation) and shaping (i.e. more radical change). Thus, we can argue that the inconsistency in addressing these two together in an integrative framework, and in how to balance them, results in a lack of theoretical coherence and managerial relevance in the resultant conceptualizations of the BMT process. Hence, our suggested framework fills this scholarly void. We acknowledge, despite this, that the novel framework does share some key features with other change-focused streams within the wider management literature, for example, dynamic capabilities (Cabanelas et al., 2013; 2018; Eltantawy, 2016; Wang and Hsu, 2018) and related concepts such as innovation capabilities (Cheng and Chen, 2013; Santos-Vijande et al., 2013) and market-shaping capabilities (Windahl et al., 2020). We argue that the above-mentioned approaches, with their focus on skills, procedures and tacit dispositions, can only to a limited degree explain how the BM is being changed. While dynamic/innovation capabilities represent an approach that includes adapting and re-bundling resources and competencies within the focal organization to match changes in its environment, our approach is mostly focused on the process and outcome at the network level. This includes implications of change for all the different features of a BM.
While the dynamic/innovation capabilities to initiate, create and support BMT can, of course, be critical human assets within a firm (along with, for example, physical, intellectual and financial resources), these dispositions are not per se the focal object of study in our framework. Rather, we see market-shaping capabilities (Windahl et al., 2020), as well as strategic capabilities (Huikkola and Kohtamäki, 2017), in the role of enablers and constrainers for innovation and change, as potential complements that could give our integrative framework further detail and as a promising avenue to more strongly bridge the gap between overlapping theoretical traditions.
Theoretical and practical contributions
By considering BMT as a process with dual strategies, we contribute to the research by empirically identifying strategies for this dual-directional process aiming to overcome the BMT dilemma. While most prior BMT studies have focused on the challenges of developing new business units as a response to a changing market landscape, not enough attention has been paid to the balancing and interrelated process of both disruption and stabilization in a business network. This study advances knowledge about strategies involved in the BMT processes, while proposing an integrative approach for BMT, hence balancing between market shaping and market stabilization. This contributes to the discussion of Murmann and Frenken (2006), where the emergence of a dominant design at a higher system level affects development strategies at a lower system level, where more focus can be geared toward incremental improvement of existing business than toward searching for new ones. It also contributes to the emerging discussion on the interconnected and networked BM literature (Jocevski et al., 2020) by exploring empirical cases. For practical relevance, this research has some important managerial implications that can support managers in incumbent firms facing a swiftly changing environment. The need to allocate resources for initiatives with potential return in the long term cannot be ignored (shaping strategy); instead, these are important for market success but need to be balanced with investments in the short term (stabilizing strategy), exploiting existing market opportunities. Further, as BMT takes place in networks of actors, whether a large ecosystem or an encapsulated business network (Prenkert, 2017), strategies for BMT typically need the actors' continuous involvement to succeed with both shaping and stabilizing. Therefore, firms need to evaluate what types of strategies are used in different relationships and networks. By better understanding the interplay between shaping and stabilizing strategies, firms can overcome the BMT dilemma: focusing too much on either strand of the BMT. For incumbents transforming the BM, the firm's activities can be separated and categorized in accordance with strategic intent: activities aiming at disturbing the market structures or activities aiming at stabilizing the existing structure. By acknowledging the different nature of these activities, firms can oversee how they, in parallel, both shape and stabilize markets. If the magnitude and direction of the activities create too much tension in the organization, the firm is obliged to evaluate the strategies and re-think the transformation of the BM. In this sense, the BMT is a part of the firm's strategic design (Windahl et al., 2020) for market shaping.
Finally, one of the most powerful strategies to disrupt a market structure is to combine new technology with changing the mindset or philosophy of doing business (Ringberg et al., 2018). Tongur and Engwall (2014) exemplify this with the electric road system and how such a technological shift will challenge truck manufacturers. In the current study, the tendencies toward a radical shift in technology and mindset exist in terms of radically new BMs, but these types of disruptions are rare and require entrepreneurial capabilities among management teams (Ringberg et al., 2018). Hence, developing the required capabilities to manage the different types of transformation is one important managerial implication.
Future research avenues
The need to consider BMT as a dual-directional process is emphasized in this study (Figure 1). By revealing strategies for shaping new market structures and stabilizing existing ones, the present study contributes to how firms overcome the BMT dilemma. In the three empirical illustrations discussed, collaborations between different types of actors are important for transforming BMs. However, in many cases, BMs are considered as belonging to the firm (Mason and Spring, 2011), and the literature that examines the roles of the actors surrounding the focal firm is still weak (Palo and Tähtinen, 2013). Thus, more empirical research on how networked BMs in different contexts are shaping markets is needed. Today, too much research focuses solely on firm strategies, and hence forgets the crucial importance of the surrounding network (Klimanov and Tretyak, 2019): both the business network with its business actors and the larger ecosystem of actors from a variety of sectors (Jocevski et al., 2020). The key issue for firms involved in networked BMs is how to interact and collaborate. Scholars are encouraged to develop the field of BMT across different sectors to build advanced knowledge of how they relate to strategies of market shaping and market stabilization in different settings. Finally, as indicated above, we also see, within the larger strategy conversation, a need to further bring in and discuss the underlying capacities that enable (or perhaps hinder) incumbents to move from one BM to another, where the innovation and market-shaping capabilities (Teece, 2018; Windahl et al., 2020) could be good candidates for discussion.
Note
1 In Sweden, households can buy electricity from a different retailer than the firm owning the local grid. In such cases, there are two separate bills.
2021-06-26T18:56:20.096Z
2021-05-31T00:00:00.000
{ "year": 2021, "sha1": "f02ebcdfe1ae57669bc671d4ee37f969e2e6ddf6", "oa_license": "CCBY", "oa_url": "https://www.emerald.com/insight/content/doi/10.1108/JBIM-06-2020-0264/full/pdf?title=overcoming-the-business-model-transformation-dilemma-exploring-market-shaping-and-stabilizing-strategies-in-incumbent-firms", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "f02ebcdfe1ae57669bc671d4ee37f969e2e6ddf6", "s2fieldsofstudy": [ "Business" ], "extfieldsofstudy": [ "Business" ] }
256249825
pes2o/s2orc
v3-fos-license
Optimal feedback control for fractional evolution equations with nonlinear perturbation of the time-fractional derivative term
We study the optimal feedback control for fractional evolution equations with a nonlinear perturbation of the time-fractional derivative term involving Caputo fractional derivatives with arbitrary kernels. Firstly, we derive a mild solution in terms of the semigroup operator generated by resolvents and a kernel from the general Caputo fractional operators and establish the existence and uniqueness of mild solutions for the feedback control systems. Then, the existence of feasible pairs is obtained by applying Filippov's theorem. In addition, the existence of optimal control pairs for the Lagrange problem is investigated.
Introduction
Control theory has received considerable attention due to its extensive applications in various areas of science, e.g., ecology, economics, and engineering, particularly in systems with controllability, feedback control, and optimal control [1-5]. Control systems are most often based on the principle of feedback, whereby the signal to be controlled is compared to a desired reference signal and the discrepancy is used to compute a corrective control action. It is notable that the study of fractional control systems has attracted research attention recently [6-12]. In [7], Wang et al. considered the optimal feedback control of a nonlinear system, given by fractional evolution equations, of the form
\[ {}^{C}D^{\alpha} u(t) = A u(t) + f\bigl(t, u(t), v(t)\bigr), \quad 0 < t \le T, \qquad u(0) = u_0, \]
where ${}^{C}D^{\alpha}$ is the Caputo fractional derivative of order α ∈ (0, 1), u_0 ∈ E, and A : D(A) → E is the infinitesimal generator of a compact analytic semigroup of uniformly bounded linear operators {T(t)}_{t≥0} in a reflexive Banach space E. The control function v(·) takes values in the Polish space V, and f : [0, T] × E × V → E is a given function satisfying suitable assumptions. Motivated by the previous work, we are concerned with the optimal feedback control of the semilinear fractional evolution equations with a nonlinear perturbation of the time-fractional derivative term as follows:
\[ \begin{cases} {}^{C}D^{\alpha;\omega}_{0}\bigl(u(t) - g(t, u(t))\bigr) = A u(t) + f\bigl(t, u(t), v(t)\bigr), & 0 < t \le T, \\ u(0) = u_0, \end{cases} \tag{1} \]
where ${}^{C}D^{\alpha;\omega}_{0}$ is the Caputo fractional derivative with arbitrary kernel ω of order α ∈ (0, 1), A : D(A) ⊆ E → E is the infinitesimal generator of a compact analytic semigroup of uniformly bounded linear operators {T(t), t ≥ 0} in a reflexive Banach space E, and u_0 ∈ E. The control v takes values in a control set V[0, T], and f : [0, T] × E × V → E and g : [0, T] × E → E will be specified in what follows. It should be noted that the nonlinear perturbation term g in (1) leads to a more complicated derivation of a mild solution, which requires certain assumptions on the semigroup and the operator A. Furthermore, when the evolution operator A is defined to be the zero operator on the Banach space E = R, our problem (1) can be rewritten as a hybrid fractional differential equation; in this class of equations, the fractional derivative acts on a hybrid nonlinear expression of the unknown function. Moreover, this problem reduces to that considered in [7] when the function g is taken to be zero. The aim of this paper is to derive a representation of the solution for the problem (1) that depends on fractional derivatives with arbitrary kernels.
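To make the two special cases mentioned above concrete, here is a minimal sketch in LaTeX (our own illustration of the reductions described in the text, not a display copied from the source; the minus-sign convention follows the perturbed term in (1)):
% Case A = 0 and E = R: problem (1) becomes a hybrid fractional differential equation.
\[ {}^{C}D^{\alpha;\omega}_{0}\bigl(u(t) - g(t, u(t))\bigr) = f\bigl(t, u(t), v(t)\bigr), \qquad 0 < t \le T, \quad u(0) = u_0. \]
% Case g = 0 with the classical kernel omega(t) = t: problem (1) collapses to the
% Caputo system studied in [7].
\[ {}^{C}D^{\alpha} u(t) = A u(t) + f\bigl(t, u(t), v(t)\bigr), \qquad 0 < t \le T, \quad u(0) = u_0. \]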
Furthermore, Krasnoselskii's fixed point theorem is used to investigate the existence results for the nonlinear system (1) under the compactness assumption on the operator semigroup {T(t)}_{t≥0}. We further investigate the existence of optimal feedback controls for the Lagrange problem. Moreover, the results obtained in this work can be applied for further investigation in many practical problems. The paper is structured as follows. First, we outline some definitions and lemmas that will be needed later in Sect. 2. In Sect. 3, we provide a mild solution to the nonlinear system (1) employing the semigroup operator with a function ω that prescribes the generalized Caputo derivative. Next, Krasnoselskii's fixed point theorem is applied to prove the existence and uniqueness results of mild solutions for the problem (1) in Sect. 4. In Sect. 5, the existence of feasible pairs for the system (1) is demonstrated. Finally, we investigate the existence result of the optimal control pairs of the system (1).
Preliminaries
Throughout this paper, E is a reflexive Banach space and $\|f\|_{L^p}$ is used to denote the norm of f in $L^p$. Consider C([0, T], E) as the Banach space of continuous functions from [0, T] to E with the usual supremum norm. We denote by V a Polish space, that is, a separable completely metrizable topological space. Suppose H and F are two metric spaces. If a map is pseudocontinuous at each point t ∈ H, then it is called pseudocontinuous on H. The ω-fractional integral of order α > 0 is given by
\[ I^{\alpha;\omega}_{a} u(t) = \frac{1}{\Gamma(\alpha)} \int_a^t \omega'(s)\bigl(\omega(t)-\omega(s)\bigr)^{\alpha-1} u(s)\, ds, \]
where Γ is the gamma function.
Definition 2.8 (ω-Caputo fractional derivative, [15, 16]) The ω-Caputo fractional derivative of a function u of order α ∈ (0, 1) is defined by
\[ {}^{C}D^{\alpha;\omega}_{a} u(t) = \frac{1}{\Gamma(1-\alpha)} \int_a^t \bigl(\omega(t)-\omega(s)\bigr)^{-\alpha}\, u'(s)\, ds. \]
Furthermore, we also have ..., where $E_{\alpha}(z) = \sum_{k=0}^{\infty} z^{k}/\Gamma(\alpha k + 1)$ is the Mittag-Leffler function.
Definition 2.11 ([15]) Let u, ω : [a, ∞) → R and let ω(t) be a nonnegative increasing function. Then the Laplace transform of u with respect to ω is given by
\[ \mathcal{L}_{\omega}\{u\}(s) = \int_a^{\infty} e^{-s(\omega(t)-\omega(a))}\, \omega'(t)\, u(t)\, dt \]
for all s such that this integral converges.
Theorem 2.17 (Krasnoselskii's fixed point theorem) Let B be a nonempty convex, closed, and bounded subset of a Banach space E. Assume that F_1 and F_2 are operators from B to E such that (i) F_1 u + F_2 w ∈ B whenever u, w ∈ B; (ii) F_1 is a contraction; and (iii) F_2 is continuous and compact. Then there exists z ∈ B such that z = F_1 z + F_2 z.
Now, we outline some facts about semigroups of linear operators, which can be found in [21, 22]. The infinitesimal generator A of a strongly continuous semigroup (i.e., C_0-semigroup) {T(t)}_{t≥0} is given by
\[ A u = \lim_{t \to 0^{+}} \frac{T(t)u - u}{t}. \]
We denote the domain of A by D(A), that is, the set of all u ∈ E for which this limit exists.
Lemma 2.18 ([21, 22]) Let {T(t)}_{t≥0} be a C_0-semigroup and let A be its infinitesimal generator. Then, for every u ∈ D(A), T(t)u ∈ D(A) and $\frac{d}{dt} T(t)u = A\,T(t)u = T(t)\,A u$.
Throughout this work, we assume that the analytic semigroup {T(t)}_{t≥0} has the following properties: (i) there is a constant M ≥ 1 satisfying $\|T(t)\| \le M$ for all t ≥ 0; (ii) for any 0 < η ≤ 1, there exists a positive constant C_η such that $\|A^{\eta} T(t)\| \le C_{\eta}\, t^{-\eta}$ for t > 0.
Representation formula of mild solutions based on semigroup theory
Lemma 3.1 Any solution of the problem (1) satisfies the following integral equation: ...
Proof Applying Definition 2.8 and Lemma 2.9 to the problem (1), it can be rewritten in the form of the integral representation
\[ u(t) - g(t, u(t)) = u_0 - g(0, u_0) + \frac{1}{\Gamma(\alpha)} \int_0^t \omega'(s)\bigl(\omega(t)-\omega(s)\bigr)^{\alpha-1} \bigl[A u(s) + f\bigl(s, u(s), v(s)\bigr)\bigr]\, ds. \tag{7} \]
Taking the generalized Laplace transform on both sides of equation (7), we have, for s > 0, ... It follows that ... Now, we consider the change of variable ... It follows that ... The following one-sided stable probability density in [23] is considered: ... Using (8), we obtain ... Then, we have ... Now, we take the inverse Laplace transform to obtain the stated representation, where
\[ \phi_{\alpha}(\theta) = \frac{1}{\alpha}\,\theta^{-1-\frac{1}{\alpha}}\,\rho_{\alpha}\bigl(\theta^{-\frac{1}{\alpha}}\bigr) \]
is the probability density function defined on (0, ∞).
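The displayed transform steps in the proof above were lost in extraction. As a hedged aid (a sketch assuming a = 0, ω(0) = 0 and the generalized Laplace transform of Definition 2.11; these identities mirror the classical Caputo case and are standard in the ω-calculus literature rather than copied from the source), the derivation rests on facts of the following kind:
% Generalized Laplace transform of the omega-Caputo derivative, 0 < alpha < 1:
\[ \mathcal{L}_{\omega}\bigl\{{}^{C}D^{\alpha;\omega}_{0} u\bigr\}(s) = s^{\alpha}\,\mathcal{L}_{\omega}\{u\}(s) - s^{\alpha-1}\,u(0). \]
% Moment identity of the density phi_alpha (standard; valid for v > -1), which yields
% the uniform bounds usually collected in a lemma such as Lemma 3.3 for the operators
% Q_{alpha;omega}, R_{alpha;omega} introduced in Definition 3.2 below:
\[ \int_0^{\infty} \theta^{v}\,\phi_{\alpha}(\theta)\, d\theta = \frac{\Gamma(1+v)}{\Gamma(1+\alpha v)}, \qquad \|Q_{\alpha;\omega}(t,\tau)x\| \le M\,\|x\|, \qquad \|R_{\alpha;\omega}(t,\tau)x\| \le \frac{M}{\Gamma(\alpha)}\,\|x\|. \]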
Definition 3.2 A function u ∈ C([0, T], E) is called a mild solution of the problem (1) if it satisfies the following integral equation:
\[ u(t) = Q_{\alpha;\omega}(t, 0)\bigl(u_0 - g(0, u_0)\bigr) + g\bigl(t, u(t)\bigr) + \int_0^t \omega'(\tau)\bigl(\omega(t)-\omega(\tau)\bigr)^{\alpha-1} R_{\alpha;\omega}(t, \tau)\, f\bigl(\tau, u(\tau), v(\tau)\bigr)\, d\tau, \]
where the operators Q_{α;ω}(t, τ) and R_{α;ω}(t, τ) are defined by
\[ Q_{\alpha;\omega}(t, \tau) = \int_0^{\infty} \phi_{\alpha}(\theta)\, T\bigl((\omega(t)-\omega(\tau))^{\alpha}\theta\bigr)\, d\theta, \qquad R_{\alpha;\omega}(t, \tau) = \alpha \int_0^{\infty} \theta\, \phi_{\alpha}(\theta)\, T\bigl((\omega(t)-\omega(\tau))^{\alpha}\theta\bigr)\, d\theta. \]
Existence and uniqueness of a mild solution
In order to demonstrate the main results, we outline the following assumptions: ... The function f is locally Lipschitz continuous with respect to V, i.e., for all t ∈ [0, T] and u_1, u_2, ... The following existence of mild solutions for the problem (1) will be proved by using Krasnoselskii's fixed point theorem.
Step 1: We assume that for each k > 0, there exist u_k, w_k ∈ B_k such that ... According to (A_3) and Lemma 3.3(i), it follows that ... Multiplying both sides by 1/k and taking the limit inferior as k → ∞, we get ..., which is a contradiction.
Step 2: F_1 is a contraction on B_k. For arbitrary u, w ∈ B_k, we have ... According to (11) of Theorem 4.1, we obtain that F_1 is a contraction.
Step 3: F_2 is a completely continuous operator. Firstly, we claim that F_2 is continuous on B_k. Let {u_n} ⊂ B_k be such that u_n → u ∈ B_k as n → ∞. For t ∈ [0, T], by Assumptions (A_2) and (A_3), we have ... Using the Lebesgue dominated convergence theorem, for any t ∈ [0, T], we obtain ... as n → ∞. This implies that $\|(F_2 u_n)(t) - (F_2 u)(t)\|_C \to 0$ as n → ∞. Hence F_2 is continuous. Next, we prove the equicontinuity of F_2(B_k). For any u ∈ B_k and 0 ≤ t_1 < t_2 ≤ T, we have ... =: I_1 + I_2 + I_3 + I_4. By Lemma 3.3, we obtain that ... and hence I_1 → 0 and I_2 → 0 as t_2 → t_1. For t_1 = 0 and 0 < t_2 ≤ T, it is easy to see that I_4 = 0. Thus, for any ε ∈ (0, t_1), we have ... Therefore I_3 → 0 as t_2 → t_1 and ε → 0, by Lemma 3.3, (iii) and (iv). It follows that ... Fix t ∈ (0, T]. Then, for every ε > 0 and δ > 0, we define an operator F^{ε,δ}_2 on B_k by ... By the compactness of T(ε^α δ) for ε^α δ > 0, it follows that the set $N_{\varepsilon,\delta}(t) = \{(F^{\varepsilon,\delta}_2 u)(t) : u \in B_k\}$ is relatively compact in E for all ε > 0 and δ > 0. Furthermore, for any u ∈ B_k, we have ...
Proof For u ∈ B_k, we define the operator G on B_k by
\[ (Gu)(t) = Q_{\alpha;\omega}(t, 0)\bigl(u_0 - g(0, u_0)\bigr) + g\bigl(t, u(t)\bigr) + \int_0^t \omega'(\tau)\bigl(\omega(t)-\omega(\tau)\bigr)^{\alpha-1} R_{\alpha;\omega}(t, \tau)\, f\bigl(\tau, u(\tau), v(\tau)\bigr)\, d\tau. \]
Notice that it is enough to show the uniqueness of a fixed point of G on B_k. According to (10), we know that G is an operator from B_k into itself. For any u, u* ∈ B_k and t ∈ [0, T], according to (A_3)-(A_5), we have ... This implies that G is a contraction map satisfying (12). Hence the uniqueness of a fixed point of the map G on B_k follows from the Banach contraction principle.
Existence of feasible pairs
In this section, we present the existence of feasible pairs for system (1). To establish our results, we introduce the following hypotheses: ... We can verify that, for any t ∈ [0, T] and 1/p < α < 1, E^j_n(t) is bounded. By Lemma 3.3, it is not difficult to verify that E^j_n(t) is compact in E and also equicontinuous. By the Ascoli-Arzela theorem, {E^j_n(t)} is relatively compact in C([0, T], E). Obviously, Q_j is a continuous linear operator. Therefore, Q_j is a compact operator for j = 1, 2. We need to investigate the following result in order to solve our optimal feedback control problem.
Existence of optimal feedback control pairs
In this section, we consider the Lagrange problem (P) for the optimal feedback control as follows: find a pair (u^0, v^0) ∈ H[0, T] such that the cost functional attains its minimum over all feasible pairs.
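The displayed cost functional of problem (P) and the definition of the set Σ(t, u) were lost in extraction. A minimal sketch of the standard Lagrange-type formulation follows (the running cost L and the form of Σ below are assumptions modeled on comparable optimal feedback control papers, not taken from the source):
% Assumed Lagrange cost functional over feasible pairs (u, v) in H[0, T]:
\[ J(u, v) = \int_0^T L\bigl(t, u(t), v(t)\bigr)\, dt, \qquad J\bigl(u^0, v^0\bigr) \le J(u, v) \quad \text{for all } (u, v) \in H[0, T]. \]
% Assumed orientor field entering the Cesari condition (C) stated next:
\[ \Sigma(t, u) = \bigl\{ (z^0, z) \in \mathbb{R} \times E : z^0 \ge L(t, u, v),\; z = f(t, u, v),\; v \in V \bigr\}. \]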
For any (t, u) ∈ [0, T] × E, we denote the set Σ(t, u) as above. To investigate the existence of optimal control pairs for problem (P), we assume that (C) the map Σ(t, ·) : E → 2^{R×E} has the Cesari property for a.e. t ∈ [0, T], that is,
\[ \bigcap_{\delta > 0} \overline{\mathrm{co}}\;\Sigma\bigl(t, O_{\delta}(u)\bigr) = \Sigma(t, u) \quad \text{for every } u \in E, \]
where O_δ(u) denotes the δ-neighborhood of u in E. In the proof of the main theorem, the limiting trajectory satisfies
\[ u(t) = Q_{\alpha;\omega}(t, 0)\bigl(u_0 - g(0, u_0)\bigr) + g\bigl(t, u(t)\bigr) + \cdots \]
For any δ > 0 and sufficiently large l, we have $\bigl(\varphi^0_l(t), \varphi_l(t)\bigr) \in \Sigma\bigl(t, O_{\delta}(u(t))\bigr)$.
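A hedged note on how condition (C) is typically used at this point (a sketch of the standard Mazur-lemma argument; the source's exact closing steps were lost in extraction): convex combinations of the sequence (φ^0_l, φ_l) converge strongly, and since each term lies in Σ(t, O_δ(u(t))), the limit (φ^0, φ) satisfies
\[ \bigl(\varphi^0(t), \varphi(t)\bigr) \in \bigcap_{\delta > 0} \overline{\mathrm{co}}\;\Sigma\bigl(t, O_{\delta}(u(t))\bigr) = \Sigma\bigl(t, u(t)\bigr) \qquad \text{for a.e. } t \in [0, T], \]
% so that a control v^0(t) realizing the pair can be selected (e.g. via Filippov's
% selection theorem), which gives the optimal pair (u^0, v^0) for problem (P).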
2023-01-26T14:51:00.453Z
2022-04-08T00:00:00.000
{ "year": 2022, "sha1": "5581001af2324c50fb7b3e1f2049e09abf9f5ef3", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1186/s13661-022-01604-2", "oa_status": "GOLD", "pdf_src": "SpringerNature", "pdf_hash": "5581001af2324c50fb7b3e1f2049e09abf9f5ef3", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [] }
149714034
pes2o/s2orc
v3-fos-license
Supervision in Psychotherapy from the Perspective of Transactional Analysis
This article deals with supervision in clinical psychology, which is distinct from pedagogical practice in psychology. The objective is to expand the reflection about supervision, the role of the supervisor and the training of psychotherapists from the perspective of supervision in the methodology of transactional analysis. Supervision is a process of professional development that must qualify the skills of the trainee, develop those that are lacking and expand their potential to achieve professional success, since the construction of psychotherapeutic knowledge is not limited to theoretical content and must include training in practical skills, professional posture and ethics.
Introduction
Newly qualified as a Teacher of the National Union of Transactional Analysts, supervising newly graduated students and student psychotherapists of transactional analysis, besides holding the position of guest professor at the Graduate School of Medical Sciences of Santa Casa de Misericórdia in São Paulo, for some time I have observed and reflected on the roles of supervisor and supervisee. Having occupied the position of Director of Teaching and Certification of UNAT-BRASIL, this interest led me to look for ways to improve the training of supervisors. Supervision differs from the clinical practice and the pedagogical practice of psychology. Clinical practice focuses on diagnosis, intervention and cure. Pedagogical practice aims at the acquisition of scientific and theoretical knowledge of psychology. Supervision is a process of maturation and professional development. Sakamoto (2006) considers that the core of supervision 'is to meet the demands of the theoretical-technical integrations of clinical practice involved in attendance, the demands of the specific clinical practice of the approach adopted, and also helping the student in a professional training process with the acquisition of a professional identity'. Supervision, therefore, can occur in two contexts: 1) supervision in the clinical school, in psychology courses for students in their last years, and 2) supervision in university extension courses for trained professionals who want to acquire new techniques of psychotherapeutic approach or receive support to improve their professional performance. The supervision exercised in the university context has as variables: the curricular grid, which limits the number of hours for the supervision process; the fact that it is the student's first experience of client care; the obligation to get credits, even for those students who are not interested in the clinical area; and the designation of the supervisor based on the teachers available. This teacher does not always have the profile and the interest necessary for this task, and sometimes, in the supervisory space, replicates the methodology of the classroom.
The student/psychotherapist has two main motivations: 1) the professional task of the clinical care of their patient, and 2) the integration of theory and technique that forms the basis of professional identity. Supervision can also be exercised in the context of university extension courses: courses in psychoanalysis, cognitive-behavioural therapy, transactional analysis, psychodrama and others. The variables that interfere here are the volunteer's choice to be a trainee, their experience as a trained psychologist, the desire to expand their knowledge, expand the scope of techniques, learn new theories and find support for difficulties in clinical practice. In this case, supervision is part of a tripod that characterizes the training model of these courses: theoretical knowledge, psychotherapy and supervision. The supervisor, in this case, chooses supervision as a method of work, has invested in their training, and is dedicated to developing and improving their competence in this area. This fact distinguishes them from the university professor assigned to the role of supervisor. Zaslavsky et al (2003) state that, in this context, 'supervision is a process of qualification of the candidate. In this sense, the supervisor's attitude should stimulate, in the supervisee, the development of his own abilities. One of the main functions of supervision is to develop in the supervisee the ability to perceive their own difficulties. This would be the way to achieve independence, following the learning process through self-criticism.' (p.3) In this context, supervision can be considered to concern a trainee who is in the process of training to acquire a new skill while creating an identity in the role of psychotherapist. Therefore, I will use the word 'trainee' as synonymous with 'supervisee'. I believe that the experience in the certification of transactional analysts of the National Union of Transactional Analysts (UNAT-BRASIL) can contribute to the reflection on the role of the supervisor in the process of creating the professional identity of the psychotherapist.
Psychotherapy Training - Eric Berne's Experience
In the early 1960s in the USA, psychiatrist Eric Berne was responsible for training resident physicians at McAuley Hospital in San Francisco, California, proposing a training method that included client care, presentations and theoretical discussion in seminars, and staff conferences. Originally, the psychotherapy group sessions conducted by Berne were observed from a mirrored room by resident physicians, until one day a schizophrenic patient in crisis threw a chair, breaking the mirror. Faced with this situation, Eric Berne invited the residents to participate as observers in the same room as the group. At the end of the psychotherapeutic work, he asked patients to switch places with the medical observers and proposed that the residents, now in the centre of the group being watched by clients, would talk about what they had observed. The practice proved to be efficient: the resident doctors referred to clients with more objectivity and respect, and the clients were interested in the discussion. Berne decided to include this proposal as a teaching method in the hospital, conducting all team conferences in the presence of clients. He later included other staff members in the discussion, including nurses and social workers.
Berne considered that the therapist-client relationship should happen on an OK/OK basis, which, in TA terms, means that each has value and qualities independent of their roles; both are healthy and deserving of respect. If, on the one hand, the doctor/psychotherapist has privileged access to technical information, on the other, the client has privileged access to his/her history and the construction of his/her psychological process. This type of training method for residents and psychotherapists was not exactly a model of supervision, but it was included as a working philosophy in the methodology of the training processes for transactional analysts of both the ITAA and UNAT-BRASIL. Creating the OK/OK space between client and psychotherapist, and between supervisor and trainee, means generating a dialogical space of mutual respect and interest with a balance of power between the parties. The best way to build an OK/OK process of competence acquisition is through questions, as quoted by Andersen (1991): 'We consider that our contribution consists basically of questions, in particular those which our interlocutors generally do not ask themselves, and which give rise to many answers which, in turn, can generate new questions.' (p.59) The same author comments that a reflexive posture includes: the review of spontaneous and automatic responses usually centred on certainty (judgment); personal investigation generating intrasubjective movement (thoughts, feelings); the construction of a collaborative context; and the transformation of the conversation into an external dialogue of internal dialogues in order to generate what he calls a 'dialogue of dialogues'. In the context of supervision, this posture requires the supervisor to listen to the trainee, to question the impact the supervision has, and to make room for feedback on the trainee's interventions. On the other hand, the trainee, when answering the supervisor's questions, may reflect on their certainties and uncertainties, find out the impact of their actions and what feelings are mobilized as a result. When the dialogue between supervisor and trainee happens in this way, both are enriched by the experience. The trainees appropriate their own knowledge and questions, while the supervisor, instead of presenting themselves as all-knowing, places trainees in the position of asking the questions, thus helping the trainees find their own answers. This attitude creates an environment conducive to the trainee listening to the client, both within the case and through the client's feedback on the psychotherapeutic procedure; listening to themselves in the role of psychotherapist; and listening to feedback from the supervisor or peers. It is necessary to consider that both supervisor and trainee have backgrounds: their life experiences, maturity, needs and knowledge. The questioning of the premises behind statements, and the deepening awareness of the motives that lead supervisor and trainee to choose certain positions, allow the revelation of the background that surrounds them. And both trainee and supervisor are affected by their backgrounds, as well as by the personal and emotional issues underlying performance. The backgrounds of the supervisor and the trainee must be heard, respected and, at the same time, loosened through dialogue, leaving the dangerous territory of a presumed knowledge that limits access to the acquisition of new learning or questioning.
The OK/OK posture, therefore, presupposes balanced participation and responsibility of the parties. It is important that the contract between them is clear, establishing the goals to be achieved, the method for attendance and supervision, and what is expected of the performance of supervisor and trainee, including the motivation, expectations and fantasies of both. When these premises are established from the beginning of the supervisor/trainee relationship, problems, difficulties and transference processes can be discussed and solved. When creating the dialogical space and the balance of forces between supervisor and trainee, a plan of action that addresses the development needs of the trainee is then needed.
Development Needs
The opening to acquiring new skills occurs differently for each trainee. One must consider the motivation, the theoretical knowledge, the maturity and the stage of development of each. At each stage of learning, the trainee experiences different needs. This is similar to the stages of early childhood development, where specific skills are developed as the child deals with the learning opportunities that life naturally provides. Levin (1982) cites six stages of development that apply from child development to the acquisition of knowledge and new skills:
Phase 1: Being - The basis of our existence - from birth to six months - development needs are about being, existing and having one's basic needs met.
Phase 2: Doing - The world of sensations and action - between six and eighteen months - development needs are about trusting others, learning that it is safe and wonderful to explore the world, believing in your intuition, being creative and active, and getting support for these activities.
Phase 3: Thinking - The domain of concepts - between eighteen months and three years - development needs are about thinking for oneself, solving problems, expressing and managing feelings, especially anger, and initiating the process of individualization.
Phase 4: Identity - The continuous evolution of the self - between three and six years of age - development needs are about affirming one's own identity, acquiring information about the body, sex, roles and the world, socializing, learning to deal with the consequences of one's actions and separating fantasy from reality.
Phase 5: Becoming Skilled - The 'hows' and 'whys' of life - between six and 12 years - development needs refer to the act of learning new skills (without having to be perfect), learning from mistakes, being appropriate, testing your skills and comparing yourself to others, and testing ideas and values between different families.
Phase 6: Integration - Creation and reproduction - from 12 to 18 years - development needs are about achieving a clear separation from the family, developing independence, and integrating sexuality with one's identity.
Of course, at each stage of development the human being faces situations that invite them to develop each skill. The first opportunity to acquire these skills occurs in childhood, but, as in a spiral, each of these phases can be recycled in adulthood. The opportunity for recycling is naturally offered by life and its challenges. The learning context is one such opportunity. When the necessary conditions, encouragement and recognition are offered, the person quickly uses their skills to explore the new experience and acquire a suitable repertoire for their development.
Supervision in a Developmental Context

The supervisor, when considering these elements in the supervision process, is able to provide unique attention to each trainee, qualifying their development needs and offering them the stimulus necessary for their evolution. This can also be done within training groups.

An important element to consider is the quality of the feedback from the supervisor to the trainee. Often, by focusing on the result, the supervisor points out faults and points to be corrected in the trainee's performance, promoting negative feedback. If this happens too often, the level of anxiety and resistance during the supervision process can become high. Napper & Newton (2000) applied the concept of Levin's development phases to student training. This model allows the supervisor to create a repertoire of stimuli and positive feedback that fosters the motivation of the trainee and enables the balance between motivation and demand.

Phase 1: Being - The basis of our existence

In adult life, the trainee recycles this phase at the beginning of any new activity, whenever accepting a new challenge, as, for example, when beginning to work as a psychotherapist. According to Sakamoto (2006), the "apprentice" psychotherapist experiences "expectations about professional competence, fantasies, desires about impotent or omnipotent behaviours, anxiety about the new and unknown professional situation" (p.2). Considering these needs can be very helpful to the supervisor, who can stimulate the trainee with "You're doing well", "You can ask me at any time", "You can use imagination, fantasies can help learning", "Go at your pace, you have time, you do not have to hurry."

The supervisor should provide a predictable structure, focused on theoretical knowledge, and identify the strengths and weaknesses of the trainee, because the way in which the trainee applies these in practice creates a baseline from which future knowledge will be built. At this point, the supervisor is seen as a model and indicates the 'how-to' through case discussion, role-playing and pieces of therapy, providing a repertoire to be "copied" by the novice psychotherapist.

The supervisor should also model an OK/OK relationship with the trainee. When, from the beginning, a safe dialogue space is created for the trainee to express their fears, insecurities and fantasies, in a climate of unconditional acceptance in which any question is welcome, a bond of trust is generated between supervisor and trainee. In a group, this posture creates the space to talk about personal experiences and exchange feedback with respect and security. When the trainee feels safe, they go naturally to Phase 2 and begin to explore the new world that opens up to them.

Phase 2: Doing - The world of sensations and action

The trainee begins to explore the new information, trying to put it into practice. Like a child who begins to crawl and broaden their experiences, the trainee wants to experience everything a little. The very act of exploring various situations is already gratifying, and there is still no sufficiently clear relationship between theory and practice. Various flavours are experimented with, looking a little at everything, getting to know and creating an image of this new world.
The needs of the trainee can be met by the supervisor with an attitude that involves statements such as "I like the way you ask questions", "You have creative and excellent ideas", "You make good correlations", "Let's build on what you have observed", "I encourage you to think about your ideas and experiences", "I will help you relate your experiences to theoretical references." This is the time to encourage the trainee to act and to test their skills and knowledge. As the trainee gains some mastery of theory, the focus shifts to developing a sense of confidence, helping them feel comfortable and secure in the role of psychotherapist, recognizing and appropriating what they already know. This can be achieved by encouraging the application of skills, with help and positive reinforcement for what is being done. With the encouragement of the supervisor, the trainee can explore different types of techniques, procedures and attitudes, seek to relate theoretical knowledge to practice, locate and fill possible gaps in their studies, learn to describe the behaviours of clients and relate them to diagnostic hypotheses, and observe the results of their interventions. After much exploration, the trainee naturally begins to draw conclusions and to trust their own perception, moving to Phase 3.

Phase 3: Thinking - The domain of concepts

In adulthood, the trainee begins to master theoretical concepts, knows how to put them into practice, and needs to find their own method of doing things. The needs of the trainee can be met by the supervisor with an attitude that involves statements such as "You work well with details", "You can think of a way to solve this problem", "What do you think of this?", "You have an excellent ability to think", "How do you feel about these thoughts?" This is also a good time to begin analysing transference processes, by asking the trainee about the feelings that affect them in their relationship with the client, or with the supervisor or peers in the supervision group. It is important for the trainee to learn to think about what they feel and also to perceive the feelings that arise from these thoughts.

Evaluation and feedback on the results achieved, and information on skills not yet acquired, will be the basis for acquiring new knowledge. It helps to encourage the trainee to exercise their 'inner gaze', describing their contact with their own emotions, memories, beliefs and fantasies while observing the client's behaviour and phenomenological experiences, and placing them in a theoretical context: "Now I am aware that you ... (external observation)", "Now I am aware that I ... (internal observation: feelings, fantasies, physical reactions)". Questions about what the trainee observes in the client, in themselves, in the supervisor and in the dynamics between them may be important. It is common at this stage for the trainee to test their own ideas, seeming to oppose the supervisor's suggestions. This requires patience and understanding from the supervisor, because the more space there is for different points of view, the faster the trainee can feel recognized in their own way of thinking, moving on to the next step.

Phase 4: Identity - The continual evolution of the self

Here the trainee already possesses some mastery of theory, knows how to apply it, and can observe themselves in the therapeutic process. At this stage, the goal is to build an identity as a psychotherapist, refine methodology and learn to do therapeutic planning.
The supervisor can stimulate the trainee by saying, "You can find out what happens as a result of your actions", "What would be your way of dealing with this situation?", "I like the way you risked doing this", "You're figuring out how to handle this information very well." It is time to invite the trainee to dare to do it their way.

It may be useful to work in a group at this time, when the variety of possible responses to a given situation can be observed: discussing and evaluating the therapeutic process from new perspectives, observing the interventions performed and comparing successes and points of resistance, and giving and receiving feedback, valuing both positive and corrective aspects. As theoretical knowledge increases, one invests in planning the treatment and next steps, exploring other intervention options. The trainee should be encouraged to look at and learn from their mistakes. Some questions that might help: "What would you do differently if you could repeat this therapy session?", "What will you do next time you work with this client?" This is the time to work with countertransference, defined here as all the psychotherapist's reactions to the client that result from unresolved conflicts of the therapist. These may include beliefs, reinforcing memories, expectations and anticipations. The supervisor may suggest that the trainee go through a process of psychotherapy to work on the personal issues that are interfering with their objectivity as a psychotherapist.

One can also compare different approaches, authors or techniques for dealing with a situation, discussing the pros and cons of each and valuing the background of the trainee. At this moment, valuing the individual response and highlighting the skills already acquired may be fundamental. It is a good time to encourage the trainee to share and compare their experience with other trainees, which gradually strengthens their confidence, leading them to the next phase.

Phase 5: Becoming Skilled - The "hows" and "whys" of life

In adulthood, the trainee needs to test themselves and others, and to find out where they can go and where their limits are. These needs can be met by the supervisor with an attitude that involves statements such as "Trial and error is the best way to learn", "This test is just for you to get an idea of how you are going, not to define your skills or your ability", "What can you do to improve your performance?"

It is a good time to discuss values and ethics in the relationship with the client, with peers, with multidisciplinary teams and with society. Skills and theoretical knowledge are already well established, and the supervisor can deepen the questioning about the reasons that led to the choice of a particular intervention and what it is expected to achieve. Evaluation of results and planning of future actions are the main topics to be addressed at this time.

Theoretical discussions are important so the trainee can explore new possibilities for action. Creating a dialogical space for the trainee to test opinions different from those of the supervisor, and to be respected in their own uniqueness, is fundamental. Critical analysis and new theoretical discoveries can do much to awaken the trainee's taste for scientific writing.
Phase 6: Integration - Creation and reproduction

The trainee at this stage needs to create independence and broaden their view of the different aspects of therapy. The supervisor can encourage them by saying, "I believe you can describe well what you do", "Tell me what and how you are doing", "You are summarizing information/ideas brilliantly!"

You can ask the trainee to summarize the supervision: "What was the problem presented?", "What did you learn from the client work and supervision?", "What did you learn about yourself?", "What can you do differently next time?" Trainees should be given an opportunity to explore, experiment, and gradually develop a unique therapeutic style of their own. Working in groups, you can establish something to be observed; each one speaks to an aspect that has not yet been mentioned; and in the end the trainee reports what they learned from the different observations. It is also possible to vary the topics to be discussed and, as far as possible, to begin to relate different approaches through a case study: a) the trainee describes the problem based on behavioural observation; b) analyses this observation through a theoretical approach; c) raises a hypothesis about the client from this theory; d) develops one or more interventions that are consistent with the theory and the hypothesis raised; e) if the trainee does well thus far, they may be challenged to examine the behaviour from other perspectives, and to propose possible theoretical hypotheses and interventions.

The supervisor will then stimulate the discussion of the various possibilities for intervention. Advanced trainees may be asked to supervise other trainees, develop scientific research, or come up with new ideas.

At this point, it is necessary to prepare the trainee for disconnection from both the supervisor and the protected environment of the supervision group. The moment of farewell comes when the trainees leave excited about what they have learned, able to think critically about their work and able to relate to colleagues with interest and respect.

Conclusion

There are several ways for the supervisor to stimulate the motivation of the trainee in the supervision process. In addition to aspects of theoretical knowledge and technical and ethical skills, the supervisor should also be aware of the future psychotherapist's capacity for self-development, stimulating them according to their level of development and valuing the trainee's background and the skills already acquired.

The quality of the supervisor-trainee relationship is critical to successful supervision and to success in the practice of the psychotherapist. The art of asking questions and giving feedback can be a precious tool in the supervisory process.

The supervisor should constantly invest in their own improvement, as much as they invest in the improvement of their trainees.

Maria Regina Ferreira Da Silva, Psychologist and Teaching Member UNAT-Brasil, can be contacted on mresil@terra.com.br
Sirt1 Promotes the Restoration of Hepatic Progenitor Cell (HPC)-Mediated Liver Fatty Injury in NAFLD Through Activating the Wnt/β-Catenin Signal Pathway

Non-alcoholic fatty liver disease (NAFLD) has developed into the world's largest chronic epidemic. In NAFLD, hepatic steatosis causes hepatocyte dysfunction and even apoptosis. The liver has a strong capacity for restoration or regeneration after injury; however, it is unclear through which pattern fatty liver injury in NAFLD is repaired and what the repair mechanism is. Here, we found that in the high-fat diet (HFD)-induced NAFLD mouse model, fatty liver injury caused a significant ductular reaction (DR), which is a marker of liver injury repair. The SOX9+HNF4α+ biphenotype also suggested that hepatic progenitor cells (HPCs) were activated by fatty liver injury in the HFD-elicited NAFLD mouse model. Concurrently, fatty liver injury also activated the Wnt/β-catenin signal pathway, which is a necessary process for HPC differentiation into mature hepatocytes. However, Sirt1 knockdown weakened HPC activation and Wnt/β-catenin signaling in Sirt1+/− mice with HFD feeding. In the rat-derived WB-F344 hepatic stem cell line, Sirt1 overexpression (OE) or the Sirt1 activator resveratrol promoted HPC differentiation via activating the Wnt/β-catenin signal pathway. Glycogen PAS staining demonstrated that Sirt1 OE promoted WB-F344 cells to differentiate into mature hepatocytes with glycogen synthesis ability, while the Sirt1 inhibitor EX527 or the Wnt/β-catenin pathway inhibitor FH535 decreased glycogen-positive cells. Together, our data suggest that Sirt1 plays a vital role in activating HPCs to repair fatty liver injury or promote liver regeneration through the Wnt/β-catenin signal pathway in NAFLD, which might provide a new strategy for fatty liver injury or NAFLD therapy.

INTRODUCTION

The liver is a hub of material metabolism in humans and animals and an important digestive and detoxification organ. However, the liver is also the main target organ attacked by various adverse factors. Non-alcoholic fatty liver disease (NAFLD) is the general name for a series of diseases caused by liver fat accumulation. It starts from liver fat accumulation and then develops into non-alcoholic steatohepatitis (NASH), cirrhosis, and even hepatocellular carcinoma (HCC) (1). In recent years, NAFLD has developed into the world's largest chronic epidemic, affecting almost a quarter of the world's population (2,3). Liver transplantation necessitated by NAFLD is also increasing year by year worldwide. It is estimated that the number of patients with NASH and HCC will increase by 168% and 137%, respectively, by 2030 (4). Presently, the only treatment for NASH and HCC is liver transplantation, and there is an alarming increase in the number of patients requiring transplantation (5). However, the supply of donor organs is far from meeting the demand, resulting in the death of many patients (4). Also, with the increasing prevalence of obesity, diabetes and metabolic syndrome, the rising prevalence of NAFLD will seriously threaten human and animal health and even life (6). The liver is a metabolic organ with a strong ability to restore and regenerate itself after injury. Thus, it is meaningful and urgent to explore the repair of fatty liver injury and its mechanism, in order to find new ways to treat chronic liver disease. The liver is a unique organ with a powerful regenerative capacity.
When the liver undergoes partial hepatectomy, or a chemical liver injury that does not damage the remaining hepatocytes, liver regeneration can be achieved by residual hepatocyte proliferation. In this pattern, mature hepatocytes enter the cell cycle within 20-24 h and begin to restore the liver volume and function by mitosis (7). However, during persistent or severe liver injury, such as submassive necrosis, chronic viral hepatitis and non-alcoholic fatty liver disease, this efficient renewal from residual hepatocytes is inhibited (8). In this scenario, hepatic progenitor cells (HPCs) are activated and proliferate. HPCs can gradually extend from the portal vein area into the liver parenchyma and differentiate into mature hepatocytes and bile duct cells to restore the damaged liver (9). In NAFLD, hepatic fat accumulation is a chronic process of liver damage; moreover, hepatic steatosis causes oxidative stress in hepatocytes, which directly induces p21 expression or triggers the apoptosis cascade, resulting in hepatocyte dysfunction without a response to injury (10). Thus, we suppose that fatty liver injury in NAFLD can depend on the second pattern, which activates HPCs to restore liver function.

HPCs (also named oval cells in rodents) have the potential for bidirectional differentiation into hepatocytes and cholangiocytes and are important cell sources for the regeneration of mature hepatocytes and cholangiocytes (11). Studies show that some chronic pathological situations, such as NAFLD, chronic viral hepatitis and alcoholic liver disease, promote the proliferation and differentiation of HPCs around the portal vein, which is called the "ductular reaction" (DR) (12). DR is the repair reaction to hepatobiliary cell injury, and HPCs, as a key component of DR, proliferate and differentiate in activated niches to regenerate damaged livers (13). The activation of HPCs can be detected through markers such as SOX9, EpCAM, CD133 and LGR5 (14). Studies suggest that Wnt/β-catenin signaling promotes the differentiation of HPCs into mature hepatocytes (15). Moreover, Wnt/β-catenin signaling is essential for hepatocyte proliferation, embryonic development, liver development and the maintenance of liver homeostasis (16,17). On the contrary, inhibition of the Wnt/β-catenin signaling pathway impairs the differentiation of HPCs into mature hepatocytes (18).

Sirt1 is a highly conserved NAD+-dependent deacetylase involved in a variety of biological functions, including transcriptional silencing, cell proliferation and differentiation, senescence, apoptosis, glucose/lipid metabolism, stress response and insulin secretion (19). Sirt1 is also very important for the activation and self-renewal of a variety of stem cells (20). Studies show that the absence of Sirt1 causes senescence of hematopoietic stem cells, while Sirt1 overexpression promotes cell proliferation and cell cycle progression in mesenchymal stem cells, spermatogonial stem cells, skeletal muscle stem cells and neural progenitor cells (21)(22)(23). Sirt1 is reported to be a key factor in regulating the Wnt signal pathway and determining cell fate. For example, Sirt1 prevents adipogenesis of bone marrow mesenchymal stem cells through deacetylating β-catenin (24).
In porcine pancreatic stem cells, resveratrol-activated Sirt1 can reduce the acetylation level of β-catenin and inhibit its degradation, thereby inducing the transcriptional activation of downstream target genes related to proliferation, apoptosis and differentiation (25). The effect of Sirt1 on HPCs in fatty liver injury has not been reported; thus, we hypothesize here that Sirt1 plays a key role in liver repair and regeneration mediated by HPCs in the NAFLD model with liver fatty injury. Our study attempts to provide new details for revealing the activation and regulation mechanism of HPCs in chronic NAFLD.

Animal Experiments

C57BL/6 background male heterozygous knockout (Sirt1+/−) mice were provided by Professor Peng Jian, from the College of Animal Science and Technology & College of Veterinary Medicine, Huazhong Agricultural University, China. Both the background information and the genotype identification of the Sirt1 knockout mice were consistent with those previously published (24,26). Twenty-four wild-type (WT) female C57BL/6 mice weighing 20-21 g (9 weeks old) were purchased from the Experimental Animal Center of Hubei Province (Wuhan, China). All mice were housed in individual plastic cages on a 12 h light/dark cycle with free access to water and food at room temperature. Half of the twenty-four ten-week-old mice were fed a high-fat diet (HFD; 60 kcal% fat) for 20 weeks, and the other half were fed a normal diet (ND; 10 kcal% fat). To evaluate the effect of Sirt1 on liver repair and regeneration in vivo, sixteen Sirt1+/− female C57BL/6J mice weighing 18-20 g (8 weeks old) and sixteen littermate WT mice were fed HFD or ND and divided into 4 subgroups: WT mice fed ND, WT mice fed HFD, Sirt1+/− mice fed ND, and Sirt1+/− mice fed HFD. After the 20-week diet experiments, all mice were fasted for 12 h and then sacrificed, and their tissues and sera were collected and immediately frozen in liquid nitrogen for further experiments. All animal procedures were approved by the Hubei Province Committee on Laboratory Animal Care.

HE and Oil Red O Staining

Fresh mouse liver was used for HE staining after paraffin sectioning and for Oil Red O (ORO) staining after frozen sectioning. The staining results were photographed with an inverted fluorescence microscope (IX73, Olympus, Japan).

Serum Routine Analysis

Fresh mouse serum samples were prepared, and the serum levels of triglycerides (TG), total cholesterol (TC), glucose (Glu), glutamic-pyruvic transaminase (ALT), aspartate-oxaloacetic transaminase (AST) and albumin (ALB) were measured with a fully automatic biochemical analyzer (Beckman, USA).

Cell Culture and Transient Transfection

Hepa1-6 hepatocytes and WB-F344 stem cells were cultured at 37 °C and 5% CO2 in DMEM containing 10% fetal bovine serum, penicillin (100 U/ml) and streptomycin (100 mg/ml). Hepa1-6 cells were treated with palmitic acid (PA) at the concentrations and times indicated in the Results. Transient transfection was performed with Lipofectamine 2000 according to the manufacturer's instructions when cells had grown to 80% confluence. Briefly, cells were incubated for 6 h in serum-free medium containing plasmid DNA and Lipofectamine 2000. The transfection medium was subsequently replaced with DMEM supplemented with 10% calf bovine serum, and the cells were cultured for an additional 24 h and harvested.
Real-Time Quantitative PCR

Total cellular RNA was extracted using Trizol (Thermo Fisher Scientific), followed by treatment with RNase-free DNase I (Thermo Fisher Scientific) to remove genomic DNA contamination (Takara). RNA was then reverse transcribed into cDNA using Moloney murine leukemia virus reverse transcriptase. Half a microliter of the cDNA product was amplified by real-time PCR using gene-specific primers (Supplementary Table 1) in a total volume of 20 µl with SYBR Green Master Mix (Takara) on a CFX96 thermal cycler (Bio-Rad, Hercules, CA, USA). Relative gene expression was normalized to glyceraldehyde 3-phosphate dehydrogenase (GAPDH) using the 2^-ΔΔCt method.

Immunofluorescence

Liver tissues were embedded in OCT at −80 °C and then cut into 6-µm sections. After antigen retrieval, the specimens were fixed with 4% paraformaldehyde, incubated with TBST solution (TBS-0.1% Tween 20; Sigma-Aldrich) containing 0.3% Triton solution (Sigma-Aldrich) for 20 min at room temperature, blocked in TBST with 10% goat serum for 2 h at 4 °C, incubated with primary antibody overnight at 4 °C in TBST with 5% BSA, washed twice, incubated with secondary antibodies at room temperature for 1 h, and observed under a confocal microscope (Leica Microsystems, TCS-SP8).

WB-F344 Cell Differentiation

WB-F344 cells were cultured in 12-well plates in DMEM. When the cells had grown to about 80% confluence, they were divided into four groups: an empty pcDNA3.1 plasmid transfection group (Control), a pcDNA-Sirt1 plasmid transfection group (Sirt1 OE), a pcDNA-Sirt1 plasmid transfection with EX527 treatment group (Sirt1 OE + EX527), and a pcDNA-Sirt1 plasmid transfection with FH535 treatment group (Sirt1 OE + FH535). The cells in the four groups were then cultured in a differentiation medium supplemented with Hepa1-6 conditioned medium and induced to differentiate into hepatocytes for 72 h, after which the samples were collected for the experiments.

Glycogen PAS Staining

The differentiation of WB-F344 cells into hepatocytes was determined by glycogen PAS staining with a glycogen PAS staining kit. The staining results were photographed with an inverted fluorescence microscope (IX73, Olympus, Japan).

Statistical Analysis

For the in vivo experiments, 8-10 mice were used in each subgroup. In vitro experiments were performed at least three times with similar results. Data are expressed as the means ± SEM. Statistical differences between groups were determined using ANOVA or Student's t-test depending on group size. P < 0.05 was considered statistically significant.

NAFLD Mouse Model Was Established by HFD Feeding

To establish the NAFLD model, mice were fed HFD for 20 weeks as described in the methods. The results showed that body weight (BW) was increased in the mice after 10 weeks of HFD feeding compared to the control group with ND feeding (Figure 1A). Phenotypic differences were observed between the ND and HFD groups, and the mice of the HFD group were obese compared with those of the ND group (Figure 1B). After dissection, hepatic morphological differences were noted: the livers in the control group were dark red and smooth, with a clear edge, while the livers in the HFD group were pale brown and swollen, with a round and obtuse edge (Figure 1C), and the liver weight was significantly increased in the HFD group compared to the control group (Figure 1D).
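As an aside on the 2^-ΔΔCt normalization described in the qPCR section above, the arithmetic reduces to two subtractions and one exponentiation. The minimal sketch below illustrates it; the Ct values are hypothetical placeholders, not data from this study.

```python
# Minimal sketch of 2^-ddCt relative quantification (Ct values are hypothetical).

def relative_expression(ct_target_sample, ct_ref_sample,
                        ct_target_control, ct_ref_control):
    """Fold change of a target gene vs. the GAPDH reference,
    normalized to the control group (2^-ddCt)."""
    d_ct_sample = ct_target_sample - ct_ref_sample      # dCt in the treated sample
    d_ct_control = ct_target_control - ct_ref_control   # dCt in the control sample
    dd_ct = d_ct_sample - d_ct_control
    return 2 ** (-dd_ct)

# Example: SOX9 in an HFD liver vs. an ND control (made-up Ct values)
fold = relative_expression(ct_target_sample=24.1, ct_ref_sample=18.0,
                           ct_target_control=26.3, ct_ref_control=18.2)
print(f"Relative SOX9 expression (HFD vs. ND): {fold:.2f}-fold")  # -> 4.00-fold
```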
Liver HE staining showed that, compared with the control group, the arrangement of hepatocytes in the HFD group was disordered, cellular vacuolar degeneration was severe, the structure of the hepatic lobules was damaged, and the portal area and lobules were infiltrated by inflammatory cells (Figure 1E). Further, Oil Red O staining showed that the liver in the HFD group had accumulated a large number of lipid droplets, while almost no lipid droplets were observed in the liver of the control group (Figure 1F). Serum analysis demonstrated that the concentrations of both ALT (Figure 1G) and AST (Figure 1H) in the HFD group were significantly higher than those of the control group, suggesting that liver function was seriously damaged in the HFD group. Thus, these data suggested that the mouse NAFLD model was successfully established by HFD feeding.

HPC Proliferation Activated by Fatty Liver Injury Promoted Liver Regeneration in NAFLD

Using the above NAFLD model, we sought to explore whether the repair of fatty liver damage was promoted. NAFLD is a chronic liver injury; liver repair or the recovery of liver function needs to depend on activating HPC proliferation, after which HPCs differentiate into mature hepatocytes and trigger liver regeneration. Therefore, we detected the expression of SOX9, CD133 and LGR5, three important markers of HPC proliferation, and the results showed that the mRNA and protein levels of SOX9 and CD133 were significantly increased in the fatty liver injury mouse model (Figures 2A,B), and the LGR5 protein level was also significantly upregulated (Figure 2B). CK19 is an important marker of bile duct cells, and the proliferation of CK19+ cells is regarded as a sign of DR. Our immunofluorescence results revealed that CK19+ cells were activated and proliferated in the fatty liver injury of the mouse NAFLD model, indicating that fatty liver injury caused a serious DR (Figure 2C). Furthermore, a large number of SOX9+HNF4α+ biphenotypic HPCs were widely distributed in the liver with fatty liver injury of the HFD-induced NAFLD group, while in the control group, positive SOX9 was only observed in bile duct cells (Figure 2D). To determine whether activated HPCs promoted liver regeneration, Ki67+ cells, a key marker of liver regeneration, were examined by immunofluorescence staining, and the result showed that Ki67+ cells increased obviously in the liver with fatty liver injury of the HFD-induced NAFLD group (Figure 2E). These data suggested that fatty liver injury in NAFLD activated HPC proliferation and promoted HPC-mediated liver regeneration.

Fatty Liver Injury Activated the Hepatic Wnt3a/β-Catenin Signaling Pathway

Wnt/β-catenin signaling is regarded as an important driving factor for liver regeneration because it is closely associated with the activation and differentiation of HPCs. To clarify whether the repair of fatty liver injury in NAFLD depends on the Wnt/β-catenin signal to activate HPC proliferation and differentiation, we further detected the expression of Wnt pathway-related genes in the liver of the NAFLD model. The results showed that the mRNA levels of the hepatic ligand Wnt3a (a key ligand for HPC activation in the Wnt pathway), β-catenin, and their target gene CyclinD1 were significantly higher in the HFD-elicited NAFLD model than in the control, while Wnt4 (another ligand of the Wnt pathway) showed no significant difference. GSK3β, a negative regulator of the Wnt pathway, was significantly decreased in the liver of the HFD-induced NAFLD model (Figure 3A).
The same results were obtained for the protein levels of CyclinD1 and GSK3β (Figures 3B,C). Further, the immunofluorescence staining results showed that the expression and nuclear localization of β-catenin were significantly increased in the HFD group (Figure 3D). These data revealed that the hepatic Wnt3a/β-catenin pathway was activated by liver fatty injury in HFD-fed mice. Therefore, combined with the results above, we can conclude that liver fatty injury promoted the activation and proliferation of HPCs via the Wnt3a/β-catenin pathway, which was conducive to the repair of liver injury.

Sirt1 Knockdown Aggravated HFD-Elicited High Blood Glucose/Lipid and Liver Fatty Injury

According to all the above data, HPCs promoted the restoration of liver fatty injury via the Wnt3a/β-catenin pathway. Also, Han et al. have reported that Sirt1 plays an important role in the maintenance, activation and differentiation of various stem cells and adult hepatocytes (20). Thus, we set out to explore the effects of Sirt1 during liver fatty injury, since the effect of Sirt1 on HPC activation has not been reported to date. Our current data showed that, among the different subgroups, Sirt1+/− mice with HFD feeding (Sirt1+/− + HFD) gained the most weight after 18 weeks of feeding, followed by wild-type mice with HFD feeding (WT + HFD), while the body weights of wild-type mice with ND feeding (WT + ND) and Sirt1+/− mice with ND feeding (Sirt1+/− + ND) were significantly lower in the last few weeks (Figure 4A). The liver weights of the wild-type HFD and Sirt1+/− HFD groups were also significantly higher than those of the two corresponding ND groups, and the liver weights of the Sirt1+/− HFD group were higher than those of the wild-type HFD group (Figure 4B). Serum concentrations of triglycerides (TG) (Figure 4C), total cholesterol (TC) (Figure 4D), glucose (Glu) (Figure 4E), ALT (Figure 4F) and AST (Figure 4G) in the four groups showed a consistent trend, while serum albumin (ALB) showed no significant difference (Figure 4H). HE staining showed that hepatocytes in the wild-type ND and Sirt1+/− ND groups were orderly and the hepatic lobule structure was clear, while hepatocytes in the Sirt1+/− HFD and wild-type HFD groups were disordered, with severe cellular vacuolar degeneration, destruction of the hepatic lobule structure, and inflammatory cell infiltration in the portal area and lobules. Notably, the Sirt1+/− HFD group was more seriously affected (Figure 4I). Oil Red O staining showed that larger lipid droplets accumulated in the liver of the Sirt1+/− HFD group than in the wild-type HFD group, but almost no lipid droplets were found in the wild-type ND and Sirt1+/− ND groups (Figure 4J). These results suggested that Sirt1 knockdown aggravated HFD-elicited liver fatty injury.

Sirt1 Deficiency Repressed the HPC-Mediated Restoration of Liver Fatty Injury

Our above data demonstrated that liver fatty injury promoted the activation and proliferation of HPCs. Consistent with these results, a large number of SOX9+HNF4α+ biphenotypic HPCs appeared again in the wild-type HFD group. However, we observed that the number of SOX9+HNF4α+ biphenotypic HPCs was significantly decreased in the Sirt1+/− HFD group compared to the wild-type HFD group (Figure 5A), suggesting that Sirt1 knockdown inhibited HPC activation. Moreover, the number of Ki67+ cells in the liver of the wild-type HFD group was higher than that in the Sirt1+/− HFD group, indicating that the HPC proliferation activity of Sirt1+/− mice was inhibited after liver fatty injury (Figure 5B).
These data suggested that the HPC-mediated restoration of liver fatty injury was restrained in Sirt1+/− mice.

Effects of Palmitic Acid Stimulation on the Expression of Stemness Genes and Pathway

The occurrence and development of NAFLD are closely related to excessive free fatty acids in the blood circulation, especially saturated fatty acids, of which palmitic acid (PA) is an important component (27). Excessive PA can cause oxidative stress and steatosis of hepatocytes. In order to identify the effects of palmitic acid on hepatocyte proliferation, we used PA to stimulate Hepa1-6 cells. Since PA concentrations above 300 µM had a significant effect on the survival of Hepa1-6 cells (Figure 6A), we chose PA concentrations of 0, 100 and 200 µM for the present experiments. The results showed that after PA stimulation for 24 h, the mRNA levels of Wnt3a and CyclinD1 increased significantly, while Wnt4 was not affected and GSK3β was significantly decreased (Figure 6B). Consistently, the protein expression levels of SOX9, CD133, β-catenin and CyclinD1 were markedly increased (Figure 6C). Furthermore, Wnt-Agonist1, an activator of the Wnt pathway, also upregulated the mRNA levels of SOX9, β-catenin, Wnt3a and Wnt4 (Figure 6D). These data indicated that excessive free fatty acids damaged hepatocytes but simultaneously promoted hepatocyte proliferation via the Wnt/β-catenin pathway, which might be conducive to the initiation of the cell repair mechanism.

Sirt1 OE Promoted the Activation and Differentiation of WB-F344 Oval Cells

HPCs are also called oval cells in rodents, and the WB-F344 cell line is a rat-derived oval cell line. To confirm the effect of Sirt1 on HPC activation, Sirt1 plasmids were transfected into WB-F344 cells, and the results showed that the transcriptional levels of SOX9, CD133 and EpCAM, marker genes for HPC activation, were significantly upregulated after Sirt1 OE (Figure 7A). Sirt1 OE also significantly increased the protein levels of the HPC differentiation markers SOX9, CD133, LGR5 and CD44 in WB-F344 cells (Figure 7B). Similarly, resveratrol (RSV) and nicotinamide (NAM), two important activators of Sirt1, produced the same trend of influence on the HPC differentiation marker SOX9 as Sirt1 OE (Figures 7C-F). The hallmark function of mature hepatocytes is the ability to synthesize glycogen. Therefore, we determined whether WB-F344 hepatic progenitor cells differentiated into mature hepatocytes by glycogen PAS staining. The results showed that there were more glycogen-positive cells in the Sirt1 OE group compared to the control, but almost no glycogen-positive cells in the Sirt1 OE group with Sirt1 inhibitor EX527 treatment (Sirt1 OE + EX527) (Figure 7G). These data revealed that Sirt1 OE promoted the activation and differentiation of WB-F344 hepatic progenitor cells.

Sirt1 OE Promoted the Activation and Differentiation of WB-F344 Cells Through the Wnt/β-Catenin Pathway

To clarify the mechanism by which Sirt1 regulates HPC differentiation, we analyzed the effect of Sirt1 OE on the Wnt/β-catenin signal pathway. The present results revealed that overexpressed Sirt1 significantly increased the mRNA levels of Wnt3a, β-catenin and AXIN2 in WB-F344 oval cells (Figure 8A). GSK3β is a repressor of the Wnt3a/β-catenin pathway, and GSK3β-Tyr216 phosphorylation/activation promotes β-catenin degradation.
Our Western blot results showed that Sirt1 OE significantly inhibited GSK3β-Tyr216 phosphorylation (Figure 8B) and increased the levels of total and nuclear β-catenin (Figures 8B,C). STAT3, SOX9 and β-catenin are important transcription factors determining HPC proliferation and differentiation. After WB-F344 cells were stimulated with the Sirt1 activator resveratrol (RSV) for 24 h, nuclear protein was extracted from the cells and the levels of nuclear Sirt1, STAT3, SOX9 and β-catenin were detected by Western blot. The results showed that RSV enhanced the entry of Sirt1, STAT3, SOX9 and β-catenin into the nucleus (Figure 8D), which is in line with the in vivo observation that Sirt1 knockdown repressed the expression of β-catenin and CyclinD1 and enhanced the expression of HNF4α (Figure 8E), suggesting inhibition of HPC proliferation and differentiation. To fully understand the regulation of HPC differentiation by Sirt1 via Wnt/β-catenin, after the Sirt1 plasmid was transfected into WB-F344 cells for 24 h, we treated the cells with FH535, an effective inhibitor of the Wnt/β-catenin pathway, and then induced the cells to differentiate into hepatocytes for 72 h. Glycogen PAS staining showed that glycogen-positive cells were markedly decreased in the Sirt1 OE with FH535 treatment group (Sirt1 OE + FH535) compared to the Sirt1 OE group (Figure 8F), suggesting that the differentiation of HPCs into hepatocytes activated by Sirt1 depends on the Wnt/β-catenin pathway.

DISCUSSION

The liver is an important metabolic organ of the body, with a strong ability to regenerate after injury (9). With the increasing incidence of fatty liver disease, impaired liver regeneration has become an important clinical problem (28). NAFLD is the main cause of chronic liver disease in many parts of the world, and fat is the most common cause of non-alcoholic steatohepatitis. In the present study, we chose HFD-elicited NAFLD mice as a chronic liver injury model to explore HPC-mediated fatty liver injury and repair, as well as the corresponding mechanism. After 20 weeks of HFD induction, the mice presented obese phenotypes, and the liver accumulated a large amount of fat and was infiltrated by many inflammatory cells (Figures 1A-F), suggesting that HFD caused fatty liver or lipid toxicity in mice and that liver fat accumulation elicited the recruitment of inflammatory cells and further developed into NAFLD. The significant increases in serum ALT and AST concentrations also indicate that liver function was seriously damaged (Figures 1G,H).

Clinically, DR is often observed in patients with chronic liver disease, and most of the hepatocytes in these patients have proliferation disorders (29). It was previously reported that the CK19-positive area was enlarged in liver tissues of patients with NAFLD, representing the occurrence of a ductular reaction. In addition, the ductular reaction was more pronounced in NASH patients with severe liver fibrosis, suggesting that it is associated with the progression of non-alcoholic steatohepatitis (30). In our liver fat injury model, an obvious DR was observed: CK19-positive cells significantly increased, and bile duct cells spread and presented atypical morphology (Figure 2C). This phenomenon indicated that hepatic progenitor cells (HPCs) might originate from bile duct cells rather than hepatocytes. In non-alcoholic fatty liver, oxidative stress plays a major role in hepatocyte replication disorder by directly inducing p21 expression or by triggering the apoptosis cascade (31).
Damage to hepatocyte regeneration and the increase in hepatocyte injury caused by long-term oxidative stress are common triggers of HPC activation and differentiation (30). In the present liver fatty injury model, the hepatic SOX9+HNF4α+ biphenotype suggested that HPCs were activated by fatty liver injury (Figure 2D). SOX9 is a member of the Sry-related high-mobility-group (HMG) box family of transcription factors, which plays a key role in the embryonic formation of many tissues and organs, including chondrocytes, testis, heart, lung, pancreas, bile duct, hair follicles, retina and central nervous system (14). It has been reported that hybrid hepatocytes expressing SOX9 and HNF4α have been identified as HPCs in the canals of Hering, and that they proliferate vigorously and then differentiate into hepatocytes after liver injury (32)(33)(34). In our study, liver fatty damage also caused the activation of HPCs expressing SOX9 and HNF4α, which were likewise enriched near the canals of Hering. The increased expression of SOX9, CD133 and LGR5 (markers of HPC proliferation) (Figures 2A,B), the DR with the CK19+ phenotype (Figure 2C), and the Ki67-positive phenotype (Figure 2E) also demonstrated the activation of HPC proliferation in liver fatty damage.

Many studies have reported that the Wnt/β-catenin signal pathway is an important driver of liver injury repair and regeneration and plays a key role in the activation, proliferation and differentiation of adult liver progenitor cells (18,35). Our experimental results show that either mouse liver fat injury or PA stimulation of Hepa1-6 cells enhanced the Wnt/β-catenin signal (Figures 3, 6). It has previously been reported that Wnt3a can stimulate the proliferative activity of HPCs in vitro, and that the Wnt/β-catenin signal pathway is obviously activated in HPC proliferation induced by a DDC diet in mice (36). Our present study found that in liver fat injury, the activation of the Wnt pathway stimulated HPC proliferation (Figures 1-3). Liver fat injury caused the upregulation of Wnt3a expression (Figure 3A) and the downregulation of GSK3β (Figures 3B,C), a repressor of the Wnt3a/β-catenin pathway, which stimulated the entry of β-catenin into the nucleus (Figure 3D). This increase in nuclear β-catenin promoted the expression of its target gene CyclinD1 (Figure 3B). CyclinD1 is a key cell cycle regulator that promotes progression from the G1 to the S phase and is a marker of cell proliferation. Another study shows that the proliferative activity of rat oval cells is significantly increased in the AAF/PHx model, with positive nuclear β-catenin staining (16). Therefore, the activation of the Wnt/β-catenin pathway might be indispensable for promoting HPC proliferation in the repair of liver injury and liver regeneration.

It has been demonstrated that Sirt1 affects the maintenance, activation and senescence of many kinds of stem cells (19,37). Here, our results showed that HFD-elicited liver fatty injury was seriously aggravated in Sirt1+/− mice (Figure 4), and Sirt1 knockdown significantly inhibited the activation of HPCs, as judged by the number of SOX9+HNF4α+ biphenotypic HPCs, as well as the proliferation of HPCs, based on Ki67+ cells, in the liver of Sirt1+/− mice with HFD feeding (Figure 5). Conversely, in vitro Sirt1 OE or activation with Sirt1 activators (RSV or NAM) significantly upregulated the markers of HPCs in rat-derived WB-F344 oval cells (Figure 7). These data indicated that Sirt1 had a positive effect on the activation of HPCs.
It is worth noting that the role of NAM in regulating Sirt1 activation is still controversial. NAM is generally considered to be the product of the deacetylation reaction catalyzed by Sirt1 and is thought to exert product feedback inhibition on Sirt1 activity. However, in the present study, we found that NAM acted as a Sirt1 activator and promoted its expression and function in cell proliferation (Figures 7D,F). The possible reason is that NAM is the main precursor of intracellular NAD+ synthesis through the salvage pathway, and NAD+ can promote Sirt1 activation. A previous study by Jang et al. (38) obtained a similar result: supplementation of NAM in cell culture increased the level of intracellular NAD+ and led to the activation of Sirt1.

LGR5 is a Wnt target gene with low expression in the normal liver and is also a receptor of the Wnt agonist R-spondin1 (39). Huch et al. found that LGR5+ cells appeared near the bile duct after liver injury in mice, and that these LGR5+ cells differentiated into BECs and hepatocytes (15). In our study, we also found that Sirt1 OE significantly increased the expression of LGR5 (Figure 7B). Thus, Sirt1 might regulate the Wnt signal pathway by stimulating the expression of the Wnt target gene LGR5 and thereby promote liver regeneration. STAT3 plays a key role in liver regeneration, promoting hepatocyte proliferation in hepatocyte-mediated liver regeneration and initiating cell progression from the G1 phase to the S phase (40). As an important transcription factor, SOX9 can participate in a variety of genetic pathways, regulate stem cell self-renewal ability and multi-directional differentiation potential, and promote the development of stem cell and progenitor cell niches in tissues and organs (41). Indeed, we found that activating Sirt1 increased the expression of SOX9, a marker of HPC differentiation (Figures 7C,D), increased the entry of SOX9 and STAT3 into the nucleus (Figures 8C,D), and increased glycogen-positive cells (Figure 8F), indicating that more HPCs differentiated into mature hepatocytes.

It is reported that endogenous or exogenous activation of β-catenin improves liver regeneration in animal models and patients (42). Our current results showed that Sirt1 OE in WB-F344 oval cells significantly inhibited GSK3β-Tyr216 phosphorylation (Figure 8B) and increased the levels of total and nuclear β-catenin (Figures 8B,C), suggesting that Sirt1 increased the activity of Wnt/β-catenin. Conversely, Sirt1 knockdown significantly inhibited the Wnt/β-catenin pathway in the liver of Sirt1+/− mice (Figure 8E). Further, glycogen PAS staining showed that glycogen-positive cells were markedly decreased in the Sirt1 OE group after treatment with the β-catenin inhibitor FH535, compared to the Sirt1 OE group (Figure 8F). These data suggested that the Sirt1-activated differentiation of HPCs into hepatocytes depends on the Wnt/β-catenin pathway. Indeed, in bone marrow mesenchymal stem cells, Sirt1 has also been demonstrated to activate the Wnt/β-catenin pathway, promoting β-catenin accumulation in the nucleus by deacetylating β-catenin (24), which is consistent with our present results in hepatic progenitor cells.
In conclusion, as summarized in Figure 9, our data suggested that liver fat injury induced by HFD caused a significant ductular reaction, that SOX9+HNF4α+ biphenotypic HPCs with enhanced proliferative activity were activated in the fatty liver of mice, and that Sirt1 plays a vital role in activating HPCs to repair fatty liver injury and promote liver regeneration through the Wnt/β-catenin signal pathway in NAFLD, which might provide a new strategy for fatty liver injury or NAFLD therapy.

DATA AVAILABILITY STATEMENT

The original contributions presented in the study are included in the article/Supplementary Material; further inquiries can be directed to the corresponding author/s.

ETHICS STATEMENT

The animal study was reviewed and approved by the Hubei Province Committee on Laboratory Animal Care, Wuhan, Hubei, China. Written informed consent was obtained from the owners for the participation of their animals in this study.

AUTHOR CONTRIBUTIONS

QL and XC: conceptualization and resources. QL, YG, YW, and XC: methodology and data curation. QL, YG, YW, YC, and BL: investigation. QL, YG, YC, and XC: original draft preparation. QL, YG, and XC: review and editing, visualization, and
Draft genome sequences of Salmonella Oslo isolated from seafood and its laboratory-generated auxotrophic mutant

In recent years, the concept of bacteria-mediated cancer therapy has gained significant attention as an alternative to conventional therapy. The focus has been on non-typhoidal Salmonella (NTS), particularly S. Typhimurium, for its anti-cancer properties; however, other NTS serovars, such as Salmonella Oslo, which are associated with foodborne illnesses, could potentially be effective anti-cancer agents. Here, we report the draft genome sequence of Salmonella Oslo isolated from seafood and of its laboratory-generated auxotrophic mutant.

Introduction

Non-typhoidal Salmonella (NTS) is one of the pathogens that frequently cause foodborne infections throughout the world. The pathogenic potential of NTS strains has been well understood [1]. However, their therapeutic potential as anti-cancer agents remains unexplored. Among more than 2500 NTS serovars, S. Typhimurium VNP20009 [2][3][4], an auxotrophic mutant strain of S. Typhimurium A1-R [5,6], has been successfully studied and was even tested in a phase I clinical trial for the treatment of solid tumors [7]. Many features of Salmonella, namely the ability to thrive in the hypoxic environment of the tumor, to induce an innate immune response against the tumor, and its suitability for bactofection, releasing anticancer genes within the tumor, were found to be favorable for tumor regression in animal models [8][9][10]. However, despite the success observed in animal models, the clinical trials did not yield the expected results. This prompted researchers to look for alternative NTS strains with anticancer properties. In this study, the aim was to sequence the genomes of Salmonella Oslo isolated from seafood and of its laboratory-generated auxotrophic mutant. The availability of this genetic information provides a basis for further studies, particularly for investigating their role as anti-cancer agents.

Salmonella Oslo was isolated from seafood (a squid sample) as per the protocol recommended by the FDA Bacteriological Analytical Manual [11], with minor modifications. Briefly, the seafood sample was pre-enriched in lactose broth, followed by enrichment in selenite cysteine broth and tetrathionate broth. After enrichment, the sample was streaked on Hektoen enteric agar (HiMedia Laboratory Pvt Ltd, India). During the enrichment and plating steps, the incubation temperature was maintained at 37 °C for 16 to 18 h. Colonies with specific morphological features were selected and subjected to a series of biochemical tests, such as the indole test, methyl red test, Voges-Proskauer test, citrate test, triple sugar iron agar (TSIA) test, urease test and lysine iron agar (LIA) test, for conventional identification. The biochemically positive colonies were further confirmed by PCR using primers for the genus-specific invA gene [12]. Serotyping was done at the National Salmonella and Escherichia Centre, Central Research Institute, Kasauli, India. The lambda red recombinase method [13] was used to generate the auxotrophic mutant of Salmonella Oslo by inducing deletions in the argH and leuB genes, which are required for arginine and leucine biosynthesis, respectively. The biofilm assay was performed using the method described by Stepanovic et al. (2004) [14].
To compare the growth kinetics of Salmonella Oslo (SO1, wild type) and its mutant (LAT9), a 100 µl aliquot of an overnight culture of SO1 or LAT9 was added to 5 ml Luria Bertani broth (HiMedia Laboratory Pvt Ltd, India) and incubated at 37 °C with shaking at 200 rpm. The optical density was measured at 600 nm (OD600) at different time points (0, 1, 2 and up to 24 h after incubation) and expressed as log(OD600 × 1000).

To sequence the genomes of SO1 and its laboratory-generated mutant, bacterial genomic DNA was extracted using a QIAamp DNA mini kit (Qiagen, Germany). The quality of the extracted DNA was checked by Qubit and further verified by a bioanalyzer (Agilent Technologies). The genomic DNA library was prepared using a Nextera XT DNA library preparation kit (Illumina, Inc, Cambridge, UK). Whole genome sequencing was performed at UCD, Dublin. The raw sequence data were generated using the Illumina MiSeq platform at a depth of 100x. The obtained paired-end reads were merged and the genome was assembled using CLC Genomics (version 11) [15]. The processed reads were aligned to the reference genome of strain LT2 (LT571437) with the Bowtie2 program. The annotation and gene prediction of the draft genome were done using Rapid Annotation using Subsystem Technology (RAST) (http://rast.nmpdr.org/) [16].

The identification of the isolate as a true Salmonella enterica serovar Oslo was confirmed by the serotyping experiments. As expected, the strain LAT9 was found to be phenotypically auxotrophic for the amino acids arginine and leucine. However, when PCR was performed to confirm the changes in the target genes (argH and leuB), it revealed no deletions in the argH and leuB genes. This unexpected observation prompted us to determine the whole genome sequences of the wild-type (SO1) and mutant (LAT9) strains. The generated libraries produced a total of 1,193,762 and 1,221,694 reads for the wild type and mutant, respectively. The paired-end reads of SO1 were assembled into 121 contigs with a coverage of 100x. The genome size was calculated at 4,860,262 bp, comprising 4,974 protein-coding genes. The GC content of this strain was found to be 52.2%. The analysis obtained from RAST also revealed 401 subsystems (Fig. 1). The annotated genome had 383 amino acid biosynthesis genes, including argH and leuB. In addition, 78 tRNAs, 11 ncRNAs and 168 pseudogenes were identified. Similarly, the paired-end reads of LAT9 were assembled into 199 contigs with a coverage of 100x. The genome size was calculated at 4,890,414 bp, comprising 5,082 protein-coding genes. The GC content of this strain was found to be 52.2%. The analysis obtained from RAST also revealed 402 subsystems (Fig. 2). The annotated genome had 392 amino acid biosynthesis genes; 79 tRNAs, 11 ncRNAs and 239 pseudogenes were also identified. Further, the growth kinetic analysis revealed that the growth of LAT9 was significantly slower than that of SO1 (p < 0.001) (Fig. 3). The biofilm-forming ability of LAT9 was also significantly reduced compared with SO1 (Fig. 4). To the best of our knowledge, this is the first report of draft genome sequences of Salmonella Oslo isolated from seafood and of its auxotrophic mutant LAT9. Determination of the anti-cancer activity of this laboratory-generated auxotrophic mutant using cell line and animal models would establish whether it is a suitable alternative to S. Typhimurium VNP20009 as a candidate strain for bacteria-mediated anticancer therapy.
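As a companion to the assembly statistics reported above (contig counts, genome size, GC content), the following is a minimal sketch of how such summary statistics can be computed from a FASTA file of contigs. The filename is hypothetical, and the script is illustrative rather than the pipeline the authors used (they used CLC Genomics and RAST).

```python
# Minimal sketch: GC content and N50 from a contig FASTA (filename is hypothetical).

def read_fasta(path):
    """Return a list of sequences from a FASTA file."""
    seqs, current = [], []
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if line.startswith(">"):
                if current:
                    seqs.append("".join(current))
                    current = []
            elif line:
                current.append(line.upper())
    if current:
        seqs.append("".join(current))
    return seqs

def gc_content(seqs):
    """Percentage of G and C bases across all contigs."""
    total = sum(len(s) for s in seqs)
    gc = sum(s.count("G") + s.count("C") for s in seqs)
    return 100.0 * gc / total

def n50(seqs):
    """Length of the contig at which half the total assembly length is reached."""
    lengths = sorted((len(s) for s in seqs), reverse=True)
    half, running = sum(lengths) / 2, 0
    for length in lengths:
        running += length
        if running >= half:
            return length

contigs = read_fasta("LAT9_contigs.fasta")
print(f"{len(contigs)} contigs, {sum(map(len, contigs)):,} bp, "
      f"GC = {gc_content(contigs):.1f}%, N50 = {n50(contigs):,} bp")
```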
The whole genome shotgun projects have been submitted to GenBank and the assigned accession numbers are as follows: NZ_SJXK00000000 for SO1 and NZ_SMLR00000000 for LAT9. The version described in the paper represents the first version.

Acknowledgement

Financial support received from Nitte (Deemed to be University) and EMBO, in collaboration with UCD (Dr. Seamus Fanning's laboratory, Centre for Food Safety and Zoonoses, Dublin), to the corresponding author is gratefully acknowledged.

Funding

The corresponding author has received financial support for this study from DST-SERB, Government of India, in the form of an extramural grant (Grant no. ECR/2017/000559) and from Nitte (Deemed to be University) in the form of an intramural grant (NUFR1/2016/19-04).
Molecular Mechanisms Underlying the Effects of Statins in the Central Nervous System

3-Hydroxy-3-methylglutaryl coenzyme A reductase inhibitors, commonly referred to as statins, are widely used in the treatment of dyslipidaemia, in addition to providing primary and secondary prevention against cardiovascular disease and stroke. Statins' effects on the central nervous system (CNS), particularly on cognition and neurological disorders such as stroke and multiple sclerosis, have received increasing attention in recent years, both within the scientific community and in the media. Current understanding of statins' effects is limited by a lack of mechanism-based studies, as well as the assumption that all statins have the same pharmacological effect in the central nervous system. This review aims to provide an updated discussion on the molecular mechanisms contributing to statins' possible effects on cognitive function, neurodegenerative disease, and various neurological disorders such as stroke, epilepsy, depression and CNS cancers. Additionally, the pharmacokinetic differences between statins and how these may result in statin-specific neurological effects are also discussed.

Introduction
3-Hydroxy-3-methylglutaryl coenzyme A (HMG-CoA) reductase inhibitors, more commonly referred to as statins, are a class of cholesterol-lowering agents used for the treatment of dyslipidaemia and reduction of atherosclerotic cardiovascular disease risk. Their broad and potent effects on the lipid profile, in conjunction with cholesterol-independent (pleiotropic) cardioprotective effects, have resulted in statins being amongst the most highly prescribed medications worldwide. In spite of high patient tolerability, concerns over the neurological effects of statins have emerged in recent years. Although individual case reports form the basis of these concerns, larger studies and trials have yielded different conclusions, with negligible or in some cases beneficial actions being reported. Whilst numerous clinical studies have sought to determine statins' therapeutic potential in various central nervous system (CNS) disorders, including dementia, multiple sclerosis (MS), epilepsy, depression and stroke, there is still a lack of understanding surrounding the mechanisms of statins' neurological effects. As such, unlike recent reviews and meta-analyses which explore the risks associated with statin use and the development of various neurological conditions (for these see [1][2][3][4]), this review specifically focuses on the molecular mechanisms of statins in the CNS, how pharmacokinetic differences may influence statin action, and subsequent differences in effect between statin compounds.

Mechanism of Action
Statins' primary mechanism of action is the competitive, reversible inhibition of HMG-CoA reductase, the enzyme catalysing the rate-limiting step in cholesterol biosynthesis. HMG-CoA reductase catalyses the conversion of HMG-CoA to L-mevalonate and coenzyme A via a four-electron reductive deacylation (Figure 1). The pharmacophore of all statins bears resemblance to the endogenous HMG-CoA moiety (Table 1); it competitively binds to the catalytic domain of HMG-CoA reductase, causing steric hindrance and preventing HMG-CoA from accessing the active site [5,6]. Through inhibition of HMG-CoA reductase, statins ultimately prevent the endogenous production of cholesterol.
Additionally, the resultant reduction in cholesterol concentration within hepatocytes triggers up-regulation of low-density lipoprotein (LDL)-receptor expression, which promotes the uptake of LDL and LDL-precursors from the systemic circulation [7]. Consequently, a significant proportion of statins' cholesterol-lowering effect is a result of the indirect increase in LDL clearance from plasma, as opposed to simply reduced cholesterol biosynthesis. Secondary mechanisms of statin-induced lipoprotein reduction include inhibition of hepatic synthesis of apolipoprotein B100, and the reduced synthesis and secretion of triglyceride-rich lipoproteins [8,9]. Overall, the effect on the lipid profile is consistent between statins, with reductions in total cholesterol, LDL, and triglycerides, and an increase in high-density lipoprotein.

Despite having the same mechanism of action and comparable effects on cholesterol profiles, statins can still be subdivided into one of two categories: type I, fungal-derived statins (lovastatin, pravastatin, simvastatin); or type II, synthetically derived statins (fluvastatin, cerivastatin, atorvastatin, rosuvastatin, pitavastatin). Type I statins maintain close structural homology to mevastatin, the first statin to be developed, retaining the lactone/open-acid moiety in addition to the substituted decalin ring skeleton (Table 1). Although type II statins maintain the HMG-CoA-like lactone moiety for binding, these compounds are fully synthetic inhibitors of HMG-CoA reductase and exhibit highly varied pharmacokinetic properties, including differences in metabolism, excretion, half-lives, bioavailability, dosing times and lipophilicity.

Pharmacokinetics
Upon oral administration, all statins are well absorbed from the intestine, though they undergo extensive first-pass metabolism within the liver, which reduces systemic bioavailability to 5%-50% [11,12]. Most statins are administered as β-hydroxy-acids, except for lovastatin and simvastatin, which are pro-drugs and require hepatic metabolism to their active β-hydroxy-acid state. Within the systemic circulation statins bind variably to albumin, and also differ substantially with respect to half-life and volume of distribution [5,12]. The predominant metabolism route of most statins is via cytochrome P450 (CYP), with atorvastatin, lovastatin and simvastatin metabolised through isoform CYP3A4, and fluvastatin metabolised through isoform CYP2C9 [5,12,13]. In contrast, pravastatin is metabolised largely through sulfation, whilst up to 90% of rosuvastatin is removed via biliary excretion [5,[12][13][14]. Differences in published reports surrounding the pleiotropic effects and adverse effect profiles between statins may be a direct result of their highly varied pharmacokinetic parameters.

Effects on Brain Cholesterol
Cholesterol in the adult brain is largely metabolically inert, with an estimated 0.02% undergoing turnover daily [24]. The most significant period of high cholesterol synthesis in the CNS occurs during active myelination in early neural development, through the action of oligodendrocytes (ODs) [25]. The rate of cholesterol synthesis decreases significantly after myelination has been completed; however, it continues at a low basal level in the mature adult brain. This occurs primarily through de novo cholesterol synthesis by astrocytes, although neuronal de novo synthesis and reutilisation of free cholesterol following neuronal death also contribute [26,27].
Whilst the effect of statins on the peripheral pool of cholesterol is well established, statins' effects on CNS cholesterol are less clear. The CNS does not rely largely on cholesterol from the systemic circulation, due to limited metabolic turnover during adulthood and the brain's inherent capacity to synthesise its own cholesterol [25]. As such, reductions in plasma cholesterol concentration following statin treatment are unlikely to cause acute disruption of CNS cholesterol homeostasis [28,29]. Unlike cholesterol in plasma, which has a half-life of only a few days [24], brain cholesterol has been associated with a half-life of 6 months to 5 years [30,31]. Thus, chronic statin therapy may be required before significant effects on CNS cholesterol are seen, with reductions in CNS cholesterol possible either directly through HMG-CoA reductase inhibition, or indirectly via a "sink effect" [32]. 24(S)-Hydroxycholesterol has been used in many studies as an indicator of brain cholesterol turnover, as it is the by-product of cholesterol metabolism through brain-selective cholesterol 24-hydroxylase (CYP46A1) and is capable of passing through the blood-brain barrier (BBB) for detection in the systemic circulation. Following chronic statin administration, numerous studies have shown reductions in plasma and cerebrospinal fluid (CSF) concentrations of 24(S)-hydroxycholesterol [33][34][35][36][37][38][39]. This is in line with a reduced elimination of cholesterol from the brain as a result of prolonged statin treatment, and suggests statins may indeed affect cholesterol homeostasis in the brain. Thus, considering the low turnover rates of cholesterol within the CNS, it is possible that chronic statin administration is required for any changes in brain cholesterol levels to be observed.

CNS Entry
The key question of whether statin compounds differ in their ability to permeate the CNS often emerges when considering the neurological effects of statins. Whilst lipophilic statins (atorvastatin, lovastatin, fluvastatin, pitavastatin, simvastatin) are capable of crossing the BBB passively, both in vitro and in vivo studies suggest that hydrophilic statins are also able to enter the neuroparenchyma [28,40,41]. Pravastatin has been shown to induce gene expression changes within the mouse brain [40] and has also been detected in human CSF [41], which, considering its poor lipid solubility, raises the question of whether active transporters within the BBB facilitate its entry. All statins, including rosuvastatin and pravastatin, are known substrates for organic anion transporting polypeptides (OATPs; Table 1), of which OATP1A2 and OATP1C1 are known to be expressed in the brain [22,42]. While it is possible that OATP-mediated influx may be a mechanism for hydrophilic statin entry, there have been no studies to date which explore the selectivity of these statins for the CNS-expressed OATP subtypes. Additionally, the presence of monocarboxylic acid transporters at the BBB may represent an alternative mechanism of CNS entry, with pravastatin shown to have affinity for monocarboxylic acid transporters in intestinal epithelial barriers [43], although studies specific to the CNS are again lacking. Regardless of the specific transporters, statins are likely to accumulate at differing rates and concentrations within the CNS based upon their differing lipid solubilities alone. When also considering their vast structural differences, their propensity for carrier-mediated uptake into the CNS may also vary between compounds.
The possible variations in CNS entry, efflux and indeed potency between statins highlight the need for these drugs to be considered individually with respect to their CNS actions. Until such time that quantification of CNS uptake and efflux for each statin can be achieved, the assumption that statins' effects within the CNS are equivalent, and thus broadly applicable across the whole class, should be reconsidered.

Statins and Cognition
Despite a plethora of literature available, the effects of statins on cognitive function remain controversial [2,[44][45][46][47][48][49]. Whilst increasing epidemiological evidence suggests a role for statins in neurodegenerative conditions including vascular dementia, Alzheimer's disease (AD) and Parkinson's disease (PD), there are also several large studies in addition to a number of case reports which contradict these findings (see summary of mechanisms and evidence in Table 2). Given the previously discussed pharmacokinetic differences between statins in the CNS, it is plausible that the differences between studies thus far may be explained by different statin molecules exerting varying degrees of cognitive effect; however, this remains speculative. The lack of information surrounding the molecular mechanism of action of statins in the CNS further compounds this uncertainty.

Table 2 (condensed summary). Cognitive function: long-term statin treatment appears to be beneficial; whether statins can cause acute cognitive disruption as a rare adverse effect is unclear due to a lack of causal evidence from case reports, and identification of underlying mechanisms in vitro or in vivo is difficult given the subjective nature of acute cognition changes. Multiple sclerosis: vast discrepancies between models limit our understanding of the mechanisms of statins in MS; it appears likely that modulation of neuroinflammation and/or T cell immunity is involved, and further studies are needed to determine whether benefit is seen with statins other than simvastatin. Neurofibromatosis type I: ↓ Ras activity and rescue of long-term potentiation deficit; limited in vitro and in vivo data and conflicting data from randomised controlled trials; further cell and animal studies are recommended to better understand possible clinical application in NF-1 before any further trials in children with the disorder are conducted.

Cognitive Function
The effects of statins on cognitive function have received increasing, and arguably disproportionate, attention in recent years. Data from clinical trials thus far have been inconsistent, not only in terms of results, but also analytical methods, population characteristics, existence of baseline cognitive impairments, statin(s) studied, and cognitive endpoints employed. Despite these differences, the majority of studies support a role for protection against cognitive impairment and dementia in patients without baseline cognitive dysfunction following long-term statin use [2,47,[49][50][51]. A recent meta-analysis found that in long-term cognition studies, incident dementia was reduced in statin-treated patients (hazard ratio, 0.71; 95% confidence interval, 0.61-0.82) [2]. A number of mechanisms have been implicated in statin-induced protection against cognitive impairment, including both cholesterol-dependent and -independent mechanisms.
Increased LDL levels and total cholesterol have both been independently associated with cognitive impairment; thus the lowering of these lipoprotein levels, through statin treatment or other pharmacological/dietary means, has been suggested as a strategy for preventing cognitive impairment [52,53]. Despite this apparent disease link, statins have not only been implicated in cholesterol-associated reductions in cognitive impairment, but have also been found to reduce the odds of cognitive impairment independent of lipid levels [54]. Although HMG-CoA reductase catalyses the rate-limiting step of cholesterol biosynthesis in humans, it is only the second step of a 28-step process (see Figure 1). Consequently, statin treatment also prevents the production of a number of intermediary molecules, including isoprenoid products such as farnesylpyrophosphate (FPP) and geranylgeranylpyrophosphate (GGPP). It has been suggested that much of the cholesterol-independent action of statins may be attributable to the inhibition of these isoprenoids, including effects on cognitive function. The inhibition of farnesylation by simvastatin has been associated with the enhancement of long-term potentiation between neurons in mice [55]. This study also found that the protective effect of statin treatment was abolished following replenishment of FPP, but not GGPP. Paradoxically, it has been suggested in other studies that the constant production of GGPP, but not FPP or cholesterol, is required for neurite outgrowth and maintenance, long-term potentiation and learning [56,57], possibly suggesting differing neuroprotective effects associated with these two isoprenoid intermediates. Given the different roles each of these compounds has, known differences in FPP/GGPP ratios across various brain regions may subsequently result in different local statin-induced effects within these regions. The mechanisms underlying the differential distribution of FPP and GGPP across the brain, and the interplay this has with statin effect, are not known. Another possible cellular mechanism which may underlie the possible beneficial cognitive effect of statins is the alteration of adult neurogenesis. It is hypothesised that suppression of adult neurogenesis may contribute to cognitive dysfunction and emotional symptoms in neurological and psychiatric disorders, with neuroinflammation shown to be an inhibitor of neurogenesis in the adult hippocampus [58,59]. Simvastatin has been shown to enhance neurogenesis in cultured adult neural progenitor cells, as well as in the dentate gyrus of adult mice, through enhanced Wnt signalling [60]. In several models of traumatic brain injury (TBI), statins have shown promise in enhancing neurogenesis, and in some cases have been associated with reductions in injury-associated neurological sequelae, including reduced cognitive deficit. Both simvastatin and atorvastatin have been shown to enhance neurogenesis in the dentate gyrus following TBI in rats [61,62], which was associated with increased vascular endothelial growth factor (VEGF) and brain-derived neurotrophic factor (BDNF) expression [62], increased cellular proliferation and differentiation in the dentate gyrus [62], reduced delayed neuronal death in the hippocampus [61], and improved spatial learning [61,62]. Despite meta-analyses suggesting no adverse effect on cognition resulting from statin treatment in the short term [2], case reports of impairment in the form of transient, reversible memory loss and confusion have been published [45].
The presentation of detrimental cognitive symptoms is highly varied, both in terms of the nature of impairment (memory loss, amnesia, mood changes) and the duration of statin therapy before onset (from 2 days to several months). The prevalence of these adverse effects across published data from large-scale clinical trials and epidemiological studies appears negligible [44]; however, inconsistency of reporting and the risk of bias should be acknowledged. The question of how and why this phenomenon occurs remains unanswered, largely due to the extremely rare nature of this effect and uncertainties over the causal nature of these observations. Due to the CNS's self-reliance in terms of cholesterol production, and the low metabolic turnover of cholesterol within the brain, it would be unlikely that an acute disruption in cholesterol synthesis in either the peripheral or CNS pool would contribute to acute cognitive impairment. This leaves cholesterol-independent, or so-called pleiotropic, mechanisms implicated in this rare potential adverse effect.

Alzheimer's Disease
In addition to statins' acute cognitive effects, much attention has been devoted to the impact of statins both in the prevention and treatment of neurodegenerative disorders, such as AD. AD is a chronic, irreversible form of dementia, characterised by progressive memory loss and cognitive decline. The histopathology of AD is characterised by tissue atrophy and gliosis, in addition to synaptic loss predominating in the frontal and temporal cortices [63]. In addition to these structural features, intracellular neurofibrillary tangles (composed of hyper-phosphorylated tau protein) and extracellular amyloid plaques (composed of amyloid-β) are also typically seen throughout the brain parenchyma. The first reports identifying the potential therapeutic benefit of statins in AD were two independent observational studies, in which statin use was associated with reductions in AD occurrence of up to 70% [48,64]. Since this time, a number of clinical trials have been published with conflicting data. The majority appear to support this initial finding, that statin treatment in patients without baseline cognitive impairment and before old age may have a beneficial role in protecting against the onset of AD [47,50,51,65]. Furthermore, studies suggest that statins are unlikely to provide neuroprotection against disease progression in patients with existing cognitive impairment at baseline, or if initiated in late old age [50,51]. Consistent with the previous suggestion that individual statins may contribute differently to neurocognitive effects, a cross-sectional study by Wolozin and colleagues found lovastatin and pravastatin, but not simvastatin, to be associated with a reduced risk of AD development [64]. Given that statins are known to reduce dyslipidaemia, an established contributing factor for AD risk, cholesterol-dependent effects in the periphery cannot be discounted as a mechanism for statins' effects in reducing AD incidence. However, studies which identified that statins reduced the risk of developing dementia in patients with physiologically normal lipid profiles suggest that pleiotropic effects of statins may also contribute to this observed effect [48,53]. Several animal models of AD have shown statins to exert neurocognitive benefits in the absence of changes in plasma or brain cholesterol content, further suggesting a cholesterol-independent mechanism of protection [66][67][68].
A lack of information as to the true pathophysiology of AD limits our understanding of statins' role in AD development and progression. A variety of experimental approaches have been used across both in vitro and in vivo studies, which has resulted in a number of proposed mechanisms of action of statins in AD. As with studies broadly exploring cognitive impairment, the depletion of isoprenoid intermediates has again been implicated as a possible mechanism for statin-mediated neuroprotection in AD. A study by Eckert and colleagues identified that both FPP and GGPP levels were significantly elevated in the grey and white matter of human AD patients, whereas cholesterol levels were not [69]. This same study found that simvastatin treatment in mice significantly reduced brain FPP and GGPP levels, though the effects of other statins are yet to be quantified [69]. Whether elevated FPP or GGPP levels are contributors to or consequences of AD neuropathophysiology remains unclear. Whilst FPP and GGPP appear to mediate some of the effects of statins, it is likely that the downstream small GTPase family of signalling molecules also plays an important role. These molecules, including Ras, Rho, Rac, Rab and Rap, are substrates of the prenylation process, whereby the attachment of isoprenoid groups increases their lipophilicity and facilitates their interaction with cellular membranes. Depletion of FPP and GGPP through statin treatment, and subsequent inhibition of these small GTPase proteins, has been associated with both neuroprotective and neurotoxic effects in various cell and animal models. The modulation of Alzheimer amyloid-β precursor protein (APP) metabolism has been implicated as one possible mechanism of neuroprotection, with both in vitro and in vivo studies demonstrating statin-induced attenuation of cerebral amyloidosis and APP production [66,70,71]. It has been suggested that the inhibition of the Rho-associated coiled-coil kinase 1/2 (ROCK) pathway by both simvastatin and atorvastatin is a possible mechanism for stimulated soluble APP (sAPP) shedding in mouse N2a.Swe neuroblastoma cells [70]. A similar study using the same cell line identified that simvastatin preferentially increases sAPPα over total sAPP, but had no effect on other cell lines, including mouse primary neurons and human neuroglioma cells, suggesting that this response may be unique to this cell line [72]. Based on results from this study, which compared the effects of lovastatin and simvastatin on APP processing across a number of cell types from humans and mice, it is likely that statin-induced effects on APP metabolism are cell type-dependent; thus specific in vitro data surrounding APP processing should be analysed cautiously [72]. Despite statins' actions on APP metabolism remaining unclear, a number of studies have consistently demonstrated reduced amyloid-β peptide (Aβ) production induced by statin treatment. In rat primary cortical neurons, treatment with either pitavastatin or atorvastatin (0.2-2.5 µM) induced time- and concentration-dependent reductions in Aβ40 and Aβ42 production [73]. Exogenous supplementation with cholesterol in this study did not restore Aβ levels, suggesting cholesterol-independent mechanisms underlying this observation. Due to the apparent clinical link between statin use and reduced incidence/severity of inflammatory-based CNS pathologies, including AD, the reduction of chronic neuroinflammation has been proposed by many as a key mechanism for statin-induced neuroprotection.
In experimental models of AD, the reduced production of Aβ has been attributed to the reduction of neuroinflammation and of the cells involved in the neuroinflammatory response [72,74]. In rats, atorvastatin prevented Aβ-induced microglial activation, an early step in the neuroinflammatory response [75]. Simvastatin (1-25 µM) was found to reduce Aβ-induced production of interleukin (IL)-1β in THP-1 monocytes, and reduced Aβ-induced and lipopolysaccharide (LPS)-induced nitric oxide, inducible nitric oxide synthase (iNOS) and reactive oxygen species (ROS) production in BV-2 microglial cells [76]. The release of inflammatory mediators, including IL-1β, IL-6, tumour necrosis factor (TNF)-α, and reactive nitrogen species, is also reduced by statins in astrocyte and macrophage models of Aβ-induced neuroinflammation [76][77][78], with these effects found to be mediated through Rho inhibition in THP-1 monocytes [76]. In contrast to the neuroprotective effects of Rho inhibition in microglia and monocytes, in a model of early AD using primary rat hippocampal neurons, lovastatin (10-100 µM)-induced apoptosis and cell death were attributed to Rho-dependent pathways [79]. Mevastatin treatment (10 µM) in cultured rat hippocampal slices has also been found to increase microglial activation [80]. These differences perhaps suggest a dose-dependent, statin-dependent and/or model-dependent relationship between statin use and models of neuroinflammation associated with AD. Consistent with the attenuation of neuroinflammation, atorvastatin-induced reductions in brain oxidative and nitrosative stress have also been noted in aged beagles following chronic treatment (80 mg/day for 14.5 months) [78]; similar observations have been noted in other studies using mice, whereby atorvastatin (10 mg/kg for 7 days) and simvastatin (20 mg/kg for 8 weeks) both decreased oxidative stress and inflammatory markers, though neither treatment coincided with protection against cognitive impairment [81,82]. Duration of therapy may be an important factor in the neuroprotective potential of statins, with both atorvastatin (30 mg/kg/day) and pitavastatin (3 mg/kg/day) only showing protective effects against senile plaques and phosphorylated tau-positive dystrophic neurites after 10 months of treatment in APP transgenic mice [67]. Another noteworthy variable is age, with simvastatin (40 mg/kg/day, 3-6 months) shown to fully restore short- and long-term memory in adult (6-month), but not in aged (12-month), transgenic mice [83]. Thus, the inconsistencies between studies thus far may be attributable to differing effects between statins, dose-dependent toxicities, time-dependent effects, cell-dependent responses and/or species-dependent responses. Other mechanisms which have been implicated in statin-induced AD attenuation include: increased microglial degradation of extracellular Aβ in mice through farnesylation-dependent increases in insulin-degrading enzyme secretion (lovastatin, 5 µM) [84]; γ-secretase relocation in lipid rafts (pitavastatin, 5 µM) [85]; enhanced APP C-terminal fragment trafficking from endosomes to lysosomes [71]; and reduced senile plaques and phosphorylated tau-positive dystrophic neurites (atorvastatin 30 mg/kg/day, 15 months; pitavastatin 3 mg/kg/day, 15 months) [67]. On the whole, it would appear that statins exert some form of protection against early events associated with AD development.
The lack of understanding as to the true pathophysiology of AD limits the application of cell and animal models of statin-mediated neuroprotection to the true mechanism of statins' apparent effects. Given that the majority of studies use a single statin as a representative of the class, differences between individual statins' mechanisms or propensity for neuroprotection against AD remain unclear.

Parkinson's Disease
PD is a progressive neurodegenerative disorder characterised by the presence of Lewy bodies (intracellular protein aggregates), the loss of dopaminergic neurons from the substantia nigra pars compacta in the midbrain, and the associated clinical manifestations of dopamine deficiency (gait disturbance, tremor, rigidity and bradykinesia). It is the second most common chronic neurodegenerative disorder in adults over the age of 65 years [86]. Epidemiological evidence suggests that some statins may reduce the incidence of PD; Wolozin and colleagues identified that simvastatin treatment was associated with a significantly reduced incidence of PD in patients aged over 65 years, whereas neither atorvastatin nor lovastatin showed significant effects [49]. Compared with discontinuation of statins, continuation of lipophilic statin use has been associated with a reduced risk of PD, particularly in the elderly [87]. In patients with existing PD, however, 10-day treatment with simvastatin (40 mg/day) showed no significant effects on dyskinesia, functional impairment or involuntary movement [88]. Furthermore, animal studies have shown simvastatin (10 mg/kg/day, 21 days) to protect against 6-OHDA-induced loss of N-methyl-D-aspartate (NMDA) receptors in rats [92]. Both simvastatin (1 mg/kg/day) and pravastatin (80 mg/kg/day) were also found to attenuate 1-methyl-4-phenyl-1,2,3,6-tetrahydropyridine (MPTP)-induced dopaminergic neuronal loss through inhibition of p21(Ras)-induced NF-κB, though simvastatin appeared to do so more effectively [93]. A number of studies have also observed statin-induced improvements in behavioural activity and motor function in a number of PD models in vivo, which correlate with protection against induced neuronal damage [88,[91][92][93]. Despite encouraging evidence from both cell and animal studies, the lack of prospective and clinical studies into statins' effects on PD limits our understanding of these drugs in this condition, and hence any conclusions regarding their therapeutic potential.

Multiple Sclerosis
MS is a chronic inflammatory disease of the nervous system, whereby T-cell-mediated responses are associated with the destruction of myelin sheaths, which can ultimately result in axonal damage and neurological deficit [94]. In general, statins have been considered largely beneficial in pathologies associated with demyelination, particularly MS. Although phase II clinical trials of simvastatin treatment in MS patients have recently been successfully completed [95], in vitro and in vivo evidence surrounding the effects of statins on nerve conduction and remyelination is largely conflicting. It is believed that much of this contradictory data stems from differing experimental designs, including time-dependent responses, and serum conditioning with either foetal bovine serum supplementation or exogenous cholesterol [96,97]. Several statins, including atorvastatin, lovastatin and simvastatin, have been associated with enhanced differentiation of oligodendrocyte progenitor cells (OPCs), the depletion of which exhausts remyelination capacity.
Atorvastatin pre-treatment (5 mg/kg/day, 7 days) in an animal model of sciatic nerve crush injury was found to up-regulate several remyelination-associated genes, including growth-associated protein-43, myelin basic protein, ciliary neurotrophic factor, and collagen [98]. This was also associated with increased protection against damage, including reduced structural disruption, inflammation and neurobehavioural changes [98]. Simvastatin (5-10 µM) has also been associated with inducing process extension in OPCs, and enhanced differentiation to the mature OD phenotype. Interestingly, however, this protective effect was found to be time-dependent, with increased simvastatin exposure time associated with process retraction in both OPCs and mature ODs [99]. The enhanced differentiation of OPCs in the presence of statins has raised the question of whether chronic OPC depletion is likely to affect the regenerative capacity of the neuroparenchyma. Conflicting results have been noted in studies which found detrimental effects of statin treatment on remyelination. Whilst simvastatin (2 mg/kg/day) did not impact myelin load or demyelination in healthy mice over a two-week administration period, when administration was extended to five weeks, rates of demyelination significantly increased [97]. In the same study, simvastatin decreased myelin load during concomitant demyelination and impeded remyelination, which was attributed to inhibition of OPC differentiation. These results were replicated by Klopfleisch and colleagues, who further identified that simvastatin (5 µM) impaired the p21Ras/p38 mitogen-activated protein kinase (MAPK) pathway and reduced synthesis of myelin basic protein, myelin proteolipid protein and 2',3'-cyclic nucleotide 3'-phosphodiesterase (CNP) in vitro [100]. Simvastatin (5-10 µM)-induced OPC process extension and maturation can be mimicked through ROCK inhibition, and is either partially or fully reversed with isoprenoid metabolites, depending on simvastatin exposure time [99]. Given that the vast majority of cholesterol acquisition in the CNS is through glial synthesis or neuronal reutilisation, with little to no reliance on systemic cholesterol pools, cholesterol availability in ODs is a rate-limiting step for successful myelination [101]. In addition to direct effects on ODs, statins' effects on neuroinflammation and immunomodulation have also been implicated as possible contributing mechanisms in MS. Lovastatin (2 mg/kg/day) has been found to ameliorate the clinical symptoms associated with experimental autoimmune encephalomyelitis (EAE), an animal model of human MS, as well as to reduce neuroinflammatory mediators such as iNOS, TNF-α and interferon (IFN)-γ [102,103]. Similarly, atorvastatin (10 mg/kg/day) has also been shown to improve clinical symptoms of EAE, which has been attributed to reduced RhoA geranylgeranylation, impaired T cell responses and altered T helper (Th)1/Th2 inflammatory ratios [104]. Statins are also noted to modulate T cell immunity, a factor which plays a crucial role in autoimmune neuroinflammation. Statins have been found to affect the T cell response through the inhibition of Th1 differentiation and migration across the BBB [105,106]. In the presence of statins, myelin-reactive CD4+ T cells exhibit reduced TNF-α and IFN-γ secretion, and instead secrete protective Th2 cytokines, such as IL-4 [105,106].
It is thought that the negative effects of statins which have been observed in vitro may be due to depletion of the isoprenoid GGPP, ordinarily responsible for activation of RhoA signalling [96,99,103]. Statin-induced inhibition of RhoA-mediated ROCK signalling induces MAPK and peroxisome proliferator-activated receptor (PPAR)-γ activation [96]. The activation of PPAR-γ induces phosphatase and tensin homolog (PTEN), which ultimately inhibits OPC proliferation through the induction of cell cycle inhibitory proteins [96,107]. The inhibition of Ras and Rho signalling by simvastatin (5 µM) was found to hamper myelin and OD process formation in vitro [100]. The reasons underlying the discrepancies between cell and animal models of MS are not yet fully understood; however, the extent of statin penetration and additional compensatory mechanisms within the whole brain compared to in vitro models may be possible explanations. Ultimately, even though the underlying mechanisms currently remain elusive, the successful completion of phase II testing of simvastatin as a treatment for MS indicates that this compound may have some benefit in demyelinating conditions [95]. Further information from this trial will be necessary to properly evaluate simvastatin's role in myelination, with a view to clarifying whether this effect is a class or compound-specific action.

Neurofibromatosis Type I
Neurofibromatosis type I (NF-1; formerly known as von Recklinghausen disease) is an autosomal dominant disorder associated with learning disabilities and attention deficits, amongst other manifestations. Cognitive dysfunction is the most common neurological complication of NF-1 during childhood [108]. Lovastatin (10 mg/kg/day) was shown to normalise Ras activity, reverse learning and attention deficits and rescue long-term potentiation deficits in a mouse model of NF-1 [109]. Despite a phase I study suggesting that lovastatin (20-40 mg/day, 3 months) treatment in 10-17-year-old children with NF-1 may have potential benefits on cognitive parameters [110], a recent randomised controlled trial found no effect of simvastatin (10-40 mg/day, 12 months) on cognitive deficits or behavioural outcomes in children aged 8-16 with NF-1 [111]. Mechanistic studies as to whether compound-specific effects are seen in NF-1 may be warranted before further clinical evaluation is conducted.

Statins and Neurological Disease
In addition to effects on cognition, statins have been identified as possible preventative and/or treatment options in a number of neurological conditions, including stroke, epilepsy, depression, cancer, and brain and spinal cord injury (see summary of mechanisms and evidence in Table 3). Similar to studies which explore the effects of statins on neurocognitive disorders, there is a lack of information surrounding the molecular mechanism of action of statins in the majority of the neurological disorders discussed in this review. Again, due to the limited data, whether the mechanisms which have been identified thus far are broadly applicable to all statins or solely to the statin tested is often unclear and requires further well-designed studies to be conducted.

Table 3 (condensed summary). Depression: evidence is mainly epidemiological, with a recent meta-analysis suggesting statins reduce the risk of depression; mechanism-based studies are limited, and whether the observed effects are statin-induced, due to decreased cholesterol, due to an improved quality of life, or a combination is unclear. Psychiatric disorders: mechanism unknown; limited observational studies, and causality is unclear; if prevalence is affected by statins, it is thought to be rare and only in predisposed patients, and further in vivo studies and directed epidemiological studies would be useful. Traumatic brain injury/spinal cord injury: numerous in vivo studies; statins appear to exert beneficial effects if initiated immediately post-TBI/SCI, but due to some conflicting data, further well-designed studies are required before clinical application can be assessed.
Stroke
In addition to their well-established cardiovascular benefits, randomised controlled trials and meta-analyses have found statin use to be associated with a reduced incidence of ischemic and haemorrhagic stroke [3,4], and with improved neurological outcomes and prognosis acutely following stroke across a number of studies [112,113]. Additionally, recent studies have also identified that statin withdrawal is associated with worsened post-stroke survival [114], and that statin initiation within 24 h of thrombolysis may also improve both short- and long-term outcomes [115]. Although the relationship between stroke and cholesterol levels remains unclear, statins' systemic effects on the vascular system are thought to underpin much of their effect in stroke, and include antithrombotic effects, anti-inflammatory effects, improved endothelial function, and the stabilising of atherosclerotic plaques. Several lines of evidence suggest that the up-regulation of endothelial nitric oxide synthase (eNOS), and the resultant increase in nitric oxide production, by statins acts as a primary neuroprotective mechanism against stroke through the improvement of cerebral blood flow around the cerebral penumbra [77,116]. In a mouse model of stroke, the protective effects of simvastatin (20 mg/kg/day, 14 days) on infarct size, cerebral blood flow and neurological function were eliminated following eNOS knockout [117]. Statin-induced increases in eNOS have been attributed to GGPP inhibition [116], the subsequent reduction in RhoA and Rac1 expression, and the stabilisation of eNOS mRNA [118]. Additionally, several studies have implicated statin-induced reductions in ROS and matrix metalloproteinases (MMPs) in exerting neuroprotective benefits in stroke. The release of MMPs by astrocytes and microglia is associated with neuroinflammation and BBB disruption [119,120]. Several lines of evidence suggest that statin-induced reductions in MMPs may play a role in the apparent immunomodulatory effects of statins. Atorvastatin has been shown to reduce recombinant human tissue plasminogen activator (rht-PA)-induced MMP up-regulation in the rat brain, and reduced MMP-associated increases in blood-brain barrier permeability [121]. In cortical astrocytes, simvastatin (1-10 µM) significantly reduced rht-PA-induced MMP-9 dysregulation through modulation of the Rho signalling pathway [122]. Similarly, ROS are thought to contribute to ischemia through direct intracellular damage to proteins, lipids and nucleic acids. In rats, atorvastatin pre-treatment (10 mg/kg/day, 3 doses) prior to middle cerebral artery occlusion significantly reduced infarct volume, which coincided with significantly reduced penumbral nicotinamide adenine dinucleotide phosphate (NADPH) oxidase activity and superoxide levels [123]. Similarly, rosuvastatin (2 mg/kg/day, 24 h and 28 days) has been shown to reduce NADPH oxidase-dependent superoxide production in cerebrovascular arteries of insulin-resistant Zucker obese rats [124].
Considering the plethora of data supporting the antioxidant effects of various statins in reducing endothelial dysfunction within the cardiovascular system, it is likely that the observed benefits of statins in cerebrovascular ischemia may also be mediated through reduced ROS activity.

Epilepsy
The incidence of developing epilepsy has two predominant peaks across the human lifespan: during childhood, and after age fifty. Whilst the true pathophysiology is largely unknown, it has been suggested that epilepsy which develops later in life may be a result of cerebrovascular disease, brain tumours or AD. In several epidemiological studies, statin users have been associated with a reduced risk of developing epilepsy, a finding which is supported by studies in animals and in vitro [125][126][127][128]. In a case-control study, a dose-dependent effect between statin use and seizure risk was observed, with every 1 gram increase in atorvastatin used annually associated with a 5% reduced risk of hospitalisation due to seizure [126]. Although the use of cell culture for modelling seizure mechanisms and epileptogenesis is limited, in vitro studies have suggested that statins may exhibit excitoprotective properties, though not at equipotent doses. In primary neuronal cultures, simvastatin was found to reduce the association of subunit 1 of NMDA receptors with lipid rafts by 42%, a mechanism which was hypothesised to contribute to simvastatin-induced protection against NMDA-induced neuronal damage [129]. Lipid rafts are distinct, highly dynamic sterol- and sphingolipid-rich microenvironments within the cellular membrane and are implicated as platforms for numerous signalling pathways; thus the perturbation of these zones has the potential to affect neuronal signalling. In addition to simvastatin's effects on lipid rafts, both simvastatin and lovastatin have also been associated with excitoprotection mediated through the inhibition of calcium-dependent calpain activation, ROCK inhibition, the activation of the PI3K pathway, and increased APP cleavage [125]. Whether all statins contribute equally to this observed excitoprotection remains questionable. An earlier study by Zacco and colleagues identified that a number of statins were capable of protecting primary neurons against NMDA-induced cytotoxicity, though neuroprotective potency differed between statins: (rosuvastatin, simvastatin) > (atorvastatin, mevastatin) > pravastatin [130]. In contrast to these in vitro findings, a study in mice comparing five commercially available statins identified simvastatin and lovastatin as effective in reducing seizure severity and histopathological signs of excitotoxicity, whilst neither fluvastatin, atorvastatin nor pravastatin showed any significant benefit in ameliorating seizure-related sequelae [131]. It should be noted, however, that the protective effects may only be seen at high doses, with a recent rat model of epileptogenesis identifying that a dose of 10 mg/kg/day of either atorvastatin or simvastatin significantly reduced the development of absence seizures, although this dose of pravastatin was ineffective at reducing seizure incidence. Increasing the pravastatin daily dose to 30 mg/kg/day resulted in a significant reduction in the number of seizures [132]. Due to the limited data available thus far, further studies are required to evaluate the clinical implications of these findings.
Depression
As with other neurological disorders, there remains conflict across the literature with regards to statins' effects in depression. Epidemiological evidence has suggested a possible role for statins in the reduction of depression and depression-like symptoms [133][134][135][136][137][138], with a recent meta-analysis by Parsaik and colleagues concluding that statin use was associated with a lower risk of depression (adjusted odds ratio, 0.68; 95% confidence interval, 0.52-0.89) [1]. In addition to all-cause depression, statins have also been linked to a reduced risk of post-stroke depression [136] and to attenuation of the increased risk of depression associated with hyperlipidaemia [139]. However, a number of studies have found no significant relationship between statin use and risk of depression or depression-like symptoms [140][141][142], whilst one study found that statin use was associated with increased depression prevalence [43]. Thus, whether the apparent protective effect of statins against depression is a true pharmacological effect, or a result of other factors, such as improved cardiovascular health or increased health consciousness following statin treatment, remains unclear. This uncertainty is compounded by a lack of mechanism-based studies which explore the antidepressant effects of statins in animal models. In rats exposed to chronic mild stress, simvastatin (5-10 mg/kg/day, 14 days) reversed some stress-induced behavioural changes, comparably to imipramine, a tricyclic antidepressant [143]. Similarly, atorvastatin (0.1-10 mg/kg, single dose) has been shown to exhibit acute antidepressant-like activity in mice, with modulation of NMDA receptor activity and nitric oxide inhibition identified as possible mechanisms [144]. Further well-designed animal studies which explore the relationship between statin use, hypercholesterolaemia, anxiety and depression are warranted.

Psychiatric Disorders
Studies designed to determine statins' effects on specific neuropsychiatric reactions are limited and have yielded conflicting results. Statin use was not associated with any alteration in the risk of schizophrenia, schizoaffective disorders, psychosis, major depression, or bipolar disorder compared to non-users in an observational, propensity score-matched cohort study [145]. In contrast, one study found that statin use was associated with a reduced risk of anxiety and hostility [134]. Due to the limited reports of negative psychiatric events and a lack of causality, it is largely thought that psychiatric events associated with statins are rare, perhaps occurring only in predisposed patients.

CNS Cancers
The effect of statins on both cancer incidence and mortality remains unclear, with evidence for both reduced and increased cancer-related mortality associated with statin use [146,147]. Although large-scale meta-analyses have suggested that statins do not have significant effects on cancer incidence [51,148,149], evidence from both cell and animal studies has suggested a possible role for statins in the treatment of cancers. Of these studies, however, only a limited number have been conducted using neurological models. An early phase I study by Thibault and colleagues determined the effects of lovastatin in 88 patients, of whom 24 had primary tumours of the central nervous system. Whilst this study observed that lovastatin (25 mg/kg daily for 7 consecutive days) was well tolerated in both healthy and cancer patients, effects on cancer progression were not sought [150].
Similarly, in a subsequent phase I/II trial using lovastatin (35 mg/kg) in patients with anaplastic astrocytoma and glioblastoma multiforme, no CNS toxicity associated with treatment was found; however, no improvement in tumour response was observed [151]. Of the remaining studies which report on cancer risk associated with statin use, the majority were not designed to determine effects on cancer as a primary endpoint; thus it is difficult to ascertain the true clinical effect of statins in cancer, particularly cancers of CNS origin. As such, the majority of data thus far stems from cell and animal models. Several cancer models have been investigated, with statins appearing to exert beneficial anti-tumourigenic effects in animal models of glioma (lovastatin) [152] and neuroblastoma (mevinolin, lovastatin) [153,154]. In an in vitro model, lovastatin was found to reduce the invasiveness of human glioma cells [155]. Several statins (lovastatin, mevastatin, fluvastatin and simvastatin) have also been found to increase caspase-3-mediated apoptosis and decrease extracellular-signal-regulated kinase (ERK) 1/2 and Akt (also known as protein kinase B) activity in C6 glioma cells through GGPP-dependent mechanisms [156,157]. Similarly, lovastatin-induced apoptosis in SH-SY5Y neuroblastoma cells is mediated through GGPP-dependent mechanisms [158]. In vivo, simvastatin and lovastatin have been shown to reduce the growth of malignant rat gliomas and murine neuroblastomas, respectively, with simvastatin's effects attributed to growth arrest and induction of apoptosis [152,153]. Ultimately, however, until further epidemiological studies and clinical trials are conducted, the true effect of statins on the incidence of CNS cancer and on tumour growth remains unclear.

Brain and Spinal Cord Injury
Statins, particularly atorvastatin and simvastatin, have been widely studied in vivo for their effects in TBI and spinal cord injury (SCI). On the whole, the data thus far suggest a positive, neuroprotective effect induced by statins across both models. Atorvastatin has been identified across numerous studies as exerting beneficial effects against the neurological sequelae associated with SCI. Atorvastatin-treated rats (5 mg/kg, 2 h post-injury) have shown significant improvement in locomotor activity compared to control rats four weeks post-SCI, which was attributed to reductions in early apoptosis at the injury site [159]. Similar studies in rats have identified additional mechanisms through which atorvastatin may exert its neuroprotective effects in SCI, including reduced blood-spinal cord barrier dysfunction through reduced RhoA/ROCK activity, reduced infiltration and expression of TNF-α, IL-1β and iNOS at the site of injury, and reduced axonal degradation, myelin degradation, gliosis and neuronal death [160,161]. Whilst animal studies thus far have largely supported a beneficial role for statins in improving neurological outcomes following TBI or SCI, not all studies have found neuroprotective benefit following statin treatment in SCI [165]. As such, further evaluation of these compounds is required before the translational value of these data can be accurately assessed.

Conclusions
Whilst research into understanding statins' CNS effects has been extensive in recent years, there is still a distinct lack of mechanistic supporting evidence to justify the use of these compounds in the prevention or treatment of neurological disorders.
The available mechanistic evidence supports a possible beneficial role of statin treatment in some conditions, such as the prevention of dementia and the treatment of MS, suggesting that the heightened concerns over statins' neurological effects may be largely unwarranted. While it is apparent that the structural differences between statin compounds contribute to their vastly different pharmacokinetic parameters, how this translates into pleiotropic differences between statins is less widely acknowledged. In the CNS in particular, an improved understanding of the precise mechanistic differences between statins is needed so that therapeutic decision-making may be better informed. Until such time that more comparative evidence is available, it would be prudent for clinicians and researchers to consider the evidence for individual statins in the CNS, as opposed to assuming a class action. Additionally, more evidence is required before any statin therapy can be recommended clinically for the treatment or prevention of these neurological conditions.

Conflicts of Interest
The authors declare no conflicts of interest.
Daily Terra–Aqua MODIS cloud-free snow and Randolph Glacier Inventory 6.0 combined product (M*D10A1GL06) for high-mountain Asia between 2002 and 2019

Snow is a dominant water resource in high-mountain Asia (HMA) and crucial for mountain communities and downstream populations. Snow cover monitoring is important for understanding regional climate change and for managing meltwater and the associated hazards/disasters. The uncertainties in passive optical remote-sensing snow products, mainly underestimation caused by cloud cover and overestimation associated with sensors' limitations, hamper the understanding of snow dynamics. We reduced the biases in Moderate Resolution Imaging Spectroradiometer (MODIS) Terra and Aqua daily snow data and generated a combined daily snow product for high-mountain Asia between 2002 and 2019. An improved MODIS 8 d composite product, MOYDGL06*, was used as training data for reducing the underestimation and overestimation of snow in the daily products. The daily MODIS Terra and Aqua images were improved by implementing cloud removal algorithms, followed by gap filling and reduction of overestimated snow beyond the respective 8 d composite snow extent of the MOYDGL06* product. The daily Terra and Aqua snow products were combined and merged with the Randolph Glacier Inventory version 6.0 (RGI 6.0), described as M*D10A1GL06, to make a more complete cryosphere product with 500 m spatial resolution. The pixel values in the daily combined product are preserved and reversible to the individual improved Terra and Aqua products. We suggest weights of 0.5 and 1 for snow pixels in either or both of the Terra and Aqua products, respectively, for deriving snow cover statistics from our final snow product. The values 200, 242, and 252 indicate snow pixels in both Terra and Aqua and have a weight of 1, whereas pixels with snow in only one of the Terra or Aqua products have a weight of 0.5. On average, the M*D10A1GL06 product reduces 39.1 % of the uncertainty compared to the MOYDGL06* product. The uncertainties due to cloud cover (underestimation) and sensor limitations, mainly a larger solar zenith angle (SZA) (overestimation), reduced in this product, are approximately 32.9 % and 6.2 %, respectively. The data in this paper are mainly useful for observation and simulation of climate, hydro-glaciological forcings, calibration, validation, and other water-related studies. The data are available at https://doi.org/10.1594/PANGAEA.918198 (Muhammad, 2020) and the algorithm source code at https://doi.org/10.5281/zenodo.3862058 (Thapa, 2020).

Remote-sensing snow products are important in hydrological and other snow-related research (Hall et al., 2002; Li et al., 2019). The temporal coverage of remote-sensing snow data is sufficient for climate change studies (e.g., NOAA Advanced Very High Resolution Radiometer (AVHRR) snow data have been available since the 1980s) (Hori et al., 2017). However, the spatial resolution before this century was relatively coarse (Hüsler et al., 2012); this has improved since the early 21st century with the most popular and up-to-date snow products from the Moderate Resolution Imaging Spectroradiometer (MODIS) on board Terra and Aqua (Hall et al., 2007). The advantage of these datasets is the daily temporal resolution, and the disadvantage is the low spatial resolution and a large swath of approximately 2300 km. These limitations cause snow overestimation at the image edges and in images acquired in the off-nadir view (Riggs et al., 2016).
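As a usage illustration of the weighting just described, the sketch below computes a weighted snow cover fraction from an M*D10A1GL06 array using Python/numpy. This is not the authors' released code: the function name and the synthetic tile are ours, and the codes for single-sensor snow pixels (weight 0.5) are not listed in this excerpt, so they are left as a placeholder to be filled in from the product documentation.

```python
import numpy as np

# Pixel codes stated in the text as snow in both Terra and Aqua (weight 1).
BOTH_SNOW = {200, 242, 252}
# Codes for snow in only one sensor (weight 0.5) are not given in this
# excerpt; fill this set from the M*D10A1GL06 product documentation.
SINGLE_SNOW = set()

def snow_cover_fraction(grid, valid_mask):
    """Weighted snow fraction over the pixels selected by valid_mask."""
    weights = np.zeros(grid.shape, dtype=float)
    weights[np.isin(grid, list(BOTH_SNOW))] = 1.0
    if SINGLE_SNOW:
        weights[np.isin(grid, list(SINGLE_SNOW))] = 0.5
    return weights[valid_mask].sum() / valid_mask.sum()

# Synthetic 3 x 3 tile for illustration only (25 = no snow).
tile = np.array([[200, 25, 242],
                 [25, 252, 25],
                 [25, 25, 200]])
print(snow_cover_fraction(tile, np.ones_like(tile, dtype=bool)))  # ~0.44
```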
Another major constraint in these passive optical remote-sensing products is cloud cover, which causes spatial and temporal discontinuities in the time series. Cloud contamination in the original 8 d composite MODIS snow cover products is comparatively less than in the daily products (Hall et al., 2002) but remains significant; e.g., in the Karakoram the Terra and Aqua 8 d images are 9 % and 15 % cloud-covered on average, respectively (Thapa and Muhammad, 2020). To remove up to 99.98 % of the remaining clouds in the original 8 d composite products M*D10A2, a new Terra and Aqua composite product, namely MOYDGL06*, was developed for HMA using a multi-step approach. This product, MOYDGL06*, is a significant contribution to snow-related studies. However, the 8 d composite is the maximum snow extent for 8 consecutive days, which does not capture the exact timing of snow onset and melt (Hall et al., 2005). Similar limitations arise when the 8 d composite products are used for snowmelt runoff modeling, which requires daily snow information. This study addresses the temporal limitations of the 8 d composite data and improves the daily MODIS snow products. Various methods, including spatial and temporal filters, are used for cloud removal in MODIS data (Li et al., 2019), but less attention has been given to removing the overestimation attributed to the large solar zenith angle (SZA) and the wide swath of each tile. In this study, a daily cloud-free product combining MODIS Terra (MOD10A1) and Aqua (MYD10A1) is generated using the 8 d composite MOYDGL06* product as a reference, which is useful not only for cloud removal but also for reducing overestimation. The overestimation, caused mainly by a larger SZA, was further reduced in the daily product by combining Terra and Aqua following the MOYDGL06* product methodology with a slightly different approach. We also fill the missing data gaps, remove overestimation in the daily snow data using the respective 8 d composite snow images, and merge the improved Terra and Aqua snow, assigning values reversible to the individual Terra and Aqua improved products. The improved Terra and Aqua cloud-free snow composite product merged with Randolph Glacier Inventory version 6 (RGI 6.0), namely M*D10A1GL06, is developed to make a more complete daily cryosphere product covering the period between 2002 and 2019. This product will significantly improve hydro-glaciological applications and snow-related observations in high-mountain Asia (HMA).

Study area

The MODIS Terra and Aqua combined daily snow product in this paper covers HMA as in Muhammad and Thapa (2020), with a geographic extent of latitude 24.32–49.19° N and longitude 58.22–122.48° E. The 10 major river basins of the Hindu Kush, Karakoram, and Himalaya (HKH) region and the Tibetan Plateau are covered in this study. Snow data in this study have a daily temporal resolution and 500 m spatial resolution. The product is derived from MODIS Terra (MOD10A1) and Aqua (MYD10A1), and glacier (GL), version 6 (06), named M*D10A1GL06. The data in this product for the period between 2002 and 2019 are available in GeoTIFF format.

Methodology

The input data for this study include Collection 6 (C6) of the daily MODIS Terra (MOD10A1) and Aqua (MYD10A1) products for the period between 2002 and 2019. The snow data were downloaded from https://earthdata.nasa.gov/ (last access: 24 January 2020) of NASA's Earth Science Data Systems (ESDS) program.
The algorithm in C6 has significantly reduced the errors of omission and commission in snow pixel detection, mainly due to low illumination conditions and high solar zenith angle (SZA), as compared to Collection 5 (C5) (Riggs et al., 2016). The data are described as 0–100 (Normalized Difference Snow Index (NDSI) snow cover), 200 (missing data), 201 (no decision), 211 (night), 237 (inland water), 239 (ocean), 250 (cloud), 254 (detector saturated), and 255 (fill) (Riggs et al., 2016; Riggs and Hall, 2016a, b). The data for snow pixels are the NDSI values of 0–1 scaled to the range of 0–100, derived from the daily surface reflectance product (MOD09GA). We converted the NDSI values to binary snow using the range applied in version 5 (40–100) of the M*D10A1 products. The values in the M*D10A1 products were reclassified into three classes: (1) the values 40–100 are the snow class and reclassified to 200, (2) the value 250 is cloud and reclassified to 50, and (3) the rest of the values are classified as no snow (25), to make them comparable with the improved 8 d composite MOYDGL06* product. The cloudy pixels in the daily Terra and Aqua snow products were replaced by snow, no snow, or remained cloud-covered using the corresponding improved 8 d composite snow (MOYDGL06*) product (2002–2018), with reduced uncertainty of underestimation and overestimation (Muhammad and Thapa, 2019, 2020), for the period between 2002 and 2019. We processed the 8 d composite images for the year 2019 following the MOYDGL06* methodology to extend the improved daily snow product to 2019. The Terra and Aqua daily products were separately processed and improved by removing clouds and overestimation. In the initial processing, the overestimation is reduced to the extent of the 8 d composite images by discarding snow in daily MODIS images falling beyond the maximum extent of snow in the corresponding 8 d composites (MOYDGL06*), as shown in Eq. (1). We call the snow beyond the 8 d composite snow extent an overestimation because the 8 d composite images represent the maximum extent of snow in the eight consecutive images. The value 50 in the superscript represents clouds, and M*D10A1 represents MOD10A1 and MYD10A1. The MOD10A1 and MYD10A1 were separately processed and are shown here in the same equation. Also, the daily MODIS product contains gaps with missing data between two successive strips, with an increased gap near the Equator. The missing data pixels caused by such gaps in the daily Terra and Aqua products were filled using the corresponding snow or no snow pixels of the MOYDGL06* product using Eq. (2). The superscript NoData represents a gap in either daily MODIS Terra or Aqua data. The improved MODIS Terra and Aqua daily snow products were combined and merged with RGI 6.0 to make an improved and combined snow and glacier product. The methodology for merging the daily products differs from that of MOYDGL06*, as the nature of the daily and 8 d products differs to some extent. We did not replace snow pixels with no snow if a pixel is snow in either the Terra or Aqua product, and we suggest assigning such pixels a weight of 0.5 when using this product for snow cover analysis. The snow values in this product are also preserved so that the separate Terra and Aqua products remain retrievable from this product. The Terra and Aqua snow data were combined using Eqs. (3)–(7).
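The per-pixel logic described above can be illustrated with a minimal sketch. This is an illustration only, not the authors' released R code; the array names and the assumption that the daily and 8 d grids are co-registered are hypothetical:

```python
import numpy as np

SNOW, CLOUD, NO_SNOW = 200, 50, 25

def reclassify_ndsi(ndsi):
    """Reclassify a daily M*D10A1 NDSI band (0-100, 250 = cloud) to binary snow."""
    out = np.full(ndsi.shape, NO_SNOW, dtype=np.uint8)
    out[(ndsi >= 40) & (ndsi <= 100)] = SNOW   # NDSI 40-100 -> snow class
    out[ndsi == 250] = CLOUD                   # cloud flag retained for now
    return out

def improve_daily(daily, composite8d, nodata_mask):
    """Remove clouds/overestimation and fill gaps using the 8 d MOYDGL06* product."""
    out = daily.copy()
    # cloudy pixels replaced by the corresponding 8 d composite snow / no-snow pixel
    cloud = daily == CLOUD
    out[cloud] = composite8d[cloud]
    # overestimation: daily snow beyond the 8 d maximum snow extent is discarded
    out[(daily == SNOW) & (composite8d != SNOW)] = NO_SNOW
    # swath gaps (missing data) filled from the 8 d composite
    out[nodata_mask] = composite8d[nodata_mask]
    return out
```

Applying improve_daily to each MOD10A1 and MYD10A1 scene against its MOYDGL06* composite reproduces, in order, the cloud removal, overestimation reduction, and gap filling steps described in this section.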
The combination of the daily improved snow from Terra and Aqua with the RGI was carried out in the same way, except that in the case of cloud in the snow data, the glacier ice is described as either debris-covered or debris-free, as derived from the RGI 6.0 inventory. The glaciers (debris-covered and debris-free) are described as 240 and 250 if they are exposed; otherwise they are given different values depending on whether the glacier is covered by snow in the Terra product, the Aqua product, or both. The values describing the improved daily snow combined with the RGI product are listed in the code and data availability section. There are 36 missing images in the original snow products, 35 in the Terra snow and 1 in the Aqua snow, equivalent to 0.29 % of the total snow data, which is insignificant for the time series. The missing data in the Terra snow, with ordinal dates, are 2003032, 2003199, 2003351-2003358, 2004050, 2004248, 2004277, 2005265, 2006172, 2006235, 2008355-2008358, 2009252, 2010065, 2010177, 2014299, 2016050-2016059, and 2017114. The missing data were replaced with adjacent images to complete the time series. A single missing image was replaced by the preceding image, while multiple missing images were replaced by the preceding and succeeding images adjacent to the absent images. The missing Aqua snow image of 2003167 was replaced by 2003166 to complete the time series. The product in this paper was named by merging the names of the original products, e.g., the Terra product (MOD10A1_Maximum_Snow_Extent_2002289) and Aqua product (MYD10A1_Maximum_Snow_Extent_2002289) merged with RGI06 (GL06) are named MOYD10A1GL06_Maximum_Snow_Extent_2002289 in the daily improved snow product (M*D10A1GL06).

Results and discussion

This study improved and combined daily MODIS Terra and Aqua snow data merged with RGI 6.0, separated into debris-covered and debris-free parts of the glaciers (M*D10A1GL06), for the period of 18 years between 2002 and 2019. Our methodology used the improved 8 d MOYDGL06* product as training data for improving the daily product. The 8 d data for 2019 were also improved following the algorithm described in Muhammad and Thapa (2020), as the 8 d composite product is available only until 2018. It is important to mention that the snow data in the 8 d composite product are valued as 200 (snow) and 210 (no snow). These values were reclassified as 200 (a note for users of the R code associated with this paper) to improve the daily snow data. The major issues in MODIS data, which we also highlighted in the previous paper (Muhammad and Thapa, 2020), of underestimation because of clouds and of overestimation caused by a large solar zenith angle (SZA), were reduced in this paper. The effect of SZA was reduced by merging the daily Terra and Aqua products, treating a pixel as snow if it is snow in both products while giving a weight of 0.5 if the pixel is snow in only one of the Terra or Aqua products. This criterion reduces the overestimation in the daily composite snow product by 6.2 %. The cloud cover in the daily Terra MODIS (MOD10A1) and Aqua MODIS (MYD10A1) products and in the respective improved products is shown in Fig. 1. The original daily Terra and Aqua images between 2002 and 2019 were cloud-covered by 41.96 % and 43.42 %, respectively. We almost completely removed cloud cover in this paper, with remaining clouds of 0.001 %, as shown by the straight red line in Fig. 1. On average, the cloud cover in the original Terra data is slightly less than in the Aqua data; however, the spatial distribution of clouds varies significantly with time.
The cloud cover is significantly higher in the daily original snow product (42.7 % on average) than in the 8 d composite product, which has 3.66 % cloud cover. These statistics indicate that, on average, more than 91 % of the clouds were already reduced in the 8 d composite M*D10A2 products available at the National Snow and Ice Data Center (Riggs et al., 2016) for HMA. This made our final Terra and Aqua combined daily snow product 99.99 % cloud-free on average. The cloud cover of the original and improved Terra and that of the original and improved Aqua are shown in Fig. 2a and b. The annual average snow cover in the original Terra snow product was 6.07 % and increased to 16.82 % in the improved Terra snow product. Similarly, the original Aqua snow product averaged 5.05 % and increased to 16.97 % in the improved Aqua snow product. The original Terra and Aqua average snow was 5.56 % and increased to 16.95 % in the improved Terra and Aqua combined snow. An example of the original Terra and Aqua images containing clouds and missing data, causing snow underestimation, and of the improved Terra and Aqua combined snow products is shown in Fig. 3. The average annual cloud and snow statistics for the original Terra MOD10A1, original Aqua MYD10A1, improved Terra MOD10A1, improved Aqua MYD10A1, and combined Terra and Aqua MOYD10A1 products are shown in Table 1. Removing unmatched Terra and Aqua data in the daily snow may increase the underestimation for areas where the SZA is greater (Sayer et al., 2015). It is particularly challenging to detect snow when the SZA exceeds 70° (Riggs et al., 2016), which constitutes up to 8 % of the data (Horváth et al., 2014). Similarly, for SZA > 60° the cloud optical thickness increases (Loeb and Davies, 1997), which is overcome by removing clouds using the 8 d composite data containing snow data overlapped by Terra and Aqua. In contrast, assigning a weight of 0.5 to such data may reduce the overestimation to 50 % of the data acquired from the off-nadir view. To assess the variability of snow overestimation, mainly due to SZA differences, we compared the minimum (snow overlapped by Terra and Aqua), maximum (snow in either Terra or Aqua), and mean snow (weight of 1 for the minimum snow and 0.5 for the additional maximum snow). The maximum and minimum snow cover area differed by 12.4 % on average for the whole study area, whereas the mean snow differs by 6.2 % on average from either the minimum or the maximum snow. Therefore, we suggest using the mean snow for snow cover analysis with this product. Also, both the minimum and maximum snow may be analyzed to estimate a range of snow cover area. The original Terra and Aqua snow and the minimum, maximum, and mean of the improved snow are shown in Fig. 4, illustrating the differences explained above for the study period. There are significant variations and underestimation in the original snow, mainly due to the persistence of clouds, as shown in Fig. 4. On average, 87.6 % of the individually improved Terra and Aqua snow pixels coincided in the improved Terra and Aqua combined snow product. For the remaining 12.4 % of mismatched snow pixels in the individual Terra and Aqua products, we suggest a weight of 0.50, to be used in combination with the coincident snow for understanding snow cover dynamics; this is regarded as the mean snow. This criterion enables us to discard 50 % of the mismatched snow (6.2 %) in the improved Terra and Aqua composite product.
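A minimal sketch of how a user might apply the suggested weighting and recover the per-sensor products from the combined coding follows; the value sets and weights are as described in this paper, while the array `combined` and the helper names are hypothetical, and this is not the authors' released R code:

```python
import numpy as np

BOTH = (200, 242, 252)                    # snow in both Terra and Aqua, weight 1
EITHER = (198, 199, 238, 239, 248, 249)   # snow in only one sensor, weight 0.5

def mean_snow_fraction(combined):
    """Mean snow cover fraction using weights 1 (both) and 0.5 (either)."""
    weights = np.zeros(combined.shape)
    weights[np.isin(combined, BOTH)] = 1.0
    weights[np.isin(combined, EITHER)] = 0.5
    return weights.mean()

def snow_range(combined):
    """Minimum (Terra-and-Aqua overlap) and maximum (either sensor) snow fractions."""
    minimum = np.isin(combined, BOTH).mean()
    maximum = minimum + np.isin(combined, EITHER).mean()
    return minimum, maximum

def decode(combined):
    """Recover per-sensor snow flags; even codes mark Terra snow, odd codes Aqua snow."""
    terra = np.isin(combined, BOTH + (198, 238, 248))
    aqua = np.isin(combined, BOTH + (199, 239, 249))
    return terra, aqua
```

The minimum and maximum fractions from snow_range bracket the plausible snow cover area, while mean_snow_fraction implements the recommended weighting, and decode illustrates the reversibility of the combined values to the individual Terra and Aqua products.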
The minimum, maximum, or mean snow data should be used with caution for small-scale studies, as the difference and mismatch may vary from region to region. Also, it is important to mention that the mismatch does not include those snow pixels in the individual Terra and Aqua snow products which fall beyond the snow extent of the respective 8 d composite images. The mismatch of snow is mainly caused by the off-nadir view, low spatial resolution, and large swaths of the satellites. The derived product is based on the improved and validated 8 d composite product; therefore, we did not re-validate it. It is important to mention that the MOYDGL06* product shows an overestimation of 32 % on average when compared with the M*D10A1GL06 product developed in this paper, as shown in Fig. 5. These results are quite critical for studies related to snow onset and melt timing and related hydrological simulations. Snow products should therefore be selected carefully, depending on the nature of the application, to avoid biases and uncertainty. The daily product generated in this research is mainly recommended for hydro-glaciological, water, and snow-related studies requiring high temporal (daily) resolution, except for very small scale studies. An example image of the improved snow product, with the description of values given in the methodology and data availability sections, is shown in Fig. 6. The original MOD10A1/MYD10A1 is the average snow cover of both satellites before improvement. The minimum snow cover is the snow overlapped by Terra and Aqua in the improved MOYD10A1 product, whereas the maximum snow in the improved MOYD10A1 product is snow in either the Terra or Aqua product.

Code and data availability

The daily composite snow product derived in this paper from MODIS Terra (MOD10A1) and Aqua (MYD10A1) version 6 merged with RGI 6.0 is named M*D10A1GL06. The improved snow product is flagged by 13 values representing no snow (25), cloud (50), snow in Terra (198), snow in Aqua (199), snow in Terra and Aqua (200), debris under Terra snow (238), debris under Aqua snow (239), exposed debris (240), debris cover under Terra and Aqua snow (242), clean ice under Terra snow (248), clean ice under Aqua snow (249), exposed ice (250), and clean ice under Terra and Aqua snow (252). For studies using this product to analyze snow cover, we recommend using a weight of 0.5 for snow pixels present in either the Terra or Aqua product, described by the values 198, 199, 238, 239, 248, and 249, and a weight of 1 for pixels with snow in both the Terra and Aqua products, with values 200, 242, and 252. The combined and improved snow product is compared to the original Terra and Aqua snow products for the study period in Fig. 4. The combined product will serve as baseline data for hydro-glaciological and other water-related applications. The data are available at https://doi.org/10.1594/PANGAEA.918198 (Muhammad, 2020). The source code of the algorithm for this product is available at https://doi.org/10.5281/zenodo.3862058 (Thapa, 2020). The dataset README file with the data at PANGAEA gives information about the data and code.

Conclusion

This study results in an improved Terra and Aqua MODIS version 6 combined daily snow product merged with RGI 6.0, named M*D10A1GL06. The product is 99.99 % cloud-free, covers the temporal window from 2002 to 2019 with 500 m spatial resolution over the high mountains of Asia, reduces an underestimation of 32.9 % and an overestimation of 6.2 %, and has its missing data gap filled.
The product is described by 13 values to make it separable and reversible to the individual Terra and Aqua products. The value 25 is no snow; 50 is cloud; and 200, 242, and 252 represent snow in both Terra and Aqua. Snow detected in only one of the Terra or Aqua datasets is denoted by 198, 199, 238, 239, 248, and 249, where the even and odd values represent Terra and Aqua snow, respectively. The exposed debris-covered and debris-free ice are denoted as 240 and 250, similar to the MOYDGL06* product. The average cloud persistence is 42.7 % in the original products (both Terra and Aqua) for the study region in the observed period. There is a 12.4 % mismatch between the Terra and Aqua snow in the improved snow product, primarily due to a large SZA, wide swath, and low spatial resolution, which limit accurate snow detection in complex topography. To reduce the effect of the mismatch in the snow data from 50 % to 6.2 % in statistical analyses, we suggest a weight of 0.5 for the mismatched snow pixels. Clouds cause a 32.9 % underestimation of snow pixels, which together with the 6.2 % mismatch due to a larger SZA gives an uncertainty of 39.1 % on average. This uncertainty does not include the snow underestimation due to the data gaps or the overestimation of snow pixels occurring beyond the 8 d maximum snow extent of the MOYDGL06* product. The daily snow M*D10A1GL06 product associated with this paper can provide a valuable input dataset for hydro-glaciological and climate modeling, snow cover dynamics, and other water-related studies.

Acknowledgements. This research was supported by the International Centre for Integrated Mountain Development (ICIMOD), funded by Norway and by core funds of ICIMOD contributed by the governments of Afghanistan, Australia, Austria, Bangladesh, Bhutan, China, India, Myanmar, Nepal, Norway, Pakistan, Sweden, and Switzerland.

Financial support. This research has been supported by the International Centre for Integrated Mountain Development (ICIMOD, grant no. 3-939-241-0-P).

Review statement. This paper was edited by Birgit Heim and reviewed by two anonymous referees.
Feasibility and Effectiveness in Clinical Practice of a Multifactorial Intervention for the Reduction of Cardiovascular Risk in Patients With Type 2 Diabetes

OBJECTIVE To evaluate the feasibility and effectiveness of an intensive, multifactorial cardiovascular risk reduction intervention in a clinic-based setting.

RESEARCH DESIGN AND METHODS The study was a pragmatic, cluster randomized trial, with the diabetes clinic as the unit of randomization. Clinics were randomly assigned either to continue their usual care (n = 5) or to apply an intensive intervention aimed at the optimal control of cardiovascular disease (CVD) risk factors and hyperglycemia (n = 4). To account for clustering, mixed-model regression techniques were used to compare differences in CVD risk factors and HbA1c. Analyses were performed both by intent to treat and as treated per protocol.

RESULTS Nine clinics completed the study; 1,461 patients with type 2 diabetes and no previous cardiovascular events were enrolled. After 2 years, participants in the interventional group had significantly lower BMI, HbA1c, LDL cholesterol, and triglyceride levels and a significantly higher HDL cholesterol level than the usual care group. The proportion of patients reaching the treatment goals was systematically higher in the interventional clinics (35% vs. 24% for LDL cholesterol, P = 0.1299; 93% vs. 82% for HDL cholesterol, P = 0.0005; 80% vs. 64% for triglycerides, P = 0.0002; 39% vs. 22% for HbA1c, P = 0.0259; 13% vs. 5% for blood pressure, P = 0.1638). The analysis as treated per protocol confirmed these findings, showing larger and always significant differences between the study arms for all targets.

CONCLUSIONS A multifactorial intensive intervention in type 2 diabetes is feasible and effective in clinical practice, and it is associated with significant and durable improvement in HbA1c and CVD risk profile.

Cardiovascular disease (CVD) is the leading cause of death, hospital admission, and disability among people with type 2 diabetes, and the overall burden is expected to increase further as a result of the worldwide diabetes epidemic (1). The incidence of CVD in people with diabetes is more than twice that observed in nondiabetic people, and the case fatality rate after a first myocardial infarction in people with diabetes is much higher than in nondiabetic people (2,3). This makes primary prevention of CVD particularly important in people with diabetes.
Compelling evidence has accumulated on the effectiveness of optimal blood pressure (BP) management and cholesterol lowering in reducing CVD incidence in people with diabetes (4-8). A targeted multifactorial intervention involving glucose control and the correction of multiple CVD risk factors substantially reduces CVD and all-cause mortality in people with type 2 diabetes (9,10). On the basis of this evidence, the disease management approach currently endorsed by international guidelines recommends the correction of all major CVD risk factors to target levels that closely approach the values of low-risk populations (11-13). Notwithstanding the efforts to develop and propagate CVD prevention guidelines, the recommended target values for BP and lipids are achieved in only a small proportion of diabetic patients in clinical practice (14-16); in addition, glucose control is often less than optimal, and there is evidence that HbA1c may increase with time irrespective of treatment (17,18). It remains debatable whether the evidence resulting from clinical trials can be translated into actual clinical practice, particularly when an intervention strategy targeting multiple risk factors is involved. Such an intervention is, in fact, much more demanding for both the patient and the physician than treating a single factor, and it is therefore particularly difficult to implement. The most commonly mentioned factors in poor implementation of guidelines include organizational factors, inadequate perception of the patient's global risk, clinical inertia resulting in inadequate up-titration of therapy when the target is not reached, and poor patient adherence to chronic treatments related to polypharmacy (19-21). The Multiple INtervention in type 2 Diabetes.ITaly (MIND.IT) study is a pragmatic, cluster randomized trial (clinicaltrials.gov identifier NCT01240070) that compares usual clinical practice with a protocol-driven treatment strategy aimed at the optimal correction of hyperglycemia and major CVD risk factors in patients with type 2 diabetes and no previous cardiovascular events. The study aim is to evaluate, in a clinical practice-based setting, the feasibility and efficacy of a multifactorial intervention designed according to guidelines for primary CVD prevention in people with type 2 diabetes. In this article, we present data on the effects of a 2-year intervention on major CVD risk factors and HbA1c.

RESEARCH DESIGN AND METHODS

The study was a pragmatic, cluster-randomized, open, two-armed intervention trial with the diabetes clinic as the unit of randomization. This study design was used to control for the contamination of interventions associated with patient-level randomization when the intervention requires practice changes and the intended effect is at the institutional level. The study was conducted in 10 large outpatient diabetes clinics that volunteered to participate. Each center was asked to recruit 250 consecutive patients (men and women) with type 2 diabetes of at least 2 years' duration. Additional inclusion criteria were as follows: age 50 to 70 years, no previous cardiovascular events, and serum creatinine <1.5 mg/dL. The study consisted of two phases. Phase 1 was designed as a cross-sectional clinical audit study to evaluate the degree of implementation in clinical practice of the guidelines for the primary prevention of CVD in patients with type 2 diabetes; these results have been published (22).
After phase 1, one center withdrew before randomization; the remaining 9 centers were randomly allocated to carry on the usual care (UC, n = 5) or to implement a target-driven interventional protocol of intensive care (IC, n = 4) aimed at the optimal control of hyperglycemia, lipids, and BP (phase 2). A few weeks after the start of the trial, one of the centers randomized to IC (Carrara) declared itself unable to comply with the requirements of the study protocol because of a staff shortage and continued the study according to the UC protocol. In each center, all the patients at high CVD risk seen in phase 1 were enrolled in the interventional study. High CVD risk was defined as the coexistence of two or more of the following conditions: LDL cholesterol >130 mg/dL, triglycerides >200 mg/dL, HDL cholesterol <35 (males) or <45 (females) mg/dL, and systolic BP >140 or diastolic BP >90 mmHg, regardless of treatment. In the IC centers, the investigators were provided with a multifactorial stepwise protocol to support the application of a treat-to-target approach. This protocol, briefly described in Table 1, included a lifestyle intervention in addition to pharmacological treatment. The investigators were free in prescribing and titrating the pharmacological interventions but were required to follow stepwise incremental protocols for the optimal correction of blood glucose, BP, and lipids with the following targets: HbA1c <7% (<53 mmol/mol), LDL cholesterol <100 mg/dL, triglycerides <150 mg/dL, HDL cholesterol >40 mg/dL in men and >45 mg/dL in women, and BP <130/80 mmHg. In addition, weight loss of >5% (if overweight) and the implementation of antiplatelet therapy were to be pursued. Consultation was provided every 3 months. In the UC group, the investigators followed their usual clinical practice. In both study arms, annual visits were scheduled to assess biochemistry, anthropometry, BP, electrocardiogram, and treatment target achievement. This report gives the results of the prespecified analyses performed after all patients had completed 2 years of follow-up. The investigation methods have been described in detail elsewhere (22). At baseline and at each follow-up visit, anthropometry and sitting BP were measured according to a standard protocol; serum total cholesterol, HDL cholesterol, and triglycerides were measured by standard methods; HbA1c was measured by high-performance liquid chromatography; and LDL cholesterol was calculated according to the Friedewald equation for participants with fasting triglycerides <400 mg/dL. Biochemical testing was performed at each center in a single laboratory. Before enrollment, each participating laboratory underwent an external quality control assessment to verify the reliability and comparability of the analytical methods and to reach a standard of quality and traceability among the participating centers. Quality control was monitored thereafter throughout the whole study period. The external quality control assessment was provided by the San Raffaele Hospital (Milan, Italy), which takes part in an international network for the standardization of laboratory methods. The study protocol was approved by the local ethics committees. Informed consent was obtained from all participants.

Statistical methods

Data are given as mean ± SE or as % (SE). Calculation of intracluster correlation coefficients (ICCs) and between-group comparisons of baseline characteristics were performed as suggested by Donner and Klar (23).
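To make the cluster-level adjustment concrete, below is a minimal sketch of a clinic-level random-intercept model of the kind used for such cluster-randomized comparisons, with the ICC derived from its variance components; the mixed-model procedures actually used in the study are described next. This is an illustration in Python rather than the study's SAS code, and the file name and column names (hba1c_24m, hba1c_base, arm, clinic) are hypothetical:

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("mindit_followup.csv")  # hypothetical analysis file

# REML fit of HbA1c at 24 months on treatment arm, adjusted for baseline,
# with clinic-level random intercepts to account for clustering
# (the study's analysis also included a time-by-treatment interaction term,
# omitted here for brevity).
model = smf.mixedlm("hba1c_24m ~ arm + hba1c_base",
                    data=df, groups=df["clinic"])
result = model.fit(reml=True)
print(result.summary())

# Intracluster correlation coefficient (ICC):
# between-clinic variance / (between-clinic + residual variance).
icc = result.cov_re.iloc[0, 0] / (result.cov_re.iloc[0, 0] + result.scale)
print(f"ICC = {icc:.3f}")
```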
The study outcomes were analyzed by mixed-model regression techniques to account for clustering in a group-randomized trial, according to procedures described by Murray et al. (24). The SAS procedures MIXED (for continuous variables) and GLIMMIX (for binary variables) with REML (restricted maximum likelihood) estimation were used, with adjustment for baseline and including a time by treatment interaction term. The analysis was conducted according to the intent to treat (ITT) principle. Because of an obvious protocol violation by one center (Carrara), a sensitivity analysis was also performed excluding protocol violations (i.e., as treated per protocol). The main outcome was the change from baseline in major CVD risk factors and HbA1c. The study had 90% power for detecting minimum differences between the two study arms of 3 mmHg for systolic or diastolic BP and 8 mg/dL for LDL cholesterol and for detecting a 0.5% reduction in HbA1c, with α = 0.05 two-sided and assuming an ICC between 0.02 and 0.05 (25). All statistical analyses were performed with the SAS statistical software package (version 9.3; SAS Institute, Cary, NC).

RESULTS

Nine clinics were randomized to the two intervention strategies. A total of 1,461 patients (i.e., 60% of those seen in phase 1) qualified for the intervention study and were enrolled. During the 2 years of follow-up, the clinics lost contact with 11.6% of the patients, thus leaving 1,292 patients in the study (771 in the UC clinics and 521 in the IC clinics by ITT analysis). Only participants with complete 2 years of follow-up were included in the analyses. No significant differences were observed in the baseline clinical characteristics and CVD risk factor profiles between patients seen at follow-up and those not seen. The demographic, clinical, biochemical, and treatment data of the study participants at baseline are given in Table 2 according to treatment arm. There were no significant differences between participants enrolled by IC or UC clinics in either the ITT or the as treated per protocol analysis. At follow-up (Table 3), participants in the IC clinics had significantly lower BMI, HbA1c, LDL cholesterol, and triglycerides and significantly higher HDL cholesterol than the participants enrolled in the UC clinics (ITT analysis). These results were confirmed in the as treated per protocol population, which in addition showed a significantly lower systolic BP and a lower diastolic BP in the IC clinics. Notably, the improvement in glucose control was not accompanied by weight gain in the IC arm; on the contrary, a slight but statistically significant reduction in BMI was observed. The proportion of patients achieving the treatment goals after 2 years was systematically higher in the IC clinics than in the UC clinics for all targets (Fig. 1). In the ITT analysis, the proportions of patients reaching the treatment goals were 35% vs. 24% for LDL cholesterol (P = 0.1299), 93% vs. 82% for HDL cholesterol (P = 0.0005), 80% vs. 64% for triglycerides (P = 0.0002), 39% vs. 22% for HbA1c (P = 0.0259), and 13% vs. 6% for BP (P = 0.1638) in the IC and UC clinics, respectively. Findings in the as treated per protocol analysis were qualitatively consistent; however, the magnitude of the differences between the study arms was larger and always formally significant (43% vs. 24% for LDL cholesterol, 95% vs. 82% for HDL cholesterol, 82% vs. 64% for triglycerides, 54% vs. 22% for HbA1c, and 23% vs. 6% for BP in the IC and UC clinics, respectively) (Fig. 1). It is relevant, however, that even in the IC clinics the treatment remained suboptimal. Even in the best case scenario, only 55% of the participants reached the goal for HbA1c, 43% reached the goal for LDL cholesterol, and 23% reached the goal for BP. Table 4 shows the proportions of patients on different medication regimens in the IC and UC clinics at 24 months. A significantly higher proportion of participants in the IC clinics were receiving statins and antiplatelet therapy in the ITT analysis. These findings were confirmed in the as treated analysis, which in addition showed significantly more frequent use of antihypertensive treatment in the IC clinics. The pattern of antihypertensive medication use was similar in the two study arms, with ACE inhibitors or angiotensin II receptor blockers being the most frequently prescribed antihypertensive agents, followed by calcium-channel blockers.

CONCLUSIONS

The study shows that in clinical practice an intervention to promote a target-driven management of diabetes and CVD risk factors in patients with type 2 diabetes and high CVD risk is feasible and is associated with significant intensification of treatment and improvement in glucose control and CVD risk factors. At variance with other studies, the improvement in glucose control was not accompanied by weight gain, probably as a result of the inclusion of weight control in the IC management targets. Several studies have documented the gap between the management of diabetes recommended by guidelines and the actual care delivered in the clinical setting (14-16,22). Our results demonstrate that a target-driven management protocol for diabetes and CVD risk factors can be implemented in clinical practice and, over a 2-year period, can improve the overall quality of care and the CVD risk factor profile beyond the results achieved in usual practice. This suggests a great potential for the primary prevention of CVD in diabetes. The differences achieved between the study arms at the end of 2 years of follow-up were statistically and clinically significant for most end points, and all favored the IC group. It is also relevant that the trial was undertaken against a background of general improvements in the delivery of diabetes care associated with the spread of evidence-based guidelines and the introduction of national standards of care endorsed by the Italian Diabetes Society (13,26,27), which may have somewhat narrowed the achievable differences between the treatment groups. The quality of care in the IC clinics remained, however, suboptimal. Nearly half of the patients did not achieve the goal for HbA1c, only one in three reached optimal BP or LDL cholesterol values, and a very small proportion met all three goals. These results are in keeping with previous findings (14-16,28,29) and underline the difficulty of reaching the desired therapeutic targets in patients with type 2 diabetes in clinical practice. The reasons for the repeatedly documented gap between the ideal and the actual care delivered to diabetic patients are complex. Factors affecting care delivery may be more important than the guidelines themselves or the strategies used to spread them; physicians' beliefs and patient compliance are also crucial issues (19-21). Guidelines on their own are not beneficial; effective implementation strategies should accompany their development.
Pay-for-performance programs have been introduced in several countries to improve the quality of care, and there is evidence that the introduction of explicit financial incentives is associated with improvements in the quality indicators for diabetes care (30). It is difficult, however, to disentangle the impact of these measures from other concomitant quality initiatives, because few studies have adjusted for underlying trends in the quality of care. Furthermore, relevant aspects of diabetes management, such as patient empowerment and continuity of care, are not captured by the quality and outcomes framework. In addition, concerns have been raised that pay-for-performance programs might erode equity in the provision of health care (30,31). In our study, quarterly counseling was recommended in the IC clinics; this may ensure sufficient continuity of care while remaining compatible with routine clinical practice in most settings. The implementation of this recommendation and the provision of a stepwise protocol to support the application of a treat-to-target approach may have been key factors in the overall improvement of the quality of care in the IC group. The improvement in the quality of care is similar to what has been reported in clinical practice-based programs in the U.K. and in the U.S. (31,32) and was obtained without the allocation of extra resources or financial incentives, but rather through a physician-led effort made possible by the commitment of the personnel involved. The potential study limitations need to be discussed. Because the intervention was delivered within the setting of routine clinical practice, we randomized the clinics rather than the individual participants to avoid contamination. The study included a limited number of clinics, and those enrolled were selected on the basis of their willingness to participate in the project. This may somewhat limit the generalizability of our findings. The randomized design, however, and the large number of patients recruited in each clinic may partially offset these problems. In addition, we covered a large geographical area, and the participating centers were fairly representative of the key characteristics of diabetes clinics all over Italy (26). The study was designed in 2001, and therefore the treatment algorithms, particularly those for the correction of hyperglycemia, are not fully consistent with current recommendations (27,33). Finally, in this analysis we assessed only intermediate outcome measures, and no information was collected on the frequency and severity of hypoglycemic events. Whether such an intervention would effectively reduce the occurrence of cardiovascular events can only be inferred from the changes in the CVD risk factor profile (8,9). A similar study, conducted to investigate the effect of early multifactorial treatment after diabetes diagnosis by screening, showed a small, nonsignificant difference in the incidence of cardiovascular events (28). The changes in the CVD risk factor profile observed in that study were, however, considerably smaller than those achieved in our study. In conclusion, a multifactorial, target-driven intervention for the management of type 2 diabetes is feasible and effective in clinical practice. An intensive intervention strategy delivered at the clinic level is associated with a significant and durable improvement in major CVD risk factors and HbA1c, well beyond that achieved with usual practice.
Table 4 note: Data are proportions of patients (SE), as estimated with adjustment for baseline values and cluster design. The time by treatment interaction term was significant for antihypertensive treatment (P = 0.0113), statin use (P < 0.0001), and antiplatelet treatment (P < 0.0001).
Statistical Modeling of the Number of Deaths of Children in Bangladesh

Efforts to reduce the number of children's deaths in developing countries through health care programs focus more on the prevention and control of diseases than on determining the underlying risk factors/predictors and addressing these through proper interventions. This study aims to identify socioeconomic and demographic predictors of the number of children's deaths to women aged 12-49 from the Bangladesh Health and Demographic Survey (BDHS) administered in 2011. The number of children's deaths in a family is a non-negative count response variable. The average number of children's deaths is found to be 28 per 100 women, with a variance of 44 per 100 women. Thus the Poisson regression model is not a proper choice for predicting the mean response from the BDHS data, owing to the presence of over-dispersion. To address over-dispersion, we fit a Negative Binomial Regression (NBR), a Zero-Inflated Negative Binomial Regression (ZINBR), and a Hurdle Regression (HR) model. Among these models, ZINBR fits the data best. From the best fitted model, we identify respondent's age, respondent's age at 1st birth, gap between 1st birth and marriage, number of family members, region, religion, respondent's education, husband's education, incidence of twins, source of water, and wealth index as significant predictors of the number of children's deaths in a family. Identification of the risk factors for the number of children's deaths is an important public health issue and should be carried out correctly for the much needed intervention.

Introduction

Reduction of child mortality is one of the prime objectives of the southeastern Asian nation Bangladesh. Bangladesh has made impressive progress in health and human development since its emergence as an independent nation in 1971 [1,2]. Although the country has achieved significant improvement in public health and in controlling the morbidity and mortality from preventable diseases, child mortality is still a major public health issue. Every year between 8 and 11 million children die worldwide before reaching their fifth birthday [3]. The underlying cause of 60% of the deaths of children under the age of five in Bangladesh is malnutrition [3,4]. The primary objective of the current study is to identify socioeconomic and demographic risk factors/predictors of the number of children's deaths for women aged 12-49 from the Bangladesh Health and Demographic Survey (BDHS) administered in 2011. It is useful for policymakers to have a set of risk factors for the number of children's deaths in order to develop guidelines and address these risk factors with proper interventions. Framing proper guidelines and policies to reduce child mortality will ensure the sustainability of achieving the Millennium Development Goal (MDG) [4] relating to child mortality. In terms of demographic and socioeconomic determinants, the Negative Binomial Regression (NBR) model and the Generalized Poisson Regression (GPR) model have been addressed in the literature [3]. Applications of these models arise when the standard Poisson assumption that the mean and variance of the response variable are equal does not hold. The GPR model allows flexibility in dealing with over-dispersion or under-dispersion [3]. More specifically, an NBR model is suggested for dealing with over-dispersion [15].
The main objective of the study is to develop a predictive model for the number of child deaths in families in Bangladesh. In this study we applied Negative Binomial Regression (NBR), Zero-Inflated Negative Binomial Regression (ZINBR), and Hurdle Regression (HR) to the count response, the number of children's deaths, to identify statistically significant predictors/risk factors.

Study participants

Participants were women aged 12-49 from the Bangladesh Health and Demographic Survey (BDHS) administered in 2011 by the National Institute of Population Research and Training (NIPORT), ICF International (USA), and Mitra and Associates. BDHS 2011 is the sixth national demographic and health survey in Bangladesh. The findings of this study can be used for evaluating the Health, Population and Nutrition Sector Development Program (HPNSDP). Sixteen trained interviewing teams administered 17,842 successful interviews of ever-married women aged 12-49. Information was collected from ever-married women of the selected households. The detailed methodology of the survey design, data collection, and data management has been described elsewhere [1]. For this study, we ignore all the missing values and exclude the subjects with missing entries, assuming that observations are missing completely at random. Owing to missing observations, we finally carry out our analysis on data collected from 15,044 married women aged 12-49 years in Bangladesh. The box plot (Figure 1) for the response shows that there is one unusual observation. However, this response value has been kept in the analysis considering its plausibility.

Model justification

The Poisson regression model is a natural choice for count response variables. However, in the presence of over-dispersion, the Poisson regression model does not perform well in fitting the data or in prediction. In this study, we test for over-dispersion [15,16], where the null hypothesis is that there is no over-dispersion in the data. Hence, the Poisson regression model is the null model against any alternative model with over-dispersion. Following the notation of Dean and Lawless [15,16], we let Y_i be the response for the ith subject with covariates X_i. Then Y_i is distributed as Poisson with mean µ_i = µ_i(X_i; β), where β is a p-dimensional vector of unknown coefficients. We denote the possible extra-Poisson variation by v_i, in the presence of which the standard Poisson model becomes a random or mixed effects Poisson model. Thus, for given X_i and v_i, Y_i ~ Poisson(v_i µ_i), where the v_i are continuous positive-valued random variables that are independent and identically distributed with some finite mean E(v_i) and variance Var(v_i) = τ. If we let E(v_i) = 1, then, as in Collings and Margolin [17], Var(Y_i | X_i) = µ_i + τµ_i^2, and the null hypothesis for testing over-dispersion becomes H_0: τ = 0. Failure to reject the null hypothesis leads to the Poisson regression model. In this study we perform Dean's P_B test [15] for over-dispersion using the R package DCluster [18], which gives the test statistic P_B = 13.5292 with p-value < 0.001. Rejection of H_0 leads to the application of Negative Binomial type models. Since the variance is greater than the mean and about 79% of the counts for the response variable are zeros, we apply Zero-Inflated Negative Binomial Regression (ZINBR) and Hurdle Regression (HR) models along with the Negative Binomial Regression (NBR) model to analyze the data.
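A minimal sketch of this model-selection workflow follows. It is not the authors' R/DCluster analysis: it screens for over-dispersion informally via the mean-variance relationship (the formal test being Dean's P_B, as above) and compares Poisson, NB, and zero-inflated NB fits by AIC; the file name and the covariates (age, education, wealth_index, coded numerically) are hypothetical stand-ins for the BDHS variables:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
from statsmodels.discrete.count_model import ZeroInflatedNegativeBinomialP

bdhs = pd.read_csv("bdhs_2011.csv")
# sample variance exceeding the sample mean motivates moving beyond Poisson
print(bdhs["deaths"].mean(), bdhs["deaths"].var())

formula = "deaths ~ age + education + wealth_index"
poisson = smf.glm(formula, data=bdhs, family=sm.families.Poisson()).fit()
negbin = smf.glm(formula, data=bdhs, family=sm.families.NegativeBinomial()).fit()

# zero-inflated NB for the excess (~79 %) zero counts, as in the ZINBR model
y = bdhs["deaths"]
X = sm.add_constant(bdhs[["age", "education", "wealth_index"]])
zinb = ZeroInflatedNegativeBinomialP(y, X, exog_infl=X, p=2).fit(maxiter=500)

for name, model in [("Poisson", poisson), ("NB", negbin), ("ZINB", zinb)]:
    print(name, "AIC:", model.aic)  # the lowest AIC identifies the preferred model

print(np.exp(negbin.params))  # incidence rate ratios (IRRs) from the NB fit
```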
Data analysis

Simple summary statistics (frequencies and percentages) are calculated for the selected socioeconomic and demographic risk factors. The sample mean and sample variance of the response variable are calculated to get an idea of its distribution pattern. Bivariate analysis (based on the Pearson Chi-square test) has been performed to examine the association between the response variable and each of the selected predictors. To validate the Chi-square test, we categorize the response variable as 0 deaths, 1-2 deaths, and ≥3 deaths; otherwise some cell frequencies become less than 5 or zero, violating the asymptotic Chi-square assumption. All the significant predictors are then included in the NBR, ZINBR, and HR models. For the HR model we exclude the predictor age at 1st birth, since inclusion of this variable in the model makes the hat matrix (X(X'X)^{-1}X') singular. The NBR model is well suited to the count response, the number of deaths of children to women aged 12-49, owing to the presence of over-dispersion. In the case of an excessive number of zeros, ZINBR and HR perform well in modeling the number of children's deaths. We applied these three models to estimate the regression parameters (β), with p-values based on Wald statistics. Finally, we calculated the incidence rate ratio (IRR) for all groups of the categorical variables. The statistical software package R (RStudio) is used for extracting information from BDHS 2011, recoding, and model fitting, including parameter estimation of the models. We compared the results from the NBR, ZINBR, and HR models using the goodness-of-fit statistic, the Akaike Information Criterion (AIC).

Sample characteristics

The average age of the surveyed women was 31.44 years, while the average age of the husbands was 40.97 years. The average age of the women at their first experience of childbirth was 17.89 years. On average a woman gave 2.85 births in her lifetime, and the average family size is 5.6. Women gave birth to their first child approximately 3 years after their marriage. About 21% of the women experienced a child's death, and 1.3% of women experienced three or more child deaths. 17.47% of the women are from the lowest wealth index category and 23.37% are from the highest wealth category. About twenty-six percent of the women had no education. Few of them (7.5%) had 11+ years of education. On the other hand, 29% of the husbands were illiterate, but 14% had at least 11+ years of education. The majority of the women were Muslims (88.71%). About 66% of the respondents were from urban areas. Regarding family water sources, 80.52% of families depend on tube-wells. About 57% of the families use standard toilets. The percentage of women who reported that they had no access to any kind of media is 35%. The box plot (Figure 1) for the number of deaths in a family depicts only one unusual observation: one mother experienced 15 child deaths. Figure 2 shows that the distribution of the number of children's deaths in a family is highly positively skewed; the incidence of more than three child deaths is almost negligible. The scatter plot in Figure 3 shows that the number of child deaths increases as the respondent's age increases. The distribution of the number of children's deaths with respect to the mother's education (Figure 4) shows a high incidence of child death for mothers having no education. Figure 5 also indicates that families in which the household head has no education experience more child deaths. The proportions of deaths for male children and female children are 0.53 and 0.47, respectively.
However, the difference between the proportions is not statistically significant (p-value = 0.54).

Simple association: Response versus Predictor

The associations between the response variable and each of the selected risk factors are examined (Table 1) with the following hypothesis H_0i: there is no association between the number of children's deaths and the ith risk factor. Table 2 shows that all the risk factors except the respondent's current work status are significantly associated with the number of deaths of children.

Model fitting

We compare the fitted models with respect to AIC. Although NBR and ZINBR produce very close results, the ZINBR model attains the lowest AIC (17,690) among the three fitted models (Table 3). Accordingly, we assert that ZINBR is the best predictive model for the number of deaths of children of women aged 12-49 in Bangladesh and present results from this model. According to the results of the ZINBR model (Table 4), one or more of the categories of the predictors (respondent's age, respondent's age at 1st birth, gap between 1st birth and marriage, number of family members, region, religion, respondent's education, husband's education, incidence of twins, source of water, and wealth index) are statistically significantly associated with the number of children's deaths in a family. The NBR model suggests almost similar findings [results not shown]. On the other hand, the HR model [results not shown] shows that the gap between 1st birth and marriage, number of household members, region, religion, incidence of twins, and toilet type are significantly related to the response variable. The ZINBR model (Table 4) shows that as the age of mothers increases, they experience a higher rate of incidence of child death during their childbearing ages. Mothers having their first birth between 20 and 24 years experience a 35.90% lower incidence of child death than mothers having their first birth between 13 and 19 years. Among the divisions, all but Dhaka are significantly associated with the number of children's deaths (the Barisal division is the reference category). The incidence rate ratio of children's death in Sylhet is 1.303 times that in Barisal. Conversely, mothers living in Khulna experience an incidence of child death 0.328 times that of mothers living in Barisal. Mothers having no education experience an incidence of child death 2.06 times that of mothers having more than 11 years of education. The results (Table 4) also show that women having twin births are subject to a 3.53 times higher incidence of child death. Mothers with access to tube-well or piped water have a lower incidence of child death than those without this facility. Deriving a predictive model for the number of children's deaths of women is in general a hard task; numerous factors must be taken into account, and it is not always feasible to consider all of these issues in modeling the number of children's deaths. Thus the results should be interpreted with caution.

Conclusion

Our study suggests that ZINBR is the right model to identify the risk factors for the number of children's deaths in families in Bangladesh. Respondent's age, respondent's age at 1st birth, gap between 1st birth and marriage, number of family members, region, religion, respondent's education, husband's education, incidence of twins, source of water, and wealth index are statistically significantly associated with the number of children's deaths in a family.
The number of children's deaths affects about 28% of all ever-married women aged 12-49 years in Bangladesh, and it indicates a poor health system. A number of strategies for the reduction of child mortality are reported in the literature [5,6]. From this study, we identify some of the well-established demographic and socioeconomic risk factors for the number of children's deaths to women aged 12-49 in Bangladesh. Among these, the most important one, in our opinion, is the education of the mother. An increase in the number of years of education for women delays the age at marriage, the age at first birth, and perhaps the gap between successive births, all of which are identified as significant predictors of the number of children's deaths in the current study. The education of a mother is also strongly correlated with the nutrition status of the family. Intervention for improving nutrition status is important, since malnutrition is one of the major causes of child mortality in Bangladesh [3]. Increasing parental educational facilities can improve child nutrition and thereby reduce child mortality. Access to safe drinking water and safe sanitation contributes much to reducing malnutrition. However, owing to the unavailability of malnutrition data on children, this study could not address how malnutrition would contribute to the number of children's deaths. The findings of this study on the socio-demographic risk factors/determinants of the number of children's deaths to women aged 12-49 will provide policymakers with proper insight and guidance toward implementation of the interventions needed to reduce child mortality in Bangladesh and in other countries around the world. Reducing child mortality by intervening in its significant determinants will ensure the sustainability of the MDG 4 achievement program in Bangladesh. As mentioned earlier, child death depends on a diverse number of factors, including socio-demographic and physiological factors, in a complex pattern. The current study explores the role of socio-demographic factors in predicting the number of children's deaths of women aged 12-49 in Bangladesh. More extensive studies focusing on the interplay between socio-demographic factors and other relevant factors in child death should be carried out in order to fully understand the risk factors for child mortality.
2019-03-11T13:13:02.978Z
2014-12-11T00:00:00.000
{ "year": 2014, "sha1": "832c05e194c4803f4795478bcdff9e933da9b0f5", "oa_license": "CCBYNC", "oa_url": "https://doi.org/10.15406/bbij.2014.01.00014", "oa_status": "HYBRID", "pdf_src": "ScienceParsePlus", "pdf_hash": "7e3d912ac836d890e0a704d009adcf5c219e9c9d", "s2fieldsofstudy": [ "Medicine", "Sociology" ], "extfieldsofstudy": [ "Medicine" ] }
119021048
pes2o/s2orc
v3-fos-license
Why the angular distribution of the top decay lepton is unchanged by anomalous $tbW$ coupling We give a simple physical argument to understand the observation that the angular distribution of the top decay lepton depends only on the polarisation of the top and is independent of any anomalous $tbW$ coupling to linear order. The top quark is the heaviest known fundamental particle. Its average lifetime is about one order of magnitude smaller than the typical hadronisation time scale. This leads to the decay of the top quark before the strong-interaction hadronisation process can wipe out its spin information. Thus, one can extract the top quark polarisation from the kinematical distributions of its decay products. The polarisation of the $t$ quark produced via Standard Model (SM) processes at hadron colliders is known. It is zero for the dominant QCD-induced $t\bar t$ production and is dominantly left-handed but calculable for the subdominant single-$t$ production. The rigidity of these predictions allows us to use the $t$ polarisation to probe for possible new physics contributions to these production processes. From simple angular momentum considerations, the angular distribution of a spin 1/2 decay product $f$ of the $t$ quark must take the form

$$\frac{1}{\Gamma}\frac{d\Gamma}{d\cos\theta_f} = \frac{1}{2}\left(1 + \alpha_f\, P_t \cos\theta_f\right), \qquad (1)$$

where $P_t$ is the $t$ polarisation and $\theta_f$ is the angle of emission of $f$ relative to the $t$ spin axis. In the SM, one finds for the $t \to b\,\ell^+\nu$ decay the values $\alpha_b = -0.4$, $\alpha_\ell = 1$ and $\alpha_\nu = -0.32$ at tree level, and only small modifications of these values at the one-loop level. Using these values, the measurement of the $t$ decay angular distributions can be used to obtain the $t$ polarisation. However, there is a possible problem. If new physics can modify the $t$ quark production amplitudes, it can also modify the $t$ quark decay amplitudes. Then we would expect new physics to modify the values of the $\alpha_i$ in Eq. (1). The measurement of the distributions gives only the combinations $\alpha_i P_t$, so if the $\alpha_i$ can be shifted by new physics effects, this method loses its power. Thus it is noteworthy that, in a series of investigations on $t\bar t$ production at an $e^+e^-$ collider [1][2][3] and a $\gamma\gamma$ collider [4,5], it was observed that $\alpha_\ell$ remains unchanged even after inclusion of anomalous $tbW$ couplings, up to linear order in new physics parameters. This independence of $\alpha_\ell$ (or "lepton decoupling") was also observed for more general processes of top-quark production [6,7], suggesting that it is a property of the top quark decay and not of any specific production process. This would make the angular distribution of the decay lepton with respect to the top spin direction a very robust measure of the top polarisation. It was also noted [6,7] that this lepton decoupling follows because, for the SM, the full kinematic distribution of the decay lepton factorises into a term dependent on the lepton energy $E_\ell$ and another term dependent on the angular variables, and that this factorisation is maintained even in the presence of anomalous $tbW$ couplings up to linear order. Actually, the dependence of the decay distributions on $E_\ell$ is modified by anomalous $tbW$ couplings. Then it is possible to use the angular and energy distributions together to measure the polarisation of the $t$ quark and in addition to probe for the presence of anomalous couplings in the decay vertex [6,8,9]. Both the lepton decoupling and the factorisation do receive corrections at quadratic order in the anomalous couplings [5,10].
But, in view of the already rather strong constraints on the $tbW$ vertex [11], in which the least constrained parameter $f_{2R}$ is required to be less than about 0.1, lepton decoupling to linear order is quite sufficient for practical purposes. In Ref. [12], Hioki has given an argument for lepton decoupling based on a physical picture. In this paper, we would like to present a more transparent derivation of this result.

Derivation: The key ingredient in our proof of lepton decoupling is the fact that, in the SM, the $(b\nu)$ system produced in $t \to b\,\ell^+\nu$ is in a $J = 0$ state. As a result of this, the entire spin of the top is transferred to the lepton. This can be seen by a Fierz transformation of the SM decay amplitude. Starting from this fact, we will show that lepton decoupling for anomalous terms in the $tbW$ vertex follows from simple rotation algebra. At the tree level in the SM, the amplitude for $t$ decay is a product of matrix elements of left-handed currents. Considering only the upper two components of the Dirac spinors, we can write the decay matrix element as a product of two-component currents. The Fierz identity then rearranges the amplitude; each bracket is a Lorentz invariant, and in particular the $(b\nu)$ system is produced in a $J = 0$ state. We can now use the result in Eq. (3) to compute the spin density matrix for the $t$ quark in terms of the lepton orientation. We will do this first in the SM and then add anomalous $tbW$ couplings to linear order. The decay lepton produced in (an assumed SM) $W$ decay is always right-handed. Hence the lepton direction is correlated with the lepton spin. We work in the rest frame of the decaying $t$ quark. The $t$ spin orientation is defined by a 2-component spinor $\xi$ in the frame of the decay. Then we can best analyse the density matrix by choosing coordinates in which the lepton momentum is parallel to the $\hat z$ axis. The decay amplitude is a linear combination of the amplitudes for two configurations, those in which the $t$ spin is parallel and antiparallel to the $\hat z$ axis. We show these two cases in Fig. 1. In the SM, it follows directly from Eq. (3) that the decay amplitude for $t$ spin $S^t_z = -\frac{1}{2}$ vanishes. Then the spin density matrix has a single nonzero entry, in the $S^t_z = +\frac{1}{2}$ position. Already here we see the factorisation of the dependence on $t$ spin and lepton energy. To obtain the density matrix for a general $t$ spin orientation, or for a general orientation $(\theta_\ell, \phi_\ell)$ of the lepton direction relative to the $\hat z$ axis, we perform a rotation and obtain the factorised form, in accord with [3]. Consider now the general $tbW$ vertex including anomalous couplings $f_i$; the SM vertex is the case in which all coefficients $f_i$ are zero. Analyse this more general situation in the frame shown in Fig. 1. Now the decay matrix elements for $S^t_z = +\frac{1}{2}$ and $S^t_z = -\frac{1}{2}$ are both nonzero, and thus all four elements of the $t$ spin density matrix receive nonzero contributions either linear or quadratic in the $f_i$. However, the new contributions to the decay amplitudes can depend on the angle between the plane containing the $(b, \nu)$ momentum vectors and the reference $(\hat x, \hat z)$ plane. Call this angle $\phi_b$. For $S^t_z = +\frac{1}{2}$, the $(b, \nu)$ system has $S_z = 0$ and so the decay amplitude is independent of $\phi_b$. On the other hand, for $S^t_z = -\frac{1}{2}$, the $(b, \nu)$ system must carry away $S_z = -1$, and so the decay amplitude depends on $\phi_b$ through an overall phase $e^{\mp i\phi_b}$. The density matrix then has off-diagonal entries linear in the $f_i$ and proportional to this phase, while the lower diagonal entry is of $O(f_i^2)$. To obtain the density matrix for the charged leptons, we integrate over the orientations of the other $t$ decay products, keeping the lepton momentum fixed. This includes an integration over $\phi_b$. After this integration, the phase-dependent off-diagonal terms vanish and we recover the factorised form, up to terms of quadratic order in the $f_i$.
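As a sketch of the SM rotation step (with $A(E_\ell)$ a label introduced here for illustration, denoting the single nonvanishing SM amplitude, and the phase convention fixed for definiteness): rotating the $S^t_z = +\frac{1}{2}$ projector to a lepton direction at angles $(\theta_\ell, \phi_\ell)$ gives, in the $\{S^t_z = \pm\frac{1}{2}\}$ basis,

$$\Gamma_t(E_\ell, \theta_\ell, \phi_\ell) \;\propto\; |A(E_\ell)|^2 \begin{pmatrix} \cos^2\frac{\theta_\ell}{2} & \frac{1}{2}\sin\theta_\ell\, e^{-i\phi_\ell} \\ \frac{1}{2}\sin\theta_\ell\, e^{i\phi_\ell} & \sin^2\frac{\theta_\ell}{2} \end{pmatrix} \;=\; |A(E_\ell)|^2\, \frac{1}{2}\left(\mathbb{1} + \hat n_\ell \cdot \vec\sigma\right),$$

and tracing against the production spin density matrix $\frac{1}{2}(\mathbb{1} + \vec P_t \cdot \vec\sigma)$ yields

$$\frac{d\Gamma}{dE_\ell\, d\cos\theta_\ell} \;\propto\; |A(E_\ell)|^2\, \frac{1}{2}\left(1 + P_t \cos\theta_\ell\right),$$

that is, the energy and angular dependences factorise and $\alpha_\ell = 1$.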
Notice that the upper left matrix element of $\Gamma_t$ can be modified by nonzero $f_i$, in a manner that depends on $E_\ell$; however, the factorisation between the dependences on $t$ spin and $E_\ell$ is preserved. This is the result that we sought to prove. In this case, the density matrix for a general $t$ spin orientation will be similar to Eq. (4), but with a different $E_\ell$-dependent factor. It is useful to take stock of what we needed to assume, and what we did not need to assume, to achieve this result: (i) Chiral lepton: We treated the $\ell^+$ as massless and, in accordance with the $V-A$ nature of the $W$ boson decay, as having strictly positive helicity. (ii) SM spin correlation: We needed the property of the SM amplitude that the $t$ spin is completely correlated with the lepton spin. This property does not hold if we replace the charged lepton with either $\nu$ or $b$. So the result holds only for charged leptons, and for $T_3 = -1/2$ light quarks in hadronic decays of the $W^+$ boson. (iii) Partial averaging: We needed to average over the azimuthal orientation of the $b$, $\nu$ vectors in the frame of the $t$ decay. This would naturally be done if the $t$ polarisation is measured from the inclusive lepton distribution. On the other hand, we did not require the $b$ quark to be massless or the $W$ boson to be on-shell. For massive leptons, viz. $\tau$'s, the matrix element $M(t_\downarrow \to \tau^+_\uparrow b \nu_\tau;\, \phi_b)$ does not vanish in the SM and we get $\alpha_\tau \neq 1$. This leads to a correction in $\alpha_\tau$ at $O(f_i)$, but this correction is suppressed by $m_\tau/m_t$. 1

1 We argue in this paper that the decay lepton distribution is a robust measure of the top quark polarization given by the production process, even in the case where there is an anomalous $tbW$ coupling. In single-top production, the anomalous $tbW$ coupling contributes to the production cross section and so an anomalous contribution affects the top quark polarization that is generated [13][14][15][16][17]. This does not affect our conclusions; the altered top quark polarization is still measured correctly by the lepton distribution.

Conclusions: In this note, we have analyzed the robustness of the parameter $\alpha_\ell$ associated with the $t$ spin polarisation against the contributions from anomalous $tbW$ couplings. We related this robustness to the factorisation of the energy and angle distributions for charged leptons. This factorisation emerges due to the SM property of the vanishing of the amplitude for the charged lepton with momentum in a direction opposite to the top spin. Further, the factorisation, demonstrated here in the rest frame of the decaying quark, remains true in the laboratory frame as well. Thus the energy-integrated angular distribution of the lepton produced in the decay of a polarised top quark does not receive any modifications from the anomalous $tbW$ coupling, in the laboratory frame as well. This analysis offers us insight into the effect of anomalous $tbW$ couplings on the kinematic distributions of the charged lepton produced in the $t$ decay. The same analysis applies, in fully hadronic $W$ decays, to the angular and energy distribution of the $T_3 = -\frac{1}{2}$ quark in the final state [18,19]. The robustness of the independence of the angular distribution from the anomalous couplings, to linear order, offers us the possibility of using these kinematic distributions to construct independent probes of both the top polarisation and the anomalous $tbW$ couplings. Acknowledgments: We thank Xerxes Tata for a careful reading and tough critique of this paper.
We are grateful to Stefano Frixione and the CERN Theory Group for providing a congenial atmosphere to begin our discussions. MEP is grateful to the Center for High Energy Physics at the Indian Institute of Science, Bangalore, for a very pleasant setting in which to complete them. We also thank Eric Laenen for his comments on the manuscript. The work of RMG is supported by the De-
2018-10-31T12:57:02.000Z
2018-09-17T00:00:00.000
{ "year": 2019, "sha1": "2257066ebe9ed3c289f62a2da7d9fbb1b10680e6", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1016/j.physletb.2019.01.022", "oa_status": "GOLD", "pdf_src": "Arxiv", "pdf_hash": "2257066ebe9ed3c289f62a2da7d9fbb1b10680e6", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
259354109
pes2o/s2orc
v3-fos-license
Development of hybrid immunity during a period of high incidence of Omicron infections Abstract Background Seroprevalence and the proportion of people with neutralizing activity (functional immunity) against SARS-CoV-2 variants were high in early 2022. In this prospective, population-based, multi-region cohort study, we assessed the development of functional and hybrid immunity (induced by vaccination and infection) in the general population during this period of high incidence of infections with Omicron variants. Methods We randomly selected and assessed individuals aged ≥16 years from the general population in southern (n = 739) and north-eastern (n = 964) Switzerland in March 2022. We assessed them again in June/July 2022, supplemented with a random sample from western (n = 850) Switzerland. We measured SARS-CoV-2 specific IgG antibodies and SARS-CoV-2 neutralizing antibodies against three variants (ancestral strain, Delta, Omicron). Results Seroprevalence remained stable from March 2022 (97.6%, n = 1894) to June/July 2022 (98.4%, n = 2553). In June/July, the percentage of individuals with neutralizing capacity against the ancestral strain was 94.2%, against Delta 90.8% and against Omicron 84.9%, and 50.6% had developed hybrid immunity. Individuals with hybrid immunity had the highest median levels of anti-spike IgG antibody titres [4518 World Health Organization units per millilitre (WHO U/mL)] compared with those with only vaccine-induced (4304 WHO U/mL) or infection-induced (269 WHO U/mL) immunity, and the highest neutralization capacity against the ancestral strain (hybrid: 99.8%, vaccinated: 98%, infected: 47.5%), Delta (hybrid: 99%, vaccinated: 92.2%, infected: 38.7%) and Omicron (hybrid: 96.4%, vaccinated: 79.5%, infected: 47.5%). Conclusions This first study on functional and hybrid immunity in the Swiss general population after the Omicron waves shows that SARS-CoV-2 has become endemic. The high levels of antibodies and neutralization support the emerging recommendations of some countries where booster vaccinations are still strongly recommended for vulnerable persons but less so for the general population.

• Our study is one of the first to assess infection-induced, vaccine-induced and hybrid immunity and the neutralizing activity of antibodies against SARS-CoV-2 in a larger population-based sample.
• The population-based cohort study showed that by mid-2022, SARS-CoV-2 had become endemic and the levels of antibodies and neutralization against the ancestral strain, Delta and Omicron variants were very high in the general Swiss population.
• Hybrid immunity confers higher levels of neutralizing activity compared with both vaccine-induced and infection-induced immunity.
• The high levels of antibodies and neutralization support the emerging recommendations of some countries where booster vaccinations are still strongly recommended for vulnerable persons but less strongly recommended for the general population.

Introduction

Up to the point where vaccinations were approved in late 2020, seroprevalence increased to, on average, 25% as a consequence of SARS-CoV-2 infections, but with great variations within and across countries. 1,6,7 Following the introduction of vaccines, seroprevalence quickly increased to around 50% in the general population worldwide and to above 90% in high-income countries.
6,8 The lower rate of severe COVID-19 in vaccinated individuals provides strong support for the effectiveness of vaccines. [11] The rise of the highly infectious Omicron VOCs in early 2022 caused many infections in fully vaccinated or boosted persons. This led to a high seroprevalence and functional immunity in the general Swiss population, as measured by the neutralizing activity of antibodies in serum. 12 Functional immunity contributes to protection from severe courses of COVID-19 and is stronger if induced by both vaccination and infection than by either alone (i.e. hybrid immunity). [15] To inform public health measures and further booster vaccine strategies, it is important to assess population levels of seroprevalence and the durability of functional and hybrid immunity developed during a time of high incidence of Omicron infections. The aim of this study was to assess the trajectory of anti-SARS-CoV-2 antibody titres and functional and hybrid immunity in the general population, and to compare such trajectories across age groups and three cantons, i.e. federal states of the Swiss confederation, covering the three main regions in Switzerland.

Study design, sampling and participants

This prospective, population-based, multi-region cohort study is part of the Corona Immunitas research programme in Switzerland, 16,17 for which we had completed four phases of seroprevalence studies between April 2020 and October 2021 using a standardized protocol (study registration: ISRCTN registry 18181860). The current study includes results from Phases 5 and 6, for which assessments were conducted between 1 March and 1 April 2022, and 30 May and 11 July 2022, respectively (detailed results of Phase 5 are published elsewhere). 12 In Phase 5, we randomly selected individuals from the general population in southern (canton of Ticino) and north-eastern (canton of Zurich) Switzerland, who were assessed again in Phase 6. For cross-sectional analyses in Phase 6, we supplemented the southern and eastern Switzerland sample with a random sample from the general population in western Switzerland (canton of Vaud). Due to another seroprevalence study requested by the cantonal health authorities of Vaud, which took place in the autumn of 2021, the canton of Vaud only participated in Phase 6; it was not feasible to conduct an additional assessment between autumn 2021 and June 2022. The three Swiss cantons differ in demographic, sociocultural and linguistic aspects and climate, all of which may impact the dynamics of the pandemic. 18 However, they are fairly representative of their language regions (Italian, German and French; for an overview map of Switzerland see Supplementary Figure S1, available as Supplementary data at IJE online). The Swiss Federal Office of Statistics provided random samples of the general population in age-stratified (16-29, 30-44, 45-64 and ≥65 years) groups, separately for the cantons of Ticino, Vaud and Zurich. We selected these groups after consultation with the Swiss Federal Office of Public Health to adequately account for the potential impact on seroprevalence of social behaviour, adherence to public health measures and vaccination uptake, all of which differ across these age groups. 19 The target sample size was 200 for each age stratum in the three cantons (i.e.
total planned sample size of 2400). Based on the framework proposed by Larremore et al., 20 we deemed 200 participants per stratum to provide precise estimates, given a sensitivity of 97% and a specificity of 99% for the serological test we have used. 21

Data collection

We invited participants to in-person study visits at a health care facility to provide a blood sample. People who were not able or willing to travel were offered home visits. For each participant, trained personnel collected venous blood samples, according to clinical standards and COVID-19 hygiene measures. Before the first study visit, all participants completed a baseline questionnaire including information regarding sociodemographics, vaccinations, SARS-CoV-2 infections, hospital and intensive care unit (ICU) admissions, symptoms in case of infection and past medical history, using the secure, web-based Research Electronic Data Capture platform (REDCap) for data collection and management. 22,23 They also had the option of filling in the questionnaire in a paper/pencil version. Participants from the cantons of Ticino and Zurich who were recruited in Phase 5 were invited for a second study visit and blood sampling in Phase 6, 3 to 4 months later. Before this second study visit, participants filled in another questionnaire covering the time between the first and second blood sampling, including questions on new self-reported SARS-CoV-2 infections, symptoms and vaccinations.

Laboratory assays for SARS-CoV-2 antibodies and neutralizing capacity against SARS-CoV-2 variants

We assessed SARS-CoV-2 specific antibodies against the spike and nucleocapsid proteins using the Sensitive Anti-SARS-CoV-2 Spike Trimer Immunoglobulin Serological (SenASTrIS) assay, a Luminex binding assay. 21 The assay measures the binding of IgG antibodies to the trimeric SARS-CoV-2 spike and the nucleocapsid proteins. The test has a high specificity (99%) and sensitivity (97%), has been validated in samples of the general population and in specific subgroups, 21 and yields semiquantitative median fluorescence intensity (MFI) values. The MFI values have additionally been translated to the WHO units per millilitre (U/mL) scale as measured by the Elecsys Anti-SARS-CoV-2 immunoassay by Roche. 12 We also assessed the presence of SARS-CoV-2 neutralizing antibodies against three variants (ancestral strain, Delta and Omicron) that were dominant in Switzerland in 2022, using a cell- and virus-free assay. 24 This assay measures the proportion of antibodies that block the interaction of the angiotensin-converting enzyme 2 receptor (ACE2r) with the receptor-binding domain of the trimeric spike protein of the ancestral strain and variants of concern. All analyses were performed in the laboratory of Immunology of the Lausanne University Hospital (CHUV).
Outcome definition

We defined seropositivity based on the presence of anti-spike IgG antibodies according to the SenASTrIS threshold of test positivity of MFI ≥ 6 (levels categorized: ≥6 and <12 low, ≥12 and <40 moderate, ≥40 high) and neutralization capacity based on the cut-off value of 50 for the cell- and virus-free assay. Functional immunity was defined as a neutralization capacity of the cell- and virus-free assay above the threshold value of 50. This was determined independently for each variant spike (ancestral strain, Delta, Omicron). Last, the source of immune status was defined based on SARS-CoV-2 vaccination status (self-reported) and SARS-CoV-2 infection, determined as seropositivity for anti-nucleocapsid IgG (MFI ≥ 6), report of a positive polymerase chain reaction (PCR) or rapid antigen test, or presence of anti-spike IgG antibodies in the absence of a SARS-CoV-2 vaccination. We categorized immune status as follows: immune-naïve (i.e. no detectable antibodies and no reported infection or SARS-CoV-2 vaccination), vaccine-induced only, infection-induced only or hybrid immunity (SARS-CoV-2 vaccination and infection).

Statistical analysis

We used medians and interquartile ranges (numerical variables) or absolute numbers and percentages (categorical variables) for the descriptive analyses. We calculated seroprevalence using a Bayesian logistic regression model adjusted for age group (16-29, 30-44, 45-64 and 65+) and sex, with weak normal(0, 1) priors on the beta coefficients per canton. We incorporated the uncertainty of the sensitivity (hierarchical prior) and specificity [uniform(0, 1) prior] of the serological test as binomial models. We used a Markov chain Monte Carlo sampling approach with four chains (250 warm-up iterations and 1250 estimation iterations per chain, 5000 iterations in total with warm-up iterations not considered), using the probabilistic programming language Stan and the rstan package to run the model in R. Model convergence was assessed using R-hat and by inspecting traceplots. We applied post-stratification weights based on the target population's demographic structure (population size per age group and sex) to obtain seroprevalence estimates, by estimating weighted means of the probability of seropositivity based on the posterior distribution. Reported estimates are medians, and 95% confidence intervals are the 2.5- and 97.5-quantiles of the resulting probability distributions. 2,25,26 Furthermore, we determined the percentage of individuals in whom anti-spike IgG antibodies remained negative or positive (i.e. unchanged) or changed from negative to positive or from positive to negative. We conducted all analyses in R, version 4.2.1. 27
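The paper's seroprevalence estimation is a full Bayesian model in Stan; as a simpler illustration of the core idea of adjusting apparent prevalence for imperfect test sensitivity and specificity, the following sketch (ours, not the authors' code) applies the classical Rogan-Gladen correction with the performance figures quoted above (97% sensitivity, 99% specificity); the counts are hypothetical.

def adjusted_prevalence(apparent: float, sens: float, spec: float) -> float:
    """Rogan-Gladen estimate of true prevalence from apparent prevalence,
    clipped to the valid [0, 1] range."""
    est = (apparent + spec - 1.0) / (sens + spec - 1.0)
    return min(max(est, 0.0), 1.0)

sens, spec = 0.97, 0.99       # SenASTrIS performance quoted in the text
positives, n = 2512, 2553     # hypothetical counts of seropositives/samples
apparent = positives / n
print(f"apparent: {apparent:.3f}  adjusted: "
      f"{adjusted_prevalence(apparent, sens, spec):.3f}")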
From March 2022 to June/July 2022, the percentage of participants from Ticino and Zurich with detectable anti-spike IgG antibodies remained stable (>96% across age groups); in only seven participants did the anti-spike IgG decrease below the threshold, all of whom were unvaccinated and had become infected in 2022. In contrast, anti-nucleocapsid IgG antibodies fluctuated more and changed from positive to negative in 7.3% of the participants and from negative to positive in 18.6%. The neutralization capacity against the variants remained more stable (from positive to positive: 93.1% for the ancestral strain, 88.5% for Delta and 80% for Omicron), with little variation across age groups [Figure 1 (overall trajectories); Supplementary Table S2, available as Supplementary data at IJE online (trajectories stratified by age group)]. A higher loss of neutralization capacity (from positive to negative) was observed for Omicron, at 8.6% (ancestral strain 1.2%, Delta 4.2%), whereas at the population level little changed with respect to newly obtained neutralization capacity (from negative to positive: ancestral strain 1.3%, Delta 1.8%, Omicron 3.9%).

Participation

In June/July 2022, 1.0% (n = 25) of all participants were immune-naïve (i.e. no detectable antibodies and no reported infection or SARS-CoV-2 vaccination), 41.1% (n = 1050) had vaccine-induced immunity only, 7.1% (n = 181) infection-induced immunity only, and 50.6% (n = 1289) hybrid immunity (vaccination and infection). For eight participants, the relevant data to determine immune status were missing. Seroprevalence and hybrid immunity in Phases 5 and 6 of Corona Immunitas, in relation to the evolution of the pandemic in Switzerland, are illustrated in Figure 2. The percentage with high levels of anti-spike IgG antibodies was more than double in persons with hybrid immunity (99.8%) and vaccinated-only individuals (99%) compared with individuals with an infection only (45.9%) [Table 3 (pooled results); Supplementary Table S3, available as Supplementary data at IJE online (results stratified by canton)]. Such large differences were also observed for neutralization capacity. Neutralization against Delta and Omicron was highest in participants with hybrid immunity, followed by those who had only been vaccinated, and was much lower in those with infection only (ancestral strain: hybrid 99.8%, vaccinated 98%, infected 47.5%; Delta: hybrid 99%, vaccinated 92.2%, infected 38.7%; Omicron: hybrid 96.4%, vaccinated 79.5%, infected 47.5%). Compared with March 2022 (Phase 5), hybrid immunity in participants from Ticino and Zurich increased from 35.8% to 50.6% by June/July 2022 (Phase 6), reflecting the high incidence of Omicron infections since spring 2022.

Discussion

This population-based cohort study showed that not only SARS-CoV-2 seroprevalence but also antibody titres were very high in the Swiss general population by June/July 2022, without notable differences across cantons, age or sex strata. At least 51% of participants developed hybrid immunity, and among those more than 96% had neutralizing antibodies against the ancestral strain, Delta and Omicron variants. In participants who received vaccination but had not been infected previously, the percentage with neutralizing antibodies was lower, in particular against Omicron. The 7% of participants with only infection-induced immunity had about 15 times lower antibody titres, and fewer than 50% of them showed neutralizing antibodies.
Trajectories of anti-spike IgG antibodies from March 2022 to June/July 2022 remained remarkably stable in participants from Ticino and Zurich. The fluctuation of anti-nucleocapsid IgG antibodies, in contrast, reflected the quick waning of anti-nucleocapsid antibodies as well as substantial infection activity with the Omicron variant in spring 2022 in Switzerland. Thus, we observed stable seroprevalence and high levels of antibodies in the general population. From the summer of 2022 up to March 2023, the number of SARS-CoV-2 infections was still moderately high in Switzerland but fluctuated less than before, with very few hospital admissions due to COVID-19. Our results of stable and high population immunity, together with the rather stable epidemiological situation, imply that the transition from a pandemic to an endemic situation is taking place. Our findings are in line with previous studies, mainly conducted in non-representative, convenience and relatively small samples and/or in sub-populations. [30][31][32][33][34] However, to our knowledge, this study is the first to demonstrate the extent of hybrid immunity and neutralization capacity in the general population in 2022. A large study from Israel in 2021 showed that hybrid immunity provided stronger protection than vaccination or infection alone. 15 Although the proportion of persons with hybrid immunity was not reported, the observation time for persons with infection and vaccination up to (re-)infection or censoring was shorter compared with those only vaccinated or only infected, implying a very low prevalence of hybrid immunity back in 2021. Strengths of our study include the prospective, population-based cohort study design, coverage of the three main language and cultural regions of a country, the well-established methods of the Corona Immunitas research programme, the large sample size and the use of previously validated serological and neutralizing-antibody assays. 21,24

[Table footnotes: ACE2r, angiotensin-converting enzyme 2 receptor; CI, confidence interval; IgG, immunoglobulin G; IQR, interquartile range; NA, not applicable; SARS-CoV-2, severe acute respiratory syndrome coronavirus 2; WHO U/mL, World Health Organization units per millilitre (according to Elecsys Anti-SARS-CoV-2 S). a Cantons (here Ticino, Vaud and Zurich) are federal states of the Swiss confederation. b The unit for levels of anti-spike IgG antibodies is the median fluorescence intensity (MFI) as measured by the Luminex binding assay SenASTrIS (Sensitive Anti-SARS-CoV-2 Spike Trimer Immunoglobulin Serological). 20 Low: from the threshold of test positivity to less than 3 standard deviations above this threshold (≥6 to <12); moderate: 3 standard deviations above the positivity threshold but unlikely to provide neutralization (≥12 to <40); high: neutralizing capacity likely (≥40). c The MFI values have additionally been translated to the U/mL scale as measured by the Elecsys Anti-SARS-CoV-2 immunoassay by Roche and are presented as population median and IQR. Chronic disease includes reporting any of the following conditions: cancer, diabetes, diseases/treatments that weaken the immune system, physician-diagnosed high blood pressure, cardiovascular diseases and chronic respiratory diseases.]
In addition, retention of participants since March 2022 was high. Limitations include the modest participation rate, as is commonly the case in population-based studies; this may have introduced self-selection bias. We observed that, in general, individuals with higher health literacy and trust in the public health authorities' handling of the pandemic were more likely to participate. Overall, this may have led to an overrepresentation of vaccinated persons and, consequently, an overestimation of seroprevalence. Another limitation is the lack of measures of cellular immunity, which is not feasible to test in large population-based studies. In addition, we may have underestimated hybrid immunity, as anti-nucleocapsid antibodies wane quickly and we likely missed some infections that occurred before 2022. Self-reports of infections compensate only to some extent for the low to moderate sensitivity of anti-nucleocapsid assays beyond 6 months after infection, because many infections are mild or asymptomatic. We assessed self-reported SARS-CoV-2 vaccination status and did not check vaccine certificates, for feasibility reasons. Although we do not expect that many participants answered this question dishonestly or that recall bias occurred regarding vaccination, we cannot exclude this possibility. It is difficult to estimate how such a potential bias may have affected the results.

Our results have implications for vaccination strategies. Recommendations for primary series and booster vaccination need to consider the effectiveness and safety of vaccines as well as the epidemiological and societal context. 35 Seroprevalence is only a rough proxy marker of immunity in the population, since seropositive persons have a wide range of antibody titres and neutralizing capacity against SARS-CoV-2 VOCs as a consequence of infection only, vaccination only, or both infection and vaccination, as this study and other studies have shown. 13,14

[Table footnotes: NuC, nucleocapsid; other abbreviations as above. a Participants who were immunologically naïve (n = 25) or who were missing relevant data to determine their immune status (n = 8) have been excluded.]
Therefore, information on the proportion of persons in the general population with neutralizing capacity and hybrid immunity provides more solid guidance. The Swiss Federal Vaccination Commission recently released fine-grained recommendations for booster and primary series vaccinations, based on the best available international evidence on the effectiveness and safety of bivalent and other booster vaccines and on the results of Corona Immunitas presented here. Whereas the Commission issued a strong recommendation for a second booster for people above 64 years of age, for those with chronic conditions and for pregnant women, the recommendation was moderately strong for health care staff and formal and informal caregivers, and only weak for the general population between 16 and 64 years of age. In addition, they recommended only one primary series dose for unvaccinated persons, since most of them have had a SARS-CoV-2 infection (>90% according to the results presented here). These recommendations considered the high seroprevalence in Switzerland and the high proportion of persons with hybrid immunity and neutralizing capacity, and include considerations on the optimal timing for the next booster campaign in autumn/winter 2022. The Canadian authorities issued similar recommendations for booster vaccines, but population-based data on immunity in the population were not available to the extent and level of detail presented here. 36

Conclusion

This prospective population-based cohort study with 2553 participants showed that seroprevalence remained very high in Switzerland in 2022, without differences across cantons and age groups. Antibody titres increased, and the majority of participants developed hybrid immunity, with very high levels of neutralization against the ancestral strain, Delta and Omicron variants of SARS-CoV-2. Individuals with immunity from infection only had 15 times lower antibody titres, and fewer than half of them showed neutralization. Our results support the emerging recommendations of some countries where booster vaccinations are still strongly recommended for vulnerable persons but less strongly recommended for individuals in the general population.

Table 2. Prevalence of SARS-CoV-2 IgG antibodies and ACE2r-blocking (neutralizing capacity) as measured by a virus-free assay, Ticino, Vaud and Zurich, Switzerland, June-July 2022 (n = 2553), stratified by canton and age group
2023-07-07T22:15:46.315Z
2023-07-05T00:00:00.000
{ "year": 2023, "sha1": "2be749ea8cd8b6692fd5b4cd16681261245491a8", "oa_license": "CCBYNCND", "oa_url": "https://doi.org/10.1093/ije/dyad098", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "1dbb51fb1fdd00fc55366edda9ab5276cd0bea1f", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
27492744
pes2o/s2orc
v3-fos-license
On the conjecture by Demyanov-Ryabova in converting finite exhausters In this paper, we prove the conjecture of Demyanov and Ryabova on the length of cycles in converting exhausters in an affinely independent setting and obtain a combinatorial reformulation of the conjecture. Given a finite collection of polyhedra, we can obtain its "dual" collection by forming another collection of polyhedra, obtained as the convex hulls of the supporting faces of all the polyhedra for each direction in space. If we keep applying this process, we will eventually cycle due to the finiteness of the problem. Demyanov and Ryabova conjectured that this cycle eventually reaches a length of at most two. We prove that the conjecture is true in the special case when the vertices of the polyhedra form an affinely independent set in the given space. We also obtain an equivalent combinatorial reformulation of the problem, which should provide insight for future work on this problem.

Introduction

Exhausters are multiset objects that generalise the subdifferential of a convex function. Such constructions are popular in applied optimisation as they allow for exact calculus rules and easy conversion from 'upper' to 'lower' characterisations of the directional derivative. Exhausters were introduced by Demyanov in [5] and attracted a noticeable following in the optimisation community [1,2,7,8,12,13,15,19]. Exhausters and other constructive generalisations of the convex subdifferential, such as quasi- and codifferentials, allow for a straightforward generalisation of Minkowski duality that is not available for other classic constructions [10,11]. Neither the essentially primal graphical derivatives [16] nor dual coderivative objects [14] allow for well-defined dual characterisations. The exhauster approach is not without drawbacks: such constructions inherently lack uniqueness, and whilst some works are dedicated to finding minimal objects [17], it is shown that minimal exhausters do not exist in some cases [9]. The conjecture that we are studying in this paper is in a similar vein: we want to establish the uniqueness of a dual characterisation of a function by establishing a steady 2-cycle in the relevant dynamical system defined by the conversion operator. Constructive nonsmooth subdifferentials are well suited for practical applications, especially in finite-dimensional continuous problems with a minimax structure of the objective function, and have been utilised successfully both in applied problems such as data classification (see the overview [3]) and in theoretical problems coming from other fields, such as spline approximation [18]. Given a positively homogeneous function h : R^n → R, its upper exhauster E* is a family of closed convex sets such that h has the exact representation

h(x) = inf_{C ∈ E*} sup_{v ∈ C} ⟨v, x⟩ for all x ∈ R^n,

so that h is the infimum over a family of sublinear functions. The upper exhauster E* is the collection of subdifferentials of these functions. The lower exhauster E_* of h is defined symmetrically, as a supremum over a family of superlinear functions. Exhausters constructed for first-order homogeneous approximations of nonsmooth functions (such as Dini and Hadamard directional derivatives) provide sharp optimality conditions; moreover, exhausters enjoy exact calculus rules, which makes them an attractive tool for applications. An upper exhauster can be converted into a lower one and vice versa using a convertor operator introduced in [6].
An upper exhauster is a more convenient tool for checking conditions for a minimum (and, vice versa, a lower exhauster is better suited for a maximum); conversion is also necessary for the application of some calculus rules. When the positively homogeneous function h is piecewise linear, it can be represented as a minimum over a finite set of piecewise linear convex functions described by the related finite family of polyhedral subdifferentials. The exhauster conversion operator allows one to obtain a symmetric local representation as the maximum over a family of polyhedral concave functions, and vice versa, where the families of sets remain finite and polyhedral. The Demyanov-Ryabova conjecture states that if this conversion operator is applied to a family of polyhedral sets sufficiently many times, the process stabilises with a 2-cycle. Here we focus on a geometric formulation of this conjecture that does not rely on a nonsmooth analysis background. In this paper, we will first define the conversion operator and explain the statement of the conjecture. Then we will prove the conjecture in a special case: we restrict it to n + 1 affinely independent vertices in n-dimensional space and prove that it always holds there. In the final section of the paper, we reformulate this geometric problem as an algebraic one by considering orderings on the vertex set and forming a simplified map; we then show that the algebraic formulation and the geometric problem are equivalent.

Preliminaries

Given a polyhedron and a direction, we can define the supporting face of this polyhedron as the set of points which project the furthest along the given direction.

Definition 1. Let d ∈ S^{n−1} be a direction and P a polyhedron. We define P_d, the supporting face of P in direction d (see Fig. 1), as

P_d = {x ∈ P : ⟨d, x⟩ = max_{y ∈ P} ⟨d, y⟩}.

Note that always (P_d)_d = P_d.

Recall that the conversion operator F maps a finite family Ω of polyhedra to the family F(Ω) = {Ω(d) : d ∈ S^{n−1}}, where Ω(d) = Conv(⋃_{P ∈ Ω} P_d), and the iterates are defined by Ω_{i+1} = F(Ω_i). In the sequel we will use the following two reformulations of Conjecture 4.

Lemma 5. Let Ω_0 be a finite family of polyhedral sets in R^n. Conjecture 4 is equivalent to each of the following statements.
(1) There exists an N ∈ N such that if n > N, then any polyhedron P satisfies P ∈ Ω_n ⇔ P ∈ Ω_{n+2}.
(2) Given a polyhedron P, there exists an N ∈ N such that if n > N, then P ∈ Ω_n ⇔ P ∈ Ω_{n+2}.

Proof. It is evident that statement (1) is equivalent to Conjecture 4, and that (1) is stronger than (2). Statement (2) yields (1) due to the finiteness of our setting: there are finitely many polyhedra that can be formed on a finite set of vertices, hence we only need to check (2) for finitely many polyhedra; hence there exists an N for which (2) holds for all P in this finite collection, which we can then substitute in (1).

Observe that Conv(F(X)) = Conv(X) for any set of polyhedra X, so Conv(Ω_i) is constant. We let C = Conv(Ω_0) = Conv(Ω_i) for all i ∈ N, and by C_d we denote the supporting face of C in direction d, in alignment with the notation of Definition 1.

Example 6. For the example shown in Figure 2, C is the convex hull of 5 convex sets (i.e. a single vertex, two line segments, a triangle and a rectangle) in R^2. Every edge and vertex of C is a supporting face for some direction d.

Affinely independent case

The main goal of this section is to prove that Conjecture 4 is true in the affinely independent case, i.e. when all vertices of the polyhedra in our family form an affinely independent set. We begin with several technical claims and finish with the proof of the main result in Theorem 10.
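As a concrete illustration of Definition 1 (a sketch of ours, not from the paper; the numerical tolerance is an assumption for floating-point comparisons), the supporting face of a polytope given by its vertex list is found by keeping the vertices that maximise the inner product with d:

import numpy as np

def supporting_face(vertices: np.ndarray, d: np.ndarray, tol: float = 1e-9):
    """Vertices of P attaining max <d, x>, i.e. the vertex set of P_d."""
    proj = vertices @ d
    return vertices[proj >= proj.max() - tol]

square = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
print(supporting_face(square, np.array([1.0, 0.0])))  # right edge: (1,0), (1,1)
print(supporting_face(square, np.array([1.0, 0.5])))  # single vertex: (1,1)

For a direction perpendicular to an edge (the first call) the supporting face is a whole edge; these are exactly the degenerate directions that the restricted directions of the later sections exclude.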
by definition of the supporting face. Now that we have both inclusions, we have shown that C_d = Ω_{n+1}(d) for any given d.

Proof. By Lemma 7, one of finitely many alternatives holds. In any of these cases, there exists N_1 ∈ N_{>0} such that the stated membership holds whenever n > N_1 and 2 divides n. Similarly, there exists N_2 ∈ N_{>0} such that it holds whenever n > N_2 and 2 does not divide n. Therefore, we can set N = max{N_1, N_2}, which proves the proposition.

Recall the definition of a simplex: let v_0, v_1, ..., v_k ∈ R^n be affinely independent. Then the simplex determined by this set of points is

Conv{v_0, ..., v_k} = {∑_{i=0}^k λ_i v_i : λ_i ≥ 0, ∑_{i=0}^k λ_i = 1}.

In other words, a simplex is the generalisation of a tetrahedral region of space to an arbitrary dimension. A k-simplex is a k-dimensional polytope that is the convex hull of its k + 1 vertices. Observe that every face of a simplex (a sub-simplex) is still a simplex in its lower-dimensional space.

Theorem 10. If C is a simplex, and each P ∈ Ω is a sub-simplex of C, then the conjecture is true.

Proof. By induction, we can show that every P ∈ Ω_i and every P_d, for any direction d, is a sub-simplex of C. Any sub-simplex P of C with P ≠ C is a supporting face of C. Therefore, there exists N_P such that for n ≥ N_P we have P ∈ Ω_n ⇔ P ∈ Ω_{n+2} (by Proposition 8). Let N := max{N_P | P is a sub-simplex of C, P ≠ C} + 2. Then Ω_N = Ω_{N+2}, since N − 2 ≥ N_P for any sub-simplex P of C satisfying P ≠ C. We have shown the statement for proper faces; now we show it is true for C itself. Since P ∈ Ω_{N−1} and P ∉ Ω_{N+1}, we have that C ∈ Ω_N and C ∉ Ω_{N+2} implies C ∈ Ω_{N−1} and C ∉ Ω_{N+1}.

Algebraic reformulation of the conjecture using orderings on the vertex set

We can reformulate this geometric problem as an algebraic one by considering orderings of the vertex set. Firstly, we label all the vertices of the polyhedra in Ω_0; the order does not matter. After that, we pick a direction d; we can then "encode" d by writing the vertex set in order from furthest to closest along the direction d. We ignore the directions for which more than one vertex is furthest along the direction; in other words, we ignore the directions perpendicular to edges of the polyhedra. Based on the description of the transformation, every direction then gives a convex hull: for each direction d, we compare the encoded word of the direction with the polyhedra from the previous state, and we can write down the precise vertex set of the convex hull that is created.

Lemma 11. Let n be the number of vertices in R^2. If no more than two of the vertices are collinear, then we have exactly n(n − 1) directions.

Proof. Let d be an arbitrary direction. We can rotate d clockwise to obtain all directions, encoding d by writing the vertex set in order from furthest along d to closest along d. As we rotate the direction d clockwise through a full turn, each pair of letters swaps exactly twice. This implies that there are 2 × (n choose 2) = n(n − 1) swaps in total, and each swap gives a new ordering of the vertex set. Therefore, there are n(n − 1) vertex orderings in total.

Note: If there are 3 or more collinear vertices, or two or more pairs of collinear vertices are parallel to each other, then the number of orderings of the vertex set is less than n(n − 1), as some of the swaps happen at the same time as we rotate the direction around the R^2 plane. Therefore, n(n − 1) is an upper bound on the number of directions in the general case.

Example 12. Consider the following example in R^2, which starts with a set containing a line segment and a single vertex.
Then we name the three vertices A, B and C. Now suppose we want to know the convex hull created by the direction ACB. We move in order along each letter of the direction ACB and check it against each polyhedron in Ω_0. "A" is in the polyhedron "AB", so we stop and move on to the next polyhedron in Ω_0. "A" is not in the polyhedron "C", so we move to the next letter of the direction, which is "C"; "C" is in the polyhedron "C". We can stop now, as we have exhausted the polyhedra in Ω_0. We conclude that the convex hull created by the direction ACB is AC. Similarly, we can run the same algorithm for all 6 directions, ending up with 6 polyhedra:

{AC, AC, BC, BC, AC, BC}.

Deleting the repeated elements, we obtain {AC, BC}.

F′: the pseudo transformation which ignores the directions that give whole edges. We denote these restricted directions by S̃^{n−1}, a subset of S^{n−1}, and we denote the corresponding images accordingly.

Abstract algebraic formulation

Let V be a finite set, and let τ = {d_j}_{j ∈ {1,...,n}} be a set of orderings of the set V. Let P(V) be the power set of V. We define the function G_τ : P(P(V)) → P(P(V)) as follows. For each j ∈ {1, ..., n}, let d̄_j be the maximality function given by the ordering d_j; that is, d̄_j(S) is the maximal element of a nonempty S ⊆ V with respect to d_j, so that d̄_j depends on τ. Define D_j : P(P(V)) → P(V) by

D_j(X) = {d̄_j(P) : P ∈ X},

where X ∈ P(P(V)) is a collection of subsets of V. Finally, we define G_τ : P(P(V)) → P(P(V)) by

G_τ(X) = {D_j(X) : j ∈ {1, ..., n}}.

Example 13. Given the previous Example 12, we have the following corresponding algebraic structure based on our abstract algebraic formulation above.
• The maximality function d̄_j is equivalent to obtaining the supporting face given the direction d_j.
• The function D_j gives the convex hull of all supporting faces for a given direction d_j.
• The function G_τ outputs the Ω set.

For example, given d_1 = ACB and X_0 = Ω_0 = {{A, B}, {C}} ∈ P(P(V)), we get d̄_1({A, B}) = A and d̄_1({C}) = C. Then we can compute the convex hull, D_1(X_0) = {A, C}. Therefore, given X_0, we can compute X_1 = G_τ(X_0) = {{A, C}, {B, C}}. We can then continue the process to obtain X_2, X_3, X_4, ...

The following terminology will be helpful in stating the equivalent algebraic version of the conjecture:
• We call the function G_τ an oscillator if it has the following property: for any X_0 ∈ P(P(V)), the sequence X_0, X_1, ... defined by X_{i+1} = G_τ(X_i) eventually cycles with period at most 2.
• We call the tuple (V, τ) geometric if V is a set of vertices and τ is given by directions d ∈ S̃^{n−1}.

Equivalent conjecture: For every finite geometric pair (V, τ), the function G_τ is an oscillator.

Lemma 14. Let P_1, P_2, ..., P_k be polyhedra, let P = Conv({P_1, P_2, ..., P_k}), and let d be a direction; then P_d ⊆ Conv({(P_1)_d, ..., (P_k)_d}).

Intuitively, consider the diagram below.

Figure 5: Hyperplane H contains P_d

Suppose P is the polyhedron from the lemma, d is a direction, and H is the hyperplane containing P_d and orthogonal to the direction d. Given a point p ∈ P_d, we can write p as a convex combination, that is, p = ∑_{i=1}^k λ_i x_i with ∑_{i=1}^k λ_i = 1, λ_i ≥ 0 and x_i ∈ P_i. We know all the polyhedra P_i must lie below the hyperplane H, so all the x_i must lie in the shaded area. The point p is obtained by averaging the points x_i; therefore, all the x_i must be contained in the hyperplane H. Indeed, if some x_j were below the hyperplane H, there would need to be another point x_j′ above H, which contradicts the fact that all the x_i are contained in the shaded area.

Proof. Let d ∈ S^{n−1}, regarded as a linear function via x ↦ ⟨d, x⟩. Let p ∈ P_d be an arbitrary point of P_d.
Writing p = ∑_{i=1}^k λ_i x_i as above and evaluating ⟨d, p⟩, equality holds throughout, so in order to maximise ∑_i λ_i ⟨d, x_i⟩ we need to maximise each ⟨d, x_i⟩ with λ_i > 0. By definition, (P_i)_d is the subset of the polyhedron P_i on which ⟨d, x_i⟩ is maximal for x_i ∈ P_i.

Lemma 15. Let P be a polytope and let g ∈ S^{n−1} expose a face F of P. Then g has a neighbourhood N_g such that any g′ ∈ N_g ∩ S̃^{n−1} exposes a vertex in F.

Proof. We prove the result by contradiction. Suppose to the contrary; then there exists a sequence of restricted directions {d′_j}_{j=1}^∞ converging to g such that each d′_j exposes a vertex not in F. Since P has finitely many vertices, without loss of generality we can assume that each d′_j exposes the same vertex v ∉ F. Let u ∈ F be an arbitrary point. Since v ∉ F, we have ⟨g, v⟩ < ⟨g, u⟩. But for each j, d′_j exposes v, so ⟨d′_j, v⟩ ≥ ⟨d′_j, u⟩; letting j → ∞ gives ⟨g, v⟩ ≥ ⟨g, u⟩. A contradiction.

Lemma 17. If d ∈ S^{n−1} exposes a vertex v of a polyhedron P, then there is a neighbourhood of d in which every direction also exposes v.

Lemma 18. Let P be a polyhedron, let d ∈ S^{n−1} expose a face of P, and let v be an extreme point of the face exposed by d. Then for all ε > 0 there exists d′ such that ‖d − d′‖ ≤ ε and d′ exposes v.

(2) For any vertex v ∈ P, there is a g′ ∈ N_g ∩ S̃^{n−1} such that v ∈ Ω_{i−1}(g′).

Part (1): Given Ω_i = {P_1, ..., P_n}, by Lemma 15 we can construct a neighbourhood N_j of g for each polyhedron P_j ∈ Ω_{i−1}, so that any g′ ∈ N_j ∩ S̃^{n−1} exposes a vertex of P_j in (P_j)_g. Let N_g = ⋂_{j=1}^n N_j. Any g′ ∈ N_g ∩ S̃^{n−1} is a restricted direction, and restricted directions only expose vertices; for each j we have g′ ∈ N_j ∩ S̃^{n−1}, hence g′ exposes a vertex of P_j lying in (P_j)_g ⊆ P. So we have (P_j)_{g′} ⊆ P.

Part (2): Let v ∈ P be a vertex; then v ∈ (P_j)_g for some j. Now, by Lemma 18, there is a g′ ∈ N_g ∩ S̃^{n−1} which exposes v in P_j. Therefore, v ∈ Ω_{i−1}(g′).

Using Part (2) shown above, let g′_1, g′_2, ..., g′_k ∈ N_g be restricted directions such that every vertex v ∈ P is contained in some Ω_{i−1}(g′_j). Let P′_j = Ω_{i−1}(g′_j). Then every vertex v ∈ P is contained in Conv({P′_1, ..., P′_k}), which gives one inclusion; together with the reverse inclusion we have the equality.

Recall that we denote our two transformations by F and F′.

Theorem 19. Given the two transformations above, the two iterations are intertwined; in particular, F(Ω′_i) = F(Ω_i).

Proof. For the equality F(Ω′_i) = F(Ω_i), it is sufficient to show that for all d ∈ S^{n−1} the two families produce the same image of d; by definition, this amounts to comparing the corresponding convex hulls of supporting faces.
• Let P ∈ Ω_i. Then, by Theorem 16 and then by Lemma 14, the desired inclusion follows.

Note: We cannot simply apply the map F or the map F′ to the equation, as both maps may fail to be injective. Therefore we have to prove Theorem 20 by back-tracking the maps.

Example 21. Consider the following example.

Figure 6: An example comparing the maps F and F′

The abstract algebraic formulation allows us to work on the conjecture using more general algebra. Once we obtain the set of orderings of the vertex set corresponding to the set of restricted directions, we can forget about the geometry of the sets and proceed with the equivalent algebraic version of the conjecture, which means that we may apply many powerful algebraic and combinatorial tools to this problem.
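To make the combinatorial formulation concrete, here is a small simulation sketch (ours, not from the paper): it implements G_τ for a finite vertex set, with orderings playing the role of restricted directions, iterates from the initial family of Example 12, and reports the length of the eventual cycle, which the conjecture predicts to be at most 2.

from itertools import permutations

def g_tau(state, orderings):
    """G_tau(X) = { D_j(X) : j }, where D_j picks, from each set P in X,
    the vertex that comes first in the ordering d_j (the maximal element)."""
    new_state = set()
    for order in orderings:
        d_j = frozenset(next(v for v in order if v in p) for p in state)
        new_state.add(d_j)
    return frozenset(new_state)

vertices = "ABC"
orderings = list(permutations(vertices))              # all 6 orderings are
state = frozenset({frozenset("AB"), frozenset("C")})  # realizable; Omega_0

seen = {}
step = 0
while state not in seen:
    seen[state] = step
    state = g_tau(state, orderings)
    step += 1
print("cycle length:", step - seen[state])            # prints 2 here

Running this reproduces Example 12's first step, X_1 = {{A, C}, {B, C}}, and the orbit settles into a 2-cycle, consistent with the affinely independent case proved above.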
2016-04-04T03:52:08.000Z
2016-01-24T00:00:00.000
{ "year": 2016, "sha1": "774e51b41baf2430d83ccede5f17f270ac35282c", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1601.06382", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "774e51b41baf2430d83ccede5f17f270ac35282c", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Computer Science", "Mathematics" ] }
88469572
pes2o/s2orc
v3-fos-license
Bacteriological profile of pyoderma in children Introduction: Pyodermas are cutaneous bacterial infections, commonly seen in India, and they constitute a major portion of patients attending dermatology clinics. Pyoderma has been found to be associated with low socioeconomic status and is more prevalent in the paediatric age group. Aim: To find out the causative organisms and their latest antibiotic susceptibility patterns in pyodermas. Materials and Methods: All patients visiting the Dermatology Department of Kamineni Institute of Medical Sciences, Narketpally, were screened over 18 months, and those with erosive skin lesions and/or purulent discharge were included in the study; swabs were received in the Microbiology Department for culture and sensitivity testing. Results: There were 92 isolates from 100 cases. The organisms isolated included S. aureus, coagulase-negative staphylococci, group A streptococci, E. coli, Klebsiella spp., Enterobacter cloacae and Pseudomonas aeruginosa. S. aureus was the commonest organism isolated, accounting for 72.5% of the total number of cases. S. aureus was most sensitive to clindamycin (94.6%), followed by cefazolin (90.5%), amikacin (85.1%) and tetracycline (74.3%). It was least sensitive to penicillin (2.7%), ciprofloxacin (50%) and erythromycin (59.5%). Pseudomonas aeruginosa (8.8%) was the second most common isolate and was most sensitive to imipenem, piperacillin/tazobactam, ceftazidime/clavulanic acid and amikacin (100% each). It was least sensitive to ciprofloxacin, piperacillin and ceftazidime (77.7% each) and gentamicin (66.6%). Of the total 74 isolates of S. aureus, 2 were resistant to methicillin; thus the percentage of MRSA isolated was 2.7%. Conclusion: The Gram-positive organisms were most sensitive to cefuroxime, clindamycin and cefazolin. Gram-negative organisms were most sensitive to piperacillin/tazobactam, ceftazidime/clavulanic acid and amikacin. Only two S. aureus strains were methicillin resistant, and they were sensitive to vancomycin, ciprofloxacin and tetracycline. The presence of inducible clindamycin resistance among S. aureus strains indicates the importance of identifying such strains by the D-test to avoid treatment failures with clindamycin.

Introduction

Pyoderma is a cutaneous infection caused by pus-forming bacteria, commonly seen in India, and it constitutes a major portion of patients attending dermatology clinics. 1 It has been found to be associated with low socioeconomic status and is more prevalent in the paediatric age group. [2][3] It has been associated with climatic changes, being seen particularly in summer and during the monsoon. 4 Factors such as immunosuppression, atopic dermatitis, scabies, pediculosis, and pre-existing tissue injury and inflammation predispose towards pyoderma formation. Pyoderma is classified into primary and secondary infections. Impetigo, folliculitis, furuncle, carbuncle, ecthyma, erythrasma and sycosis barbae constitute the primary pyodermas, while the secondary pyodermas include tropical ulcer, infected pemphigus, infected contact dermatitis, infected scabies and various other dermatoses. Baslas et al in 1990 studied 570 cases of pyoderma, of which 58.8% were primary pyoderma and the rest were secondary pyoderma. 5 Chopra et al in 1994 carried out a study of 100 cases and found that the largest number were of impetigo (31%), followed by furunculosis (24%), folliculitis (22%), pyogenic intertrigo (6%), sycosis and carbuncle (6% each), ecthyma (2%) and cellulitis (1%). 6 The majority of cases belonged to the age group of 0-10 years.
Several other Indian studies have classified and demonstrated the presence of primary and secondary pyoderma in different regions. 7,8 The bacterial etiological agents mainly include Gram positive organisms, among which S. aureus is the most common organism isolated, 9 with beta haemolytic Streptococci being the next common isolate; 10 Enterococcus has also been isolated from a few cases. 11 The various Gram negative organisms 6 isolated include E. coli, Pseudomonas spp, Proteus spp, Citrobacter spp, Klebsiella spp, and Acinetobacter spp. 12 The present study was conducted in the Microbiology Department in collaboration with the Department of Dermatology, Kamineni Institute of Medical Sciences, with the aim of demonstrating the presence and distribution of primary and secondary pyoderma cases in paediatric patients as well as their bacterial etiological agents. The work was approved by the institutional ethical committee. Materials and Methods A hospital based cross sectional study was conducted. The study period was from Jan 2012 to Aug 2013. A total of 100 paediatric cases were included in the study, and the history was taken along with physical and dermatological examination, with the help of a dermatologist, for all the patients. Paediatric patients with skin lesions with formation of pus were included. All samples were collected aseptically with two sterile cotton swabs per sample from the lesion and were processed for isolation and identification of bacterial isolates according to CLSI guidelines. Gram stain preparations were made from one swab, and culture plates were inoculated from the other swab. Each sample was inoculated on blood agar and MacConkey agar. The plates were incubated at 37°C for 18-24 hours in an incubator. The bacterial colonies were subjected to Gram staining and biochemical tests for identification. The identification was carried out according to the laboratory protocol. The pathogens isolated were subjected to antibiotic susceptibility testing on Mueller Hinton agar according to CLSI guidelines. The antibiotics used in our study for gram positive cocci were penicillin (10 units), gentamicin (10 µg), amikacin (30 µg), ciprofloxacin (5 µg), cefazolin (30 µg), cefuroxime (30 µg), erythromycin (15 µg), co-trimoxazole (25 µg), tetracycline (30 µg), vancomycin (30 µg), and clindamycin (2 µg). Staphylococcus aureus strains which were erythromycin resistant were further subjected to the double disc diffusion test (D-test) to detect inducible MLSB (macrolide-lincosamide-streptogramin B) resistance. An erythromycin (15 µg) disc was placed at a distance of 15 mm (edge to edge) from a clindamycin (2 µg) disc on a Mueller Hinton agar plate previously inoculated with a 0.5 McFarland bacterial suspension. After overnight incubation at 37°C, the plates were examined to detect flattening of the zone (D shaped) around clindamycin in the area between the two discs. Strains that were positive in the D-test were considered inducible MLSB resistant, strains that were resistant to both erythromycin and clindamycin were considered constitutive MLSB resistant, and strains that were resistant to erythromycin but susceptible to clindamycin with a negative D-test were considered MS phenotype. 13 Results Out of 100 cases, primary pyoderma was observed in 72% and secondary pyoderma in 28%. Folliculitis was seen in 38% of cases and impetigo in 17%, while the remaining primary pyodermas constituted 17% of lesions. The major type of secondary pyoderma was infected scabies (14%).
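The D-test interpretation described under Materials and Methods above amounts to a small decision rule. As a purely illustrative sketch (not part of the study's methods), the Python function below encodes those rules; the function name and inputs are hypothetical.

```python
# Illustrative encoding of the D-test interpretation rules from the
# Materials and Methods above; function name and inputs are hypothetical.
def mlsb_phenotype(ery_resistant: bool,
                   clinda_resistant: bool,
                   d_zone_present: bool) -> str:
    """Classify a S. aureus isolate per the double-disc (D-test) rules."""
    if not ery_resistant:
        return "not applicable (erythromycin susceptible)"
    if clinda_resistant:
        # Resistant to both drugs without induction.
        return "constitutive MLSB resistance"
    if d_zone_present:
        # Flattened, D-shaped zone around clindamycin: induction detected.
        return "inducible MLSB resistance (clindamycin may fail)"
    # Erythromycin resistant, clindamycin susceptible, no D zone.
    return "MS phenotype"

print(mlsb_phenotype(True, False, True))   # inducible MLSB resistance
```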
The study included 63 boys and 37 girls with a mean age of 12.2 years (0 days-14 years). (Fig. 1: a. Folliculitis; b. Periporitis; c. Dissecting cellulitis; d. Infected eczema) Of the total 72 cases of primary pyoderma, most were seen in the pre-school 1-5 years age group (41.6%), followed by the 6-9 years age group (34.72%). Out of 100 cultures, 92 samples showed growth; among the culture-positive cases, 82 (89.1%) showed only one type of growth, whereas 10 (10.9%) yielded more than one type of organism. Out of 102 total isolates from the 92 cases, S. aureus accounted for 72.5% of growth, coagulase negative Staphylococci (CONS) 4.9%, Group A Streptococci (GAS) 2.9%, Pseudomonas aeruginosa 8.8%, Klebsiella oxytoca 3.9%, Klebsiella pneumoniae 2.9%, Escherichia coli 1.9%, and Enterobacter cloacae 1.9%. Staphylococcus aureus was the commonest organism isolated, 74 (72.5%), followed by Pseudomonas aeruginosa, 9 (8.8%). MRSA was noticed in 2 cases. The distribution of the isolated organisms among primary and secondary pyoderma is shown in the bar diagrams in Fig. 2 and Fig. 3. The gram positive bacteria isolated showed resistance to penicillin (72%), erythromycin (41%), co-trimoxazole (30%), and ciprofloxacin (50%), and remained sensitive to most other antibiotics. (Table 1) The Gram negative organisms were most sensitive to imipenem, followed by piperacillin/tazobactam, ceftazidime/clavulanic acid and amikacin. (Table 2) Four Staphylococcus aureus strains were clindamycin resistant, all inducible MLSB, of which 2 strains were MRSA and the remaining 2 were MSSA. Discussion Bacterial skin infections in children are a common problem encountered in clinical medicine. The present study was carried out on a group of 100 cases of pyoderma visiting the Dermatology outpatient department of Kamineni Institute of Medical Sciences, to establish the bacterial causes of common primary and secondary pyodermas as well as to determine their antimicrobial susceptibility patterns against different antibiotics. The present study showed that the majority of the patients belonged to the lower income group (79%), followed by the middle income group (21%); none were from the high income group. This has been noted by other workers also. 14 In the present study conducted on 100 cases, the most common pathogen isolated was S. aureus (72.5%). The same finding has been reported by other workers, including Patil et al. 15 Isolation of Streptococci in the present study was 2.9%, which is similar to the study of R Patil et al, 9 where the isolation rate was 2.3%, K Mariam Ali et al, 16 where the isolation rate was also 2.3%, and G Shashi et al, 17 where the isolation rate was 3%. However, other studies 3 have shown a higher isolation rate. Among the gram negative isolates, Pseudomonas aeruginosa was the commonest isolate in the present study (8.8%). This is similar to the study conducted by D P Ghadage et al, 1 where the most common gram negative isolate was Pseudomonas aeruginosa (7.56%). The present study has shown that Staphylococcus aureus, the most common organism isolated, was most sensitive to vancomycin (100%), cefuroxime (100%), and clindamycin (94.6%), followed by cefazolin (90.5%), amikacin (85.1%) and tetracycline (74.3%). It was least sensitive to penicillin (2.7%), erythromycin (40.5%) and ciprofloxacin (50%). Similar findings have been shown by other workers, 3,5 although R Patil et al 9 and K V Ramana et al 11 have shown increased sensitivity to ciprofloxacin.
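As a quick arithmetic cross-check of the isolate distribution reported above, the following sketch back-calculates the stated percentages from the 102 isolates; the counts are derived from the reported figures and are illustrative, not source data.

```python
# Back-calculated isolate counts (illustrative, derived from the stated
# percentages of 102 isolates, not from the source data).
counts = {
    "S. aureus": 74, "CoNS": 5, "Group A Streptococci": 3,
    "Pseudomonas aeruginosa": 9, "Klebsiella oxytoca": 4,
    "Klebsiella pneumoniae": 3, "Escherichia coli": 2,
    "Enterobacter cloacae": 2,
}
total = sum(counts.values())   # 102 isolates from 92 culture-positive cases
for organism, n in counts.items():
    # Note: the paper truncates rather than rounds (2/102 = 1.96% -> 1.9%).
    print(f"{organism}: {n}/{total} = {100 * n / total:.2f}%")

mrsa = 2
print(f"MRSA among S. aureus: {mrsa}/{counts['S. aureus']} = "
      f"{100 * mrsa / counts['S. aureus']:.1f}%")   # 2.7%
```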
Conclusion We conclude that primary pyoderma was common in children, and the commonest clinical type was folliculitis, followed by impetigo. Secondary pyoderma contributed only a few cases, of which the commonest type was infected scabies. The most common causative agent of pyoderma was Staphylococcus aureus, followed by Pseudomonas aeruginosa, Klebsiella oxytoca, Group A beta hemolytic Streptococci, Klebsiella pneumoniae, and Escherichia coli. The gram positive organisms were most sensitive to vancomycin, followed by cefuroxime, clindamycin, cefazolin, and amikacin. The Gram negative organisms were most sensitive to imipenem, followed by piperacillin/tazobactam, ceftazidime/clavulanic acid and amikacin. The presence of inducible clindamycin resistance among Staphylococcus aureus strains indicates the importance of identifying such strains by the D-test to avoid treatment failures with clindamycin. Conflicts of Interest: None.
Evaluation of continuous intravenous lidocaine on brain relaxation, intraoperative opioid consumption, and surgeon's satisfaction in adult patients undergoing craniotomy tumor surgery: A randomized controlled trial Background: In craniotomy tumor removal, brain relaxation after dura opening is essential. Lidocaine is known to have analgesic and anti-inflammatory effects. It is effective in decreasing the cerebral metabolic rate of oxygen, cerebral blood flow, and cerebral blood volume, and can potentially reduce intracranial pressure, resulting in excellent brain relaxation after dura opening. However, no study has examined continuous intravenous lidocaine infusion on brain relaxation, intraoperative opioid consumption, and surgeon's satisfaction in adult patients undergoing craniotomy tumor removal. Methods: A total of 60 subjects scheduled for craniotomy tumor removal were enrolled in a double-blind, randomized controlled trial with consecutive sampling. Patients received either an intravenous bolus of lidocaine (2%) 1.5 mg/kg before induction followed by 2 mg/kg/h continuous infusion up to skin closure (lidocaine group) or placebo of similar volume (NaCl 0.9%). Neurosurgeons evaluated brain relaxation and surgeon's satisfaction with a 4-point scale; total intraoperative opioid consumption was recorded in μg and μg/kg/min. Results: All sixty subjects were included in the study. The lidocaine group showed better brain relaxation after dura opening (96.7% vs 70%; lidocaine vs placebo, P = .006) and less intraoperative fentanyl consumption (369.2 μg vs 773.0 μg, P < .001; .0107 vs .0241 μg/kg/min, lidocaine vs placebo, P < .001). Higher surgeon's satisfaction was found in the lidocaine group (96.7% vs 70%, P = .006). No side effects were observed during this study. Conclusions: Continuous intravenous lidocaine infusion improves brain relaxation after dura opening and decreases intraoperative opioid consumption, with good surgeon satisfaction, in adult patients undergoing craniotomy tumor removal. Introduction Annually, an estimated 22.6 million patients have neurological disorders or injuries that require the expertise of a neurosurgeon; 13.8 million of them require surgery, and 735,000 of the cases requiring surgery are brain tumors. [1] Craniotomy is performed for various indications, including brain tumor resection. [2] Brain relaxation is essential in anesthesia for craniotomy surgery because optimal brain relaxation can improve surgical conditions, facilitate surgeons' access to the area to be resected, and reduce the risk of retraction injury and ischemia from compression. [3] Rasmussen et al [4] found that the incidence of mild or moderate brain swelling at the time of dura opening was approximately 35.7% in brain tumor resection surgery. There may be an increase in surgical complications and poor outcomes related to poor brain relaxation. Subjective assessment by the neurosurgeon, based on visual and tactile evaluation, is still the primary assessment for brain relaxation. [3] In addition, to avoid increases in intracranial pressure (ICP), uncontrolled hypertension should be avoided during high-intensity noxious stimuli, such as intubation, insertion of headpins, skin incision, and extubation. [2] Lidocaine, an amide local anesthetic, is a classic drug that has long been used in the field of anesthesia. However, its use has largely been limited to local anesthesia.
Previous studies show that systemic lidocaine has analgesic, anti-hyperalgesic, and anti-inflammatory properties. [5][6][7][8] Its clinical effect is known to benefit abdominal, thoracic, gynecologic, and ambulatory surgery by reducing intra- and postoperative pain and opioid consumption, reducing hypnotic agent requirements (sparing effect), decreasing ileus, reducing postoperative nausea and vomiting, and decreasing the length of stay after surgery. [9][10][11] In neurosurgery, lidocaine also has the benefit of minimizing postoperative pain. [12] In mammalian experiments, lidocaine infusion can reduce the cerebral metabolic rate of oxygen (CMRO2) by inhibiting synaptic transmission and through a sodium channel block mechanism in brain cell membranes, resulting in decreased electrophysiological function and a membrane stabilization effect. [13][14][15] Furthermore, lidocaine infusion can reduce cerebral blood flow (CBF) due to cerebral vasoconstrictor properties that occur in response to the reduction in cerebral metabolism. [16][17][18] However, clinical studies of lidocaine in neurosurgical patients are limited. From extrapolated mammalian experimental data showing that lidocaine can reduce CMRO2 and CBF, it is hypothesized that lidocaine can improve brain relaxation in craniotomy surgery. To our knowledge, no clinical study has yet examined the effect of systemic lidocaine on brain relaxation during dura opening in tumor craniotomy surgery. In addition, this study also assessed intraoperative opioid consumption and surgeon's satisfaction. Study design This study was an experimental study with a double-blind, randomized controlled trial design. The study was conducted at the Integrated Surgical Unit, Cipto Mangunkusumo Hospital, Indonesia, between February 28th and August 31st, 2021. We used the sample size formula for comparing 2 proportions with a confidence interval of 95% and 80% power. We deemed 30% to be a clinically significant difference in this calculation. We calculated the sample size to be 27 for each group; anticipating a drop-out of 10%, each group required 30 subjects, for a total sample of 60 subjects. Sampling was carried out consecutively after obtaining approval from the Ethics Committee of the Faculty of Medicine, Universitas Indonesia. Ethical approval and consent to participate The study protocol was approved by the Ethics and Research Committee of Universitas Indonesia (1450/UN2.F1/ETIK/PPM.00.02/2020; protocol no: 20-11-1437; approval date: December 7th, 2020) and was registered on February 26th, 2021, in ClinicalTrials.gov (NCT04773093). Written informed consent to participate was obtained from each participant. No organs or tissues were obtained from participants. Inclusion and exclusion criteria Inclusion criteria included adult patients >18 years of age who were scheduled to undergo craniotomy for tumor removal with dura opening, a physical status of American Society of Anesthesiology Classification 1 to 3, a baseline Glasgow Coma Scale of 15, and surgery using headpin fixation.
Exclusion criteria included the patient or family refusing to participate in the study; atrioventricular block; severe hepatic or renal function impairment; signs of hemodynamic instability; midline shift > 5.4 mm; a diagnosis of glioblastoma multiforme or metastases; aneurysm or arteriovenous malformation surgery; use of cerebrospinal fluid drainage (external ventricular drain, ventriculoperitoneal shunt, or lumbar drain) intraoperatively; routine intake of adrenergic agonist or antagonist drugs (e.g., beta-blockers, a2 agonists, vasodilators, vasoconstrictors, or inotropes); routine intake of opioids within the last 2 weeks; and a history of allergy to lidocaine. Study protocol Adult patients scheduled to undergo craniotomy tumor surgery who fulfilled the inclusion criteria but not the exclusion criteria were asked to give informed consent 1 day before the surgery. Subjects were randomized using software to generate a unique identification number. The number was written on paper and placed in a nontransparent sealed envelope. Once a patient who fulfilled the inclusion criteria arrived at the patient reception room, the envelope was opened. The pharmacy pre-prepared 2 sets of drugs in 10 mL syringes for bolus and 20 mL syringes for continuous infusion, assuming the same 20 mg/mL concentration. Tumor size was measured using the method specified by Rasmussen et al. [4] The anesthesiology resident on duty in the operating room (OR) took the pre-prepared intervention according to the subject's allocation (drug A or drug B). During induction, the patient was given intravenous fentanyl, the experimental drug A or B according to subject allocation at 1.5 mg/kg (loading dose 3 minutes before intubation), propofol, and rocuronium. The patient was then intubated, and the central venous line and arterial line were inserted. The experimental drug bolus was followed by a continuous infusion of 2 mg/kg/h immediately after administration. Anesthesia was maintained using sevoflurane, intermittent fentanyl, and continuous atracurium infusion. End-tidal CO2 was maintained at normocapnia. A forced-air warming blanket was used to maintain normothermia. Before headpin fixation, the patient was given a 1 μg/kg bolus of fentanyl and was allowed an additional 1 μg/kg if necessary. When the surgeon began to drill the skull, the patient was given 20% mannitol at 0.5 g/kg (finished within 30 minutes). Immediately after the dura was opened, the neurosurgeon assessed brain relaxation. Data were recorded by the research team member in charge of taking notes in the OR. If the brain was swelling, the mannitol dose could be repeated if needed. Intraoperative fluid administration was adjusted to maintain normovolemia. The experimental drug was discontinued when the surgeon finished suturing the skin. Postoperative analgesia and an antiemetic regimen were given intravenously (paracetamol, ketorolac, ondansetron). Muscle paralysis was reversed with intravenous neostigmine. The decision to extubate the patient in the OR or the intensive care unit depended on the intraoperative condition. The level of consciousness was noted after extubation. All patients were admitted to the intensive care unit for postoperative monitoring. The trial was stopped if there was any hemodynamic instability, such as arrhythmia or hypotension during surgery, that did not improve with fluid resuscitation and needed an antiarrhythmic, inotropic, or vasopressor agent.
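Returning to the sample-size calculation described under Study design, the sketch below applies the standard two-proportion formula with a 95% confidence level and 80% power. The paper does not state the underlying proportions it assumed, so p1 = 0.95 and p2 = 0.65 (a 30-point difference) are illustrative values that happen to reproduce the reported n = 27 per group.

```python
# Two-proportion sample-size sketch; p1 and p2 are assumed values,
# since the paper states only the 30% difference, 95% CI, and 80% power.
from math import ceil, sqrt

from scipy.stats import norm

def n_per_group(p1, p2, alpha=0.05, power=0.80):
    z_a = norm.ppf(1 - alpha / 2)   # 1.96 for a 95% confidence level
    z_b = norm.ppf(power)           # 0.84 for 80% power
    p_bar = (p1 + p2) / 2
    num = (z_a * sqrt(2 * p_bar * (1 - p_bar))
           + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(num / (p1 - p2) ** 2)

n = n_per_group(0.95, 0.65)     # assumed proportions, 30 points apart
print(n)                        # 27 per group
print(round(n / 0.9))           # 30 per group after 10% drop-out allowance
```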
Outcome assessment Brain relaxation was assessed immediately after dura opening. The neurosurgeon carried out the assessment subjectively (inspection and palpation) with a standardized 4-grade scale: the brain is very relaxed, at a level below the dura; the brain is quite relaxed, at the level of the dura; moderate brain swelling; and pronounced brain swelling. [3,4] Grades 1 and 2 indicated good brain relaxation, while 3 and 4 indicated poor brain relaxation. Intraoperative opioid consumption was calculated as the total amount of fentanyl used for intraoperative analgesia (in μg) and as the amount of fentanyl divided by body weight and surgery duration, calculated from intubation to completion of the last skin suture (in μg/kg/min). The surgeon's satisfaction with the operation was assessed at the end of the procedure. Surgeon's satisfaction in this study was divided into 4 grades: very satisfied; satisfied; less satisfied; and very dissatisfied. We considered grades 1 and 2 to indicate good surgeon's satisfaction, while grades 3 and 4 indicate dissatisfaction. Statistical analysis The data obtained were analyzed using the Statistical Package for Social Sciences computer program, version 26 (IBM Corporation, 2019). Categorical data were presented as numbers and percentages (n [%]). Numerical data were presented as mean ± standard deviation if the data distribution was normal or as median (minimum-maximum) if the distribution was not normal. Student t test and Mann-Whitney test were used to compare numerical variables between the 2 groups. Results of the analysis were considered significant if the P value was < .05. Results We enrolled 60 patients who met the inclusion criteria and signed the informed consent to participate in the study from February 28th to August 31st, 2021. The subjects were randomly assigned into 2 groups and received their allocated intervention (Fig. 1). There was no statistically significant difference between the 2 groups in gender, diagnosis, American Society of Anesthesiology physical status, tumor location, brain edema or midline shift (<5.4 mm) on computed tomography scan or magnetic resonance imaging, craniotomy or re-craniotomy, and preoperative steroid use. Similarly, based on the characteristics of age, height, weight, tumor size, duration of anesthesia, duration of surgery, preoperative hemoglobin, and postoperative hemoglobin, there was no statistically significant difference between the lidocaine and placebo groups (Table 1). The proportion of good brain relaxation was higher in the lidocaine group than in the placebo group (96.7% vs 70%; P = .006) (Table 2). The relative risk was 0.11, and the number needed to treat was 4. The proportion of very satisfied and satisfied surgeons was higher in the lidocaine group, with P < .001 (Table 3). Overall, mean arterial blood pressure and heart rate were lower intraoperatively in the lidocaine group than in the placebo group, especially during noxious events such as intubation, headpin fixation, skin incision, and extubation (Fig. 2). Discussion Effects of continuous intravenous lidocaine infusion on brain relaxation and surgeon satisfaction The incidence of brain swelling after dura opening in this study was 16.7%. It appears that intravenous lidocaine infusion can improve the brain relaxation condition after opening the dura compared to placebo. The proportion of good brain relaxation in the lidocaine group was 96.7% and in the placebo group 70% (statistically significant, P = .006).
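As a cross-check of the primary-outcome statistics reported above, the sketch below recomputes the chi-square P value, relative risk, and number needed to treat from the 2 x 2 table implied by the reported proportions (29/30 vs 21/30 with good relaxation), along with the opioid-sparing arithmetic from the abstract and the weight/time normalization defined in the outcome assessment. The counts are back-derived, not taken from the source data, and the uncorrected chi-square test is an assumption that matches the reported P = .006.

```python
# Cross-check of the reported outcome statistics (not from the paper's
# source data): counts are back-derived from the stated percentages.
from scipy.stats import chi2_contingency

#            good relaxation, swelling
table = [[29, 1],   # lidocaine: 29/30 = 96.7% good
         [21, 9]]   # placebo:   21/30 = 70% good

# Uncorrected chi-square (assumption) reproduces the reported P = .006.
chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(f"chi2 = {chi2:.2f}, P = {p:.3f}")

# Relative risk of brain swelling and number needed to treat.
rr = (1 / 30) / (9 / 30)            # 0.11, as reported
nnt = 1 / (29 / 30 - 21 / 30)       # 3.75, i.e., treat 4 patients
print(f"RR = {rr:.2f}, NNT = {nnt:.2f}")

# Opioid-sparing arithmetic from the abstract, and the weight/time
# normalization defined above (example patient values are hypothetical).
print(round(773.0 - 369.2, 1))      # 403.8 ug mean reduction
print(round(0.0241 - 0.0107, 4))    # 0.0134 ug/kg/min reduction
print(round(369.2 / (65 * 300), 4)) # ug/kg/min for a 65 kg, 300 min case
```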
Its effect on cerebral metabolism can explain the effect of intravenous lidocaine on brain relaxation after dura opening. In mammalian experiments by Astrup et al, lidocaine infusion resulted in a flat electroencephalogram, leading to the conclusion that spontaneous electrocortical activity is abolished by lidocaine, similar to barbiturate action. [13,14] The abolition of electrocortical activity reduces energy consumption, or brain metabolism, by 60%. [19] In addition, lidocaine affects Na-K leak fluxes. In the experimental model of the ischemic brain, the Na-K ion pump fails to maintain homeostasis due to energy depletion, so that Na ions leak into and K ions out of the cell passively, following the electrochemical gradient and membrane permeability. The degree of K ion leakage out of brain cells can be measured by microelectrodes inserted into the surface of the brain cortex. After lidocaine infusion, K ion leakage out of brain cells is reduced and slowed, indicating that lidocaine reduces Na-K exchange leak fluxes. The effect of reducing ion leak fluxes is also seen in hypothermia but not with thiopental, indicating that lidocaine, not thiopental, has a membrane-sealing effect. [13] This membrane-sealing effect (membrane stabilization) is related to the energy needed to maintain cellular integrity, which accounts for 40% of brain metabolism. [19] In this experiment, Astrup et al also measured the effect of lidocaine on CMRO2 and the cerebral metabolic rate for glucose (CMRgluc) by the sagittal sinus outflow method, which allows continuous measurement of oxygen and glucose consumption. The result was that lidocaine can reduce CMRO2 and CMRgluc when given alone and after thiopental infusion. This effect is specific to lidocaine, supporting the hypothesis that lidocaine can block Na-K leak fluxes and the oxygen and glucose consumption required for active ion transport. [13] Based on these experimental results, lidocaine can reduce cerebral metabolism by inhibiting synaptic transmission and through a membrane-sealing effect that reduces ion transport demand. [13] Sakabe et al also studied the lidocaine effect on cerebral metabolism in mammalian experiments using a lower dose of lidocaine, with the similar result that lidocaine can decrease CMRO2 significantly. [15] (Table 1 notes: data are expressed as mean ± SD, median (minimum-maximum), or numbers; compared with the placebo group, P < .05; ASA = American Society of Anesthesiology, Hb = hemoglobin. Table 2: comparison between lidocaine and placebo groups on brain relaxation and intraoperative opioid consumption; relax means brain below the dura or at the level of the dura, swelling means moderate or pronounced brain swelling; *chi-square test, †unpaired t test. Table 3: comparison between lidocaine and placebo groups on surgeon's satisfaction, n (%).) Furthermore, lidocaine can reduce CBF due to decreased cerebral metabolism and cerebrovascular vasoconstrictor properties. [16] The study by Lam et al in humans during normocapnia and hypocapnia supports this postulate, based on data that a lidocaine loading dose of 5 mg/kg over 30 minutes followed by an infusion of 45 μg/kg/min in normocapnic patients can reduce CBF and CMRO2 by 24% and 20%, respectively. [17] In the study by Grover et al, [18] a 1.5 mg/kg lidocaine loading dose decreased ICP by reducing cerebral blood volume and cerebral metabolism. These data support the hypothesis that lidocaine reduces cerebral metabolism and CBF, which can explain its effect on brain relaxation after dura opening during craniotomy surgery.
In the surgeon's satisfaction outcome, the proportions of very satisfied and satisfied surgeons in the lidocaine group were 70% and 26.7%, respectively, whereas in the placebo group they were 10% and 60%, respectively (P < .001). Further analysis of surgeon's satisfaction against brain relaxation at dura opening found that in 100% of the cases in which the surgeon was very satisfied or satisfied, brain relaxation was good. To our knowledge, there is currently no validated checklist or questionnaire for surgeon's satisfaction, so this study assessed surgeon's satisfaction intraoperatively by having the surgeon give a subjective evaluation on a satisfaction scale. Effect of continuous intravenous lidocaine infusion on intraoperative opioid consumption Continuous infusion of intravenous lidocaine reduced total intraoperative fentanyl for tumor resection craniotomy surgery by 403.8 μg (95% CI 293.3-514.4; P < .001). Furthermore, when adjusted for the duration of anesthesia and surgery as well as body weight, continuous infusion of intravenous lidocaine reduced the need for intraoperative fentanyl by 0.0134 μg/kg/min (95% CI 0.0105-0.0162; P < .001). In their study of patients undergoing surgical resection of brain tumors, Carrales et al [20] found a 48.2% decrease in intraoperative use of fentanyl, to 0.0367 μg/kg/min, in the lidocaine group compared to the placebo group. Another study of intravenous lidocaine in craniotomy surgery for supratentorial tumors, by Peng et al, [14] concluded that continuous intraoperative intravenous lidocaine infusion had a clinical analgesic effect, significantly reducing the proportion of subjects with acute postoperative pain. The analgesic effect of continuous intraoperative intravenous lidocaine, which can reduce intraoperative opioid requirements in patients undergoing craniotomy surgery, was also seen in this study. The ability of continuous intravenous lidocaine to decrease intraoperative opioid requirements is due to the analgesic effect of lidocaine on the central and peripheral nervous systems. [5,8] In injured nerves, systemic lidocaine can prevent depolarization of neuronal membranes. [5,8] Systemic lidocaine can also decrease or prevent neoproliferation of active sodium channels and block their spontaneous firing, especially in traumatized tissues. [8] In acute pain, intravenous lidocaine exhibits significant analgesic, anti-hyperalgesic, and anti-inflammatory effects. [5] Lidocaine also decreases the sensitivity and activity of neurons in the spinal cord (central sensitization) and decreases N-methyl-D-aspartate receptor-mediated postsynaptic depolarization. [5,8] Lidocaine also has a direct effect on opioid receptors. [8] In terms of anti-inflammatory action, systemic lidocaine exhibits effects on polymorphonuclear cells (PMNs) and inflammatory signals through an inhibitory mechanism on PMN priming, the process in which exposure of PMNs to certain mediators results in an exaggerated response, releasing cytokines and reactive oxygen species. [6][7][8] Side effects Side effects that can occur due to intravenous lidocaine administration include tinnitus, numbness or a metallic taste in the mouth, twitching, lightheadedness, seizures, arrhythmias, and hypotension. These side effects generally occur when plasma lidocaine levels exceed 10 μg/mL. [5]
Beaussier et al [9] concluded that with a lidocaine bolus dose of 1.5 mg/kg followed by continuous infusion of 2 mg/kg/h, plasma lidocaine levels remain < 5 μg/mL. Patients could not subjectively report side effects since they were under general anesthesia; therefore, we could only assess side effects using objective data such as ECG changes or hemodynamic instability. Based on intraoperative monitoring in this study, no side effects such as bradycardia or arrhythmia were found intraoperatively. All subjects given continuous lidocaine were fully conscious after extubation. Study limitations and recommendations There were several limitations in this study. First, a fundoscopic examination was not performed (due to hospital policy during the pandemic) to detect papilledema, which would indicate a significant increase in ICP, as additional data beyond the symptoms and signs from a computed tomography scan or magnetic resonance imaging. Second, brain relaxation was not assessed by the same neurosurgeon in all cases, which may cause inter-observer bias. Third, this study used only subjective measurement for evaluating brain relaxation. Objective and measurable evaluation by measuring subdural pressure with a special needle and transducer was not carried out due to the lack of available tools. Nevertheless, the surgeon's visual and tactile assessment is still the main foundation for evaluating brain relaxation during surgery, while objective measurement provides valuable supplementary information. Fourth, this study did not use any objective and measurable monitoring tools to assess the depth of anesthesia. Lastly, lidocaine plasma levels were not measured, so the plasma levels achieved with the dose of lidocaine used in this study (1.5 mg/kg bolus followed by maintenance of 2 mg/kg/h during surgery) cannot be known with certainty. Recommendation Intraoperative continuous intravenous lidocaine can be used as an anesthetic adjuvant for tumor resection craniotomy surgery to improve brain relaxation, reduce intraoperative opioid consumption, and increase the surgeon's satisfaction, while paying attention to contraindications to lidocaine administration according to the patient's clinical condition. It is necessary to continue this research using objective and measurable methods to assess brain relaxation after dura opening (e.g., using a transducer to assess subdural pressure), evaluating brain relaxation during dura opening by 2 or more assessors (neurosurgeons) for all subjects, measuring the depth of anesthesia, and measuring plasma levels of lidocaine, so that it can be known objectively whether the dose of continuous intravenous lidocaine given remains within a safe range, below toxic plasma levels. Conclusion Intraoperative continuous intravenous lidocaine infusion in craniotomy tumor surgery resulted in better brain relaxation at dura opening, decreased intraoperative fentanyl consumption, and improved surgeon satisfaction. In addition, lidocaine seemed to prevent intraoperative hemodynamic instability during noxious stimulation.
Pollen self-elimination CRISPR–Cas genome editing prevents transgenic pollen dispersal in maize This study reports the development of a programmed pollen self-elimination CRISPR–Cas (PSEC) system in which the pollen is infertile when PSEC is present in haploid pollen. PSEC can be inherited through the female gametophyte and retains genome editing activity in vivo across generations. This system could greatly alleviate serious concerns about the widespread diffusion of genetically modified (GM) elements into natural and agricultural environments via outcrossing. Genome editing with clustered regularly interspaced short palindromic repeats (CRISPR)-CRISPR-associated nuclease (Cas)-mediated technologies has revolutionized basic plant science and crop genetic improvement (Chen et al., 2019). Stable genetic transformation of CRISPR-Cas cassette(s) is the main approach to genome editing in planta. In many sexually reproducing plants, a major concern is the dispersal of genetically modified elements through pollen (Devos et al., 2005). Maize (Zea mays L.), a typical outcrossing crop species, can produce as many as two to five million pollen grains per plant (Goss, 1968) and has a recommended isolation distance of 200 m due to wind dispersal (Ma et al., 2004) or even >3 km due to foraging by insects like honey bees (Danner et al., 2014). A previously reported strategy using suicide transgenes effectively killed immature embryos and pollen harboring a Cas9 transgene produced by T0 plants and produced transgene-free edited T1 plants (He et al., 2018). Especially for vegetatively propagated plants, this technology solves the problem of removing transgenic components, as it is not feasible to remove them through meiotic recombination and segregation. However, genome editing has a number of useful applications for which the Cas transgene needs to be retained in the plants, including RNA-guided Cas9 as an in vivo desired-target mutator (Li et al., 2017) and haploid induction-coupled editing (Kelliher et al., 2019; Wang et al., 2019) through the paternal haploid using a cenh3-null mutant as the female gametophyte (Ravi and Chan, 2010). In this correspondence, we present PSEC, which prevents pollen transgene dispersal from plants that harbor a T-DNA containing a pollen suicide cassette next to specific single guide RNA and Cas cassettes. At the same time, PSEC can still be inherited through the female gamete to the next generation and also retains CRISPR-Cas gene editing activity. Through sexual crossing, it acts in trans to induce efficient target mutations in the parental genome of crosses for breeding applications. To generate a programmed PSEC, we introduced a male gametophyte inactivation gene, the maize alpha-amylase gene ZmAA1, driven by the pollen-specific promoter (Polygalacturonase 47, ZmPG47) used in our previous study (Qi et al., 2020), into a T-DNA that also holds the CRISPR-Cas9 cassette (Figure 1A). The pollen derived from these PSEC plants was not viable when the PSEC transgene was present, but the transgene was inherited to the next generation through the female gametophyte (Figure 1B). In vivo Cas editing activity was retained, generating new allelic target mutations when crossed with other lines (Figure 1C). In this study, we designed PSEC to target genes encoding three growth-regulating factors, ZmGRF1, ZmGRF5, and ZmGRF6, and obtained single and/or multiple mutants (Figure 1A).
We performed Agrobacterium (Agrobacterium tumefaciens)-mediated stable transformation of immature embryos from the maize inbred line ZC01 with PSEC, as described previously (Li et al., 2017), resulting in the isolation of 25 independent T0 transformants. After a preliminary assessment of the target mutations, we selected five transformants for characterization of PSEC copy numbers via digital droplet PCR (Figure 1D). Plants 3-1 and 24-1 harbored a single copy of the PSEC transgene (Figure 1D); we thus chose transformant 3-1 for further characterization. Plant 3-1 grew and flowered like the wild type (WT), with normal stamens and anthers. After KI/I2 staining (Figure 1E), stamens from plant 3-1 were lighter than WT stamens, with nearly half of 3-1 pollen grains lacking purple staining. These observations were consistent with our previous study in which we produced sterile male flowers in maize with the same PG47pro:ZmAA1 transgene (Qi et al., 2020). Viable pollen represented half of all plant 3-1 pollen, as demonstrated by a chi-squared test (χ² < χ²(0.05,1); Figure 1F), thus conforming to the expected segregation ratio for a single copy of PSEC. To confirm the presence/absence of PSEC, we carefully collected around 100 stained pollen grains and 100 unstained pollen grains from plant 3-1 under a stereomicroscope for genomic DNA extraction and Cas9 PCR amplification, with three replicates. All purple pollen lacked PSEC, and all unstained pollen contained the transgene (Figure 1G). We also noticed high mutation activity for the retained PSEC, as discussed below. To investigate how the PSEC transgene can be spread and inherited, we used T1 plants as pollen donors or receptors in crosses with the three maize inbred lines JD96M, JD96F, and B73 (Figure 1I). When using plants 3-2, 3-3, and 3-6 as the pollen donors, we genotyped 272, 357, and 272 F1 seeds produced from the JD96M × 3-2, JD96F × 3-3, and B73 × 3-6 crosses by PCR for Cas9. None of the seeds harbored the Cas9 gene, indicating that PSEC is not spread or inherited through 3-2, 3-3, or 3-6 pollen. By contrast, we detected PSEC in about half of all F1 seeds produced from the 3-2 × JD96M, 3-3 × JD96F, and 3-6 × B73 crosses. These data were consistent with our expectation that PSEC can be spread and inherited only through the female gametophyte. To test whether the inherited PSEC transgene showed efficient targeted mutation activity, we genotyped 40 F1 seeds from each of the above crosses using plant 3-3, 3-3, 3-4 as the female parent. We identified plants with the desired homozygous/bi-allelic mutations at each target site of ZmGRF1, ZmGRF5, and/or ZmGRF6 in 17.5%-95% of all F1 seeds (Figure 1I). These data indicate that the desired mutations can be efficiently produced through crossing with PSEC as the maternal parent. In conclusion, we successfully developed a programmed PSEC system with a pollen-specific energy depletion cassette in which the pollen is eliminated, not the cassette, when PSEC is present in haploid pollen. PSEC can be inherited through the female gametophyte and performs Cas9-mediated genome editing activity across generations. This system could greatly alleviate serious concerns about the widespread diffusion of genetically modified elements into natural and agricultural environments via outcrossing. This technology should be applicable to other CRISPR-Cas systems and to outcrossing plant species other than maize.
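For illustration, the 1:1 segregation test described above for plant 3-1 pollen can be run as follows. The exact pollen counts are not given in the correspondence, so the observed numbers here are hypothetical; the acceptance criterion mirrors the stated χ² < χ²(0.05,1).

```python
# Illustrative 1:1 segregation check for plant 3-1 pollen. The exact
# counts are not reported, so these observed numbers are hypothetical;
# the decision rule mirrors the stated chi2 < chi2(0.05, df = 1).
from scipy.stats import chi2, chisquare

observed = [498, 522]                  # hypothetical viable vs inviable
expected = [sum(observed) / 2] * 2     # 1:1 expectation, single-copy PSEC

stat, p = chisquare(observed, f_exp=expected)
critical = chi2.ppf(0.95, df=1)        # 3.84

print(f"chi2 = {stat:.2f}, critical = {critical:.2f}, P = {p:.2f}")
if stat < critical:
    print("Consistent with 1:1 segregation (one PSEC copy)")
```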
The Power of Social Attribution: Perspectives on the Healing Efficacy of Ayahuasca During the last decades, ayahuasca has gained much popularity among non-Indigenous and out-of-Amazonia based populations. In popular culture, it has been advertised as a natural remedy that was discovered by Indigenous peoples millennia ago and that has been used for shamanic healing of all kinds of ailments. This "neo-shamanic," and often recreational, use of ayahuasca, however, has to be distinguished from traditional Indigenous praxes on the one hand, and, on the other hand, from medical investigation in the modern world. The former, Indigenous use mainly understands ayahuasca as an amplifying power for interacting with non-human beings in the animal, plant, or spirit realms. Within this paradigm, efficacy is not dependent on the drug, but on the correct communication between the healer (or sorcerer) and the non-human powers that are considered real and powerful also without resorting to ayahuasca. The latter, modern mode of understanding, contrastingly treats the neurochemical processes of MAO inhibition and dimethyltryptamine activity as trigger mechanisms for a series of psychological as well as somatic responses, including positive outcomes in the treatment of various mental conditions. I argue that there is an ontological incommensurability occurring especially between the Indigenous and medicinal concepts of ayahuasca use (with recreational use in its widest understanding trying to make sense from both sides). Modern medical applications of ayahuasca are so fundamentally different from Indigenous concepts that the latter cannot be used to legitimate or confirm the former (and vice versa). Finally, the deep coloniality in the process of appropriation of the Indigenous by the modern has to be questioned and resolved in any case of ayahuasca application. INTRODUCTION In recent times, ayahuasca has been in use by at least three large and overlapping groups: traditional Indigenous and Mestizo people within their communities, neo-shamanic and recreational users all over the world, and patients of mental health facilities in the realm of modern medicine and psychiatry. In this perspective paper, I will show that although all these forms of use are legitimate and meaningful, they are still situated in a deep colonial structure of power relations and appropriation. For example, in neo-shamanic and medical use, traditional healers are often employed, who are then missed in their original communities. In addition, it seems that the efficacy of ayahuasca in both neo-shamanic and medical use draws primarily from ascriptions of alterity to the substance: why do modern users so often resort to constructed images of Indigeneity, naturalness, and ritual when drinking or administering ayahuasca, while at the same time its efficacy is ascribed to its pharmacology, or to "doing ayahuasca"? In order to answer this question, I will compare traditional Indigenous concepts with neo-shamanic or recreational, and modern medical concepts of attributing efficacy to this substance. From the first half of the 20th Century on, reports of Westerners concerning ayahuasca placed it as a "medicine" within the construction of "ayahuasca shamanism," that is, a health-related use of this substance, although Indigenous or Mestizo people then mainly used ayahuasca for divination, (counter-)witchcraft, warfare, and communal religious rituals (Gow, 1994; Bianchi, 2005; Brabec de Mori, 2011).
The idea that ayahuasca was used for "curing," although not yet literally for medical purposes, probably dates back to the rubber boom and its exploitation and genocidal mistreatment of Indigenous populations that had to be "healed" in a political-metaphysical sense of empowerment and reconciliation (Taussig, 1987; Byrne, 2017). Indigenous medical concepts are much older and deeply rooted in an animist worldview and complex techniques of contacting, socializing with, and dealing out reciprocities with non-human spirits, animals, or other entities; techniques that completely lack any necessity of ayahuasca use. Despite the constructed quality of "ayahuasca shamanism," it seems impossible to shed off implicit assumptions in contemporary Western or "modern" applications about ayahuasca's "mythical," "spiritual," "shamanic," "ritual," "Indigenous," "entheogenic," "ecological," and similar qualities that are ascribed to this complex (see Gearin, 2015; Fotiou, 2020a,b). Especially neo-animist renderings of "mother ayahuasca" or "teacher" and "master plants" are impressively popular among Westerners. I argue, therefore, that ayahuasca's distinctive successes in both popular culture and medical investigation are indebted mainly to said social attributions of healing power grounded in intrinsic assumptions of Alterity. I use the term "social attribution" to denote the discourse about a specific item, in this case, ayahuasca, and what it is assumed to do to its users. 1 Different forms of discourse shape differing opinions about the item and thus inform different certainties about its qualities. In all cases discussed here, ayahuasca as a substance is constituted by basically the same, or similar, human pharmacology, but the qualities that are attributed to this substance show significant variation. In the following, I will describe three modes in which ayahuasca is used. Quite naturally, when one constructs categories, exceptions from the rule will be found: there are many instances where the three modes overlap and interact, and in addition, though very rarely, some people may use ayahuasca in other modalities than described here. SOCIAL ATTRIBUTIONS IN INDIGENOUS TRADITIONAL USE It is difficult to present Indigenous concepts around ayahuasca in a nutshell, so I will exemplarily explain traditional ayahuasca use among the Peruvian Shipibo-Konibo. This is where I conducted systematic fieldwork from 2000 to 2006, continued through visits until 2019. I describe the healing-sorcery practices as observed among Shipibo-Konibo healers (médicos) who in the early 2000s worked among their communities, but not (yet) with tourists or visitors. In traditional Shipibo medicine, an apprentice would embark on lengthy "diets" (samá; cf. Illius, 1987; LeClerc, 2003; O'Shaughnessy and Berlowitz, 2021) that do not involve the intake of ayahuasca but of other plants, and seldom animal or inorganic substances. These are ingested before and during a span of time when the apprentice retires from much social contact and follows a set of alimentary and social taboos. Note here that "diets" are also prescribed for patients after a curing session (in order not to disturb the songs that are thought to linger in the body), or for healing processes that apply any plant preparations. "Diets" are likewise essential for many forms of learning, e.g., for becoming a good hunter, for producing precious artwork, for toddlers to walk and talk more quickly, for being a good soccer player or musician, and so on.
During a "diet" devoted to becoming a healer-sorcerer, however, the apprentice should make contact with the humanoid entities of the plant ingested, who are called "owners" (ibo) or "spirits" (yoshin), in dreams or wake-state visions. From these, the disciple obtains their powers, which often take the form of songs. Between "diets," the apprentice would accompany his 2 teacher in curing sessions and thereby practice working with him and learn to apply what he obtained from his "diet." There was no formal initiation, so at some point the apprentice would start conducting healing or sorcery sessions on his own, thus stepping into the competitive ring of healer-sorcerers. Illness was understood as a reciprocal process, stemming from a source which, in order to heal, has to be tricked, seduced, or overthrown. In cases when a competing specialist would be held to be the original causer of an illness, from the other's perspective, "my healer" is "his sorcerer" (Brabec de Mori, 2017). Efficacy is attributed to the power of a healer from close kin who has accomplished many lengthy "diets," has vast social relations with animals, plants, and spirits, and knows a great repertoire of magical songs - the main means of interacting with his non-human allies, and their foes. The correct use of melodies, language, codes, and metaphors was crucial. The powerful voice (see Brabec de Mori, in print) would be obtained through "diets," too. I repeat that the use of ayahuasca was totally optional, restricted to the specialist himself, and the most powerful healer-sorcerers, the meraya, did not use it at all. It is important to note that in order to accomplish "diets," and to be instructed by a teacher, one had to "stand firmly on both legs," as one specialist put it. Therefore, the intake of ayahuasca was (ideally) socially restricted to healthy, psychologically and socially established, well-trained individuals, because, as indicated above, only they would drink the brew, while patients or laypeople would never touch it - "why should I drink this, I am not a healer!" Among all the Indigenous healers I worked with, high-intensity hallucinations were avoided. Most healers would drink fairly low doses in order to only "open up the world" (nete kepenti). They would retain control of their condition in order to channel their songs' power toward the patient. Finally, vomiting was not a topic. During around 100 ayahuasca sessions I witnessed among traditional Shipibo healers, not a single one ever vomited. Ayahuasca was considered a delicate tool to more easily reach the spirit world, nothing more, nothing less. SOCIAL ATTRIBUTIONS IN NEO-SHAMANIC AND RECREATIONAL USE In Indigenous use prior to the ayahuasca boom, the healer would take the substance, but not the sick. This constitutes the most prominently marked difference between Indigenous and modern understandings of ayahuasca: ayahuasca had to - by itself, as a substance - show therapeutic effect, and clients would be those who had to go through the hallucinatory experience. Thus, a "psychologization" of the whole process took place (Brabec de Mori, 2013; Labate, 2014). Remarkably, Indigenous thought attributes efficacy to the healer's power, knowledge, and experience, while Westerners cannot but attribute efficacy to the substance-as-ingested.
The appearance of Western researchers and later drug and healing tourists and visitors triggered a transformation in Amazonian Indigenous and Mestizo "traditions" toward communal ayahuasca-drinking sessions that became known as "ceremonies." 3 This newly invented style of ayahuasca use was then exported to North America and Europe, and practically all over the world. Nowadays, a multitude of neo-shamans who either learned from the first generation of out-of-the-Amazon ayahuasqueros, or who created their own eclectic ayahuasca ceremony styles, offer their services to a growing general public. Most of these sessions are held exclusively for "white" or other non-Indigenous, non-local people. There is much work about these forms of ayahuasca use in the Amazon and beyond (see Gearin, 2015; Labate et al., 2016; Fotiou, 2020b). I will restrict my analysis here to my fieldwork among White neo-shamans in Shipibo territory, and to phenomena that I consider most distinctive. In these neo-shamanic contexts, the importance of "diets" varies; in some cases they are completely omitted (most often in foreign contexts), or they are recommended because the brew would cause stronger hallucinogenic effects after fasting. In the latter cases, the diet has to be held before the ayahuasca session, while in Indigenous use, it should follow the treatment. For "becoming a shaman," the diet is usually prescribed very much in the Indigenous sense, but most often "mixed" with ayahuasca ingestion, or thought to "strengthen" one's control of ayahuasca hallucinations. Diets are rarely considered important for anything beyond the ayahuasca complex. Another main distinction is a certain "white-washing" of the practice: "shamans" are considered "good," and ayahuasca use is by definition beneficial. Sorcery is not considered inherent (see Fotiou, 2010), and not connected to emotion control or psychological stability. Quite on the contrary, as I observed among "shaman apprentices" in Ucayali, many of them have a history of minor mental disorder or at least some issues with mental or social instability, by no means "standing firmly on both legs." Many are seekers, with "alternative" worldviews, lacking or avoiding a well-defined, integrated professional or family life (of course there are some exceptions here). Therefore, ayahuasca by itself and its use are considered to be beneficial to health, spiritual growth, inspiration, and general well-being, promising a possible solution for preconditions like mental problems or social instability. This results in a very different kind of people drinking ayahuasca compared to Indigenous traditions: mainly people with a background of experience-seeking, of spiritual, psychological, ecological, or socio-economic discontent, or with diagnosed health problems tend to attend such ceremonies (Wolff and Passie, 2018). The attribution of efficacy goes to the substance itself, which is often worshiped as "mother," "teacher," or "master," and to a complex of alleged authenticity in terms of eco-harmony, purity, shamanism, spirituality, and indigeneity, say: colonial-style exoticism. Local ayahuasca ceremonies I observed in Central Europe are highly ritualized and exoticized, and immense healing power is attributed to the substance and its immediate effects: heavy vomiting, physical suffering, and harrowing, dreadful visions are seen as necessary for a cathartic experience, which in turn is considered foundational for achieving spiritual, mental, or physical well-being.
I could also observe that many White shaman apprentices in Peru were surprisingly uninterested in the lived world of the "non-ayahuasca-drinking" locals, who still constitute the vast majority of Indigenous or Mestizo populations, and their situation of marginalization and often extreme and illness-causing poverty. One shaman apprentice even believed that "they are happy living in that way." There are many people who do care, of course, but I was harshly surprised that this attitude occurred among all (randomly selected) White shaman apprentices I interviewed in hostels and "albergues" during my fieldwork in 2019. SOCIAL ATTRIBUTIONS IN WESTERN THERAPEUTIC AND CLINICAL USE There is strong and growing support for a general efficacy of so-called "psychedelic," hallucinogenic drugs including LSD, MDMA, mescaline, psilocybin, ketamine, and others, in the context of treating a variety of non-psychotic mental conditions including, for example, recurrent depression, anxiety and obsessive-compulsive disorders, and drug addiction. Ayahuasca, too, seems beneficial in therapies for various mental disorders (most solidly in addiction treatments, see Bouso and Riba, 2014; Palhano-Fontes et al., 2019). Pioneer studies on the chemistry and human pharmacology of the substance were mostly undertaken in uncontrolled settings; they had to deal with the difficult accessibility of the material (Rivier and Lindgren, 1972), or create alternatives to circumvent the problems of remoteness and prohibition (Ott, 1994). Similar problems appeared for investigations about the efficacy of ayahuasca on health-related conditions, though they still have to be acknowledged as most seminal in the field (Grob et al., 1996; Riba et al., 2001). Until today, many studies about the effects of ayahuasca on human health have to be conducted "in the wild" in (more or less legal) ayahuasca ceremonies. Uthaug et al. (2021), for example, "visited 6 ayahuasca retreats, hosted by a single organization, all taking place at several locations in Europe...". The same authors clearly state that "Only 'students of the ayahuasca school' (linked to the host organization) were invited to participate in the present single-blind, placebo-controlled study." That means that the population under study were exclusively European ayahuasca users who had previously decided to devote themselves to "studying" with the "host organization," which - though un-attended by the authors - carries a strict dogma of ayahuasca's properties, and students probably feel obliged to reproduce these. Similar limitations (acknowledged by the authors) apply to the "global ayahuasca project" (Sarris et al., 2021) that based its analysis on 11,912 online survey responses by people from all over the world who, obviously, were mostly involved with groups of regular, and biased, ayahuasca users. Although most clinical studies are methodologically well designed and solid, there is a strong bias to be observed when reviewing the literature, in that either researchers, the studied population, or both underlie the abovementioned assumptions that ayahuasca per se is a medicine. For example, in Labate and Cavnar's (2014) influential book The Therapeutic Use of Ayahuasca, where some of the finest scientists working on the human pharmacology of the substance contributed chapters, the foreword reproduces ethnographic myths, and rather naïvely states, without any reference, that among Indigenous people "ayahuasca is considered a medicine: the great medicine.
Practically, in today's world, the shamanic model incorporates easily-transferable features, including group structure, attention to set and setting, import of intention, and proper preparation for and integration of the experience" (Grob, 2014: x-xi). In my perspective as a critical ethnographer, this attitude is distinctly colonial. It does not listen to Indigenous people's opinions, but models a Western imagination of them. Most chapters in Labate and Cavnar's book, and many studies on ayahuasca as a therapeutic tool, cling to un-referenced and never-tested assumptions that this substance had been used by Indigenous people for millennia for the purpose of (spiritual) healing, although this "healing" was constructed only recently, during the second half of the 20th century. A psychedelic experience may, on the contrary, also be disturbing or frightening and even cause lasting mental health problems. This seems to occur very rarely and mostly through triggering pre-existing latent disorders, and could be statistically counterbalanced by positive effects in others (Krebs and Johansen, 2013). Heise and Brooks (2017) report acute intoxication crises after ayahuasca ingestion that were brought to US poison control, including three fatalities. Dos Santos et al. (2017) registered diagnosed psychotic episodes from published literature. Gearin and Calavia Sáez (2021: 147) found that personality disorders, especially "narcissism and related problems of the self" may be exacerbated by ayahuasca use. Although cases of less severe adverse effects may have escaped these studies, an overall beneficial effect seems evident as such. In stark contrast to the Indigenous concept, the drug is now ingested by the patients, that is, exclusively by people with diagnosed mental health conditions. The (hopefully) psychologically and socially stable therapist, on the contrary, does not drink the brew together with the client. In most countries, healthy individuals are not allowed to drink ayahuasca while ill people may obtain a legal exception. In sum, I do not doubt that ayahuasca, along with other hallucinogens, may be very useful in treating certain mental disorders, despite some contraindications. However, the power of social attributions remains unattended in many studies (but see e.g., Talin and Sanabria, 2017), and expectations abound, even among the researchers themselves, who are often quite fond of their own visionary experiences obtained through the brew, possibly leading to a sense of mission, which is rather unsettling in terms of good scientific practice (cf. Grob, 2014: xiii).

OUTLOOK

People form general beliefs fed by their social environment; in this case, people (including scholars) believe that ayahuasca has intrinsic natural, medical, spiritual, and ecological powers that had been suppressed through the Western world's (often radical) secular, disenchanted life-world history. People with such a general belief then seek and create situations that trigger certain experiences, which in turn enable them to form a personal (or scientific) account, including mental states and personal development, that strengthens the general belief (Van Leeuwen and Van Elk, 2018). This should not be misunderstood as a generalizing critique of the use and study of ayahuasca and other hallucinogens for therapeutic applications. I do suggest, however, that there are more contextual (e.g., ritualization) and psychological (e.g., beliefs) factors to be understood than hitherto recognized.
Furthermore, I think that the three modes of ayahuasca use I briefly presented here are ontologically different. Social attributions of efficacy range from interaction with spirits (requiring an animist ontology, see Descola, 2013), to worshiping neo-indigeneity and ecology (requiring a colonial world and ecological doom), to a substance that is therapeutically effective (a naturalist ontology). Therefore, it is not enough to heed some "cultural" factors. For serious medical studies, research designs have to be separated from Indigenous concepts and, if possible, from ideologically biased recreational (in the widest sense) applications of ayahuasca. This is difficult indeed, given that many authors are themselves involved in some form of ayahuasca ritual circles, churches, or freelance use. Finally, the implicit coloniality in virtually all forms of ayahuasca use should be widely recognized, and strategies have to be developed to find a way of fair use of ayahuasca in an anti-colonial sense. Those regions where ayahuasca originated and where most ayahuasca centers are located are among those hardest hit by COVID-19 death tolls. How can this be?

DATA AVAILABILITY STATEMENT

The original contributions presented in the study are included in the article/supplementary material; further inquiries can be directed to the corresponding author.

AUTHOR CONTRIBUTIONS

BB was the sole author and responsible for all the content and manuscript redaction.

FUNDING

This publication was funded by the University of Innsbruck.
Using Codesign to Develop a Novel Oral Healthcare Educational Intervention for Undergraduate Nursing Students

To build a nursing workforce that is equipped to undertake oral health promotion and screening, an educational program was needed. With codesign being used in multiple settings, it was selected as the approach to use, with Mezirow's Transformative Learning theory as the underpinning framework. This study aimed to develop an oral healthcare educational intervention for nursing students. Using a six-step codesign framework, nursing students and faculty staff were invited to participate in two Zoom™ Video Communications workshops to codesign the learning activities to be used in the classroom. The codesign process was evaluated through focus groups and analysed using a hybrid content analysis approach. A multifaceted oral healthcare educational intervention was developed. Learning material was delivered using a range of different learning and teaching resources, such as dental models, podcasts, and an oral health assessment, across two subjects. Multiple approaches to recruitment, the inclusion of participants, and good facilitation of workshop discussions were critical to the codesign of the educational intervention. Evaluation revealed that preparing participants prior to the workshops acted as a catalyst for conversations, which facilitated the codesign process. Codesign was a useful approach to employ in the development of an oral healthcare intervention to address an area of need.

Introduction

The World Health Organization reports that approximately 3.5 billion people worldwide are affected by oral disease, and the financial burden of oral disease has been estimated to equal almost $390 billion USD [1]. In addition to the financial burden, oral disease impacts an individual's systemic health [2]. Traditionally, oral disease is managed by dental professionals; however, with increasing prevalence, guidelines have been developed to expand the roles of other health professionals, such as nurses, to screen for dental disease for early intervention [3]. To build a future workforce that is well equipped to undertake this role, the development of an educational program for nursing students is needed. While there have been reports of the integration of oral health programs in the United States and isolated reports in Canada, the United Kingdom, Norway, and Brazil, the integration of oral healthcare education in Australia has been reported only in midwifery education [3]. An oral healthcare educational intervention for Australian nursing students is, therefore, required. Such a program would enable students to take preventative action against oral disease in practice settings, reducing the disease burden that poor oral health presents to the population. To develop an oral healthcare educational intervention, a codesign approach has the potential to produce an intervention that is sustainable and translational.

This study forms a part of a larger mixed-methods project aimed at integrating oral healthcare into the undergraduate Bachelor of Nursing (BN) curriculum at a large, multicampus Australian university. The BN is a three-year preregistration nursing program with limited oral healthcare education. This paper comprehensively reports on the process of developing the oral healthcare intervention using the six-step codesign framework of Dietrich, Trischler, Schuster, and Rundle-Thiele [10], and evaluates that process.
Participants

The participants in the codesign study were second-year undergraduate nursing students and faculty staff involved in the teaching of a second-year chronic illness theory subject. Those teaching in the concurrent practical (clinical) subject were also included in the study.

Codesign Framework and Theory Underpinning the Codesign Process

The six-step process of codesign of Dietrich, Trischler, Schuster, and Rundle-Thiele [10], which is outlined in Figure 1, was the framework used to develop the teaching and learning activities for the oral healthcare intervention. This framework was selected because students are vulnerable consumers in education. As the intervention also required a theoretical underpinning that would allow students to transform their perspectives, Mezirow's Transformative Learning theory was selected for this study. This 7-phase theory enables learners to change and transform their mindset about a given topic [13]. In this case, transforming mindsets on oral healthcare may be required to transform current attitudes towards oral healthcare in nursing.

Step One-Resourcing

This first step is designed to enable the researcher to ascertain the resources that would be needed for the codesign workshops and that would enhance understanding of the problem and promote participation. To facilitate this step in this study, two robust literature reviews were conducted to identify which setting best enabled learning. Furthermore, activities which best aligned with Mezirow's Transformative Learning Theory [3,13] were identified. From the first review, it was identified that developing the education across more than one subject achieved better learning outcomes, such as increased knowledge and confidence with oral healthcare [3]. Based on this information, the researchers specifically targeted a second-year theoretical chronic illness subject and a concurrent second-year practical subject for the study. The second review identified transformative learning activities [13] which were good learning strategies to use to achieve transformation. These included group learning; experiential activities, such as simulation; and reflective activities.
Step Two-Planning

This step includes planning and preparation for the codesign sessions and was conducted by the researcher in collaboration with the research team and research assistant (RA). For this study, it was decided that two one-hour sessions would be conducted one week apart from each other. This was to enable participants to present other activities and ideas that they had not considered during the first workshop. The first session would be the codesign workshop, in which students and faculty participants would engage in the design of the learning activities. Additionally, as the oral healthcare educational intervention was focused on integrating oral healthcare in the undergraduate nursing curriculum, the team decided to source industry experts from aged care and intensive care to attend the codesign workshop. The industry experts would present the current oral healthcare practices in their respective settings for 10 min at the commencement of the workshop, providing context for the participants. The second session would be an evaluation session to evaluate the codesign process. Due to the large multisite nature of the university where the research was undertaken and the COVID-19 restrictions in place at the time, a decision was made among the team to conduct the codesign workshops online via Zoom™ Video Communications, which allowed greater flexibility for participants to attend.

Step Three-Recruitment

This third step involves the recruitment of participants. The sample included staff teaching in the theoretical and practical subjects, as well as second-year nursing students. At the time of the research, students had not yet undertaken the subjects in which the educational intervention was to be embedded. Convenience sampling was undertaken, and an email was sent to both staff and students informing them of the codesign workshops and inviting them to participate. The email detailed the purpose of the workshops, which was to codesign the learning activities which would be used to deliver the oral health education. A student RA also explained the study in a 1 min recruitment video (https://youtu.be/qeBbh9HgBQo, accessed on 13 February 2023). This video was placed on students' Blackboard™ learning sites (digital learning platform). Students and staff who were interested in participating contacted the RA via email, and the consent process took place. Any additional queries that potential participants had were answered at this time. An electronic poll was also sent to participants which provided a range of workshop times; this gave greater options to the participants. Once the final workshop times were organised, calendar invites were sent to all the participants.
Step Four-Sensitising

This step relates to the researcher preparing the participants for what is to come, thereby introducing them to the topic for discussion. For this step, a resource booklet was developed (see Supplementary Document S1) that was emailed to the participants prior to the first workshop. The booklet provided details of the concept of codesign and explained the learning theory, Mezirow's Transformative Learning Theory, that was to be used. Based on Step One learnings, the resource provided guidance on the evidence-based educational activities that the participants could select. Activities included the 1 min essay, oral health assessment, and reverse case study. Participants were also encouraged to present their own ideas at the workshops. Reminder emails about the workshops were sent three times prior to the workshops, as well as the day before.

Step Five-Facilitation

This step includes the facilitation of the codesign workshops. For Workshop 1, participants were divided into two groups, each including a mix of students and faculty staff. One group was facilitated by the RA, and the other was facilitated by the researcher. The first workshop ran overtime at 1 h and 30 min. In these groups, discussions were held regarding the learning activities, which included the dental models, podcasts, and oral health assessment. Other innovative ideas generated by students were the use of animations, videologs (vlogs), and videocasts (vodcasts). The learning activity discussion followed the phases of Mezirow's learning theory, to ensure that there was an activity addressing each phase. Participants were encouraged to discuss which learning activity was suitable for each phase of the learning theory based on the resource which was provided to them in Step Four. During Workshop 2, each group presented their codesigned learning activities for five minutes each, and the researcher facilitated the discussion, which compared the activities that had been selected by each of the groups. The comparison was then used to evaluate the process in Step Six. At the completion of these discussions, participants were then divided into staff- and student-only focus groups, and an evaluation of the codesign process was conducted.

Step Six-Evaluation

This final step enabled the researcher to evaluate the data from the workshops and design a plan of action. In this step, after the participant groups had presented their ideas in Workshop 2 (Step Five), a discussion was held between all participants, and a final sequence of activities was decided. The suggestions made by participants were collated, and the final activity series was set based on alignment to the seven phases of Mezirow's Transformative Learning theory (Table 1). At the completion of this step, the researcher organised a meeting with the Subject Coordinators of the theoretical and practical subjects to present the plan generated in the codesign process. The Subject Coordinators had been approached prior to the workshops and had agreed for this process to occur. Other prior agreements were obtained from the Deputy Dean, Associate Dean for Learning and Teaching, and Director of Academic Program. The researcher then developed the resources required, made changes to the student guides for the classes, purchased the models, and organised the subject-matter-expert-led podcasts to be recorded. These processes occurred four to five weeks before the teaching semester commenced.
[Table 1 (fragment): learning activities mapped to the phases of Mezirow's Transformative Learning theory; for example, "Plan a course of action": the student self-initiates a course of action to identify gaps in learning and ways to bridge them (self-directed; there was a one-week gap between theoretical and practical classes), and "7. Becoming confident and competent": continued practice on clinical placement (no time limit).]

The Oral Healthcare Intervention

Based on the completion of the six steps, the resultant oral healthcare intervention included the use of a case study without diagnosis in the theoretical subject, followed by a self-reflection activity and whole-class discussion (see Supplementary Document S1). Subject-matter-expert-led podcasts were created based on the theoretical aspects of periodontitis and type 2 diabetes mellitus due to the bidirectional relationship of these conditions, with periodontal treatment improving glycaemic control. The selection of this content was part of a suite of education to be delivered to undergraduate nursing students across multiple subjects. Prior to the education being delivered in this chronic disease subject, the nursing students had completed oral healthcare training in a first-year primary healthcare subject. These podcasts were made available on the Blackboard learning site. Practice quiz questions as knowledge checks were also included. For the practical subject, teaching and learning activities included the use of dental models, a video or tutor demonstration of an oral health assessment, and simulated practice of an oral health assessment. The final resource that was developed as a learning activity in the practical subject was a picture guide of the different dental pathologies included in the oral health assessment, such as caries and receding gums (Table 1 and Figure 2).

Data Analysis

Focus group evaluation data from the second workshop were analysed to explore the codesign process. A hybrid approach was adopted that enabled both deductive and inductive analysis, as described by Elo and Kyngäs [14] and conducted by Bray et al. [15].
This approach allowed for data to be organised and understood by researchers in a meaningful manner. The analysis was conducted in three stages: (1) Preparation; (2) Organisation; and (3) Reporting [14]. In stage 1, the researchers immersed themselves in the data to obtain an overall understanding of the whole codesign experience. In the second stage, using a deductive approach, a categorisation matrix based on five of the six steps of Dietrich's codesign process was developed. Data were then coded into each respective category. Following this, using an inductive approach, data were recoded and recategorised into sub-categories. The third stage of the analysis, which involves the reporting of findings, is included in detail in the evaluation section of this paper.

Ethical Considerations

Ethical approval for this study was received from the Western Sydney University Human Research and Ethics Committee (H14177). Participation in this study was voluntary, and consent was obtained prior to the workshops and confirmed at the beginning of each workshop and focus group (which were recorded). Participants were assigned a pseudonym, and the transcripts were deidentified to maintain confidentiality. All participants received a certificate of participation in appreciation of their time and effort.

Participants

Ten participants were scheduled to attend the workshops; however, a total of eight participants attended: five academic staff members and three second-year nursing students. All eight participants were female, and all contributed throughout the workshops. Of the academic staff members, four were permanent, and one was casual. Of the two students that did not attend, one attended the second workshop only but did not remain for the evaluation. A second student withdrew on the day of the workshop due to other commitments.

Evaluation of the Codesign Process

Categorisation of data occurred in alignment with the first five stages of the codesign framework used in this study. The process is summarised in Table 2.

Table 2. Categories and sub-categories (fragment). Category 1-Resourcing: 1.1 Learning activities-"catalyst to make me start thinking"; 1.2 Learning activities-"triggers that critical thinking".

As the learning activities were the focus of the resourcing stage, this was what dominated the discussion. Providing numerous learning activity options in the booklet that was developed assisted in preparing the participants prior to attending Workshop 1. Examples of these activities included the oral health assessment, models, and videos. These options enabled the generation of discussions and prompted "new" ideas.
Learning Activities-"Catalyst to Make Me Start Thinking" Both staff and students agreed that the learning activities that were presented to them acted as catalysts for discussion within their groups, which prompted further ideas to be Learning Activities-"Triggers That Critical Thinking" Overall, participants were happy with the selection of activities provided to them; they felt that there was a good range and this variety also stimulated engagement as well as critical thinking processes. The stimulation of critical thinking is important in the learning process, particularly when aligning activities to Mezirow's Transformative Learning Theory. The reverse case study was a popular point of discussion for both students and staff in that staff felt that this activity would pique critical thinking, while students felt it was a little confusing: "There's lots of really good examples. Like, I really particularly like the reverse case study, um, you know, it's not something that we get to see a lot, but it's quite relevant and triggers that critical thinking component." [Staff 2] "Um, there was one learning activity she put there that I was a bit confused about. Um, I think it was . . . it wasn't case study without a diagnosis, but it was something else. I think it was called reverse case study?" [Student 3] Category 2-Planning The planning of the workshops included the time allocation for the workshops, the use of Zoom™, and frequent reminders sent to participants regarding the workshops. During the planning phase, the original idea was to conduct the workshops face-to-face; however, due to the COVID-19 pandemic restrictions, these workshops were held on Zoom™. Workshop Timing-"I Didn't Realise We Went Over" Although the workshops were scheduled for one hour, the participants were engaged in the discussions and did not appear concerned that the first workshop went overtime by 30 min. In this case, increasing the timing of the workshop to one and a half hours would have been acceptable, although this may vary depending on the workshop content. "It was good. We went over, but actually it was great. I didn't realise we went over. Like, it wasn't boring, it was actually very engaging and I thought, yeah, an hour, and hour and a half is actually quite doable." [Staff 2] "It was, like, actively engaging. We were lost. Like, you know, we lost track of time and I was like 'Oh, an hour and a half already has passed by' and everybody was still talking which is really good" [Student 2] There was only one staff member that provided feedback that they felt that the time allocated for the workshops was too long: Benefits of Zoom™-"Convenience Definitely" The participants identified that convenience was one of the biggest benefits of using Zoom™ for codesign workshops, particularly when juggling multiple commitments, such as work. One student participant also expressed that Zoom™ made her feel comfortable to express her opinions freely and to use her voice. "I believe Zoom would have been the most desirable choice to hold the one-hour workshop due to its conveniences." [Student 1] "You know, um, time factor. You know, like not, if I'm at work, that's great and I can come in, but I if, you know, say I'm not working and I'm taking time to go to work particularly for a two-hour, you know, workshop may not be convenient." [Staff 2] "I find that if it is going to be like a, um, how do you say that, like a face-to-face, I more like have difficulty in expressing myself, I guess. 
So, for having the Zoom, like with a Zoom, it make me, like, it's more helpful on my side, I guess. You know, to think that I was, I'm going to say that I'm in my comfort zone as well, though, it makes me more relaxed. But I try, I try. I was speaking up so, yeah." [Student 2]

Drawbacks of Zoom™-"Names with Black Boxes"

While Zoom™ had the benefit of encouraging participation, there were also drawbacks, as identified by the staff participants. Two staff members highlighted the importance of being able to visualise other people when engaging on Zoom™. Another issue raised by a staff member was that there was limited privacy with cameras being turned on. One student expected that the workshops would not be as engaging due to previous experiences with Zoom™ classes, and this workshop experience was different, which emphasises the importance of good facilitation when using Zoom™.

"Zoom is impersonal [and] lends itself to stilted conversations. A person in our room had their camera off and because I could not visualise her, [I missed] a comment she made." [Staff 5]

"I think, as not a requirement, but just having, just putting faces to names, that's really important. Like, talking to names with black boxes I don't find that engaging." [Staff 2]

"Other issues that we've always encountered with, you know, student privacy, living in shared accommodation, or I don't know how it would work with those other factors unless, you know, we were to provide them with a Zoom room on campus to say "here we go, Zoom is booked for you on campus" and then what's the point of having Zoom if we're going to have somebody come to campus. Why not come to campus?" [Staff 2]

"Like I didn't expect it would be that good with Zoom, 'cause my previous Zoom classes, it's not, like, that engaging." [Student 3]

Reminders-"I'm Quite Forgetful"

While calendar invites were sent to all participants, the students found that the reminders sent to them alerting them of upcoming workshops motivated them and kept them interested. Students have many competing commitments, and engaging with them by providing reminders enabled them to participate; otherwise, they may have forgotten about the impending workshop.

"I found it very helpful that you guys give us a email of the Zoom link just before it starts." [Student 1]

"I really liked how [organiser], she always kept on giving me reminders of in two weeks' time I'm gonna have this, and she gave us a Zoom meeting and she was very nice in the way how she wrote the messages 'cause I think reminders is good for me because, like I said before, I'm quite forgetful on what to do." [Student 3]

Category 3-Recruitment

A multimodal recruitment strategy was the most effective form of recruitment, as it enabled participants to engage in their preferred manner. The use of a video recording on the Blackboard learning site, email, and direct approach were seen as effective strategies for recruitment. One participant found the recruitment video engaging and expressed that it generated interest in the workshops; however, the most preferred method of recruitment was the use of email, with most participants responding to the email which was sent to them. The participatory nature of the workshops was an attraction for one participant, as they expressed surprise that the email was not a request to complete a survey, but rather to participate in the workshops. A direct approach to recruitment was adopted for two staff participants who had previously been involved in the larger project.
Inclusion of participants who had previously been involved in the project was an effective recruitment strategy to employ, as the insight provided by these participants was useful in generating meaningful discussions.

"I was introduced to the study through Facilitator actually. Facilitator asked me if I wanted to be a part of a project, and I started working with Facilitator last year I think."

Category 4-Sensitising

To ensure that workshop participants were prepared for what would be covered in the workshops, and to generate engagement and robust discussions, it was important to prepare resources that would be effective in achieving this task. The booklet prepared was an effective resource, which was described as succinct and relevant by staff; however, a recommendation was given that it required clearer descriptions of the role and expectations for participants in the workshops.

Student and Staff Expectations-"It May Not Have Been as Clear to Others"

While the booklet was engaging and described Mezirow's Transformative Learning Theory as well as the best practice learning activities, student participants felt that a clearer explanation of the expectations and their role in the workshops was required, as these two aspects caused some confusion. One staff member identified that while the booklet was a good resource, without prior knowledge of the project, the booklet may not have been clear on its own.

"I thought it was pretty engaging. Like, she listed the theory, why it's important for transformative learning, and then she had like Mezirow's diagrams and all and then she listed the activities, and she gave like the description for those activities like what do they mean. So, I think that was really good. But, yeah, it was just like the understanding what we should do. That's the only thing I was confused about." [Student 3]

"It was ok for me, but I have a little bit of knowledge about your project, it may not have been as clear to others, but facilitator did set it up well." [Staff 5]

Preparation Time and Relevancy-"It Didn't Take too Long to Prep"

The booklet was deemed by staff to be relevant, and the preparation time for the workshops was adequate. The information in the booklets prompted discussions in the workshops, and the participants expressed that minimal reading time was required to understand the concepts.

"I found it was just enough to prompt you to think about stuff before you came into the meeting without, sort of, having lots and lots of to read over." [Staff 1]

"Yeah it was easy and you could just look at it and understand what you're trying to say, like, it didn't take too long to prep." [Staff 4]

"Very succinct. Um, and quite relevant." [Staff 3]

Category 5-Facilitation

Being able to facilitate robust discussions is essential in generating good codesign outcomes. The inclusion of all participants by the facilitator in the discussions was well received by participants. Both student and staff participants enjoyed the discussions that were held in the workshops, and innovative ideas were developed because of these discussions. The workshop groups consisted of a mix of staff and students, and while this was received well by staff, the students were a little more apprehensive.

Robust Discussions-"All of Us Were Given a Chance to Speak"

Both student and staff participants were satisfied with the robust discussions that were held in the workshops, with participants from different backgrounds expressing their ideas, sharing their experiences, and generating discussion.
One staff member expressed that inclusion of the teaching staff from both the theoretical subject and the practical subject was a very good idea, as the process gave insight into the breadth of oral health education. To enable robust discussions, it was important that the facilitator included all participants in the discussion.

"Discussion went really well. I enjoyed the process. Like all of us were given a chance to speak up. Like, no one, you know, like, you know was monopolising the discussion." [Student 2]

"Very engaging. Lots of great ideas. I was quite surprised, I think I told you, there was a particular student, and I was really surprised. I thought perhaps, you know, I was thinking perhaps the student has got some oral health background and it was really great, um, to see people from all sorts of backgrounds engaging with really good ideas." [Staff 2]

Facilitation of the Discussions-"You Feel like You Are Included in the Discussion"

Participants expressed that the facilitator gave all participants an opportunity to express their ideas and thoughts to the group. Being inclusive was an effective form of facilitation. One participant recommended that, while the facilitator was writing down their ideas, what was being written down should be visible to all in the group to keep track of the discussion. For instance, if the workshop is conducted online, using the screen share option on Zoom was suggested. One staff participant felt that more dynamic discussions could have been held if the participants had been face-to-face and able to write on whiteboards. In the case of conducting the workshops online, the use of the annotate feature on Zoom may have proven useful to facilitate the discussion in this way.

"I just feel like we probably would have had a more, um, dynamic discussion if we were all sort of sitting together in one room and, perhaps, like writing on the board or something where you can really let things flow out." [Staff 1]

Staff and Students Working Together-"These Guys Are Thinking Outside the Box"

Student participants were a little apprehensive about having staff in the same workshop, as it was felt that if there had been a prior conflict with a staff member that was present, this may have impacted their ability to participate. Another student participant felt a sense of slight intimidation with staff in the same workshop. On the other hand, staff participants did not express displeasure with students being present; however, they realised that they themselves may not be up to date with innovation. The staff overall were greatly impressed with the novel ideas that the students were presenting and felt that the students had expertise to share and were "forward thinkers" by "thinking outside the box". Examples of these innovations include the use of a SWOT (Strengths, Weaknesses, Opportunities, Threats) analysis for the self-reflection and the use of animations, vlogs, podcasts, or vodcasts to deliver the content.

"My thinking is very linear. Like when I think of curriculum, I know how I'm going to deliver it, but these guys are thinking outside the box. 'Oh, we should do this, and we should do this', I was like, what? I should know this stuff." [Staff 2]

"You forget that there are students out there that have a lot to contribute. And then when you come across them, it's quite impressive and you think, well, I suppose the profession will be fine cause we do have students who are forward thinkers or think, just . . . think full stop."
[Staff 1] "But, it's like everyone's just said, it was amazing to see how much of an insight they have actually and we probably miss on those things when we rush into the two-hour classes and we are just trying to tell them everything that we know. Maybe we need to more often get back from what they know as well, because it's been a wonderful experience. Yeah. Mind blowing." [Staff 4] Discussion This study aimed to develop an oral healthcare educational intervention using the codesign framework described by Dietrich, Trischler, Schuster, and Rundle-Thiele [10] and to evaluate the codesign process. Resourcing effectively ensured that participants had a range of learning activities to choose, and effective planning and good facilitation supported inclusive practices, giving participants an equal voice. Overall codesign was a useful method to use to develop the oral healthcare educational intervention in this study, and on the whole, the process was successful. An interesting finding in this study was the perceptions that students have regarding working with faculty staff to develop interventions. A degree of apprehension was reported by students in this study. While the study by Woods and Homer [16] with a codesign between staff and pre-arrival first-year students did not evaluate the students' experiences of working with staff, they were able to produce codesigned learning activities in a manner similar to this study. Students' perceptions in this study of perceived consequences of working with academics could be due to previous experiences with faculty staff within the classroom, including issues with staff commitment and rapport. Xiao and Wilkins [17] highlighted in their study on lecturer commitment that student satisfaction was determined by the level of commitment a lecturer had to their students. In addition to the study by Xiao and Wilkins [17], a study evaluating teaching from the perspective of students [18] identified that building lecturer-student rapport was very important, and while this study is dated, the concept remains relevant today. The dearth of literature surrounding this phenomenon highlights the need for further research on the topic. During the COVID-19 pandemic, Zoom™ video communications became a popular communication method in a number of settings [19,20], with the codesign workshops in this study being no exception. In this study, it was identified that the convenience of Zoom™ workshops enabled participation, for both student and staff participants. In addition to this, Anene and Idiedo [19] highlighted that participating in Zoom™ workshops from the comfort of home and eliminating travel risks were other benefits that came from using Zoom™ as a platform for conducting workshops. An interesting finding in this study was how students perceived the workshops via Zoom™. One participant in this study did not think that the workshops would be engaging based on previous classroom experiences. However, this participant reported the codesign workshop experience to be engaging, which is supported by the experiences of Kent, George, Lindley, and Brock [20], whereby the online teaching workshops were considered engaging. These findings highlight that the quality of the facilitation of online workshops plays a key role in the engagement of participants and needs to be considered when planning for codesign workshops. 
Building collaborative partnerships between service providers and consumers is a hallmark feature of codesign, and engaging students as partners in the codesign process provides students a voice in the development of curriculum [21]. Not only can students act as co-creators, but they also offer an expert voice [22]. This was demonstrated in this study, whereby faculty staff were impressed with the amount of insight students had and the contemporary ideas they brought to the development of the oral healthcare educational intervention. While the piece by Cook-Sather et al. [23] discussed the integration of students in the publication process, they also highlighted the unique voice that students bring to co-creation and the importance of the expertise that students bring to a partnership. This is especially true in this study, whereby students were not only innovative in their approach but were contemporaneous with their ideas, adding richness to the experience of codesign as well as the discussions conducted in the workshops.

Strengths and Limitations

A limitation of this study is that the sample did not include men, as the only male participant that responded did not attend the first workshop. Another limitation of this study is that the sample size was small, particularly with student participants, thereby potentially lowering the representation of the entire student cohort. The implication of this is that the results generated in this study are specific to this study; other studies may generate further insights, as different student cohorts may have differing needs and ideas. Nevertheless, the codesign process itself is transferable across different settings based on need, which is a strength of this study. Another strength of this study was that the approach to codesign was novel and involved using a learning theory to underpin the study and a codesign framework to develop the intervention. Additionally, this study could be used by other researchers and educators to serve as a blueprint for developing educational interventions.

Future Areas of Research

There is a dearth of research on the evaluation of codesign processes within higher education. Further research in this area may assist researchers in the planning and undertaking of codesign in a manner which is meaningful. Another area of further research is to evaluate the effectiveness of the oral healthcare intervention developed in this codesign process through a pre-post-test study.

Conclusions

Employment of a codesign approach was beneficial in the development of an oral healthcare intervention. Good preparation, planning, and facilitation achieved successful outcomes from the codesign process. The use of the six-step process allowed both staff and students to engage with the material and participate in rich, meaningful conversations. Each participant brought their unique experiences and expertise to the discussion, which led to collaboration between nursing students and faculty in the development of a novel oral healthcare educational intervention within an Australian nursing education context.

Institutional Review Board Statement: The study was conducted in accordance with the Declaration of Helsinki and approved by the Ethics Committee of Western Sydney University (protocol code H14177, 16/12/2020).

Informed Consent Statement: Informed consent was obtained from all subjects involved in the study.

Data Availability Statement: Not applicable.
Hilbert-Kunz Functions for Normal Rings

Let $(R, \mathfrak{m}, k)$ be an excellent, local, normal ring of characteristic $p$ with a perfect residue field and $\dim R = d$. Let $M$ be a finitely generated $R$-module. We show that there exists a real number $\beta(M)$ such that $\lambda(M/I^{[q]}M) = e_{HK}(M)\,q^d + \beta(M)\,q^{d-1} + O(q^{d-2})$.

In the situation of Theorem 1 it sometimes happens that $\beta(M) = 0$, for instance when $M = R$ (or more generally when $M$ is torsion-free). Our results establish that $\beta(M) = 0$ whenever $M$ is torsion-free and the class group of $R$ is torsion. In particular this holds when $(R, \mathfrak{m}, k)$ is a complete normal two-dimensional ring and $k$ is finite; see Corollary 2.2.

1. We make use of various facts about divisor classes in integrally closed Noetherian domains. Our reference is [Bo], and we shall need in particular Proposition 18 and Theorem 6 of Chapter VII, Section 4. Let $R$ be an integrally closed Noetherian domain. A Weil divisor on $R$ is an element of the free abelian group on the height 1 primes of $R$. A principal Weil divisor is a divisor of the form $\sum_P \operatorname{ord}_P(f) \cdot P$ with $f \neq 0$ in the field of fractions of $R$. $C(R)$ is the quotient of the group of Weil divisors by the subgroup of principal divisors. Let $M$ be a finite $R$-module. Then $M$ admits a filtration with quotients (isomorphic to) $R/P_i$ where each $P_i$ is prime. Consider the Weil divisor $-\sum P_i$, the sum extending over those $P_i$ that are of height 1. The image of this divisor in $C(R)$ is independent of the choice of filtration, and is denoted by $c(M)$. The map $c$ is additive on exact sequences and $c(R) = 0$. If $P$ is a height 1 prime of $R$, the exact sequence $0 \to P \to R \to R/P \to 0$ shows that $c(P) = P$. Suppose now that we are in the situation of Theorem 1 of the introduction.

Lemma 1.1. Let $(R, \mathfrak{m}, k)$ be a local ring of characteristic $p$. If $T$ is a finitely generated torsion $R$-module with $\dim T = \ell$, then $\lambda(\operatorname{Tor}_1^R(R/I_n, T)) = O(q^\ell)$.

Tensor with $T$ and look at the following portion of the long exact sequence of Tors. We have $\lambda(\operatorname{Tor}_1^R(R/J_n, T)) = O(q^\ell)$ by induction. Also, since $J : u = \mathfrak{m}$, we have $\mathfrak{m}_n \subseteq J_n : u^q$ and $\lambda(\operatorname{Tor}_0^R(R/(J_n : u^q), T)) \leq \lambda(\operatorname{Tor}_0^R(R/\mathfrak{m}_n, T))$. But $\lambda(\operatorname{Tor}_0^R(R/\mathfrak{m}_n, T))$ is the Hilbert-Kunz function for $T$, so $\lambda(\operatorname{Tor}_0^R(R/\mathfrak{m}_n, T)) = O(q^\ell)$. We have reduced to the case where $\lambda(I/(x_1, \dots, x_d)) = 0$. We need a theorem which is implicitly in Roberts [Ro] and explicitly given as Theorem 6.2 in [HH] (see also [Se, p. 278] for a theorem quickly giving an alternative proof with a sharper result on the growth of the size of the Koszul groups):

Theorem. Let $(R, \mathfrak{m})$ be a local ring of characteristic $p$ and let $G_\bullet$ be a finite complex $0 \to G_n \to \cdots \to G_0 \to 0$ of length $n$ such that each $G_i$ is a finitely generated free module, and suppose that each $H_i(G_\bullet)$ has finite length. Suppose that $M$ is a finitely generated $R$-module. Let $d = \dim M$. Then there is a constant $C > 0$ such that $\ell(H_{n-t}(M \otimes_R F^e(G_\bullet))) \leq Cq^{\min(d,t)}$ for all $t \geq 0$ and all $e \geq 0$, where $q = p^e$.

Consider $K_\bullet((x); R)$, the Koszul complex on $(x_1, \dots, x_d)$. Let $H_\bullet((x); R)$ denote the homology of the Koszul complex. We apply the above theorem to conclude that there exists a constant $C > 0$ such that $\lambda(H_{d-t}(T \otimes F^n(K_\bullet))) \leq Cq^{\min(\ell,t)}$ for all $t$ and for all $n$. Hence $\lambda(\operatorname{Tor}_1^R(R/I_n, T)) \leq Cq^\ell$, which gives the stated result. (To see this, note that $F^n(K_\bullet)$ is exactly the Koszul complex on the generators of $I$ raised to the $q = p^n$ power, so that both of the complexes $F^n(K_\bullet)$ and the minimal free resolution of $R/I_n$ begin with the same two free modules.)
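The comparison invoked in the parenthetical can be made explicit. The following LaTeX fragment is a sketch of that comparison; the lifting step is our reconstruction and is not spelled out in the text, so it should be read as an assumption consistent with the argument above.

```latex
% Sketch: why the Koszul bound controls Tor_1 (our reconstruction).
% Assumptions: F_* is the minimal free resolution of R/I_n and
% K_* = F^n(K_*((x);R)); the two complexes agree in homological degrees 0 and 1.
\begin{align*}
\operatorname{im}(K_2 \to K_1) &\subseteq \ker(K_1 \to K_0)
  = \operatorname{im}(F_2 \to F_1)
  && \text{($F_*$ is exact in degree 1),}
\end{align*}
so $K_2 \to K_1$ factors through $F_2 \twoheadrightarrow \ker(F_1 \to F_0)$.
Tensoring with $T$ preserves this factorization, hence
\begin{align*}
\operatorname{Tor}_1^R(R/I_n, T)
  &= \ker(T \otimes F_1 \to T \otimes F_0)\,\big/\,\operatorname{im}(T \otimes F_2)
  \ \text{is a quotient of}\ H_1(T \otimes K_*),\\
\lambda\bigl(\operatorname{Tor}_1^R(R/I_n, T)\bigr)
  &\le \lambda\bigl(H_1(T \otimes F^n(K_\bullet))\bigr)
   \le C\,q^{\min(\ell,\,d-1)} = C\,q^{\ell},
\end{align*}
% the last equality because T is torsion, so \ell \le d-1.
```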
Proof. The primary decomposition theorem shows that $J$ is an intersection of symbolic powers of finitely many height one primes $P_i$.

Proof. There is an exact sequence $0 \to K \to R^{r+s} \to M \to 0$ for some $K$ and $s \geq 0$. Then $K$ is torsion-free of rank $s$ and $c(K) = 0$. By Lemma 1.3, $e_n(K) \leq s\,e_n(R) + O(q^{d-2})$. Evidently $e_n(K) + e_n(M) \geq (r+s)\,e_n(R)$. So $e_n(M) \geq r\,e_n(R) + O(q^{d-2})$; Lemma 1.3 provides the opposite inequality.

To make further progress we shall use the $p$th power map $F : R \to R$, assuming $R$ is complete with perfect residue field. In this case $F$ is finite of degree $p^d$. Given a finite map $R \to R'$ between integrally closed Noetherian domains, we obtain induced norm maps from Weil divisors on $R'$ to Weil divisors on $R$ and from $C(R')$ to $C(R)$. For $F : R \to R$ we claim that these norm maps are just multiplication by $p^{d-1}$. For if $P$ is a height 1 prime of $R$, the only prime lying over $P$ is $P$ itself, and the ramification degree is evidently $p$. So the residue class field degree is $p^{d-1}$ by the discussion of Section 4.8, Chapter VII, page 535 in [Bo], and then the norm of $P$ is $p^{d-1} \cdot P$ by the same discussion. If $M$ is a finitely generated $R$-module of rank $r$, let ${}^1M$ be $M$ as additive group, but with the $R$-action twisted through $F$ (so $a \cdot m = a^p m$ for $a \in R$, $m \in M$).

Proof. We may complete $R$ without changing the hypotheses or conclusions, and henceforth we assume that $R$ is complete. Since the norm map $C(R) \to C(R)$ induced by $F : R \to R$ is multiplication by $p^{d-1}$, Proposition 18, Section 4.8, Chapter VII of [Bo] shows that $e_{n+1}(M) - p^d\,e_n(M) = \tau\,q^{d-1} + O(q^{d-2})$ for some real $\tau$. The remarks after Definition 1.7 tell us that $\tau$ is independent of the choice of $M$ and that $c \mapsto \tau$ is a homomorphism. We remark that it is immediate from this corollary that $\tau$ is the zero map whenever the class group of $R$ is torsion.

Theorem 1.11. Let $(R, \mathfrak{m}, k)$ be an excellent, local, normal ring of characteristic $p$ with a perfect residue field. Let $\dim R = d$. Then there exists $\beta(R) \in \mathbb{R}$ such that $e_n(R) = e_{HK}(I; R)\,q^d + \beta(R)\,q^{d-1} + O(q^{d-2})$.

Proof. Taking $M = {}^1R$ in Theorem 1.9 we find that $e_{n+1}(R) - p^d\,e_n(R) = \tau\,q^{d-1} + O(q^{d-2})$, and arguing as in the proof of Theorem 1.9 we find that $e_n(R) = \alpha(R)\,q^d + \beta(R)\,q^{d-1} + O(q^{d-2})$ for suitable real constants. Clearly $\alpha(R) = e_{HK}(I; R)$ is forced.

Proof. We again complete $R$ and assume it is complete. Suppose first that $M$ is torsion-free. Then the result follows from Theorems 1.9 and 1.11. In general there is an exact sequence $0 \to T \to M \to M' \to 0$ with $T$ torsion and $M'$ torsion-free. Since $T$ has dimension $\leq d-1$, [Mo1] shows that $e_n(T) = cq^{d-1} + O(q^{d-2})$ for some $c \geq 0$, and the result for $M'$ yields the result for $M$.

A corollary of the above results gives us similar growth conditions on certain Tor modules.

Corollary 1.13. Let $(R, \mathfrak{m}, k)$ be an excellent, local, normal ring of characteristic $p$ with perfect residue field and with $\dim R = d$. Let $T$ be a torsion $R$-module. Then there exists a real number $\beta$ such that $\lambda(\operatorname{Tor}_1^R(T, R/I_n)) = \beta\,q^{d-1} + O(q^{d-2})$.

Proof. We may complete $R$ and henceforth assume $R$ is complete. Consider an exact sequence $0 \to M \to R^s \to T \to 0$, where $M$ is torsion-free. The long exact sequence on Tor after tensoring with $R/I_n$ shows that $\lambda(\operatorname{Tor}_1^R(T, R/I_n)) = e_n(M) + e_n(T) - s\,e_n(R)$.
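To spell out the last computation: tensoring the displayed sequence with $R/I_n$ gives a four-term exact sequence of finite-length modules whose alternating sum of lengths vanishes. A minimal worked version, under the assumption that $\operatorname{Tor}_1^R(R^s, R/I_n) = 0$ (as $R^s$ is free), is:

```latex
% Worked length count for Corollary 1.13, assuming the presentation
% 0 -> M -> R^s -> T -> 0 with M torsion-free (so rank M = s, T torsion).
\begin{gather*}
0 \to \operatorname{Tor}_1^R(T, R/I_n) \to M/I_nM \to (R/I_n)^s \to T/I_nT \to 0,\\
\lambda\bigl(\operatorname{Tor}_1^R(T, R/I_n)\bigr) - \lambda(M/I_nM)
  + s\,\lambda(R/I_n) - \lambda(T/I_nT) = 0,\\
\lambda\bigl(\operatorname{Tor}_1^R(T, R/I_n)\bigr)
  = e_n(M) + e_n(T) - s\,e_n(R).
\end{gather*}
% By the preceding result for torsion-free M and by [Mo1],
% e_n(M) = s*e_HK(I;R) q^d + beta(M) q^{d-1} + O(q^{d-2}) and
% e_n(T) = c q^{d-1} + O(q^{d-2}); the q^d terms cancel, leaving
% beta q^{d-1} + O(q^{d-2}) on the right-hand side.
```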
When $d = 2$ there are more general results. In particular the following Lemma seems to be known to experts, and we thank M. Artin and J. Lipman for pointing out relevant references and facts.

Lemma 2.1. Suppose that $(R, \mathfrak{m}, k)$ is a complete local normal two-dimensional ring, and $k$ is the algebraic closure of the field with $p$ elements. Then $C(R)$ is a torsion group.¹

¹ A recent preprint of H. Brenner [Br1] shows that the Hilbert-Kunz multiplicity of the ring is rational in the two-dimensional graded case. This result was obtained independently by V. Trivedi [Tr]. In another more recent preprint [Br2] Brenner proves that $e_n(R) = \alpha q^2 +$ (an eventually periodic function of $n$) in the two-dimensional graded case over the algebraic closure of a finite field.

Proof. The proof depends on the numerical theory of exceptional divisors (treated in full generality by Lipman), and arguments of Artin. An exposition is given by H. Göhner in [Gö], Section 4, pages 423-426, which is independent of the rest of Göhner's paper. Note in particular the first part of Theorem 4.4 and Corollary 4.5 in this paper. The hypothesis that there is a desingularization $f : X \to \operatorname{Spec}(R)$, made at the beginning of Section 4, is satisfied in this case; see [Li2].

Corollary 2.2. Suppose that $(R, \mathfrak{m}, k)$ is a complete local normal two-dimensional ring, and $k$ is finite. Then $\tau$ is the zero map.

Remark 2.3. For general algebraically closed $k$ there is an analog of Lemma 2.1. We adopt the notation of [Gö]. By (*) on page 425 there is an exact sequence $0 \to \operatorname{Pic}^0(X) \to C(R) \to H \to 0$ with $H$ finite; see page 425 for the definition of $\operatorname{Pic}^0(X)$. To prove Lemma 2.1, Göhner uses Artin's result that there is a filtration of $\operatorname{Pic}^0(X)$ with each quotient isomorphic to either the additive group of $k$, $k^*$, or the group of $k$-valued points of the Jacobian variety of an irreducible component of the reduced special fibre of $f$. Somewhat more is true. There is a connected algebraic group $G$ defined over $k$, built out of copies of the additive group, the multiplicative group and the above Jacobians, such that $\operatorname{Pic}^0(X)$ identifies with $G_k$. For more information concerning this topic, see [Li1], in particular Theorem 7.5.

Remark 2.4. We believe that Corollary 2.2 holds even when $k$ is infinite. Here's an intuitive argument. Suppose that $P$ and $Q$ are in some sense "generic points" of $G_k = \operatorname{Pic}^0(X)$. Because the definition of $\tau$ is purely algebraic, $\tau(P) = \tau(Q)$. Since the various $P - Q$ with $P$ and $Q$ generic generate $G_k$, at least when $k$ is large enough, $\tau$ vanishes on the subgroup $\operatorname{Pic}^0(X)$ of $C(R)$ of finite index.

Remark 2.5. The third author has made the idea of the above remark into a simple proof when $R$ is the homogeneous coordinate ring of a smooth projective curve, localized at the homogeneous maximal ideal. In particular when $R = k[[x_1, x_2, x_3]]/(F)$, $F$ a smooth form, $\tau$ is the zero map. As we've noted this is also true for 5 or more variables; the 4 variable case remains open.
2014-10-01T00:00:00.000Z
2004-04-01T00:00:00.000
{ "year": 2004, "sha1": "db21340cf8a666c479c5d42f9ade5ff4d5adb0eb", "oa_license": null, "oa_url": "http://www.intlpress.com/site/pub/files/_fulltext/journals/mrl/2004/0011/0004/MRL-2004-0011-0004-a011.pdf", "oa_status": "BRONZE", "pdf_src": "Arxiv", "pdf_hash": "db21340cf8a666c479c5d42f9ade5ff4d5adb0eb", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
239361433
pes2o/s2orc
v3-fos-license
What can the physiotherapist do for the child in palliative care? The World Health Organization defines Palliative Care as "an approach that improves the quality of life of patients (adults and children) and their families facing problems associated with life-threatening diseases". The physiotherapist in palliative care aims to improve quality of life and social life through interventions that functionally rehabilitate the patient, as well as by assisting the caregiver to cope with the rapid advance of the disease.

INTRODUCTION

Technological progress in pediatrics has brought great advances in all specialties, such as in neonatology, where low birth weight newborns have increasing survival rates, and in oncology, where new therapeutic managements appear, enabling mortality reduction in children with cancer. 1 However, researchers are reporting a growing prevalence of chronic, degenerative and oncological diseases among children worldwide. 2 Children with complex or incurable diseases and life-threatening or life-limiting conditions require frequent hospitalizations, consultations and examinations, which poses a major challenge for healthcare authorities in many countries. 3,4 The World Health Organization (WHO) defines Palliative Care (PC) as "an approach that improves the quality of life of patients (adults and children) and their families when facing problems associated with life-threatening diseases. It helps prevent and relieve suffering through early identification, correct assessment and treatment of pain and other problems, whether physical, psychosocial or spiritual." 5 PC applies to six conditions: children in whom curative treatment is possible but may fail; children in need of long-term intensive care; children for whom there is no hope of improvement, with the goal of treatment being fully palliative and likely to last for years; children with severe neurological damage, leading to greater vulnerability and complications; newborns with limited life expectancy; and relatives of children who suffered trauma, sudden infant death or early newborn death. 6 PC is not limited to specialized units but can be provided in a variety of settings, including inpatient wards, clinics, and home care. The physical therapist takes care of the child during all treatment stages and may work anywhere from the hospital to the child's home, depending on the patient's needs and clinical condition. Thus, children avoid isolation, and parents can maintain their lifestyle and are usually recognized as part of the palliative care team. 7,8 Studies show that physical therapy in PC aims at improving quality of life and social life through approaches that rehabilitate the patient, as well as at helping the caregiver to cope with the rapid progress of the disease, and that it is effective in addressing many associated symptoms, including cancer-related fatigue, pain, poor appetite, depression, dyspnea, and pulmonary hypersecretion. 9,10,11,12 Children get bored easily, so for the physical therapist to reach these goals, a playful treatment is required. Physical therapy procedures should be adapted to the patient's age group and mainly aim to delay clinical evolution and prevent secondary complications. 13

PAIN

Pain is one of the most common symptoms experienced by children receiving PC.
14 It occurs in individuals who experience a series of physical, psychological, social and spiritual discomforts, such as skin lesions, unpleasant odors, anorexia, insomnia, fatigue, grief and depression, among others, and it should be controlled because it generates disability, regardless of the underlying disease. 15,16 Manual resources, physical agents and orthotic devices can minimize the symptomatic perception of pain. Among the physical therapy modalities, we can list:

• Kinesiotherapy: movement is used as a means of treatment, providing mobility, muscle flexibility, coordination, increased muscle strength and resistance to fatigue.

• Electrotherapy: consists of the use of electric current through electrodes applied directly to the skin for therapeutic purposes, promoting analgesia by activating the pain-suppressing system and producing a sensation that interferes with pain perception.

• Thermotherapy: this treatment modality enables vasodilation, muscle relaxation, improved metabolism and local circulation, extensibility of soft tissues, alteration of tissue viscoelastic properties and inflammation reduction. It is noteworthy that superficial heat thermotherapy is contraindicated when applied directly over tumor areas.

• Massage Therapy: consists of a series of massage techniques that can induce a relaxation response, increase blood and lymphatic circulation, potentiate analgesic effects, increase endogenous endorphin release, and provide competing sensory stimuli that override pain signals. Studies suggest that massage has beneficial effects on pain and mood among patients with advanced cancer.

Such resources may be used in association with acupuncture, relaxation and breathing techniques. 17,18,19

FUNCTIONAL MOBILITY

Functional decline is common in patients dealing with advanced or end-stage systemic diseases. Identifying the cause of functional decline is useful in determining the prognosis for functional recovery. 20 However, family members, caregivers and even healthcare professionals unnecessarily restrict many patients in PC when they are still able to perform their activities and keep their independence. The reinsertion of the patient into their activities of daily living restores the will to live and dignity. 21,22

ADAPTATIONS

Adaptations and the use of orthoses and walking aids are often indicated to favor the patient's functionality and autonomy and to decrease pain perception. These types of devices can be used permanently or temporarily; their use aims to align, prevent and/or correct possible deformities, granting the patient greater limb functionality and preserving mobility and autonomy. 12,23

RESPIRATORY COMPLICATIONS

Bedridden patients have pulmonary secretion buildup due to decreased mucociliary transport and weakened cough. 24 Pulmonary changes such as dyspnea, atelectasis, accumulation of secretions and other ventilatory symptoms or complications can be prevented, treated or alleviated by respiratory physiotherapy, e.g., ventilatory patterns and diaphragmatic awareness, airway clearance maneuvers, reexpansion maneuvers, postural orientation, relaxation techniques, oxygen therapy, and noninvasive positive pressure ventilation.
25 Noninvasive positive pressure ventilation can be used in patients with ventilatory failure in three situations: as life support that does not limit other curative approaches; as life support when patients and family members decide not to undergo endotracheal intubation; and as a palliative measure when patients and family members decide to avoid all life support, receiving only comfort measures. 26

FINAL CONSIDERATIONS

Currently, little is known about the experiences, beliefs and knowledge regarding PC among physical therapists in Brazil, especially when it comes to pediatric palliative care (PPC). Although PPC policies and services have been developed, research in this area lags behind. Given the potential benefit of adding physical therapy to PC, it is necessary to involve physical therapists in the discussion of topics associated with humanization, death and PC, and to promote further studies in the field of pediatrics, in order to optimize their performance and thus contribute to the multiprofessional, integrated treatment needed for the care of these children.
2020-04-16T09:14:51.808Z
2019-01-01T00:00:00.000
{ "year": 2019, "sha1": "5f3f2d23b1b241985ea1d22e1209e8dfb7d9b6da", "oa_license": "CCBY", "oa_url": "https://doi.org/10.25060/residpediatr-2019.v9n3-34", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "e917401e675b917f4ef93f05deffdc89cae41d9a", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
233893372
pes2o/s2orc
v3-fos-license
Modeling Transmission Dynamics and Risk Assessment for COVID-19 in Namibia Using Geospatial Technologies

SARS-CoV-2 infections continue to increase in Namibia and globally. Assessing and mapping the COVID-19 risk zones and modeling the response of COVID-19 under different scenarios are vital to help decision-makers estimate the immediate number of resources needed and plan for future interventions of COVID-19 in the area of interest. This study aims to identify and map COVID-19 risk zones and to model the future COVID-19 response of Namibia using geospatial technologies. Population density, current COVID-19 infections, and a spatial interaction index were used as proxy data to identify the different COVID-19 risk zones of Namibia. The COVID-19 Hospital Impact Model for Epidemics (CHIME) V1.1.5 tool was used to model future COVID-19 responses with mobility restrictions. Weights were assigned to each thematic layer and to the thematic layer classes using the Analytical Hierarchy Process (AHP). A weighted overlay analysis in ArcGIS was then conducted to produce the risk zones. Current COVID-19 infection and the spatial mobility index were found to be the dominant and sensitive factors for risk zoning in Namibia. The modeling results revealed that reducing mobility by 30% within the country had a notable effect on controlling COVID-19 spread: a flattening of the peak number of cases and a delay to the peak. Six different COVID-19 risk zones were identified in the study area, namely highest, higher, high, low, lower, and lowest. The research output could help policy-makers estimate the immediate number of resources needed and plan for future interventions of COVID-19 in Namibia, especially to assess the potential positive effects of mobility restriction.

Introduction

The coronavirus disease 2019 (COVID-19) is a contagious disease caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) (Huang et al. 2020). This virus, which is believed to have originally circulated in wild animals, has a transmission route similar to that of the severe acute respiratory syndrome (SARS) virus. It is a respiratory illness with clinical symptoms such as cold, throat infection, cough, fever, and difficulty in breathing (Huang et al. 2020). The outbreak of COVID-19 was first reported on 31 December 2019, in Wuhan, China (WHO 2020). The virus spread rapidly throughout China, and within one month several other countries, including Italy, the United States, Germany, and the United Kingdom, reported their first cases (Giovanetti et al. 2020; CDC 2020; Rothe et al. 2020). The rapid outbreak and development of the epidemic are attributed to the disease's long incubation period, high infectivity, and difficulty of detection (Franch-Pardo et al. 2020). In Africa, the first reported case was in Egypt on 25 February 2020. Since then, the outbreak has spread across the continent (Rasheed et al. 2020). Namibia reported its first case on 14 March 2020, and by the end of June 2020 the total number of confirmed cases remained below 300. However, it started to increase exponentially from the beginning of July. By the end of July 2020, the number of confirmed cases had reached 2052 (Worldometer 2020). As the curve of confirmed cases increases exponentially, Namibia, like other African countries, worries that its health care system could be overwhelmed.
The World Health Organization (WHO 2020) released guidelines to be used to slow down transmission and manage confirmed cases, such as case isolation, contact tracing and quarantine, physical distancing, hygiene measures, and improving the health care system. Due to the rapid and continuous spread of the COVID-19 epidemic, several countries and regions all over the world have been forced to take emergency measures such as closing cities, stopping production, suspending school classes, and restricting population movement, causing great harm to economic development and residents' health (An and Jia, 2020). Namibia enforced a lockdown of cities from March 2020 and cascaded further control guidelines to its regions. However, from May 2020 it started to relax some measures stage by stage to save its economy (Republic of Namibia 2020). It has now emerged that the epicenter in Namibia has shifted to the Erongo Region, with more than 50% of the current confirmed cases (Worldometer 2020). Understanding the future dynamics of the disease is important for public health planning and readiness. Several multidisciplinary studies on epidemic spread have been carried out and have achieved fruitful results, which are of great guiding significance for the prevention and control of the epidemic (Franch-Pardo et al. 2020). A comprehensive review by Franch-Pardo et al. (2020) highlighted the importance of health geography in examining health policy interventions, control, and mapping/tracking through the projection of spatial diffusion and temporal trends. To achieve this, geographic information systems (GIS) are currently recognized as a set of strategic and analytic tools for analyzing disease spread and for shaping management strategies to allocate resources in both developed and developing countries (Wondim et al. 2017). Geography disciplines offer a synthetic approach to the interplay between biophysical and human variables (Turner 2002), and hence the spatial and temporal changes of the COVID-19 epidemic spread are a scientific problem worth studying (Xie et al. 2020). The COVID-19 pandemic has a spatial dimension that leads to understanding the transmission phenomenon as geographical and potentially mappable, and hence the need for the ability to cross variables of different kinds to interpret the COVID-19 phenomenon, its spatial analysis and spatiotemporal dimensions, its geographical impact on decision-making and everyday life, and predictive modeling of the evolution of the disease (Franch-Pardo et al. 2020). For these reasons, the use of geospatial and statistical tools has become particularly relevant since the declaration of COVID-19 as a global pandemic. Mapping COVID-19 cases helps in understanding the spatial distribution of the disease in an area as well as its temporal occurrence, and in forecasting its future burden. Mapping is also used to locate the areas where outbreaks originate and to effectively target high-risk areas for early prevention and control. Despite various challenges with data sources, many countries and regions have published epidemic spatial models in real time by making use of available data whilst estimating other quantities from available information (Xie et al. 2020). Sarfo and Karuppannan (2020) assert that geospatial techniques are a tool for best practices in fighting COVID-19. In their study, they employed geospatial technologies in Ghana to model trends and mobility patterns.
The results forecasted future spread through the middle parts and then the northern parts of the country. Another study, by Ekumah et al. (2020), used a mixture of multivariate statistical and geospatial analyses to investigate the risk of COVID-19 infection in relation to how household family structure is associated with in-house access to basic needs in Sub-Saharan Africa (SSA). They used geo-maps to show the high spatial heterogeneity of in-house access to basic needs in SSA. Since the beginning of the pandemic, little geospatial research on COVID-19 has been done in Africa, despite evidence of its application in most parts of the world. This study therefore uses geospatial technologies to model current and future situations of COVID-19 in Namibia.

Namibia

This study was carried out in Namibia (Fig. 1). Namibia is part of Southern Africa and has a land area of 825,419 km². Namibia is bordered by Angola and Zambia to the north, Zimbabwe to the northeast, South Africa to the south, Botswana to the east, and the Atlantic Ocean to the west, with a heterogeneous population of about 2.4 million. There are 14 administrative regions in Namibia, with the capital town being Windhoek. Khomas and Ohangwena are the most populous regions. Transport systems, i.e., harbors, airports, roads, and railways, including the center of each region of Namibia, are shown in Fig. 1.

COVID-19 Risk Assessments and Mapping

To generate a risk map which might help in Namibia's COVID-19 fight, data on COVID-19 confirmed cases, population density, and a spatial interaction index were collected to prepare different thematic maps. The population density and Namibian road network data were obtained from the Namibia Statistics Agency (NSA), National Spatial Data Infrastructure website (http://geofind.nsa.org.na/about). Data on COVID-19 confirmed cases were obtained from the Namibia Health Ministry COVID-19 dashboard (https://namibia.unfpa.org/en/events/health-ministry-launches-covid-19dashboard). The spatial interaction index was generated from road network data using ArcGIS Pro. After all the necessary thematic map layers were prepared, each thematic layer was ranked and weighted, based on its relative influence on the spread of COVID-19, using an AHP pair-wise comparison matrix. Finally, all the prepared thematic layers were integrated using the weighted overlay tool in ArcGIS to generate the COVID-19 risk assessment and map of the study area.

COVID-19 Response Modeling

COVID-19 response modeling was performed using the ArcGIS Pro COVID-19 modeling toolbox, specifically the COVID-19 Hospital Impact Model for Epidemics (CHIME) V1.1.5 tool. This tool leverages SIR (Susceptible, Infected, Recovered) modeling to assist hospitals, cities, and regions with capacity planning around COVID-19 by providing estimates of daily new admissions and current inpatient hospitalizations (census), ICU admissions, and patients requiring ventilation (COVID-19 response CHIME Model v1.1.5 manual, 2020). The CHIME tool projects over a minimum of 30 days and a maximum of 365 days; however, short-period projections are recommended (COVID-19 response CHIME Model v1.1.5 manual, 2020). In this study, 60- and 90-day projections with and without social distancing were made to analyze the COVID-19 response, which might be used by all responsible organizations for better control of the disease and resource management.
The tool uses parameters that describe the healthcare system or region being analyzed as well as spread and contact input information for the disease. Spread and contact input information can be specified either as fields in the Input Feature Class or as constant values (COVID-19 response CHIME Model v1.1.5 manual, 2020). All data to run the model were obtained from the Namibia Ministry of Health website (https://mfl.mhss.gov.na/location-manager/locations) and from tangible information in media briefings by the Ministry of Health.

Discrete-Time SIR Modeling of Infection/Recovery

The model consists of individuals who are susceptible (S), infected (I), or recovered (R). The epidemic proceeds via a growth and decline process. This is the core model of infectious disease spread and has been in use in epidemiology for many years. The dynamics are given by the following three equations (Weisstein 2019):

S(t+1) = S(t) − βS(t)I(t),
I(t+1) = I(t) + βS(t)I(t) − γI(t),
R(t+1) = R(t) + γI(t).

Parameters

The model's parameters, β and γ, determine the severity of the epidemic. β can be interpreted as the effective contact rate: β = τ × c, which is the transmissibility τ multiplied by the average number of people exposed, c. The transmissibility is the basic virulence of the pathogen. The number of people exposed, c, is the parameter that can be changed through social distancing. γ is the inverse of the mean recovery time, in days; i.e., if γ = 1/14, then the average infection will clear in 14 days. An important descriptive parameter is the basic reproduction number, R0. This represents the average number of people who will be infected by any given infected person. When R0 is greater than 1, the disease will grow. A higher R0 implies more rapid transmission and more rapid growth of the epidemic. It is defined as R0 = β/γ. R0 is larger when the pathogen is more infectious, when people are infectious for longer periods of time, and when the number of susceptible people is higher. A doubling time of 6 days and a recovery time of 14.0 days imply an R0 of 2.71 (Weisstein 2019). After the beginning of the outbreak, actions to reduce social contact will lower the parameter c. If this happens at time t, then the effective reproduction rate is Rt, which will be lower than R0 (Weisstein 2019).

The Analytical Hierarchy Process (AHP)

Multi-criteria decision analysis using the analytical hierarchy process (AHP) is the most common and well-known GIS-based method for delineating risk zones. This method helps to integrate all thematic maps. A total of three different thematic layers were considered for this study. These three thematic layers are assumed to control COVID-19 spread in the area. The influencing factors are weighted according to their bearing on COVID-19 spread and expert opinion. A parameter with a high weight illustrates a layer with high impact, and a parameter with a low weight illustrates a small impact. The weights of each parameter were assigned according to Saaty's scale (1-9) of relative importance values shown in Table 1 (Saaty 1995). As per this classification, weights are assigned to the thematic layers based on their importance. Accordingly, all the thematic layers have been compared with each other in a pair-wise comparison matrix

A = (a_ij), i, j = 1, 2, 3, … n,

where A is a pair-wise comparison matrix of alternatives A_i, i = 1, 2, 3, … n, with respect to criterion K. The sub-classes of the thematic layers were re-classified using the natural breaks classification method in the GIS platform for assigning weights.
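Returning to the SIR parameters described above, the following minimal Python sketch (ours, not the CHIME source code; the population size and the 30% contact reduction are illustrative assumptions) iterates the discrete SIR map and recovers R0 ≈ 2.71 from a 6-day doubling time and a 14-day recovery time:

```python
# Minimal discrete-time SIR sketch (ours, not the CHIME implementation).
# The population size and the 30% contact reduction are illustrative assumptions.
N = 2_400_000                        # approximate population of Namibia
gamma = 1.0 / 14.0                   # inverse of the 14-day mean recovery time
growth = 2.0 ** (1.0 / 6.0) - 1.0    # daily growth rate for a 6-day doubling time
beta = (growth + gamma) / N          # effective contact rate per susceptible person
print(f"R0 = {beta * N / gamma:.2f}")  # prints R0 = 2.71, as quoted above

def peak_infections(beta, days=365):
    """Iterate the discrete SIR map one day at a time and track the peak of I."""
    s, i, r = N - 1.0, 1.0, 0.0
    peak = 0.0
    for _ in range(days):
        new_inf = beta * s * i
        s, i, r = s - new_inf, i + new_inf - gamma * i, r + gamma * i
        peak = max(peak, i)
    return peak

# A 30% reduction in social contact scales beta by 0.7 and flattens the peak.
print(f"peak infections, no distancing : {peak_infections(beta):,.0f}")
print(f"peak infections, 30% reduction : {peak_infections(0.7 * beta):,.0f}")
```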
The sub-class ranks of each thematic layer were allocated on a scale of 0-9 according to their relative influence on COVID-19 spread. For calculating the consistency ratio (CR) (Eq. 4), the following steps are adopted: (1) the principal eigenvalue (λ) was computed by the eigenvector technique, and (2) the consistency index (CI) was calculated from the equation given below:

CI = (λ − n) / (n − 1),

where n is the number of factors used in the analysis. The consistency ratio is defined as CR = CI/RCI, where RCI is the random consistency index, whose values were obtained from Saaty's standard (Table 2). Saaty has opined that a CR of 0.10 or less is acceptable to continue the analysis. If the consistency value is greater than 0.10, then there is a need to revise the judgments to locate the causes of inconsistency and correct them accordingly. If the CR value is 0, there is a perfect level of consistency in the pair-wise comparison. Here the threshold value of 0.1 is not exceeded, which means that the judgment matrix is reasonably consistent. (A small numerical sketch of this weighting-and-consistency computation follows the list below.)

Spatial Interaction Index

Because of their practical prediction performance, trend analyses using spatial interaction indices have been preferred for a few decades (Fotheringham and Webber 1980; Champion et al. 1998; Smith et al. 2001). The economy, job opportunities, or industrial structure of a region influence the regional population and its movement (Rogers 2008). Fundamentally, the properties of a region attract population, and their influence is in inverse proportion to distance. The condition of the regional industry decides the tendency of inter-regional migration, and its amount is determined by the population. The model is based on the assumption that people are willing to move to well-structured regions for jobs, markets, visits, etc. Those features experience spatial interactions, and the terms in the introduced model are modified to reflect them. The conventional gravity model is composed of the populations of the interacting regions, the distance, and a constant that decides the strength of the interaction (Smith et al. 2001). The typical form is expressed as

I_ij = G_ij (m_i m_j / d_ij),

where I_ij is the interaction from origin i to destination j; m_i and m_j are, respectively, the population functions of regions i and j; d_ij is the distance between regions i and j; and G_ij is a constant determined through statistics of movement from region i to j. In this study, the spatial interaction layer was created from road connectivity. To create the spatial interaction layer, the road network was built using ArcGIS. The spatial interaction index was created in the ArcGIS platform using the following procedure.

1. The Feature To Point tool was used to create points from the administrative boundary polygons of Namibia (Fig. 2).
2. The network spatial weights tool was run, setting the Input Feature Class to the Namibia administrative points and providing a network dataset. The driving distances from each point to every other point were computed. Inverse was selected for the Conceptualization of Spatial Relationships parameter, along with the do-not-Row-Standardize option (Fig. 2).
3. The Convert Spatial Weights Matrix To Table tool was used to export the inverse distances to a simple table (Fig. 2).
4. Summary Statistics was run to sum the inverse distance weights associated with each administrative boundary (Fig. 2).
5. Join Field was used to add the summed weights back to the administrative boundaries, and Alter Field was used to give the joined field an appropriate name such as "spatial interaction index" (Fig. 2).
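As a sketch of the AHP computation referenced above, the short Python fragment below builds a hypothetical 3 × 3 pairwise comparison matrix for the three layers (the judgment values are illustrative, not the ones used in the study), extracts the principal eigenvector as the weight vector, and checks the consistency ratio against Saaty's 0.10 threshold:

```python
import numpy as np

# Hypothetical 3x3 pairwise comparison matrix for the three layers
# (confirmed cases, spatial interaction index, population density).
# The judgment values are illustrative, not those used in the study.
A = np.array([[1.0, 2.0, 3.0],
              [1/2, 1.0, 2.0],
              [1/3, 1/2, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)     # Saaty's eigenvector technique
k = np.argmax(eigvals.real)
lam_max = eigvals[k].real               # principal eigenvalue
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                            # normalized layer weights

n = A.shape[0]
CI = (lam_max - n) / (n - 1)            # consistency index
RCI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12}[n]  # Saaty's random index
CR = CI / RCI
print("weights:", w.round(3), " CR = %.3f" % CR)  # CR <= 0.10 is acceptable
```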
The produced spatial interaction index (Fig. 2) was incorporated into the AHP analysis together with the other factors to predict the risk zones. The risk zones in this study can be interpreted only as areas where COVID-19 transmission and the number of affected people are high in comparison to the other regions of the country.

Results and Discussion

In the following sections, the results of the analysis are presented for each of the three factors controlling COVID-19, and thereby for the risk assessment and mapping. Results on modeling and projections of COVID-19 using the CHIME V1.1.5 tool for 60 and 90 days with no social distancing and with 30% social distancing are also presented.

Population Density

There are 14 administrative regions in Namibia, with the capital town being Windhoek. Khomas and Ohangwena are the most populous regions, while Karas and Omaheke are the least densely populated of the 14 regions of Namibia (Table 3). The population density thematic map was created after forecasting the 2016 population census of Namibia using a population growth rate of 1.19%. The population density data were re-classified into six groups in the ArcGIS Pro platform for overlay analysis (Fig. 3).

Total COVID-19 Cases in Namibia

Geographically, the spread of the pandemic in Namibia has spatial connotations. From Table 3, the Khomas Region is greatly affected and recorded the highest figures next to the hotspot Erongo Region (Fig. 2 shows the steps followed to create the spatial interaction index layer of Namibia). The Erongo and Khomas regions have accumulated 88.07% and 8.03% of the total COVID-19 cases in Namibia, respectively. The third-hardest-hit region makes up just 1.38% of the confirmed cases. Comparing the population distribution and COVID-19 cases (Table 3; Fig. 3), current information shows no direct link between infection trends and Namibia's regional population distribution; this might be because of the lockdown measures. Data on confirmed cases were then prepared and re-classified in the ArcGIS platform for overlay analysis (Fig. 3).

Mobility Patterns and Spatial Interaction Index

Population movement triggers transmission of COVID-19. In this study, the Namibian mobility index over the four stages of lockdown was analyzed using Google mobility data (https://www.google.com/covid19/mobility/). Google calculates these insights based on data from users who have opted in to Location History for their Google Account, so the data represent a sample of Google users. Google calculates changes for the Groceries & pharmacy, Retail & recreation, Transit stations, and Parks categories using as baseline the median value, for the corresponding day of the week, during the 5-week period 3 January to 6 February 2020. The Google mobility index revealed that the average social distancing in the country was 45, 21, 15, and 10% for the stage 1, stage 2, stage 3, and stage 4 lockdowns, respectively (Fig. 4). The stage 4 lockdown period was proposed to be extended up to 17 September 2020, but the stage 4 mobility change map (Fig. 4) was processed using Google mobility index data up to 8 May 2020. For the COVID-19 risk analysis, the spatial interaction layer of Namibia (Fig. 4) was produced from road connectivity data using ArcGIS Pro. The calculated interaction index was then re-classified into six groups for overlay analysis.

GIS Overlay Analysis for COVID-19 Risk Assessment and Mapping

All three thematic maps or layers (Fig. 3) were prepared in re-classified raster format and were given normalized weights (Fig. 5)
in accordance with the transmission of COVID-19. Similarly, each thematic layer's classes were given normalized ranks or weights. Then, overlay analysis was carried out with AHP using the weight vectors (Fig. 5). The risk map is a dimensionless quantity computed considering the weights for each layer and for the sub-classes within each thematic layer. After the overlay process was completed, the COVID-19 risk zone map for Namibia was classified as highest risk, higher risk, high risk, low risk, lower risk, and lowest risk (Fig. 6). The risk zone designated as highest is mostly found in the Erongo Region, which stretches from the Central Plateau across to the central Namibian coast in the west, with the Ugab River as the northern border. The higher COVID-19 risk zone is distributed along the Khomas region, which includes the capital Windhoek.

Future Trend of COVID-19 in Namibia (CHIME V1.1.5 Model)

The tempo and trend of COVID-19 were modeled using the ArcGIS Pro COVID-19 modeling toolbox, specifically the COVID-19 Hospital Impact Model for Epidemics (CHIME) V1.1.5 tool. The modeling was based on mobility dynamics, current COVID-19 cases, population dynamics, and the rate of SARS-CoV-2 infection, together with the Number of Currently Hospitalized COVID-19 Patients, Social Distancing % (Reduction in Social Contact), Hospitalization % (Total Infections), and ICU % inputs of the CHIME V1.1.5 tool. Different scenarios were analyzed and projected for the next 60 and 90 days from the starting date of 8 May 2020, within the lockdown periods in Namibia. After the lockdown ended on 17 September 2020, the observed data of 18 September 2020 were used for the next 3-month projections. The metrics explain the maximum difference between projected needs and available resources, including the maximum difference as a total and as a percent, the day and date on which the highest difference occurred, and the number of days on which total projected needs exceeded available resources. (Fig. 3 shows the population density, confirmed cases, and spatial interaction index used in the overlay analysis to produce the risk map.) The model was processed in four different scenarios to analyze the spread and effect of COVID-19: (1) projections with no social distancing for the next 60 and 90 days, and (2) projections with 30% social distancing for the next 60 and 90 days. The effectiveness of social distancing interventions in delaying or flattening the epidemic curve of coronavirus disease in Namibia was modeled, with social contact reduced by 30%.

60-Day Projection: With no Social Distancing and 30% Social Distancing

The CHIME model result on transmission of COVID-19 with no social distancing interventions shows a peaked curve of new daily admissions, daily hospital census, and new hospitalizations for each region of Namibia (Fig. 7). The peak in new daily admissions occurs 45 days after the starting date (08/05/2020) of modeling (Fig. 6a). The model result shows about 1524 new COVID-19 admissions at the peak (Fig. 7a). Of the new peak admissions, about 400 require ICU and about 200 need ventilation (Fig. 7a). The total daily hospital census projections, including patients from before the modeling period (Fig. 7d), indicate about 10,000 total cases after 50 days of modeling, with 3200 ICU admissions and 2100 ventilated admissions. The Khomas region shows a high peak of about 365 new hospitalizations on 2 September if no social distancing intervention is applied (Fig. 7a, c); this is because of the high population density in the region.
All other regions also have their peaks in the month of September. The second most populous region, Ohangwena, has about 234 new admissions on 17 September. The model results revealed that if no social distancing interventions are applied, the most populous regions will record the highest numbers of COVID-19 cases (Figs. 6c, 7a). Comparing Figs. 7 and 8, reducing people's contact rates by 30% flattens the curve. The new daily admissions and the total daily hospital census projections are reduced by 50%. The findings of the model are to be viewed with caution. Hospitalization increases in our model are likely to occur later if measures are lifted or social distancing is decreased from 30% without further action, such as widespread testing, self-isolation of infected individuals, and contact tracing. As with any model, the impact of the interventions may be overestimated by our assumptions. However, quantifying the short-term effects of an intervention is vital to help decision-makers estimate the immediate number of resources needed and plan for future interventions.

90-Day Projection: With no Social Distancing and 30% Social Distancing

Similar to the 60-day model projections above, the 90-day projection also shows high peaks in COVID-19 admissions and COVID-19 cases in Namibia under the no-social-distancing scenario (Fig. 9). The model result shows that the new hospitalization census of each region of Namibia is associated with its population size (Fig. 9a, c). The Khomas region will have the highest peak, about 2500 new admissions during early September. Most of the other regions of Namibia will have their peaks during late September (Fig. 9c). The new daily admissions and total daily hospitalizations have their peaks after 50 days of projection. The model shows about 1512 new daily hospitalizations, of which about 386 are ICU admissions and about 213 are ventilated admissions. The effect of a 30% social distancing intervention on the 90-day COVID-19 response is shown in Fig. 10. Reducing people's contact rates by 30% flattens the curve (Fig. 10) in comparison to not applying social distancing measures (Fig. 9).

90-Day Projection Using Data of 18 September 2020: With no Social Distancing

Since March 2020, 13,134 people have been placed into mandatory quarantine facilities around Namibia (Table 4). In Namibia, the fourth stage of lockdown ended on 17 September 2020. A 90-day projection was made using the observed data of 18 September 2020 (Fig. 11). The 90-day projection revealed that the Khomas region will have the highest peak, about 4300 new admissions at the end of September (Fig. 10b). The projection shows different peak values for each region due to differences in population and mobility index (Fig. 11b). The new daily hospitalized census projection shows about 7843 new daily hospitalizations, of which about 2301 are ICU admissions and about 2061 are ventilated admissions (Fig. 11d). More than 1810 new daily admissions were also observed (Fig. 11c). The findings of this research are consistent with an increasing number of publications assessing the impacts of COVID-19 interventions. Several researchers have studied how social distancing measures could have influenced the epidemic (Prem et al. 2020; Wu et al. 2020; Kraemer et al. 2020). Others have investigated the effect of similar measures elsewhere and concluded that social distancing interventions alone will not be able to control the pandemic (Flaxman et al. 2020; Tuite et al. 2020).
Conclusions

In this study, an attempt was made to develop a spatial model for demarcating the COVID-19 risk zones in Namibia making use of three thematic layers. These were population distribution, current COVID-19 confirmed cases, and the spatial interaction index. The COVID-19 risk zone map was produced by integrating these thematic layers in an ArcGIS overlay analysis. The identified COVID-19 risk zones in Namibia are highest, higher, high, low, lower, and lowest. The risk zone designated as highest is mostly found in the Erongo Region; the higher COVID-19 risk zone is distributed along the Khomas region, which includes the capital Windhoek; and the high- and low-risk regions are Karas and Otjozondjupa, respectively. The lower-risk COVID-19 regions are Hardap, Kunene, Oshana, Omusati, Oshikoto, Ohangwena, and Kavango East, and the lowest COVID-19 risk areas are the Kavango West and Omaheke regions. Different scenarios were analyzed and projected for 60 and 90 days from the starting date of 8 May 2020, within the lockdown periods in Namibia. After the lockdown ended on 17 September 2020, the observed data of 18 September 2020 were used for the next 3-month projections. (Fig. 11 shows (a) the hospitalized census, (b) the new daily admission projection, (c) the change in hospitalized census over date per region, and (d) the daily hospital census projection in Namibia for the next 90 days from 18 September 2020.) The CHIME model results on the response of COVID-19 were analyzed with and without social distancing interventions for the next 60 and 90 days starting from 8 May 2020. The 60- and 90-day modeling results without social distancing interventions show a peaked curve of new daily admissions, daily hospital census, and new hospitalizations for each region of Namibia. In the 60-day projection, the peak in new daily admissions occurred 45 days after the starting date of modeling. The 60-day projection model result shows about 1524 new COVID-19 admissions, of which about 400 required ICU and about 200 needed ventilation. The 60-day model output shows that the Khomas region has a higher peak in September than the other regions; this might be due to the high population density in the area. All other regions also have their peaks late in September. The second most populous region, Ohangwena, has about 234 new admissions on 17 September. The 60- and 90-day model projections with 30% social distancing interventions show a flattening of the peak number of cases and a delay to the peak. In Namibia, the fourth stage of lockdown ended on 17 September 2020. After the lockdown ended, a 90-day projection was made using the observed data of 18 September 2020. The 90-day projection revealed that the Khomas region will have the highest peak, about 4300 new admissions at the end of September. The projection shows different peak values for each region due to differences in population and mobility index. The new daily hospitalized census projection shows about 7843 new daily hospitalizations, of which about 2301 are ICU admissions and about 2061 are ventilated admissions. More than 1810 new daily admissions were also observed. The research output could help policy-makers estimate the immediate number of resources needed and plan for future interventions of COVID-19 in Namibia.

Future Scope of the Research

The worldwide spread of COVID-19 underscores the importance of research, stable research infrastructure, and funding for public health emergency response and resiliency. Lives are lost, economies falter, and life has radically changed with the pandemic.
Ultimate COVID-19 mitigation and crisis resolution are dependent on high-quality research aligned with top-priority societal goals that yields trustworthy data and actionable information. While the highest-priority goals are treatment and prevention, resource allocation and management require future projections based on the current infection rate. This study generated a risk zone map and projected the new daily admissions and daily hospitalized census, which might help in Namibia's COVID-19 fight. The projection was made using two observed datasets (8 May 2020 and 18 September 2020) for the next 90 days, in line with the CHIME tool's recommendation of short-period projections. The model tool and the input parameters can be used for future research; the model can be updated with recent data at any time, and future studies can be extended. The authors strongly recommend the use of additional data and of modern techniques such as machine learning, as these might increase the accuracy of the research.
2021-05-08T00:03:51.275Z
2021-02-17T00:00:00.000
{ "year": 2021, "sha1": "9437d50f0e5f5e699601d8fc8ad794132979c028", "oa_license": null, "oa_url": "https://link.springer.com/content/pdf/10.1007/s41403-021-00209-y.pdf", "oa_status": "BRONZE", "pdf_src": "Adhoc", "pdf_hash": "2bf5b52270acd1d21e1f0b4d730157b4742cf5fa", "s2fieldsofstudy": [ "Business" ], "extfieldsofstudy": [ "Geography" ] }
215770528
pes2o/s2orc
v3-fos-license
Some Remarks about Entropy of Digital Filtered Signals

The finite numerical resolution of digital number representation has an impact on the properties of filters. Much effort has been devoted to developing efficient digital filters by investigating the effects in the frequency response. However, it seems that less attention has been paid to the influence of finite precision on the entropy of digitally filtered signals. To contribute in such a direction, this manuscript presents some remarks about the entropy of filtered signals. Three types of filters are investigated: Butterworth, Chebyshev, and elliptic. Using a boundary technique, the parameters of the filters are evaluated according to word lengths of 16 or 32 bits. It is shown that filtered signals have their entropy increased even if the filters are linear. A significant positive correlation (p < 0.05) was observed between order and the Shannon entropy of the filtered signal using the elliptic filter. Compared to the signal-to-noise ratio, entropy seems more efficient at detecting the increase of noise in a filtered signal. Such knowledge can be used as an additional condition for designing digital filters.

Introduction

Digital filters are discrete-time maps that perform mathematical operations on a sampled signal [1]. The frequency response is usually applied to characterize filters [2,3]. Two main classes of digital filters are generally used. When the impulse response is nonzero for only a finite number of samples, we have the finite impulse response (FIR) filters. In the case where the impulse response produces an infinite number of non-zero samples, we have the infinite impulse response (IIR) filters [4,5]. The great performance of digital filters is believed to be one of the reasons explaining the popularity of DSP devices [6]. The process of digital filtering is extensively used in many applications in communications, signal processing, electrical and biomedical engineering, and control [7-15]; for example, coding and compression, signal augmentation, denoising, amplitude and frequency demodulation, analog-to-digital conversions, and shape detection and extraction [16-25]. For some applications, nonlinearity is tailored to a specific purpose [26]. Recently, the authors of [27] designed a digital sigma-delta truncated infinite impulse response filter, which furnishes adequate rejection with a digital-to-analog converter of no more than 8 bits. The application in [27] is related to human body communication, which for many researchers is a promising research topic, as it plays an important role in wireless body area networks because of its low power and hardware cost. In this area, it seems that digital filters of medium to low word length have again attracted the attention of researchers. When digital filters are employed on fixed-point arithmetic platforms, e.g., microcontrollers, DSPs, and FPGAs, or with very demanding performance specifications, the importance of filter coefficient accuracy increases, because the signal may be distorted [28-30]. Thus, a common goal in finite precision analysis is to choose a word length such that the digital system presents a sufficiently accurate realization. This design should consider the complexity and cost of hardware and software [31]. In digital signal processing, the issues of finite word length are among the most significant concerns when the discrete poles are very close to the unit circle.
Mullis and Roberts [32] and Hwang [33] have demonstrated that the influence of quantization errors on digital filter performance depends on the filter implementation. In addition, Rader and Gold [34] have shown that for a given filter implementation it is possible that small errors in the denominator or numerator coefficients cause large pole or zero offsets. Moreover, Goodall and Donoghue [35] and Jones et al. [36] have observed a significant sensitivity to coefficient word lengths. This fact relates to the inability of computers to represent the infinite nature of the real numbers [37]. The influence of computer limitations opens a new perspective for computer environment simulation. For example, Nepomuceno [38] presents a theorem that identifies the reliability of calculations performed in fixed point; in [39,40], a technique has been developed to reject a simulation if a mandatory accuracy is greater than the lower bound error, increasing numerical reliability in simulation; and in [41], the authors show how sensitive a simulated system is on different processors. It seems clear that much research has been devoted to investigating the influence of finite precision on digital filters [32,34,36,42,43]. In those investigations, there are many cases where the quality of the filter is measured using the filter response or the signal-to-noise ratio (SNR) [43]. Despite the fact that the effect of filters on entropy has been pointed out since the work of Shannon [44], much less attention has been given to the entropy effects of finite precision digital filters on the filtered signal. One work in this direction has been undertaken by Badii et al. [45], who show the influence of an infinite impulse response filter on the fractal dimension of the attractor reconstructed from a filtered chaotic signal. Other works have employed entropy in the design of digital filters. For instance, Madan [46] has introduced the use of the maximum entropy method for the design of linear phase FIR digital filters. In [47], another attempt to use entropy in the design of digital FIR filters has been observed. However, no work has been found investigating the effects on the entropy of a signal filtered by an IIR filter. This paper seeks to relate the computational limitations and the variation of the main parameters of a filter to the measured entropy. As entropy is a good index to detect the increase of noise in a signal, we have used a boundary technique to observe the effects of finite precision on the parameters of the filters according to word lengths of 16 or 32 bits. We noticed that entropy is more sensitive than the SNR. It was important to show that, although an ideal linear filter does not increase entropy, numerical experiments using the elliptic, Butterworth, and Chebyshev filters have shown an increase of entropy. Additionally, a positive correlation between order and entropy has been observed for the elliptic filter. This information can be useful to design or evaluate digital filters in situations where noise growth should be mitigated. The remainder of this paper is organized as follows. The definitions of IIR and FIR filters, quantization, and entropy are given in Section 2, as well as three simulation scenarios. Section 3 presents the results, where three filter types are investigated: Butterworth, Chebyshev, and elliptic. The remaining section is devoted to summarizing our results.

IIR Filter

IIR digital filters are characterized by having an infinite impulse response [48].
They have output feedback, which makes them interesting because it allows achieving a more selective frequency response with a lower number of coefficients. IIR digital filters are represented by the following transfer function,

H(z) = ( Σ_{k=0}^{N} b_k z^{-k} ) / ( Σ_{l=0}^{M} a_l z^{-l} ),     (1)

where N and M are the degrees of the numerator and denominator polynomials, respectively, and b_k and a_l are the filter coefficients. Writing Y(z) = H(z)X(z) gives

Y(z) Σ_{l=0}^{M} a_l z^{-l} = X(z) Σ_{k=0}^{N} b_k z^{-k}.     (2)

To find the difference equation of the filter, the inverse z-transform of each side of Equation (2) is taken. The result is as follows:

Σ_{l=0}^{M} a_l y(n − l) = Σ_{k=0}^{N} b_k x(n − k).     (3)

A more condensed form of the difference equation is

y(n) = (1/a_0) [ Σ_{k=0}^{N} b_k x(n − k) − Σ_{l=1}^{M} a_l y(n − l) ],     (4)

and taking a_0 = 1, we have

y(n) = Σ_{k=0}^{N} b_k x(n − k) − Σ_{l=1}^{M} a_l y(n − l).     (5)

Quantization Error

In the implementation of digital filters, the limitation of finite word length results in coefficient quantization errors, which may have unexpected effects in the frequency response [49]. This quantization error may be seen in a more realistic way if we consider the coefficients of the filter bounded from above and from below. Thus, quantizing can be seen, in some way, as adding a certain amount of noise. The fewer bits we use in quantization, the more noise is added. This is precisely the noise source shown in Figure 1. Using a fixed-point representation, the quantization error is given by

ε = 1/Q,     (6)

where Q = 2^b and b is the number of bits. Thus, the coefficients of Equation (5) present lower limits given by

a_l − ε and b_k − ε,     (7)

whereas the upper limits are given by

a_l + ε and b_k + ε.     (8)

This is equivalent to saying that the quantization error produces an interval around the desired value of the coefficients. In other words, the approximated values of the coefficients â_l and b̂_k satisfy â_l ∈ [a_l − ε, a_l + ε] and b̂_k ∈ [b_k − ε, b_k + ε]. Figure 1 adapts the communication scheme of [44]. In our case, we are interested in looking at the channel as a filter, and at the noise source as a consequence of the finite precision implementation of the digital filters.

Entropy

Entropy reflects a direct relationship between the length of the information and its uncertainty. As entropy quantifies probabilistic and repetitive events, it is widely used in different fields [50]. The maturation of the idea of entropy of random variables and processes by Claude Shannon furnished the origins of information theory. In fact, Shannon's first name for this concept was uncertainty, and that was the reason for many to define entropy as "a measure of the uncertainty about the outcome of a random process" [51]. The connection with the digital filter becomes clear when the original scheme proposed by Shannon is noticed. This scheme has been adapted in Figure 1. Shannon was interested in how a message could be transmitted through a channel from a transmitter to a destination. In this process, a key feature is to consider the presence of noise. Here, we see this scheme from the perspective of filtering. Thus, the channel is our filter, which takes the input and changes it into the output. The noise source in our case comes from the finite precision hardware/software where the digital filter is implemented. It is evident that in real applications many other sources of noise should be considered. Nevertheless, for the purpose of this work, we focus our attention only on the operation of the filter as a source of noise. In Section 22 of [44], Shannon states: "The operation of the filter is essentially a linear transformation of coordinates." Shannon deduced this by considering the fact that if an ensemble having an entropy H_1 per degree of freedom in band W is passed through a filter with characteristic Y(f), the output ensemble has an entropy given by

H_2 = H_1 + (1/W) ∫_W log |Y(f)|² df.     (13)

In other words, the new frequency components are just the old ones multiplied by a gain.
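Before turning to entropy measurements, a minimal Python sketch of the quantized IIR filtering described above is given below (our illustration, assuming scipy for the filter design; the cut-off frequency is an arbitrary choice, and the uniform draw of coefficients inside the interval of Eqs. (7)-(8) mirrors the random combinations used later in the experiments):

```python
import numpy as np
from scipy.signal import butter  # stand-in design routine; any IIR coefficients work

def iir_filter(b, a, x):
    """Direct-form evaluation of Eq. (5): y(n) = sum_k b_k x(n-k) - sum_l a_l y(n-l)."""
    y = np.zeros(len(x))
    for n in range(len(x)):
        acc = sum(b[k] * x[n - k] for k in range(len(b)) if n - k >= 0)
        acc -= sum(a[l] * y[n - l] for l in range(1, len(a)) if n - l >= 0)
        y[n] = acc / a[0]
    return y

def perturb(c, bits, rng):
    """Draw coefficients uniformly inside the interval [c - eps, c + eps]
    of Eqs. (7)-(8), with eps = 2**(-bits)."""
    eps = 2.0 ** (-bits)
    return c + rng.uniform(-eps, eps, size=len(c))

rng = np.random.default_rng(0)
b, a = butter(4, 0.25)   # illustrative 4th-order low-pass (cut-off is our choice)
t = np.arange(2048) * 1e-3
x = sum(np.sin(2 * np.pi * f * t) for f in (50, 75, 125, 150))
y = iir_filter(perturb(b, 16, rng), perturb(a, 16, rng), x)
```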
Moreover, Shannon has described this in such a way that a filter has a direct impact on the entropy of a signal. It is clear from Shannon's idea that signals filtered by ideal high-pass, low-pass, passband, or stopband filters should have their entropy decreased, as can be seen in [44] (p. 40). There are a few sorts of entropy characterized in the literature. In thermodynamics, entropy refers to the amount of disorder. In statistical mechanics, it refers to the amount of uncertainty in the system. In information theory, it is a measure of the uncertainty associated with a random variable [44,52]. Shannon provides the optimal number of binary digits to represent each event of a given message so that the average number of bits per event of the message is as small as possible. Shannon entropy is defined by [53]

H(X) = − Σ_{i=1}^{L} P_i log2 P_i,     (14)

where H(X) is the entropy (bits), X is a symbol, P_i is the probability value of symbol X, and L is the size of the signal. In our case, we measure the entropy for word lengths of 16 and 32 bits. In a completely random signal represented by a word length of 16 bits, the entropy is exactly 16 bits. To proceed with the calculation of Shannon entropy, we apply the following standardization process to the output signal:

ȳ_k = ceil( (y_k − min(y)) / (max(y) − min(y)) × 2^{W_L} ),     (15)

where y_k is the signal; ceil(x) is a function that returns the smallest integer not less than x; min and max return the lowest and the largest value from a vector, respectively; and W_L is the word length given in bits. Figure 2a,b presents a sinusoidal wave y(t) = 2 sin(2π · 2t) sampled at δ = 0.01 to illustrate this procedure. For this sine, the calculated entropy using W_L = 8 and Equation (14) is H = 4.71. A uniformly distributed random signal is shown in Figure 2c, for which the calculated entropy is H = 7.59 ± 0.03. Increasing the number of samples of the random signal, the entropy value approaches 8, as expected. A last observation regarding this procedure is related to the need to discard the transient and to limit the number of samples when calculating the entropy of filtered signals. The number of samples has been adopted as 2^10, which limits the measured entropy to at most 10. Only in one table have we adopted 2^12 samples. Tests made with a greater number of samples showed us that this limit is sufficient for a reliable estimation of Shannon entropy in this work.

Entropy to Detect Noise

Entropy has been widely used to detect noise in signals and images [52,54-56]. To show the effectiveness of entropy as a way to detect the growth of noise in a signal, we have calculated the entropy while changing the Gaussian noise level from σ = 0.01 to σ = 0.02. The mean has been kept at µ = 0. A sine wave is shown in Figure 3a. Gaussian noise with σ = 0.01 and σ = 0.02 has been added to this sine wave, as shown in Figure 3b,c, respectively. The calculated entropies are (a) 5.66, (b) 5.95 ± 0.03, and (c) 6.23 ± 0.04. The level of Gaussian noise is barely visible, yet the entropy has been sensitive to the increase of noise. Entropy is a sensitive way to measure uncertainty. To further show this property, let us compare this measure with the well-known signal-to-noise ratio (SNR), given in dB by

SNR = 20 log10 (A_signal / A_noise),     (16)

where A is the root mean square (RMS) amplitude. Let the relation between the entropy of signal and noise (ESN), in analogy with the SNR, be

ESN = 20 log10 (H_signal / H_noise),     (17)

where H is the entropy of the signal and noise. Using these two equations, we are going to compare their sensitivity to a small variation of noise.
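The following Python fragment sketches the standardization of Eq. (15) and the entropy of Eq. (14) (our reading of the procedure; the exact binning details may differ from the authors' Octave routines, so values such as H = 4.71 should be reproduced only approximately):

```python
import numpy as np

def standardize(y, WL):
    """Map the signal onto 2**WL integer levels, in the spirit of Eq. (15)."""
    z = (y - y.min()) / (y.max() - y.min())        # normalize to [0, 1]
    return np.ceil(z * (2 ** WL - 1)).astype(int)  # one integer symbol per sample

def shannon_entropy(symbols):
    """H = -sum p_i log2 p_i over observed symbol frequencies, Eq. (14)."""
    _, counts = np.unique(symbols, return_counts=True)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log2(p)))

t = np.arange(0.0, 1.0, 0.01)
sine = 2.0 * np.sin(2 * np.pi * 2 * t)
print(shannon_entropy(standardize(sine, 8)))    # low entropy for a pure tone

noisy = sine + np.random.default_rng(1).normal(0.0, 0.02, t.size)
print(shannon_entropy(standardize(noisy, 8)))   # entropy rises with the noise
```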
Table 1 shows the difference between the SNR and the ESN for the signal of Figure 3b (sine wave with Gaussian noise of σ = 0.01) and the same signal but with the σ given in the first column of Table 1. The message of this table is simple. For the case of σ = 0.0200, the SNR gives a difference of 2.6359 ± 0.6920 dB, whereas the entropy-based difference is 15.9343 ± 3.3038 dB. When the difference between the noise levels of the two signals is only 0.0125 − 0.01 = 0.0025, we have more confidence in using the ESN to detect this level of noise difference, as the difference between the SNRs of these two signals is 0.527 ± 0.589, whereas for the ESN we have 4.023 ± 2.866. In the SNR case, the interval given by one σ is (−0.062; 1.116), and we lose the confidence to assert that one of the signals presents a higher noise level than the other.

Numerical Experiments

In this section, three numerical experiments are described. For each experiment, the main steps are outlined. All the numerical experiments have been performed in Octave [57] on a Windows computer. These routines are available upon request. These experiments have been designed to check some effects of finite precision on the entropy of digitally filtered signals. In Numerical Experiment 1, poles and zeros are perturbed by quantization error due to a 16- and 32-bit fixed-point representation. Numerical Experiment 2 aims at examining the increase of entropy using the elliptic filter. The correlation between order and entropy increase is verified in Numerical Experiment 3.

Numerical Experiment 1

The proposed scheme can be summarized in the following steps.

Step 1: Use the commands butter, cheby, or ellip of Octave to generate the poles and zeros of the transfer function according to Equation (1).
Step 2: Choose the number of bits and calculate the quantization error according to Equation (6).
Step 3: Insert the quantization error at the poles and zeros. Using a strategy similar to that adopted in [49], Equation (5) can be rewritten as ŷ(n) = Σ_{k=0}^{N} b̂_k x(n − k) − Σ_{l=1}^{M} â_l ŷ(n − l).
Step 4: The signal is filtered using 50 different combinations described by Equations (7) and (8).
Step 5: Apply the standardization procedure to the filtered signal according to Equation (15).
Step 6: Calculate the mean and standard deviation of the entropy from the 50 filtered signals.

In Numerical Experiment 1, the filter poles and zeros are perturbed with the effects of 16- and 32-bit quantization. The input signal is composed as a sum of sinusoidal signals of 50, 75, 125, and 150 Hz. The order of the filters is given in Table 2 (Table 2: order of the filters for Numerical Experiment 1). We have adopted 100 Hz as the cut-off frequency in the case of low pass and high pass. Passband and stopband filters have been designed with 70 and 130 Hz.

Numerical Experiment 2

The following steps outline Numerical Experiment 2.

Step 1: Use the command ellip of Octave to generate the poles and zeros of the transfer function (Equation (1)).
Step 2: Choose the input signal (Table 3).
Step 3: The signal is filtered using 50 values of the signal length within 1024 to 6024.
Step 4: Apply the standardization procedure to the filtered signal according to Equation (15).
Step 5: Filter the signal using Equation (5).
Step 6: Compute the mean and standard deviation of the entropy of the filtered signal.

In Numerical Experiment 2, entropy was calculated for the original signal (Table 3) and the filtered signal using elliptic filters. For comparison, the input signal was simulated without the filtered frequency components.
The complete description of the input signal and of the ideally filtered signal can be seen in Table 3. The variation of the signal length has been used here to calculate the mean and standard deviation of the entropy.

Table 3. Input signals for Numerical Experiment 2. We have designed three types of signals composed of different summations of harmonics. The values of frequencies 1-6 are 40 Hz, 60 Hz, 80 Hz, 130 Hz, 150 Hz, and 170 Hz, respectively. For comparison, the input signal was simulated without the filtered frequency components, as shown in the third column; this is equivalent to the output produced by an ideal filter.

In all cases, a sample rate of 0.001 s has been adopted. Different values, or even a variable sample rate, have not been investigated in this work and are left for future research.

Numerical Experiment 3

The following steps describe Numerical Experiment 3.
Step 1: Use the command butter, cheby, or ellip of Octave to generate the poles and zeros of the transfer function, Equation (1).
Step 2: Choose the input signal (a sum of sinusoids of 20, 60, and 80 Hz, as described below).
Step 3: Filter the signal using 50 different signal lengths between 1024 and 6024 samples.
Step 4: Apply the standardization procedure to the filtered signal according to Equation (15).
Step 5: Compute the mean and standard deviation of the entropy of the filtered signal.
Step 6: Change the order of the filter from 1 to 8, repeating Steps 1 to 5 for each order.

In this experiment, the filter order was varied for an input signal with frequencies of 20, 60, and 80 Hz and a cut-off frequency of 60 Hz.

Results

The results of Numerical Experiment 1 are shown in Table 4. Table 5 shows the results of Numerical Experiment 2, whereas Table 6 and Figure 4 show the results of Numerical Experiment 3.

Discussion and Conclusions

This work has investigated the effects of finite precision on the entropy of digitally filtered signals, which allows us to quantify the noise introduced by the action of such filters. We have shown that entropy is a good alternative for identifying the presence of noise; it presented a better result than the signal-to-noise ratio for small variations of noise. To observe these effects, we designed three numerical experiments. In Numerical Experiment 1, we observed an increase of entropy for all types of filters investigated (Butterworth, Chebyshev, and elliptic) at both 16 and 32 bits. The entropy of the input signal is H = 4.9255, whereas in all the filtered signals the entropy is H > 5.32. This is not what is expected from an ideal linear filter (see [44]). Note, from Table 2, that the elliptic filter was set up with the lowest order; even so, this type of filter showed practically the same level of entropy in the filtered signal. The results of Numerical Experiment 2 are shown in Table 5. In this case, an ideal filter is simulated by removing some of the frequency components of the signal. The entropy of the filtered signal increased significantly, varying from 6.5 to almost 8. In Numerical Experiment 3, we noticed another feature, as described in Table 6: this experiment shows a significant positive correlation between filter order and entropy at the 0.05 level (2-tailed) for the elliptic filter, with a p-value of 0.030. From these experiments, it seems clear that the elliptic filter introduces more uncertainty, that is, entropy, into the filtered signal when compared to the Butterworth and Chebyshev filters. Figure 4 shows the FFT of the signals; it is possible to notice a slight difference between subfigures (b) and (c).
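For readers who want to reproduce the qualitative behaviour of Numerical Experiment 3, the order sweep can be sketched in a few lines of Octave. The ripple values (0.5 dB, 40 dB), the length of the discarded transient, and the single signal length are our assumptions for illustration; entropy_wl is the helper sketched earlier.

```octave
pkg load signal;

fs = 1000;
t  = (0:2^10 - 1) / fs;
x  = sin(2*pi*20*t) + sin(2*pi*60*t) + sin(2*pi*80*t);

orders = 1:8;
H = zeros(size(orders));
for n = orders
  [b, a] = ellip(n, 0.5, 40, 60 / (fs/2));  % low-pass, 60 Hz cut-off
  y = filter(b, a, x);
  y = y(101:end);                           % discard an assumed transient
  H(n) = entropy_wl(y, 16);
endfor
printf("order-entropy correlation: %.3f\n", corr(orders', H'));
```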
The remarks made in this manuscript are consistent with what has been presented by DeBrunner et al. [47]. Since we focus our attention on the source of noise furnished by arithmetical operations (see Figure 1), design strategies that look for more efficient ways to implement mathematical expressions can be useful to reduce entropy. In future work, we intend to test different filter topologies (direct or cascade, for instance) to verify their influence on the increase of entropy studied in this manuscript. This seems a reasonable pathway, as the filter order is related to the number of mathematical operations, which is a well-known source of noise. We also intend to investigate the influence of the sample rate and of the number of samples on the computation of entropy.
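As a pointer for the future work on filter topologies mentioned above, a direct-form versus cascade comparison could be set up along the following lines. This is a sketch only, assuming Octave's signal package functions tf2sos and sosfilt and the entropy_wl helper from earlier; in double precision the two outputs are nearly identical, and the interesting comparison arises once coefficient quantization, as in Numerical Experiment 1, is added to each topology.

```octave
pkg load signal;

fs = 1000;
t  = (0:2^10 - 1) / fs;
x  = sin(2*pi*50*t) + sin(2*pi*150*t);

[b, a] = ellip(6, 0.5, 40, 100 / (fs/2));  % order-6 elliptic low-pass

y_direct  = filter(b, a, x);               % direct-form implementation
[sos, g]  = tf2sos(b, a);                  % factor into second-order sections
y_cascade = g * sosfilt(sos, x);           % cascade implementation

printf("direct:  H = %.4f bits\n", entropy_wl(y_direct, 16));
printf("cascade: H = %.4f bits\n", entropy_wl(y_cascade, 16));
```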
2020-03-26T10:15:05.962Z
2020-03-01T00:00:00.000
{ "year": 2020, "sha1": "cfb33438800ae90be1394b37df5f9a350d0eb594", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1099-4300/22/3/365/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "7d027887c95c76c86256e536008ada38685e27d5", "s2fieldsofstudy": [ "Engineering", "Mathematics", "Computer Science" ], "extfieldsofstudy": [ "Medicine", "Computer Science", "Mathematics" ] }
264045945
pes2o/s2orc
v3-fos-license
Tipping points in freshwater ecosystems: an evidence map

Freshwater ecosystems face numerous threats, including habitat alteration, invasive species, pollution, over-extraction of resources, fragmentation, and climate change. When these threats intensify and/or combine with each other, their impacts can shift the ecosystem past a tipping point, producing a major and potentially irreversible shift in state, called a regime shift. We generated an evidence map to assess the current state of knowledge on tipping points in freshwater ecosystems. Our evidence mapping exercise revealed large knowledge gaps. Specifically, there are relatively few studies that explore the effects of tipping points in relation to (1) lotic systems (i.e., rivers, streams), (2) amphibians, mammals, or reptiles, and (3) the interactive impacts of multiple threats. In addition, most studies tended to have short study durations (<1 year), and few studies explored the reversibility of an ecosystem change after a tipping point was crossed. Concentrating future research on these gaps to improve understanding of tipping points in freshwater ecosystems in a holistic manner is important to help develop tools to forecast (and thus mitigate) the emergence and effects of tipping points, as well as to guide restoration actions.

Introduction

Crossing a "tipping point" can cause dramatic, long-term changes in ecosystem structure and function, resulting in entirely different ecosystems than were present before the transition. Tipping points have been identified in almost every ecosystem, with examples ranging from transitions of pine to broadleaf forest due to severe drought in terrestrial ecosystems (Haberstroh et al., 2022), to coral reefs being outcompeted by seaweed in systems with decreased herbivory in marine ecosystems (Holbrook et al., 2016). In freshwater ecosystems, examples of tipping points include switches in lakes from macrophyte-dominated, clear conditions to macrophyte-free, turbid conditions due to land-use changes (Schallenberg and Sorrell, 2009), post-invasion transitions from communities dominated by native species to invasive species communities (Hansen et al., 2013), and changes in species assemblage structure after dam construction (Gao et al., 2019), amongst other triggers.
Tipping points (also called thresholds or breakpoints) are often triggered by small alterations in the drivers present in the ecosystem (i.e., natural or anthropogenic environmental parameters outside the natural range of variation for the ecosystem; Hughes et al., 2013; Côté et al., 2016). They may also be triggered by the intensification of single or multiple drivers, or by the addition of new drivers that were not present prior to the change. If these changes in drivers result in a non-linear response in ecosystem conditions and the tipping point is crossed, the ecosystem will shift between two (or more) alternative states. This transition is considered a "regime shift" (Scheffer et al., 2001; Hughes et al., 2013; van Nes et al., 2016). If reinforced by feedback mechanisms, these transitions may be irreversible (Hughes et al., 2013; van Nes et al., 2016), at which point the ecosystem has entered a new alternative stable state (Scheffer and Carpenter, 2003). We use the term "stable" with the understanding that this is dependent on the timescale considered. Hysteresis is present if a new state can persist even when drivers are relaxed, making it difficult to return the ecosystem to its original state (Litzow and Hunsicker, 2016). While some regime shifts can be reversed once a tipping point has been crossed, it is often extremely difficult, expensive, and time-consuming to do so, making early identification and action an important goal for ecosystem managers (Kelly et al., 2014; Selkoe et al., 2015).

It is difficult to predict whether tipping points will occur (Scheffer et al., 2009) or whether an ecosystem has the capacity to be resistant or resilient to threats. The paths that can lead an ecosystem toward tipping are equally diverse and complex (Filbee-Dexter et al., 2017). Ecosystems that face numerous natural and anthropogenic drivers are particularly at risk of experiencing tipping points because of the increased likelihood of multiple drivers interacting additively or synergistically (Folt et al., 1999). This may push a system closer toward a tipping point than would otherwise be anticipated (Côté et al., 2016). Understanding if and when these interactions lead to tipping points, and how tipping points influence ecosystems, have been identified as research priorities across diverse ecosystems (Allsopp et al., 2019; Friedman et al., 2020; Dey et al., 2021).

Identifying the conditions that have led to previous tipping points is an important step in determining if, when, and where tipping points are likely to occur, given the risks associated with complete ecosystem change [i.e., a decline in biodiversity or ecosystem function (Evans et al., 2017) or a loss of ecosystem functions and associated services (Watson et al., 2021)]. In many cases, the conditions leading to tipping points are identified only after a regime shift has occurred, and therefore prediction of tipping points may not always be possible. Consequently, a better understanding of the reversibility of tipping points is necessary to enable management entities to bolster ecosystem components that facilitate management actions targeting reversibility, especially since restoration actions may not always have the expected results (e.g., Weber et al., 2020).
The threat of tipping points in freshwater ecosystems is significant (Jackson et al., 2010; Robin et al., 2014; Griffiths et al., 2017), and likely to grow. Freshwater ecosystems are in a biodiversity crisis (Harrison et al., 2018; Albert et al., 2021), with monitored vertebrates experiencing an 83% biodiversity decline since 1970 according to the Living Planet Report, well outpacing terrestrial and marine declines (WWF, 2022). This may reflect a loss of functional resilience in these systems (e.g., Oliver et al., 2015) and is one early warning signal commonly used to identify impending tipping points (i.e., a critical slowing down; Scheffer et al., 2009). This biodiversity crisis has spurred the development of Emergency Recovery Plans to reverse this trend (Tickner et al., 2020), and calls for actions to decrease the potential for future tipping points in freshwater systems, such as controlling non-native species invasions or improving water quality.

Freshwater ecosystems are impacted by both persistent and emerging threats (e.g., Reid et al., 2019), which may interact to produce the conditions under which tipping points are more likely to occur. Environmental decisions for freshwater systems will have to consider interactions of these drivers, whether between large-scale drivers such as climate change (IPCC, 2021) or smaller, local-scale drivers such as increased boat traffic or the construction of docks (Sagerman et al., 2020). With the status of many freshwater species still unclear due to a lack of sufficient data (Desforges et al., 2022), improving our overall understanding of tipping points could inform future paths toward reducing the likelihood of pushing freshwater ecosystems past tipping points, or toward reversing the trajectory if a freshwater ecosystem has been pushed past a tipping point. Bodies involved in the regulation (i.e., government agencies), exploitation (i.e., developers or resource extractors), and protection (i.e., ecosystem managers or Indigenous communities) of freshwater ecosystems therefore require evidence of the threats, especially of the multiple and cumulative impacts of these threats, and of the potential for reversibility of the trajectory to and past tipping points across freshwater ecosystems.
Past reviews have focused on the theoretical basis for tipping points, including defining tipping points in different contexts (Milkoreit et al., 2018), identifying and detecting early warning signals (Burthe et al., 2016; Litzow and Hunsicker, 2016), and identifying and detecting alternative stable states (Petraitis and Dudgeon, 2004). Past reviews have also considered tipping points in different ecosystems, such as marine systems (Rocha et al., 2015), the Amazonian forest (Nobre and Borma, 2009), and polar regions (Lenton, 2012). Some of these previous reviews have focused on specific drivers of ecosystem change in different ecosystems [e.g., invasive species (Gaertner et al., 2014; Reynolds and Aldridge, 2021)]. While those reviews provide good evidence of the drivers and processes that ecosystems may experience prior to a tipping point, they do not summarize the specific evidence of freshwater tipping points for managers and practitioners faced with managing freshwater ecosystems. Freshwater tipping points are often used as examples of their occurrence in natural ecosystems (e.g., Scheffer and Carpenter, 2003), but freshwater ecosystems remain less represented in the tipping points literature than other ecosystems. For example, a recent bibliometric analysis of tipping points and related terms found that marine ecosystems were represented more than twice as frequently in the literature as freshwater ecosystems (Carrier-Belleau et al., 2022). Identifying existing gaps in the current knowledge of freshwater tipping points will therefore be a valuable contribution to freshwater ecosystem management. Past reviews with a freshwater focus have considered experimental studies of tipping points due to multiple drivers across aquatic ecosystems (e.g., Carrier-Belleau et al., 2022), or field studies of regime shifts and alternative stable states in freshwater systems (Bayley et al., 2007; Capon et al., 2015), although without considering paleoecological data. Such syntheses provide an initial understanding of tipping points and regime shifts in freshwater ecosystems.

An evidence map is a method used to identify and describe key concepts, types of evidence, and gaps in research related to a defined area or field. The evidence map presented here provides an update to previous tipping point research by including new evidence. The objective of this evidence map is to provide a collated summary of the existing body of literature addressing the effects of tipping points, with a specific focus on freshwater ecosystems; it was initiated to help support ecological management in Canadian freshwaters but is intentionally global in scope. We describe key characteristics of the evidence base, including the number of publications, the use of tipping points terminology in these publications, the study locations and designs, the habitats and aquatic taxa, the single and multiple anthropogenic drivers of tipping points, and the measured outcomes (at the population or community level) of tipping points. We build on previous research by including: (1) observational studies (in addition to experimental studies) and paleoecological evidence; (2) a more recent search that includes both gray and peer-reviewed literature, capturing new evidence; and (3) a description of whether studies assess the reversibility of tipping points. For this evidence map, we focus primarily on tipping points and alternative stable states, using the terminology and definitions proposed by Carrier-Belleau et al. (2022).
Methods

To improve the rigor, transparency, and repeatability of our methods, this synthesis was developed adopting best practices from the Collaboration for Environmental Evidence (2018) and ROSES reporting standards (Haddaway et al., 2018). At the beginning of this mapping exercise, we established an advisory team made up of Canadian scientists with knowledge of freshwater ecosystems and tipping points, and of literature review experts. The advisory team consulted on all aspects of the work, including the development of the search string, the inclusion criteria for article screening, and the data extraction strategy.

Searching for articles

We conducted two literature searches in Web of Science Core Collection (WoSCC), accessed through the University of Ottawa's institutional subscription. The first search was conducted in October 2021 and the second in January 2022 (Table 1). We conducted the second search to capture additional terms identified from the work of Carrier-Belleau et al. (2022). A list of potentially relevant search terms was developed in consultation with the advisory team and broken into two components: population (freshwater ecosystem terms) and exposure (tipping points terms). The review team then developed a set of search strings that were modified and refined iteratively through a scoping exercise that evaluated the sensitivity of the search terms and associated wildcards. The comprehensiveness of the search strings was tested using a list of benchmark papers (Supplementary material 1) that were identified as relevant for this map by the advisory team. Search terms for both searches were limited to the English language due to project resource restrictions; however, no language, geographic, or document type restrictions were applied during the searches. We refined the results by using the post-query filter "research areas" and excluded papers from irrelevant disciplines such as medicine or criminology (Table 1). All articles found by the WoSCC searches were exported into EPPI-Reviewer Web (Thomas et al., 2022; https://eppi.ioe.ac.uk/EPPIReviewer-Web/home). Prior to screening, duplicates were identified and removed. We also issued a call for evidence to target gray literature sources (i.e., theses, government documents, consultant reports, etc.), which was distributed to relevant mailing lists and social media platforms (Oct/Nov 2021).

Screening and eligibility

Screening process

We screened articles found in WoSCC at two distinct stages: (1) title and abstract; and (2) full text. We screened articles found via the gray literature call at the full-text stage; these were not included in consistency checks. We used a semi-automated approach for title and abstract screening by employing a text-based machine learning algorithm in the EPPI-Reviewer software to prioritize relevant articles (Thomas, 2013). During this priority screening we identified a logical cut-off point (i.e., a plateau where new articles were no longer being included), at which point title and abstract screening was stopped. All full-text screening was done in Microsoft Excel.
Prior to screening all articles, we performed a consistency check to ensure that consistent and repeatable decisions were being made by reviewers. This included allocating a subset of 167 articles at the title and abstract stage (2% of all WoSCC search results) and 18 of 492 articles at the full-text stage (4% of included articles from the WoSCC searches) for each reviewer to screen independently. After the consistency-check screening, we compared decisions; any disagreements among reviewers were discussed and the inclusion criteria clarified before moving forward. For complex cases, the review team consulted to discuss further. Reviewers did not screen, at either stage, any article on which they were an author. We made attempts to retrieve missing articles by requesting them via University of Ottawa and Carleton University Interlibrary Loans. No formal study validity assessment (i.e., of study susceptibility to bias) was performed on included articles. However, the metadata extracted on study design allowed us to provide a basic overview of the robustness and relevance of the evidence (i.e., internal validity), which was incorporated into the discussion of results to provide recommendations for future research needs and considerations.

Table 1. Search strings used to execute searches in Web of Science Core Collection. Both searches used the same population string and post-query filter; the search dates were 25 October 2021 and 27 January 2022.

Population (both searches): TS=(aquatic OR "fresh water" OR freshwater OR stream$ OR river$ OR fluvia* OR lake$ OR pond$ OR wetland$ OR reservoir$ OR canal$ OR marsh* OR swamp$ OR fen$ OR bog$ OR mire$ OR riparian OR tributar* OR effluent OR lentic OR creek$ OR brook$ OR basin$ OR ditch* OR pool$ OR "Headwater Drainage Feature" OR lotic)

AND exposure (search 1): TS=("cumulative effect$" OR "tipping point$" OR "regime shift$" OR "ecosystem shift$" OR "cascading effects" OR snowballing OR "alternative stable state$" OR "critical threshold$" OR "early warning$" OR "unstable equilibrium state$" OR "catastrophic bifurcation" OR "tipping elements")

AND exposure (search 2): TS=("catastrophic shift$" OR "state shift$" OR "critical transition$" OR "phase transition$" OR "fold bifurcation$" OR "bifurcation point" OR breakpoint OR "punctuated equilibrium" OR "ecological threshold$")

NOT post-query filter "research areas" (both searches): SU=("BUSINESS" OR "BUSINESS FINANCE" OR "DENTISTRY ORAL SURGERY MEDICINE" OR "CRIMINOLOGY PENOLOGY" OR "CRITICAL CARE MEDICINE" OR "ECONOMICS" OR "EMERGENCY MEDICINE" OR "HEALTH CARE SCIENCES SERVICES" OR "HEALTH POLICY SERVICES" OR "GERIATRICS GERONTOLOGY" OR "HUMANITIES MULTIDISCIPLINARY" OR "HOSPITALITY LEISURE SPORT TOURISM" OR "MEDICINE GENERAL INTERNAL" OR "MEDICAL INFORMATICS" OR "MEDICAL LABORATORY TECHNOLOGY" OR "MEDICINE RESEARCH EXPERIMENTAL" OR "MEDICINE LEGAL" OR "OBSTETRICS GYNECOLOGY" OR "NUTRITION DIETETICS" OR "NURSING" OR "ORTHOPEDICS" OR "PEDIATRICS" OR "PERIPHERAL VASCULAR DISEASE" OR "PHARMACOLOGY PHARMACY" OR "PRIMARY HEALTH CARE" OR "PSYCHIATRY" OR "PSYCHOLOGY APPLIED" OR "PSYCHOLOGY CLINICAL" OR "PSYCHOLOGY EXPERIMENTAL" OR "PSYCHOLOGY MULTIDISCIPLINARY" OR "PUBLIC ADMINISTRATION" OR "SOCIAL WORK" OR "TELECOMMUNICATIONS" OR "SURGERY")

Eligibility criteria

The criteria summarized in Table 2 had to be met for articles to be included in the evidence map.

Table 2. Inclusion and exclusion criteria.

Population (freshwater ecosystems). Included: (i) any freshwater ecosystem (stream, lake, river, reservoir, canal, etc.); (ii) within any climate; (iii) in any geographical range; (iv) in which any taxa that are aquatic or have aquatic stages in their life cycle are being studied. Excluded: terrestrial ecosystems, or aquatic non-freshwater ecosystems such as estuaries, marine, and terrestrial fringe habitats.

Exposure (tipping points). Included: tipping points caused by (i) humans; (ii) climate change; or (iii) single events that accelerated a trophic collapse, in which that single event served as a final push for the collapse but was not the sole cause; and in which there was a direct link between freshwater ecosystems and their tipping points (for example, cumulative effects on ecosystems over time, alternative stable states, or trophic cascades). These cumulative effects were (i) the result of multiple drivers, or (ii) a single driver that affected entire ecosystems (e.g., eutrophication). Excluded: tipping points caused by (i) nature (e.g., Holocene shifts in stable states) or (ii) sudden catastrophes (e.g., oil spills) instead of cumulative effects; or studies (i) in which single-species collapses were not linked to ecosystem changes; (ii) that discussed species or population changes without discussing the driver(s) that caused them and their system-wide implications; or (iii) that mentioned cumulative effects only as potential outcomes or next steps for future research.

Study design. Included: primary research studies that report empirical findings (qualitative and/or quantitative data) involving field-based experimental manipulations or observations, laboratory experiments, mesocosms, and combinations thereof. Excluded: studies that (i) were not supported by empirical data; (ii) reported anecdotal evidence; or (iii) were purely theoretical, review papers, or policy discussions.

Language. Included: English or Spanish at full text. Excluded: studies written in other languages at full text.
"SURGERY") Search date 25-Oct-21 27-Jan-22 . . Data coding strategy Following full-text screening, we conducted meta-data extraction on all included articles.When multiple relevant studies were reported in a single article, and/or multiple datasets were used to analyze responses of different taxa to a tipping point being reached, we entered each study and/or dataset as independent lines in the codebook (refer to Table 3, for definitions of terms used throughout the evidence map).We identified and combined articles that reported data that could be found elsewhere or that could be combined with another more complete source.Here, we identified the most comprehensive article as the primary study and less complete sources as Supplementary material. In developing the evidence map data extraction form and codebook (i.e., code sheet for all codes used in extraction form), we identified the following key variables through scoping activities and discussion with the advisory team: (1) bibliographic information; (2) study location [i.e., country, type of freshwater ecosystem, habitat type sensu (WWF/TNC, 2019)]; (3) system information (i.e., type of driver, tipping points terminology, reversibility); (4) taxonomic information (i.e., type of organism, number of focal species); (5) study design (i.e., study duration, type of study); and (6) outcome information (i.e., biological indicators).We developed the coding options within these key variables in a partly iterative process, adding new categories and options as consistency checks, and then data extraction proceeded.When determining whether a study had assessed the reversibility of tipping points, we used a "Yes" code if the study incorporated reversibility into a before/after study design or reported data on a shift back to the "original" state (e.g., articles that evaluated the success of a restoration action, before/after state of an ecosystem).We used a "No" code when reversibility was not assessed, or authors only suggested possible reversibility but their statements were not supported by empirical data.In most cases, we extracted data based on author reported information, although we identified major habitat types (e.g., temperate floodplain rivers and wetlands using the Freshwater Ecoregion of the World Interactive Map (Abell et al., 2008).We coded data that were missing or unclear as "unclear". We conducted a consistency check (i.e., cross-checking) at the data extraction stage with a subset of nine articles to ensure that data extraction was conducted in a consistent and repeatable manner.When inconsistencies arose, we discussed discrepancies amongst the reviewers and included additional guidance in the codebook. . . Data mapping method We created the evidence map database in Microsoft Excel and provided the number and key characteristics of the studies found on tipping points in freshwater ecosystems (see https://www.feow.org/ecoregions/interactive-map . Included Excluded Population (freshwater ecosystems) (i) any freshwater ecosystem (stream, lake, river, reservoir, canal, etc.); (ii) within any climate; (iii) in any geographical range; (iv) in which any taxa that are aquatic or have aquatic stages in their life cycle are being studied. Terrestrial ecosystems or aquatic non-freshwater ecosystems such as estuaries, marine, and terrestrial fringe habitats. 
We used descriptive statistics to describe the key variables, summarizing information in figures.

We compiled the distribution and frequency of the evidence base into structured heatmaps showing linkages between two categorical variables [e.g., tipping point drivers (columns) and taxonomic responses (rows) in the presence of tipping points, and the associated regime shift components (such as alternative stable states or hysteresis)]. Because studies could include multiple interactions between drivers and responses, we mapped individual studies to more than one cell where applicable; a sketch of this cross-tabulation logic is given below. We describe results narratively at the level of the study (see definitions in Table 3). Note that the evidence map did not estimate or validate the direction, magnitude (including effect size), or statistical significance of the effects of drivers resulting in tipping points and causing taxon-specific responses; rather, it was used to identify potential gaps in the available evidence base of tipping points research (i.e., subtopics that are un- or under-reported in the evidence base and may benefit from further primary research) and evidence clusters (i.e., areas with a higher frequency of studies that may be suitable for deeper synthesis).
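As a minimal illustration of the counting logic behind such a heatmap, the following Octave sketch cross-tabulates driver-taxon codings into a frequency matrix. The category labels and case rows are invented for illustration and are not data from this evidence map.

```octave
% Cross-tabulate coded cases into a taxa-by-drivers frequency matrix.
drivers = {"chemical", "physical", "biological", "climate change"};
taxa    = {"microbiota", "invertebrates", "plants", "fish"};

% One row per coded case: {driver, taxon}. A study coded with several
% driver-taxon pairs contributes several rows, i.e., several cells.
cases = {"chemical",       "microbiota";
         "chemical",       "plants";
         "climate change", "fish";
         "chemical",       "microbiota"};

counts = zeros(numel(taxa), numel(drivers));
for k = 1:rows(cases)
  i = find(strcmp(taxa,    cases{k, 2}));
  j = find(strcmp(drivers, cases{k, 1}));
  counts(i, j) += 1;
endfor

imagesc(counts);                % render the heatmap
set(gca, "xtick", 1:numel(drivers), "xticklabel", drivers,
         "ytick", 1:numel(taxa),    "yticklabel", taxa);
colorbar;
```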
Results

Literature searches and screening

The searches in WoSCC yielded 10,782 articles (Figure 1), of which 267 were identified as duplicates and removed from the screening process. This resulted in 10,515 articles for title and abstract screening. Using EPPI-Reviewer Priority Screening (Thomas, 2013), we stopped title and abstract screening after 95% of articles (9,977/10,515) had been screened, as a plateau was reached with no new inclusions after 2,084 consecutive articles. The remaining 538 articles were therefore assumed to be irrelevant and excluded at the title and abstract stage. Of the 9,977 articles screened at title and abstract, 9,522 were excluded, yielding 455 articles for full-text screening. Of these articles, one was not retrievable through University of Ottawa or Carleton University subscriptions or Interlibrary Loans. Full-text screening removed 281 articles, most of which were excluded because of an irrelevant exposure (e.g., the article did not look at a cumulative impact at the ecosystem level and instead focused on sudden catastrophes or single-species collapse), target ecosystem (e.g., the article examined a tipping point or associated regime shift in a non-freshwater ecosystem), or outcome (e.g., the article only measured impacts on abiotic factors such as turbidity and nutrient levels). Articles excluded at full text, with reasons for their exclusion, can be found in Supplementary material 3. An additional seven research items from pre-screened gray literature submissions were obtained via social media/email and moved forward to data extraction.

A total of 181 articles were initially included for data extraction. We excluded four articles at this stage, including three that were considered Supplementary material and one that was a missed duplicate. This resulted in 219 studies from 177 articles included in the evidence map after data extraction (see Figure 1 for a flow diagram of the inclusion/exclusion process).

Summary of the evidence base

Publication trends

Article publication dates ranged from 1993 to 2022. Most of the articles were published after 2016, suggesting an increased focus on tipping points in the last seven years (Figure 2).

The terminology that authors used to describe tipping point research changed over time (Figure 3), from being dominated by three terms (i.e., alternative stable state, cascading effects, and stable state) in the 1990s, to 16 terms in use after 2016. Frequency of use varied depending on the time period, and new terms came into use at various points. For example, the term "alternative stable state" was common between 1996 and 2000 (of the eight articles published during this period, it was the only term used), but its proportional use was less common in other periods, and "tipping point" did not come into common usage until after 2011.

Of the author terms used five or more times, the most frequently used were: regime shift (125/431 cases; 29%); alternative stable state (96; 22%); and threshold (65; 15%). All other terms were used <30 times each. When comparing author terms with the terms proposed in Carrier-Belleau et al. (2022), called "standardized terms" hereafter, breakpoint was most frequently consistent with this terminology (23/23 cases), followed by regime shift (121/125 cases), alternative stable state (89/96 cases), and tipping point (21/26 cases). Hysteresis was used consistently only 50% of the time, or did not match any definition clearly enough to be assigned (Figure 4).
Study location

A total of 234 cases were studied in 44 countries (one additional case did not report the study location). Cases took place in Europe (83 cases; 35%), North and Central America (72 cases; 31%), and Asia (47 cases; 20%) (Figure 5). Cases in South America, Africa, Oceania, and Eurasia accounted for the remaining 14% (Figure 5). The most frequently studied countries were the United States (47 cases; 20%), where the most represented state was Michigan; China (38 cases; 16%), where the most represented province was Hubei Province; and Canada (21 cases; 9%), where the most represented province/territory was Nunavut (Supplementary Figures S1a-c; Supplementary material 4).

Study design

Study designs were field-based, mesocosm-based, laboratory-based, or a combination of different study designs. Among studies with a single study design, 157 studies (72% of 219 studies) were field-based (149 observational, eight experimental), 26 studies (12%) were mesocosm experiments, and five studies (2%) were laboratory experiments. For studies conducted using a combination of different study designs (31 studies; 14%), the majority combined field-based assessments with a laboratory or modeling component.

Figure caption: Change in author terminology for tipping point literature through time. The proportion of cases was determined from the total number of cases per five-year publication increment. Note that the first and last are partial increments due to data availability. Only author terminology used five or more times in the database was considered.

Figure caption: Geographic distribution of evidence, displaying the number of cases per country. Since some studies were conducted in more than one country, counts are the number of cases. Map created with ArcGIS Pro (Esri Inc., Redlands, CA; https://www.esri.com/en-us/arcgis/products/arcgis-pro/overview).

There was insufficient information to determine study duration for 28 cases (12%). For studies using reconstructed sediment or peat cores or other historical sources (38 cases; considered here separately from other field-based monitoring studies for this key variable), the duration of the reconstructed records ranged from 60 to 2,000 years.

Ecosystems and major habitat types

The most studied ecosystems were lakes (109/219 studies; 50%), followed by rivers (22; 10%) and streams (20; 9%). Wetlands, ponds, peatlands, and reservoirs were considered in eight or fewer studies each (12%). A total of 31 studies (14%) were carried out in mesocosm/laboratory settings and could therefore not be attributed to a particular ecosystem. Nine studies (4%) took place in more than one type of ecosystem, and three studies (1%) took place in other types of ecosystems, such as navigation pools (a cross between a river and a reservoir), bay transition zones, or ditches. Therefore, of the studies that reported specific individual freshwater ecosystems, most occurred in lentic systems (61%) (Figure 7).

Figure caption: Frequency of study duration in years. An additional 28 cases did not specify study duration. Studies using reconstructed sediment or peat cores, or other historical data, are not included in the counts.
Regarding major habitat types, most of the cases took place in temperate floodplain rivers and wetlands (72/222 cases; 32%), followed by temperate coastal rivers (33; 15%). A similar number of cases took place in large lakes, polar freshwaters, temperate upland rivers, and tropical and subtropical coastal rivers, with between 13 and 17 cases each. Other major habitat types studied included tropical and subtropical floodplain rivers and wetlands, tropical and subtropical upland rivers, montane freshwaters, xeric freshwaters and endorheic basins, and large river deltas, with seven or fewer cases each. No cases took place on oceanic islands. Thirty-nine cases (18%) had either insufficient information to determine the major habitat type (i.e., information was unclear or not reported), or the major habitat type was not directly applicable (i.e., mesocosm or laboratory studies; Figure 7).

When considering ecosystems within major habitat types, the most common combination was lakes in temperate floodplain rivers and wetlands (48 cases; 22%). Other common combinations included lakes in temperate coastal rivers (18 cases; 8%) and mesocosm or laboratory studies not attributed to a particular ecosystem (28 cases; 13%) (Figure 7).

Figure caption: Distribution and frequency of cases (n = 222) occurring in different freshwater ecosystems (Abell et al., 2008) and major habitat types (WWF/TNC, 2019).

Anthropogenic drivers

There were 147 studies (67%) that investigated a single driver, 66 studies (30%) that investigated multiple drivers, and six studies (3%) that did not provide sufficient information to determine the type of driver studied. The most common single driver studied was chemical (67 studies; 31%), followed by climate change (28 studies; 13%), physical (25 studies; 11%), and biological (24 studies; 11%). Three studies (1%) examined other types of drivers, such as flooding (potentially linked to climate change; Laine and Frolking, 2019), land-cover changes (including a gradient of agricultural, urban, and impervious surfaces; Utz et al., 2009), and unspecified anthropogenic impacts due to agriculture (Krynak and Yates, 2018). The most common multiple-driver combinations studied were biological/chemical and chemical/physical (18 studies; 8% each), and biological/chemical/climate change (10 studies; 5%). All other combinations were considered in fewer than 10 studies each. A single study considered four drivers (Kovalenko et al., 2018).

Figure 9 summarizes the distribution and frequency of tipping points studies for various intersections of taxonomic groups, drivers, and outcomes. The number of cases per category is shown in brackets.

Tipping point identification and reversibility of effects

Most studies identified a point or date at which a tipping point occurred (142/221 cases; 64%), while 67 cases (30%) did not identify a specific moment, and 12 cases (∼5%) presented unclear information regarding the event or events that led to the tipping point.

A few cases (38/221; 17%) explored reversibility after a tipping point for a given driver and its outcomes on the studied taxa. Chemical drivers were the most studied type of driver for which reversibility was assessed (17 of 38 cases).
Out of the 38 cases that assessed reversibility after a tipping point, 27 (71%) identified a specific point in time at which the tipping point occurred, 10 (26%) did not, and one case (3%) provided unclear information. Out of the 183 cases that did not assess reversibility after a tipping point, 115 (63%) identified a specific point in time at which a tipping point occurred, 57 (31%) did not, and 11 cases (6%) provided unclear information (Figure 10).

Discussion

Review limitations

Although our search strategy did not intend to impose any regional restrictions on the captured evidence, it may have been inherently biased toward North American studies. For instance, due to project resource limitations, we could not conduct searches in additional databases or in other languages. Furthermore, most authors and advisory team members contributing to this map were Canadian. We attempted to mitigate some potential bias, in part, by supplementing the search with a broad call for gray literature on social media platforms. However, we acknowledge that the existing literature base is likely broader than what we captured through our search strategy and that the comprehensiveness of this map could be further improved by: (1) conducting searches in multiple databases, thesis repositories, and languages; (2) including information found in theoretical modeling studies; (3) incorporating a citation-chasing strategy (backwards and forwards citation), in addition to searching the bibliographies of relevant reviews; and (4) searching the websites of key organizations specializing in the study of tipping points to further capture additional gray literature. That noted, to our knowledge this evidence map provides the most comprehensive and up-to-date overview of the existing literature on tipping points research in freshwater ecosystems, identifying 219 studies from 177 articles.

General observations regarding the evidence base

Publication rates within the topic of tipping points appear to have been increasing linearly since 2007, suggesting stable (albeit slow) growth in research on the topic (i.e., at a rate of ∼1 article per year on average; Figure 2), rather than the exponential increase witnessed in many evidence syntheses (e.g., Bernes et al., 2015; Haddaway et al., 2017). However, this increase in the number of tipping points papers might be an artifact of an overall increase in the number of published articles and the emergence of new journals over the last few decades. According to the metrics reported in SJR Scopus data (https://www.scimagojr.com/), the total number of documents published in 2007 in the list of journals captured by this evidence map was 20,412, whereas by 2022 that number had more than quadrupled to 89,352 (Supplementary material 5). Furthermore, the use of tipping point terminology has varied over time (Figure 3), with fewer terms historically in use (i.e., alternative stable state, cascading effects, and stable state) and currently more than 16 terms in use. Terms such as "tipping point" have gradually become more popular. Carrier-Belleau et al.
(2022) found similar patterns in the frequency with which terms were used in publications, also noting that some terms were more frequently used than others depending on the habitat being studied (e.g., "tipping point" was more commonly used in terrestrial habitats compared to freshwater and marine ecosystems). In addition to changes in the frequency of term use, when comparing the context in which authors used terms with standardized definitions, terms such as "hysteresis" matched those definitions only half of the time. This implies that some terms can have multiple meanings and nuances, leading to confusion. For example, the term "threshold", as per the definition used in this evidence map, is a synonym for tipping point (the value/zone along an environmental gradient where small changes in driver(s) cause non-linear responses in system conditions, which lead to different states that are often irreversible) (Milkoreit et al., 2018; Carrier-Belleau et al., 2022). However, Suding and Hobbs (2009) defined "thresholds" as "points where even small changes in environmental conditions (underlying controlling variable) will lead to large changes in system state variables". Both definitions are similar, in that a "threshold" is a specific value or point, but they emphasize different aspects of the state change. Specifically, while our definition, in line with recent reviews on this subject, specifies that the new state is stable and/or potentially irreversible, the definition of Suding and Hobbs (2009) identifies these points as those that cause outsized alterations in state variables regardless of the final state.

Studies evaluating tipping points were most commonly conducted in the United States, China, and Canada (Figure 5), and focused primarily on lakes (Figure 7), suggesting geographical and ecosystem biases in the evidence base. This focus on tipping points in lakes was also reported in Carrier-Belleau et al. (2022) and is expected, as these systems are often considered models of complex dynamical systems reflecting how other freshwater ecosystems may work (Scheffer, 2009). However, lakes might have aroused particular research interest because they are large water bodies (and thus on the radar of the public and politicians) and some of them have immense socio-economic value (e.g., the Laurentian Great Lakes, Lake Veluwe, Lake Atitlan). Most studies used observational field-based methods for assessments and lasted only a short duration (i.e., <1 year). Impacts of crossing tipping points on microbiota were most frequently studied, followed by impacts on invertebrates and plants, most commonly measured through productivity (e.g., abundance, biomass) and/or diversity outcomes (Figures 8, 9). There was a general paucity of studies related to all other taxa. Two-thirds of the evidence base focused on investigating a single driver, most frequently chemical drivers, followed by climate change drivers. Additionally, there was a lack of data on the reversibility or restoration of freshwater ecosystems after a tipping point had been reached (Figure 10).

Implications for management and research

Our evidence map provides an overview of the scope and limitations of the tipping points literature in freshwater ecosystems, highlighting several points of consideration for managers and researchers.
First, most studies that we identified for the evidence map had sampling periods of <1 year. A similar pattern of short monitoring durations was noted by Smol (2019), who found that ∼60% of the environmental monitoring programmes published in the 2018 volume of the journal Environmental Monitoring and Assessment were <1 year long, and over 80% were <3 years in duration. This suggests that most studies examined a tipping point after or while it occurred, without collecting information about the events leading up to the tipping point or about its long-term impacts. In addition, in short-term studies it may be difficult to distinguish between the occurrence of a tipping point altering the state of an ecosystem and increased temporal variability in the response metrics that does not result in a change in state. An exception, in which sampling periods were short but reconstructed data accounted for long periods of time, were sediment and peat cores. For example, Monchamp et al. (2021) took sediment cores from Lake Joux (Switzerland) during 2016 and 2017 and reconstructed paleoecological data from approximately 1000 to 2015 CE using molecular techniques; the authors found that a change in nutrient regime had led to a regime shift during 1963-1969 CE. We acknowledge that there are multiple obstacles to designing both experimental and long-term studies, such as cost and logistics, but understanding the context in which tipping points occur can provide valuable information for the design of effective management strategies. Although long-term studies cannot be substituted by paleoecological studies, one strategy for overcoming the difficulties in establishing long-term studies could be to use paleoecological data to cover longer timescales and complement the findings of shorter-duration studies.

Second, most studies took place in lakes located in temperate regions and assessed the impacts of chemical drivers on microbiota. Accumulating knowledge of tipping points in lakes and chemical drivers can potentially result in the creation or improvement of mathematical models to assess the state of ecosystems. For example, Janssen et al. (2019) developed a model to better understand the effects of eutrophication and monitor water quality in stratified and non-stratified freshwater lakes. This kind of model can be useful for managers and policymakers in the development of early warning tools. However, for these tools to be more widely applicable and accurate, we need to improve our understanding of tipping points for: (i) different ecosystems (i.e., lotic systems, such as rivers and streams, but also other lentic systems such as wetlands, especially outside of temperate regions); (ii) un- or under-represented taxonomic groups (i.e., amphibians, mammals, and reptiles); and (iii) other types of drivers (i.e., physical and biological drivers, such as the creation of dams or the impacts of invasive species). In addition, most studies focused on productivity measures for target taxa, such as abundance and biomass. We were unable to find any studies that explicitly assessed population viability. This may have profound implications for conservation, since population viability analysis can potentially be used in the tipping points context for identifying thresholds or evaluating the feasibility and success of recovery actions (Boyce, 1992).
Third, this evidence map suggests a small number of evidence clusters (i.e., the most studied subtopics) that may warrant future evidence synthesis. We used an arbitrary cut-off of >25 cases to suggest these subtopics, acknowledging that there are currently, to our knowledge, no Collaboration for Environmental Evidence standards or guidelines for setting quantitative thresholds to identify knowledge clusters and gaps. We chose a relatively large threshold of >25 cases to identify knowledge clusters in order to increase the chance that these subsets will have a sufficiently large sample size for future secondary reviews (e.g., a narrative synthesis approach or even meta-analysis). We identified two potential subtopics of interest: (1) evaluations of a single chemical driver leading to a tipping point, considering (a) all aquatic taxa combined in relation to measured outcomes of abundance (53 cases), biomass (39 cases), and diversity (28 cases), and (b) microbiota alone in relation to changes in abundance (27 cases); and (2) evaluations of a (single) climate change driver leading to a tipping point, considering all available aquatic taxa combined in relation to changes in abundance (26 cases) (Figure 9). From these subtopics, we could ask questions such as: How does eutrophication alter the abundance of different taxonomic groups in freshwater ecosystems? What is the effect of pesticides on freshwater biodiversity? Digging deeper into these questions might be useful for understanding the effects of tipping points at broader ecosystem scales. For example, Lewis et al. (2021) conducted a mesocosm experiment testing the effects of different types of insecticides on zooplankton abundance, phytoplankton biomass, and leopard frog mass, and the potential interactive effects of these insecticides with different road salt concentrations. The authors found that not all insecticides had the same effects on the taxa studied, and that salt concentrations did not have the same interactive effects with all the insecticides. Extracting data from similar studies could provide insights into the direction of driver effects.

Lastly, few studies explored the effects of multiple drivers combined, and those that analyzed the interactions of various drivers focused most often on chemical drivers combined with biological, physical, or climate change drivers. Disentangling the effects of multiple driver interactions on target taxa and ecosystems can be challenging (Ormerod et al., 2010), not only because of logistical constraints in experimental designs (i.e., data availability for all drivers at appropriate temporal scales), but also because drivers can act synergistically or antagonistically with each other, and in non-linear or inconsistent directions (Folt et al., 1999). For instance, Hoveka et al.
(2016) found that the distribution of the top five freshwater invasive plants in South Africa could expand for some species and diminish for others under the effects of climate change; tipping points in these freshwater ecosystems could therefore result from the interaction between invasive species and climate change drivers. In addition, there is a lack of studies examining the reversibility of tipping points. Understanding if and how an ecosystem can return to its original (or near-original), usually more desirable, state requires a deep understanding of the pathways that led to the tipping point and of the pathways that can reverse it. The challenge is that the pathway to reverse a tipping point is often not the same as the pathway that led to it in the first place [i.e., hysteresis is present in the system (Beisner et al., 2003)]. For example, Jones (2020) examined the recovery of a section of the Potomac River (Virginia, USA). This river had a history of eutrophication due to phosphorus loading, and subsequent reductions in phosphorus loading (up to a 90% reduction in the 1980s) did not translate into a shift back to a clear state until ∼25 years later. This lagged response following an alternative pathway to recovery is an example of hysteresis; however, full recovery back to the original state may not always be possible or feasible, and we should strive to prevent an ecosystem from reaching a tipping point whenever possible.

Conclusion

The evidence base regarding tipping points in freshwater systems is growing, yet there are still numerous deficiencies in our knowledge that make it difficult for researchers and managers to understand uncertainty and make evidence-informed decisions. It is our hope that this evidence map will identify opportunities for researchers to address research gaps and for funding bodies to prioritize efforts to address those gaps. Identifying thresholds at which tipping points occur, so that they can be predicted and avoided, is a logical starting point for decision makers attempting to apply tipping point concepts and evidence in their given context. In the current context of climate change, timely action is needed; however, a look into the future needs to be accompanied by reflection on the past. It is possible that, due to shifting baselines (Pauly, 1995), what we now consider a healthy ecosystem was considered degraded in the past, and we need to generate intergenerational learning experiences so that future generations of researchers and managers are aware of current and past conditions.

Figure caption: ROSES flow diagram (Haddaway et al., 2018) indicating the results of the literature search, and the number of articles included or excluded at the screening and data extraction stages.

Figure caption: Publication years for the included articles. An additional three articles published in early 2022 were included but are not shown, since the searches covered only one month of 2022.

Figure caption: Comparison of author term usage to standardized terminology as defined by the review team and based on Carrier-Belleau et al. (2022).

Figure caption: Number of datasets (n = ) for each taxonomic group. No studies considered mammals or reptiles.
Figure caption: Distribution and frequency of cases (n = ) examining single and combined drivers of ecosystem change and resulting tipping points on taxonomic outcomes, by taxa. Reptiles and mammals were not present in the captured evidence base. The only two cases considering amphibians are denoted with an asterisk (*). Bio, Biological; Chem, Chemical; CC, Climate change; Phys, Physical.

Table caption: Outcome categories and definitions used to measure biological responses to ecosystem drivers and resulting tipping points.
2023-10-14T15:38:05.248Z
2023-10-11T00:00:00.000
{ "year": 2023, "sha1": "962d03dd197bee8a01c09e55c579d1de313bde47", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/ffwsc.2023.1264427/pdf?isPublishedV2=False", "oa_status": "HYBRID", "pdf_src": "ScienceParsePlus", "pdf_hash": "e569069db3486f8107514380f5114a6da853e116", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [] }
244803207
pes2o/s2orc
v3-fos-license
Ipatasertib plus paclitaxel for PIK3CA/AKT1/PTEN-altered hormone receptor-positive HER2-negative advanced breast cancer: primary results from cohort B of the IPATunity130 randomized phase 3 trial

Purpose PI3K/AKT pathway alterations are frequent in hormone receptor-positive (HR+) breast cancers. IPATunity130 Cohort B investigated ipatasertib–paclitaxel in PI3K pathway-mutant HR+ unresectable locally advanced/metastatic breast cancer (aBC).
Methods Cohort B of the randomized, double-blind, placebo-controlled, phase 3 IPATunity130 trial enrolled patients with HR+ HER2-negative PIK3CA/AKT1/PTEN-altered measurable aBC who were considered inappropriate for endocrine-based therapy (demonstrated insensitivity to endocrine therapy or visceral crisis) and were candidates for taxane monotherapy. Patients with prior chemotherapy for aBC or relapse < 1 year since (neo)adjuvant chemotherapy were ineligible. Patients were randomized 2:1 to ipatasertib (400 mg, days 1–21) or placebo, plus paclitaxel (80 mg/m2, days 1, 8, 15), every 28 days until disease progression or unacceptable toxicity. The primary endpoint was investigator-assessed progression-free survival (PFS).
Results Overall, 146 patients were randomized to ipatasertib–paclitaxel and 76 to placebo–paclitaxel. In both arms, median investigator-assessed PFS was 9.3 months (hazard ratio, 1.00, 95% CI 0.71–1.40) and the objective response rate was 47%. Median paclitaxel duration was 6.9 versus 8.8 months in the ipatasertib–paclitaxel versus placebo–paclitaxel arms, respectively; median ipatasertib/placebo duration was 8.0 versus 9.1 months, respectively. The most common grade ≥ 3 adverse events were diarrhea (12% with ipatasertib–paclitaxel vs 1% with placebo–paclitaxel), neutrophil count decreased (9% vs 7%), neutropenia (8% vs 9%), peripheral neuropathy (7% vs 3%), peripheral sensory neuropathy (3% vs 5%) and hypertension (1% vs 5%).
Conclusion Adding ipatasertib to paclitaxel did not improve efficacy in PIK3CA/AKT1/PTEN-altered HR+ HER2-negative aBC. The ipatasertib–paclitaxel safety profile was consistent with each agent's known adverse effects.
Trial registration NCT03337724.
Supplementary Information The online version contains supplementary material available at 10.1007/s10549-021-06450-x.

Introduction

The phosphoinositide 3-kinase (PI3K)/AKT pathway is frequently upregulated in cancer [1,2]. Activation of AKT, the central node of the PI3K/AKT pathway, promotes cell survival, proliferation, metabolism and growth [1,3], and is implicated in resistance to endocrine therapy [4]. PIK3CA/AKT1/PTEN alterations are frequently observed in breast cancer, including approximately 50% of patients with hormone receptor-positive (HR+) breast cancers, and contribute towards a negative prognosis and resistance to endocrine therapies [5][6][7][8][9]. Ipatasertib is a highly selective oral ATP-competitive small-molecule inhibitor of all three AKT isoforms [10]. Ipatasertib is being developed for the treatment of cancers in which PI3K/AKT pathway activation may be relevant for tumor growth or therapeutic resistance, and has demonstrated PI3K/AKT pathway inhibition in preclinical studies [10][11][12]. PTEN protein loss and PTEN or PIK3CA genetic alterations appeared to be associated with enhanced sensitivity to single-agent ipatasertib in cell lines and preclinical models [10,13].
In a phase 1b study, the combination of ipatasertib and paclitaxel was well tolerated and showed radiographic responses in patients with advanced/metastatic breast cancer, including HR+ disease [14]. In the randomized, phase 2 LOTUS trial, the addition of ipatasertib to paclitaxel improved progression-free survival (PFS) compared with paclitaxel alone in metastatic triple-negative breast cancer (TNBC), especially in patients whose tumors harbored alterations in PIK3CA, AKT1 and/or PTEN [15]. The phase 3 IPATunity130 trial included two independent randomized cohorts (Cohort A in TNBC and Cohort B in HR+ HER2-negative [HER2-] unresectable locally advanced or metastatic breast cancer [aBC]) evaluating ipatasertib plus paclitaxel combination therapy and a third single-arm signal-seeking cohort in patients with TNBC whose tumors did not have PIK3CA/AKT1/PTEN alterations (Cohort C) evaluating a triplet combination of ipatasertib, paclitaxel and atezolizumab. The two randomized cohorts are powered independently and designed to be analyzed separately. Here we report results from Cohort B, which evaluated ipatasertib in combination with paclitaxel for HR+ HER2- PIK3CA/AKT1/PTEN-altered aBC.

Study design and participants

In Cohort B of the IPATunity130 (NCT03337724) randomized, double-blind, placebo-controlled, phase 3 trial, eligible patients had to have HR+ (≥ 1% staining) HER2- PIK3CA/AKT1/PTEN-altered measurable aBC according to Response Evaluation Criteria in Solid Tumors (RECIST; version 1.1). Tumor PIK3CA/AKT1/PTEN alteration status (i.e., activating alterations in PIK3CA and/or AKT1, and/or inactivating alterations in PTEN, described in detail in Supplementary Table S1) was determined from the most recently available tumor tissue sample using the Foundation Medicine Inc (Cambridge, MA) next-generation sequencing Clinical Trial Assay (CTA). In addition, patients had to be inappropriate for endocrine-based therapy (i.e., demonstrated insensitivity to endocrine therapy or visceral crisis), a candidate for taxane monotherapy and have Eastern Cooperative Oncology Group performance status 0 or 1. Patients who had previously received chemotherapy for aBC or whose diagnosis of aBC was < 1 year since their last (neo)adjuvant chemotherapy were ineligible, as were patients with a history of or known presence of brain or spinal cord metastases. Prior cyclin-dependent kinase (CDK)4/6 inhibitors and PI3K/mammalian target of rapamycin (mTOR) inhibitors were permitted.

Procedures

Patients were randomized in a 2:1 ratio by investigators using an interactive web-response system to receive either oral ipatasertib (400 mg daily on days 1-21) plus intravenous paclitaxel (80 mg/m2 on days 1, 8 and 15) of a 28-day cycle, or placebo plus the same paclitaxel regimen. Randomization was stratified by three criteria: (neo)adjuvant chemotherapy (yes vs no), prior PI3K/mTOR inhibitor (yes vs no) and region (Asia-Pacific vs Europe vs North America vs rest of the world). To improve the management of diarrhea (commonly associated with ipatasertib and/or paclitaxel therapy), antidiarrheal prophylaxis (loperamide) was mandated for the first cycle for all patients, where permitted locally. Treatment was continued until disease progression (RECIST; version 1.1), unacceptable toxicity or patient withdrawal. Patients discontinuing paclitaxel or ipatasertib/placebo because of toxicity could continue on single-agent treatment. Crossover from placebo to ipatasertib was not permitted.
Tumors were assessed every 8 weeks by the investigators according to RECIST (version 1.1). After discontinuing treatment, patients were followed up every 3 months for survival and subsequent anticancer therapies. Patientreported outcomes (PROs) were assessed using selected scales of the European Organisation for Research and Treatment of Cancer Quality of Life Questionnaire Core 30 (EORTC QLQ-C30), administered at baseline, at day 1 of each subsequent cycle, and at the treatment discontinuation visit. Adverse events (AEs) were assessed and graded according to Common Terminology Criteria for Adverse Events (version 4.0). Endpoints The primary objective was to assess the efficacy of the ipatasertib plus paclitaxel combination as determined by investigator-assessed PFS. PFS was defined as the interval between randomization and the first occurrence of disease progression, as determined by the investigator according to RECIST (version 1.1), or death from any cause, whichever occurred first. A sensitivity analysis of PFS according to independent review committee (IRC) assessment was performed in a similar manner. Overall survival (OS; defined as the interval between randomization and death from any cause) was the key secondary endpoint. Other secondary endpoints included confirmed objective response rate (investigator assessed per RECIST [version 1.1]), duration of response in responding patients, clinical benefit rate (complete or partial response, or stable disease sustained for ≥ 24 weeks) in patients with measurable disease at baseline, PROs, and safety. Statistical analysis The planned sample size was 201 patients. For the primary analysis, 150 PFS events in the intent-to-treat (ITT) population were required to detect a hazard ratio of 0.62 with 80% power at a two-sided significance level of 5%. This corresponds to an increase in median PFS from 8.5 months in the control arm to 13.8 months in the ipatasertib-containing arm. If investigator-assessed PFS in the ITT population was significant at 5%, OS was to be tested hierarchically at the same significance level. Efficacy analyses were based on all randomly assigned patients (ITT population) according to the treatment arm to which patients were allocated. PRO analyses of Global Health Status/Quality of Life (GHS/QoL) were performed on randomized patients who had a baseline and at least one post-baseline PRO assessment (PRO-evaluable population); PRO analyses of time to ≥ 11-point confirmed deterioration in pain [16] were performed on the ITT population. Safety analyses were based on all patients who received at least one dose of ipatasertib, placebo or paclitaxel; patients were analyzed based on the treatment actually received. Patient population Between January 6, 2018 and March 29, 2019, 782 patients were screened for the trial, of whom 560 were considered screen failures, most commonly because of absence of PIK3CA/AKT1/PTEN alteration (n = 303). Ultimately, 222 patients were randomized: 146 to ipatasertib plus paclitaxel and 76 to placebo plus paclitaxel. Two patients in the ipatasertib plus paclitaxel arm were included in the efficacy analyses despite the most recent hormone receptor status identifying tumors as TNBC. Two patients (one in each arm) received no treatment and were therefore excluded from the safety analysis population (Fig. 
1 Baseline characteristics were generally well balanced, except for a higher proportion of patients in the ipatasertib plus paclitaxel arm with a disease-free interval of > 3 years (40% vs 29% in the placebo plus paclitaxel arm) or a chemotherapy-free interval > 3 years (31% vs 24%, respectively) ( Table 1). Prior therapy was balanced between arms and included (neo)adjuvant chemotherapy in 55% of patients, endocrine therapy for aBC in 46%, PI3K/ mTOR inhibitor in 24% (predominantly everolimus) and CDK4/6 inhibitor in 26%. According to European Society for Medical Oncology (ESMO) definitions [17], 18% of patients had primary endocrine resistance and 45% had secondary endocrine resistance. A further 18% of patients did not meet the ESMO definitions for endocrine resistance but were deemed by the investigator to have visceral crisis. Within the subset of 120 patients who had received no prior endocrine therapy in the advanced setting, 18 (15%) had primary endocrine resistance, 34 (28%) had secondary endocrine resistance and 39 (33%) had visceral crisis without endocrine resistance. Among the 144 patients in the ipatasertib plus paclitaxel arm and 75 in the placebo plus paclitaxel arm with measurable disease, the objective response rate was 47% in both arms (95% CI 38-58% and 35-59%, respectively), including complete response in four patients (3%) in the ipatasertib plus paclitaxel arm versus none in the placebo plus paclitaxel arm. The median duration of response was 9.2 months in both arms (95% CI 7.2-11.3 months in the 67 responders in the ipatasertib plus paclitaxel arm; 95% CI 6.8-12.5 months in the 35 responders in the placebo plus paclitaxel arm). The clinical benefit rate was 69% (95% CI 61-76%) in the ipatasertib plus paclitaxel arm and 65% (95% CI 53-76%) in the placebo plus paclitaxel arm. OS results were immature (deaths in 23% of the ipatasertib plus paclitaxel arm vs 29% of the placebo plus paclitaxel arm). At this interim analysis, median OS was not evaluable in ipatasertib-treated patients and 20.9 months (95% CI 17.3-not evaluable) in the placebo plus paclitaxel arm (hazard ratio, 0.72, 95% CI 0.42-1.24). Patient-reported outcomes Completion rates for PRO questionnaires exceeded 80% in each arm up to cycle 23 and at the study drug discontinuation visit. Overall, 207 patients were evaluable for mean change from baseline in GHS/QoL (134 in the ipatasertib plus paclitaxel arm, 73 in the placebo plus paclitaxel arm). Patients' GHS/QoL mean scores at baseline were 68.8 in the ipatasertib plus paclitaxel arm and 63.7 in the placebo plus paclitaxel arm, and were maintained in both treatment arms until Cycle 10 (at which point, less than half of the PRO-evaluable population in each arm remained on treatment, precluding meaningful analysis beyond Cycle 10) (Supplementary Figure S1). No clinically meaningful deterioration (i.e., a ≥10-point decrease [18]) from baseline values was observed in either arm. Median time to confirmed deterioration in pain (as measured by the pain scale of the EORTC QLQ-C30) was not evaluable in either treatment arm (confirmed deterioration in 38% of patients in the ipatasertib plus paclitaxel arm versus 30% in the placebo plus paclitaxel arm). However, the Kaplan-Meier plot of time to confirmed ≥11-point deterioration in pain from baseline showed a sustained separation of the curves at 6 months in favor of the placebo plus paclitaxel arm (hazard ratio, 1.36; 95% CI 0.83-2.22) (Supplementary Figure S2). 
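The event target quoted in the Statistical analysis section can be roughly reproduced with Schoenfeld's approximation for a log-rank test. The snippet below is a back-of-the-envelope check, not the trial's actual design code; the 2:1 allocation fraction and the exponential-PFS assumption used for the median conversion are our assumptions, and the protocol's 150-event figure may reflect additional design considerations.

```python
# Schoenfeld approximation: required events D satisfies
#   D = (z_{1-alpha/2} + z_{power})^2 / (p * (1 - p) * (ln HR)^2)
# where p is the fraction allocated to the experimental arm (2/3 here).
from statistics import NormalDist
import math

z = NormalDist().inv_cdf
alpha, power, hr = 0.05, 0.80, 0.62
p = 2 / 3  # 2:1 randomization

events = (z(1 - alpha / 2) + z(power)) ** 2 / (p * (1 - p) * math.log(hr) ** 2)
print(f"required PFS events ~ {events:.0f}")  # ~155, close to the planned 150

# Under proportional hazards with exponential PFS, medians scale as 1/HR:
print(f"implied experimental-arm median ~ {8.5 / hr:.1f} months")  # ~13.7 ~ 13.8
```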
Most patients (92% in the ipatasertib plus paclitaxel arm vs 82% in the placebo plus paclitaxel arm) received at least one dose of loperamide for diarrhea prophylaxis or treatment. The proportion receiving prophylactic loperamide was similar in the two treatment arms (61% vs 64%, respectively) but a higher proportion of patients in the ipatasertib plus paclitaxel arm received loperamide to treat diarrhea (78% vs 29%, respectively). In the ipatasertib plus paclitaxel arm, 96% of diarrhea episodes (431 of 448) resolved. The median time to resolution of the first episode of diarrhea (any grade) was 15 days (95% CI 8-18 days) and the median duration of the first episode of grade ≥ 3 diarrhea was 3 days (95% CI 1-6 days). Serious AEs were more common in the ipatasertib plus paclitaxel arm (19%) than in the placebo plus paclitaxel arm (12%). AEs were fatal in five patients (3%) in the ipatasertib plus paclitaxel arm and one patient (1%) in the placebo plus paclitaxel arm; two of these deaths were considered related to study treatment (grade 5 febrile neutropenia related to both drugs in the ipatasertib plus paclitaxel arm and grade 5 sepsis related to paclitaxel in the placebo plus paclitaxel arm; both patients had visceral crisis at screening). The remaining four deaths in the ipatasertib plus paclitaxel arm were from hospital-acquired pneumonia, respiratory distress, unexplained death, and general physical health deterioration/ road traffic accident (each reported in one patient). Discussion In Cohort B of the randomized phase 3 IPATunity130 trial, adding ipatasertib to paclitaxel did not improve PFS in PIK3CA/AKT1/PTEN-altered HR+ HER2-aBC. Ipatasertib plus paclitaxel was well tolerated, and the safety profile of the regimen was consistent with the known risks of each agent. No new safety signals were identified. OS follow-up is ongoing. The results from IPATunity130 Cohort B are consistent with findings from the randomized, phase 2 BEECH trial of the oral AKT inhibitor capivasertib in combination with first-line paclitaxel in HR+ HER-aBC [19]. In BEECH, similar to the present trial, combining an AKT inhibitor with paclitaxel did not significantly improve PFS in either the overall population or the PIK3CA-altered population. The target patient population for IPATunity130 Cohort B was patients with endocrine-resistant disease; however, only a quarter of patients had received prior CDK4/6 inhibitors. Although it is tempting to hypothesize that enrollment of a less endocrine-resistant patient population could explain the lack of PFS benefit, subgroup analyses do not support this hypothesis. The subgroup of patients with greater exposure to prior endocrine therapy did not show enhanced benefit from ipatasertib (Fig. 3). Another possible explanation for the lack of benefit is the higher proportion of patients discontinuing paclitaxel because of AEs in the ipatasertib arm, which may have compromised the efficacy of paclitaxel. Patients in the ipatasertib plus paclitaxel arm received a shorter duration and lower cumulative dose of paclitaxel than those in the placebo plus paclitaxel arm. This may have limited the ability to isolate the effect of ipatasertib. Of note, median PFS was identical in the two treatment arms and there was no signal of benefit from ipatasertib. There may be important lessons to learn from the duration and intensity of paclitaxel exposure, and the challenges of introducing a drug in HR+ HER2-breast cancer with side effects differing from those of endocrine therapies. 
Of note, a similar proportion of patients (approximately one-third) in each arm experienced a confirmed deterioration in pain, and patients' baseline quality of life was maintained while receiving ipatasertib plus paclitaxel treatment, showing no detrimental effect on patients' overall quality of life with ipatasertib. Consistent with the safety profile of ipatasertib plus paclitaxel observed in the LOTUS trial [15], there was more all-grade diarrhea, nausea and vomiting with ipatasertib. The incidence of diarrhea was lower in IPATunity130 Cohort B than in LOTUS, with only half as many ipatasertib-treated patients experiencing grade 3 diarrhea (11% in IPATunity130 Cohort B vs 23% in LOTUS). The observed reduction may be explained by the implementation of several diarrhea management measures in the IPATunity130 trial design, including prophylactic loperamide administration, improved patient education and AE management guidance, as well as greater investigator awareness and familiarity with the drug. Hyperglycemia has been observed in various clinical trials of drugs targeting the PI3K/AKT pathway [20-22] and is generally considered to be a class effect of these therapies. However, in IPATunity130 Cohort B, the proportion of patients experiencing hyperglycemia was lower than in trials of other PI3K/AKT inhibitors [20, 23-25] and similar in the two treatment arms (14% with ipatasertib plus paclitaxel vs 12% with placebo plus paclitaxel). The proportion of patients with grade ≥ 3 hyperglycemia was low (2% vs 0%, respectively). Overall, results from IPATunity130 Cohort B and the BEECH [19] trial differ from findings of trials combining a PI3K/AKT inhibitor with endocrine therapy (SOLAR-1 [20] and FAKTION [26]). PI3K/AKT signaling promotes estrogen-independent growth of HR+ HER2- breast cancer cells, which can be inhibited by combining PI3K inhibitors with anti-estrogens [8, 9, 27-29]. The SOLAR-1 randomized, phase 3 trial combined the PI3K inhibitor alpelisib with fulvestrant in patients with PIK3CA-mutant HR+ HER2- breast cancer [20] and the FAKTION trial combined the oral AKT inhibitor capivasertib with fulvestrant after relapse or progression on an aromatase inhibitor [26]. In line with preclinical findings, both of the trials showed a PFS benefit from the addition of a PI3K/AKT pathway inhibitor to endocrine therapy. Considering all available data for AKT inhibition in HR+ HER2- aBC, it appears that endocrine blockade may be essential for efficacy in this setting. AKT induces endocrine receptor signaling, which may counter the potential benefit of an AKT inhibitor. Taken together, these results suggest that the benefit of AKT inhibition will be greatest if estrogen receptors are targeted alongside AKT inhibition. Ongoing trials of ipatasertib in breast cancer focus on combinations with endocrine therapy and/or immunotherapy.
2021-12-03T14:49:41.001Z
2021-12-03T00:00:00.000
{ "year": 2021, "sha1": "f8da7b0ef14fd08195043fd04cc756c91fec7c50", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1007/s10549-021-06450-x.pdf", "oa_status": "HYBRID", "pdf_src": "Springer", "pdf_hash": "f8da7b0ef14fd08195043fd04cc756c91fec7c50", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
16959138
pes2o/s2orc
v3-fos-license
Optical performance monitoring in coherent optical OFDM systems

Optical performance monitoring is an indispensable feature for optical systems and networks. In this paper, we propose the concept of optical performance monitoring through channel estimation by receiver signal processing. We show that in coherent-optical-orthogonal-frequency-division-multiplexed (CO-OFDM) systems, critical optical system parameters including fiber chromatic dispersion, Q value, and optical signal-to-noise ratio (OSNR) can be accurately monitored without resorting to separate monitoring devices. ©2007 Optical Society of America

OCIS codes: (060.0060) Fiber optics and optical communications; (060.1660) Coherent communications; (060.5060) Phase modulation

References and links
1. D. C. Kilper, R. Bach, D. J. Blumenthal, D. Einstein, T. Landolsi, L. Ostar, M. Preiss, and A. E. Willner, "Optical performance monitoring," J. Lightwave Technol. 22, 294–304 (2004).
2. S. K. Shin, K. J. Park, and Y. C. Chung, "A novel optical signal-to-noise ratio monitoring technique for WDM networks," in OFC 2000 2, Mar. 7–10, 182–184 (2000).
3. W. Chen, R. S. Tucker, X. Yi, W. Shieh, and J. Evans, "Optical signal-to-noise ratio monitoring using uncorrelated beat noise," IEEE Photon. Technol. Lett. 17, 2484–2486 (2005).
4. G. Lu, M. Cheung, L. Chen, and C. Chan, "Simultaneous PMD and OSNR monitoring by enhanced RF spectral dip analysis assisted with a local large-DGD element," IEEE Photon. Technol. Lett. 17, 2790–2792 (2005).
5. B. Fu and R. Q. Hui, "Fiber chromatic dispersion and polarization-mode dispersion monitoring using coherent detection," IEEE Photon. Technol. Lett. 17, 1561–1563 (2005).
6. C. Xie, D. Kilper, L. Moller, and R. Ryf, "Orthogonal polarization heterodyne OSNR monitoring technique insensitive to polarization effects," Tech. Dig., Optical Fiber Communication Conference, paper PDP10, Anaheim, California (March 2006).
7. T. Luo, Z. Pan, S. Nezam, L. Yan, A. Sahin, and A. Willner, "PMD monitoring by tracking the chromatic-dispersion-insensitive RF power of the vestigial sideband," IEEE Photon. Technol. Lett. 16, 2177–2179 (2004).
8. S. Dods, D. Hewitt, P. Farrell, and K. Hinton, "A novel broadband asynchronous histogram technique for optical performance monitoring," Tech. Dig., Optical Fiber Communication Conference, paper OThH2, Anaheim, California (March 2005).
9. Z. Li and G. Li, "Chromatic dispersion and polarization-mode dispersion monitoring for RZ-DPSK signals based on asynchronous amplitude-histogram evaluation," J. Lightwave Technol. 24, 2859–2866 (2006).
10. F. Buchali, "Electronic dispersion compensation for enhanced optical transmission," Tech. Dig., Optical Fiber Communication Conference, paper OWR5, Anaheim, California (March 2006).
11. W. Shieh and C. Athaudage, "Coherent optical orthogonal frequency division multiplexing," IEE Electron. Lett. 42, 587–589 (2006).
12. W. Shieh, W. Chen, and R. S. Tucker, "Polarization mode dispersion mitigation in coherent optical orthogonal frequency division multiplexed systems," IEE Electron. Lett. 42, 996–997 (2006).
13. Y. Li, L. J. Cimini, and N. R. Sollenberger, "Robust channel estimation for OFDM systems with rapid dispersive fading channels," IEEE Trans. Commun. 46, 902–915 (1998).
14. S. Hara and R. Prasad, Multicarrier Techniques for 4G Mobile Communications (Artech House, Boston, 2003).
15. J. Proakis, Digital Communications, 3rd ed. (WCB/McGraw-Hill, New York), Chap. 5.
16. N. S. Bergano, F. W. Kerfoot, and C. R.
Davidson, "Margin measurements in optical amplifier system," IEEE Photon. Technol. Lett. 5, 304–306 (1993).
17. J. D. Berger and D. Anthon, "Tunable MEMS devices for optical networks," Opt. Photon. News 14, 43–49 (2003).
18. E. Ip, J. Kahn, D. Anthon, and J. Hutchins, "Linewidth measurements of MEMS-based tunable lasers for phase-locking applications," IEEE Photon. Technol. Lett. 17, 2029–2031 (2005).
19. A. Liu, G. J. Pendock, and R. S. Tucker, "Improved chromatic dispersion monitoring using single RF monitoring tone," Opt. Express 14, 4611–4616 (2006).

Introduction

Optical performance monitoring is an indispensable feature for future optical networks that are envisioned to be all-optical in the core [1]. The fundamental challenge for all-optical networks hinges upon how to monitor, maintain and control the optical signals along intermediate paths. The pertinent parameters that affect the system performance include the signal power, wavelength, optical signal-to-noise ratio (OSNR), polarization-mode dispersion (PMD) and polarization-dependent loss (PDL). Various devices and subsystems have been proposed to monitor one or multiple parameters [2-9]. In the meantime, there has been rapid advancement in receiver electrical equalization, which takes advantage of powerful and cost-effective silicon signal processing capability [10]. Additionally, we have recently proposed a novel modulation format of coherent optical orthogonal frequency division multiplexing (CO-OFDM) [11]. This is essentially an optical equivalent of RF OFDM that has been widely adopted into numerous communication standards such as WiFi (IEEE 802.11a). We showed that with CO-OFDM, the signal can tolerate a chromatic dispersion equivalent to 3,000 km of standard single-mode fibre [11]. We also showed that the CO-OFDM signal is robust against PMD and may provide a practical solution for complete PMD mitigation [12]. In the context of RF OFDM, channel estimation has been an actively pursued research topic [13]. In this paper, we propose a similar concept of optical channel estimation (OCE) as one approach to optical performance monitoring. Specifically, we show that with CO-OFDM, various important parameters such as Q margin, OSNR, and chromatic dispersion can be monitored through receiver signal processing. Most importantly, performance monitoring by OCE is basically free because it is embedded as a part of the intrinsic receiver signal processing. Such a monitoring device could also be placed anywhere in the network without concern about the large residual chromatic dispersion of the monitored signal.
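As background for the OFDM signal structure described in the next section, the following is a minimal sketch of OFDM modulation and demodulation. It assumes BPSK subcarriers and a cyclic-prefix guard interval; the paper only specifies the symbol period, guard time and subcarrier count, so the guard-interval implementation and the sample counts here are illustrative assumptions.

```python
# Build one OFDM symbol: BPSK data on N_sc orthogonal subcarriers, an IFFT
# to the time domain, and a cyclic prefix as the guard interval. On an
# ideal channel, an FFT at the receiver recovers every subcarrier exactly.
import numpy as np

rng = np.random.default_rng(1)
N_sc = 256                     # number of subcarriers (as in the simulation)
guard = 32                     # illustrative guard length, in samples

bits = rng.integers(0, 2, N_sc)
c = 2 * bits - 1               # BPSK symbols, +/-1

observation = np.fft.ifft(c)                              # observation period
tx = np.concatenate([observation[-guard:], observation])  # prepend guard

rx = np.fft.fft(tx[guard:])    # receiver: drop the guard, transform back
assert np.allclose(rx, c)      # perfect recovery on an ideal channel
print(f"all {N_sc} subcarriers recovered")
```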
The principle of OCE in a CO-OFDM system

OFDM is a special form of multi-carrier modulation where a single data stream is transmitted over a number of lower-rate orthogonal subcarriers [14]. It is worth mentioning that OFDM has been extensively investigated as a means of combating RF microwave multi-path fading, and has been widely implemented in various digital communication standards such as wireless local area network standards (WiFi, IEEE 802.11a) [14]. Figure 1 shows a time and frequency domain representation of an OFDM signal. The OFDM signal in the time domain consists of a continuous stream of OFDM symbols with a regular period $T_s$, each containing an observation period $t_s$ and a guard interval $\Delta_G$. A complete CO-OFDM system consists of an electrical OFDM transmitter, an OFDM RF-to-optical up-converter, an optical link, an OFDM optical-to-RF down-converter, and an electrical OFDM receiver [11]. The principle of OCE is to treat the large number of subcarriers as probes by processing the received subcarrier information symbols, and subsequently the overall channel characteristics are accurately estimated. In this paper, for the sake of simplicity, we limit the study of OCE to the monitoring of CD, OSNR and system Q. Other parameters such as PMD/PDL can be monitored in a similar fashion. The optical channel model for a CO-OFDM signal is given by [11]

$$c'_{ki} = c_{ki}\, e^{j\phi_i}\, e^{j\Phi_D(f_k)} + n_{ki}, \qquad (1)$$
$$\Phi_D(f_k) = \phi_0 + 2\pi f_k \tau_0 + \pi \frac{c\, D_t}{f_{LD}^2}\, f_k^2, \qquad (2)$$

where $c_{ki}$ / $c'_{ki}$ is the transmitted/received symbol for the $k$th subcarrier in the $i$th OFDM symbol, $\phi_i$ is the phase noise from the transmit/receive lasers and transmit/receive RF local oscillators (LO), $\Phi_D(f_k)$ is the subcarrier phase from CD, presumably a quadratic function of the subcarrier frequency $f_k$, and $n_{ki}$ is the noise from the accumulated amplified-spontaneous-emission (ASE) noise; in Eq. (2), $c$ denotes the speed of light and $f_{LD}$ the frequency of the optical carrier. Equation (2) shows the quadratic expression of $\Phi_D(f_k)$ as a function of $f_k$, consisting of a zero-order dc term $\phi_0$, a linear term proportional to the time delay of the first subcarrier $\tau_0$, and a quadratic term proportional to the fiber chromatic dispersion $D_t$ in the unit of ps/nm. Although the subcarrier phase $\Phi_D(f_k)$ can be an arbitrary function of the subcarrier frequency $f_k$, for the sake of simplicity, we have assumed in Eq. (2) a quadratic dependence of $\Phi_D(f_k)$ on $f_k$, i.e., the chromatic dispersion $D_t$ is constant within the OFDM spectrum. The estimation of the fiber chromatic dispersion $D_t$ is of interest in this paper.

In order to perform the channel estimation, the phase noise $\phi_i$ for each OFDM symbol has to be obtained [11]. For the sake of simplicity, BPSK encoding for each subcarrier is assumed. The phase noise can be estimated by averaging over all the subcarriers, given by [11]

$$\phi_i = \arg\!\left( \left\langle c'_{ki}\, c^{*}_{ki} \right\rangle_k \right), \qquad (3)$$

where $\langle\ \rangle_k$ in Eq. (3) stands for the averaging over the index $k$, or the subcarriers, and we perform the signal processing in a block of OFDM symbols consisting of a large number of OFDM symbols, for instance, 100 OFDM symbols. This implies that the channel estimation is averaged and updated every 100 OFDM symbols, which is on the order of μs for 10 Gb/s OFDM systems. This monitoring speed could be sufficient to accommodate the chromatic dispersion and OSNR changes from environmental disturbance. Removing the phase noise $\phi_i$ from Eq. (1), we obtain the received symbol and noise after phase compensation,

$$\tilde{c}_{ki} = c'_{ki}\, e^{-j\phi_i} = c_{ki}\, e^{j\Phi_D(f_k)} + \tilde{n}_{ki}. \qquad (4)$$

For chromatic dispersion monitoring, we first assume that the transmitted symbol $c_{ki}$ is known. This is the case when (i) pilot-assisted channel estimation is used, where a known training sequence is used [14], and (ii) data-assisted channel estimation is used and the decision on the transmitted signal based on the received one has already been made. In either case, from Eq.
(4), the subcarrier phase is given by

$$\Phi_D(f_k) = \left\langle \arg\!\left( \tilde{c}_{ki}\, c^{*}_{ki} \right) \right\rangle_i, \qquad (5)$$

where $\arg(\ )$ stands for the phase of a complex signal and $\langle\ \rangle_i$ stands for the mathematical mean over multiple OFDM symbols, or over the index $i$. The chromatic dispersion $D_t$ is estimated by a simple second-order curve fitting of $\Phi_D(f_k)$ as a function of the subcarrier frequency $f_k$.

Another important parameter to monitor is the system Q margin. A live system could run error-free even without FEC for an extended period, making it hard to detect the system margin by measuring BER directly. From Eq. (4), we can see that each subcarrier channel is essentially a linear channel with additive white Gaussian noise. Subsequently, the bit-error ratio (BER) of the system is given by [15]

$$\mathrm{BER} = \tfrac{1}{2}\,\mathrm{erfc}\!\left(\sqrt{\mathrm{ESNR}}\right), \qquad (6)$$
$$\mathrm{ESNR} = \left\langle \left| \left\langle \tilde{c}_{ki} \right\rangle_i \right|^2 \big/ \delta_k^2 \right\rangle_k, \qquad (7)$$

where ESNR is the (electrical) signal-to-noise ratio per bit, $\langle\ \rangle_k$ stands for the averaging over the subcarriers, or the index $k$, $\langle \tilde{c}_{ki} \rangle_i$ is the expectation value of the phase-compensated received symbol for subcarrier $k$, and $\delta_k$ is the standard deviation of the received symbol for subcarrier $k$. Equation (7) shows that ESNR can be obtained by first constructing the constellation of the received symbols and then performing the computation of ESNR for symbols '0' and 'π' separately. We further convert the BER in Eq. (6) into the Q value [16], which is commonly used in the optical community. From Eq. (6), the system Q is thus given by

$$Q\ [\mathrm{dB}] = 20 \log_{10}\!\left( \sqrt{2}\, \mathrm{erfc}^{-1}(2\,\mathrm{BER}) \right). \qquad (8)$$

From Eqs. (7)-(8), the system Q can be effectively monitored by computing the subcarrier symbol spread in the constellation diagram.

We have shown that in a CO-OFDM system, by choosing the guard interval to be larger than the CD-induced delay spread, the inter-symbol interference (ISI) can be completely removed [11]. Subsequently, the electrical noise characterized by $\delta_k$ is predominantly from the accumulation of the ASE noise from the optical amplifiers, and it can be shown that

$$\frac{1}{\mathrm{ESNR}} = \frac{A}{\mathrm{OSNR}} + B, \qquad (9)$$

where $A$ is a proportionality constant between ESNR and OSNR, and $B$ is attributed to the background noise not accounted for by ASE noise, which is mainly from the phase noise of the transmit/receive lasers. From Eqs. (6)-(9), we can see that by acquiring $\langle \tilde{c}_{ki} \rangle_i$ and $\delta_k$ through receiver signal processing, the ESNR for the OFDM signal can be computed, and subsequently both the system Q and the OSNR can be monitored. The coefficients $A$ and $B$ in Eq. (9) can be obtained practically with a calibration procedure by measuring ESNRs against a series of known OSNRs and performing a linear fit between 1/ESNR and 1/OSNR. (A numerical sketch of this estimation chain is given after the simulation results below.)

It is quite instructive to explicitly write out the ideal coherent detection performance for CO-OFDM systems where the linewidths of the transmit/receive lasers are assumed to be zero. From Eq.
(4), the corresponding BER, Q and ESNR in this ideal condition can be expressed in terms of $B_0$, the optical ASE noise bandwidth used for the OSNR measurement (~12.5 GHz for 0.1 nm bandwidth), the total system symbol transmission rate, and $N_{sc}$ and $\Delta f$, the number of subcarriers and the channel spacing of the subcarriers, respectively.

Simulation model and results

To demonstrate the proposal, we carry out a Monte Carlo simulation with an OFDM symbol period of 25.6 ns, a guard time of 3.2 ns, and 256 subcarriers. BPSK encoding is used for each subcarrier, resulting in a total bit rate of 10 Gb/s. The linewidths of the transmitter and receiver lasers are assumed to be 100 kHz each, which is close to the value achieved with commercially available semiconductor lasers [17][18]. The link ASE noise from the optical amplifiers is assumed to be white Gaussian noise, and the phase noise of the lasers is modelled as white frequency noise characterized by its linewidth. The chromatic dispersion is assumed to be constant within the OFDM spectrum. A total of 8 blocks of OFDM symbols, each consisting of 100 OFDM symbols, are used for extracting the various parameters including CD, system Q and OSNR. In the following text, we use 'calculate' to mean the BER results obtained by Monte Carlo simulation, and 'monitor' to mean the interpolation results obtained by Eqs. (5)-(9).

Figure 2 shows the monitored CD from the receiver signal processing. The input OSNR is set at 3.8 dB, which gives a BER of $10^{-3}$ for a CD below 34,000 ps/nm. We can see that chromatic dispersion up to 50,000 ps/nm can be monitored with an accuracy of 50 ps/nm. The simultaneous large dynamic range and good accuracy of CD monitoring is a unique feature of the OFDM modulation format, namely, a large number of subcarriers spread across a wide spectrum of 10 GHz resulting in good accuracy, and a narrow subcarrier channel spacing of 44.6 MHz resulting in a wide dynamic range. This wide dynamic range is an improvement of over one order of magnitude compared with a prior report using a single or a few auxiliary subcarriers [19]. Figure 3 shows the monitored system Q and OSNR through OCE. The Q is calculated from 7 dB to 12 dB by Monte Carlo simulation, i.e., direct BER simulation with a signal duration of 20.5 μs, shown as solid squares in Fig. 3. This demonstrates good agreement with the Q monitored by Eq. (8). Beyond that, we rely on Eq. (8) for system Q estimation. To appreciate the advantage of this approach, for instance, at an input OSNR of 20 dB, the system Q for this OSNR is monitored to be 21.3 dB, which gives a Q margin of 11.5 dB over a BER of $10^{-3}$. Such a method of Q margin prediction at high OSNRs is similar to that in direct-detected systems [16]. Thus the margin monitoring is achieved non-intrusively. Note that this level of system margin cannot be measured directly. Additionally, the OSNR is monitored by computing the ESNR and estimating the OSNR using Eq. (9). The curve with solid triangles in Fig. 3 shows that the OSNR can be monitored with errors within 0.5 dB for an input OSNR dynamic range of 1 dB to 20 dB. The maximum OSNR that can be monitored is limited by the laser phase noise.
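To make the processing chain of Eqs. (3)-(9) concrete, here is a simplified numerical sketch. It is not the paper's simulation: the symbols are known (pilot-style) BPSK, the CD phase is a synthetic quadratic, the noise and phase-noise levels are arbitrary, and for simplicity the average channel phase is estimated before the per-symbol phase (with the small phase noise assumed here the ordering matters little).

```python
import numpy as np
from math import erfc, sqrt

rng = np.random.default_rng(7)
N_sc, N_sym = 256, 800
df = 44.6e6                                 # subcarrier spacing [Hz]
f = np.arange(N_sc) * df

a2 = 1.2e-18                                # quadratic coefficient (tracks D_t)
phi_D = 0.3 + 2e-9 * f + a2 * f**2          # dc + delay + CD terms, cf. Eq. (2)

c = rng.choice([-1.0, 1.0], size=(N_sym, N_sc))        # known BPSK symbols
phi = rng.normal(0.0, 0.1, size=(N_sym, 1))            # per-symbol phase noise
n = 0.25 * (rng.normal(size=(N_sym, N_sc)) + 1j * rng.normal(size=(N_sym, N_sc)))
r = c * np.exp(1j * (phi + phi_D)) + n                 # channel, cf. Eq. (1)

# Per-subcarrier phase averaged over the block, then a second-order fit
# whose leading coefficient is the CD-induced quadratic term (cf. Eq. (5)).
Phi_hat = np.unwrap(np.angle((r * c).sum(axis=0)))
x = f / f[-1]                               # normalize for a well-conditioned fit
a2_hat = np.polyfit(x, Phi_hat, 2)[0] / f[-1] ** 2
print(f"quadratic phase coefficient: true {a2:.2e}, estimated {a2_hat:.2e}")

# Per-symbol common phase, averaged over subcarriers (cf. Eq. (3)).
phi_hat = np.angle((r * c * np.exp(-1j * Phi_hat[None, :])).sum(axis=1))

# ESNR from the constellation spread (cf. Eq. (7)) and implied BER (Eq. (6)).
y = np.real(r * c * np.exp(-1j * (phi_hat[:, None] + Phi_hat[None, :])))
esnr = np.mean(y.mean(axis=0) ** 2 / y.var(axis=0))
print(f"ESNR ~ {esnr:.1f}; implied BER ~ {0.5 * erfc(sqrt(esnr)):.1e}")
```

Multiplying by the known BPSK symbols folds both constellation points onto '0', so the mean and spread along the real axis give the per-subcarrier signal and noise terms directly, as described after Eq. (7).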
Although there is a great advantage in performing the OCE in the receiver, it might be beneficial to move the monitoring function out of the receiver and distribute the monitoring devices somewhere within the transmission link. The benefit of distributed monitoring is that a fault can be accurately located once it takes place. The cost of the monitoring device, essentially a high-speed coherent receiver for CO-OFDM, might be a disadvantage of this approach, although this could be partially mitigated by sharing one such monitoring device among multiple wavelength channels or even multiple fibers. Another important advantage of performance monitoring based upon OCE is that the monitoring device can be placed anywhere in the system without concern about dispersion compensation, due to its enormous dynamic range over chromatic dispersion, whereas with alternative monitoring techniques, for instance the asynchronous histogram approach [9], the signal needs to be dispersion pre-compensated before being monitored, owing to the limited dynamic range of that approach. The nonlinearity impact on the channel estimation and performance monitoring is of great importance and will be discussed in a subsequent submission.

Conclusion

We have proposed the concept of optical performance monitoring through channel estimation by receiver signal processing. We show that with CO-OFDM, various critical optical system parameters can be accurately monitored without resorting to separate monitoring devices.

Fig. 1. Time and frequency representations of an OFDM signal.
Fig. 3. The monitored system Q and OSNR as a function of input OSNR.
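As a quick consistency check on the simulation parameters above, assuming the subcarrier spacing equals the inverse of the observation period (symbol period minus guard time):

```python
# 25.6 ns symbols with a 3.2 ns guard leave a 22.4 ns observation period,
# whose inverse matches the 44.6 MHz subcarrier spacing quoted above, and
# 256 BPSK bits per symbol period give the stated 10 Gb/s aggregate rate.
T_s, guard, N_sc = 25.6e-9, 3.2e-9, 256
print(f"subcarrier spacing: {1 / (T_s - guard) / 1e6:.1f} MHz")  # 44.6 MHz
print(f"aggregate bit rate: {N_sc / T_s / 1e9:.0f} Gb/s")        # 10 Gb/s
```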
2017-04-04T12:52:54.029Z
2007-01-22T00:00:00.000
{ "year": 2007, "sha1": "71843804432416bfd703c3d9ab7e5788e7866929", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1364/oe.15.000350", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "71843804432416bfd703c3d9ab7e5788e7866929", "s2fieldsofstudy": [ "Engineering", "Physics" ], "extfieldsofstudy": [ "Computer Science", "Medicine" ] }
240353833
pes2o/s2orc
v3-fos-license
Smallness of Faltings heights of CM abelian varieties

We prove that assuming the Colmez conjecture and the "no Siegel zeros" conjecture, the stable Faltings height of a CM abelian variety over a number field is less than or equal to the logarithm of the root discriminant of the field of definition of the abelian variety times an effective constant depending only on the dimension of the abelian variety. In view of the fact that the Colmez conjecture for abelian CM fields, the averaged Colmez conjecture, and the "no Siegel zeros" conjecture for CM fields with no complex quadratic subfields are already proved, we prove unconditional analogues of the result above. In addition, we also prove that the logarithm of the root discriminant of the field of everywhere good reduction of CM abelian varieties can be "small".

Introduction

Let $E$ be a CM-field, and let $\Phi$ be a CM-type of $E$. Let $A$ be an abelian variety over a number field $K$ such that we have an embedding $i : O_E \hookrightarrow \mathrm{End}_K(A)$ such that $(A, i)$ has CM-type $\Phi$. It is proved by Colmez in [Col93] that the stable Faltings height $h^{\mathrm{st}}_{\mathrm{Falt}}(A)$ of the abelian variety $A$ depends only on the CM-field $E$ and the CM-type $\Phi$ and not on the abelian variety $A$. We denote it as $h^{\mathrm{Falt}}_{(E,\Phi)}$. In [Col93] Colmez proposed a conjecture relating $h^{\mathrm{Falt}}_{(E,\Phi)}$ to the logarithmic derivatives at $s = 0$ of certain Artin L-functions defined by $(E, \Phi)$. We will refer to this conjecture as the Colmez conjecture. The precise statement is as follows: we define a function $A^0_{(E,\Phi)}$ from $\mathrm{Gal}(\overline{\mathbb{Q}}/\mathbb{Q})$ to $\mathbb{C}$ as in [Col93].

When $E$ is a complex quadratic field, the Colmez conjecture is the same as the classical Chowla–Selberg formula (see for example Pages 91 and 92 of [Wei76]), and so it is in fact a theorem. Colmez [Col93] and Obus [Obu13] proved that the Colmez conjecture is true if the extension $E/\mathbb{Q}$ is Galois with abelian Galois group (Theorem 4.8). Yuan–Zhang [YZ18] and Andreatta–Goren–Howard–Madapusi-Pera [AGHMP18] independently proved that the Colmez conjecture is true when one averages over all CM-types of a given CM-field (Theorem 5.2).

Let $-d \in \mathbb{Z}_{\le -2}$ be a fundamental discriminant, so $\mathrm{disc}(\mathbb{Q}(\sqrt{-d})) = -d$. Let $\chi_d$ be the quadratic character associated to the quadratic field extension $\mathbb{Q}(\sqrt{-d})/\mathbb{Q}$, so $\chi_d(p) = (-d \mid p)$ for any prime $p$. Let $L(s, \chi_d)$ be the Dirichlet L-function of the character $\chi_d$. It is known that there is at most one zero of $L(s, \chi_d)$ in the region
$$1 - \frac{1}{4\log d} \le \mathrm{Re}(s) < 1, \qquad |\mathrm{Im}(s)| \le \frac{1}{4\log d},$$
and if such a zero exists it is real and simple. For any $0 < c \le \frac{1}{4}$, we define the $c$-Siegel zero of $L(s, \chi_d)$ to be the zero of $L(s, \chi_d)$ in the region
$$1 - \frac{c}{\log d} \le \mathrm{Re}(s) < 1, \qquad |\mathrm{Im}(s)| \le \frac{1}{4\log d}$$
(if it exists). We define the Siegel zero of $L(s, \chi_d)$ to be the $\frac{1}{4}$-Siegel zero of $L(s, \chi_d)$. The conjecture "No $\frac{1}{O(1)}$-Siegel zero of $L(s, \chi_d)$" is as follows:

Conjecture 1.1 (No $\frac{1}{O(1)}$-Siegel zero of $L(s, \chi_d)$). There exists some effectively computable absolute constant $C_{\mathrm{zero}} \in \mathbb{R}_{\ge 4}$ such that for any fundamental discriminant $-d \in \mathbb{Z}_{\le -2}$, the Dirichlet L-function $L(s, \chi_d)$ has no zeros in the region
$$1 - \frac{1}{C_{\mathrm{zero}} \log d} \le \mathrm{Re}(s) < 1, \qquad |\mathrm{Im}(s)| \le \frac{1}{4\log d}.$$

Let $E$ be a CM-field with maximal totally real subfield $F$, and let $\chi_{E/F}$ be the quadratic character associated to the extension $E/F$. Let $L(s, \chi_{E/F})$ be the L-function of the character $\chi_{E/F}$, so that $L(s, \chi_{E/F}) = \zeta_E(s)/\zeta_F(s)$. Similarly to the case where $E$ is a complex quadratic field, by Lemma 3 of [Sta74], for any CM-field $E$ with maximal totally real subfield $F$, $L(s, \chi_{E/F})$ has at most one zero in the region
$$1 - \frac{1}{4\log |\mathrm{disc}(E)|} \le \mathrm{Re}(s) < 1, \qquad |\mathrm{Im}(s)| \le \frac{1}{4\log |\mathrm{disc}(E)|}.$$
If such a zero exists, it is real and simple.
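For intuition, the following small computation (ours, not from the paper) shows how the width of the classical Siegel-zero window with $c = \frac{1}{4}$ shrinks as the discriminant grows:

```python
# Width of the window 1 - 1/(4 log d) <= Re(s) < 1 for sample discriminants.
import math

for d in (3, 163, 10**6, 10**12):
    w = 1 / (4 * math.log(d))
    print(f"d = {d:>13}: real window width {w:.6f}, |Im(s)| <= {w:.6f}")
```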
For any $0 < c \le \frac{1}{4}$, we define the generalized $c$-Siegel zero of $L(s, \chi_{E/F})$ to be the zero of $L(s, \chi_{E/F})$ in the region
$$1 - \frac{c}{\log |\mathrm{disc}(E)|} \le \mathrm{Re}(s) < 1, \qquad |\mathrm{Im}(s)| \le \frac{1}{4\log |\mathrm{disc}(E)|}$$
(if it exists). We define the generalized Siegel zero of $L(s, \chi_{E/F})$ to be the generalized $\frac{1}{4}$-Siegel zero of $L(s, \chi_{E/F})$. The conjecture "No generalized $\frac{1}{O_g(1)}$-Siegel zero of $L(s, \chi_{E/F})$" is as follows:

Conjecture 1.2 (No generalized $\frac{1}{O_g(1)}$-Siegel zero of $L(s, \chi_{E/F})$). For any $g \in \mathbb{Z}_{\ge 1}$, there exists some effectively computable constant $C_{\mathrm{zero}}(g) \in \mathbb{R}_{\ge 4}$ depending only on $g$ such that for any CM-field $E$ with maximal totally real subfield $F$ such that $[F : \mathbb{Q}] = g$, the function $L(s, \chi_{E/F})$ has no zeros in the region
$$1 - \frac{1}{C_{\mathrm{zero}}(g) \log |\mathrm{disc}(E)|} \le \mathrm{Re}(s) < 1, \qquad |\mathrm{Im}(s)| \le \frac{1}{4\log |\mathrm{disc}(E)|}.$$

It is proved by Stark (Lemma 9 of [Sta74]) that Conjecture 1.1 implies Conjecture 1.2. He also proved that Conjecture 1.2 is true whenever the CM-field $E$ contains no complex quadratic subfields.

We show that assuming the Colmez conjecture, the nonexistence of the generalized Siegel zero of L-functions of quadratic characters associated to CM extensions is closely related to the stable Faltings height of CM abelian varieties being bounded by the logarithm of the root discriminant of the field of definition. More precisely, we prove the following theorem:

Theorem 1.3. Suppose that the Colmez conjecture holds. Suppose further that "No $\frac{1}{O(1)}$-Siegel zero of $L(s, \chi_d)$" holds. Then for any $g \in \mathbb{Z}_{\ge 1}$, there exist effectively computable constants $C_1(g) > 0$, $C_2(g) \in \mathbb{R}$ depending only on $g$ such that
$$h^{\mathrm{st}}_{\mathrm{Falt}}(A) \le C_1(g)\, \frac{\log |\mathrm{disc}(K)|}{[K : \mathbb{Q}]} + C_2(g)$$
for any dimension-$g$ abelian variety $A$ defined over a number field $K$ with complex multiplication by $O_E$ for some CM-field $E$.

Since the Colmez conjecture for abelian CM-fields is already proved, and since Conjecture 1.2 is true when the CM-field $E$ contains no complex quadratic subfields, we can also prove an unconditional version of the theorem above:

Theorem 1.4. For any $g \in \mathbb{Z}_{\ge 1}$, there exist effectively computable constants $C_3(g) > 0$, $C_4(g) \in \mathbb{R}$ depending only on $g$ such that
$$h^{\mathrm{st}}_{\mathrm{Falt}}(A) \le C_3(g)\, \frac{\log |\mathrm{disc}(K)|}{[K : \mathbb{Q}]} + C_4(g)$$
for any dimension-$g$ abelian variety $A$ over a number field $K$ with complex multiplication by $O_E$ for some CM-field $E$ such that the extension $E/\mathbb{Q}$ is Galois with abelian Galois group and $E$ does not contain any complex quadratic subfields.

Remark 1.5. To show that the condition "$E$ does not contain any complex quadratic subfields" in the hypotheses of Theorem 1.4 is possible to satisfy, we give examples of CM fields $E$ containing no complex quadratic subfields such that the extension $E/\mathbb{Q}$ is Galois with abelian Galois group. Let $n$ be an integer greater than or equal to 3 such that the group $(\mathbb{Z}/n\mathbb{Z})^{\times}$ is a cyclic group and such that $\#(\mathbb{Z}/n\mathbb{Z})^{\times}$ divides 4. (Equivalently, $n = p^k$ or $n = 2p^k$ for some odd prime $p$ such that $p \equiv 1 \bmod 4$.) Let $E$ be the $n$-th cyclotomic field $\mathbb{Q}(\mu_n)$, where $\mu_n$ denotes a primitive $n$-th root of unity. Then $E$ is a CM-field with maximal totally real subfield $F = \mathbb{Q}(\mu_n + \mu_n^{-1})$. The extension $E/\mathbb{Q}$ is Galois and $\mathrm{Gal}(E/\mathbb{Q})$ is isomorphic to $(\mathbb{Z}/n\mathbb{Z})^{\times}$. Since $\mathrm{Gal}(E/\mathbb{Q})$ is cyclic and of even order, there is a unique subgroup $H$ of $\mathrm{Gal}(E/\mathbb{Q})$ of index 2, and so there is a unique quadratic subfield $K$ of $E$. Let $\iota$ be the nontrivial element of $\mathrm{Gal}(E/F) \subset \mathrm{Gal}(E/\mathbb{Q})$. Then $\iota$ is the unique element in $\mathrm{Gal}(E/\mathbb{Q})$ of order 2. Since $\#\mathrm{Gal}(E/\mathbb{Q}) = \#(\mathbb{Z}/n\mathbb{Z})^{\times}$ divides 4, we have $\iota \in H$. Thus, $K$ is fixed by the element $\iota$. Therefore, $K$ is a real quadratic field and so $E$ contains no complex quadratic subfields.
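The $n = 5$ instance of Remark 1.5 can be verified mechanically. The snippet below (illustrative, using only the definitions in the remark) checks that $(\mathbb{Z}/5\mathbb{Z})^{\times}$ is cyclic of order 4, that its unique index-2 subgroup $H$ is the group of squares $\{1, 4\}$, and that the order-2 element $-1 \bmod 5$ (corresponding to complex conjugation $\iota$) lies in $H$, so the unique quadratic subfield of $\mathbb{Q}(\mu_5)$ is fixed by conjugation and hence real.

```python
from math import gcd

n = 5
units = [a for a in range(1, n) if gcd(a, n) == 1]
H = sorted({a * a % n for a in units})   # squares: the unique index-2 subgroup
iota = n - 1                             # -1 mod n, i.e. complex conjugation

# 2 generates the whole unit group, so (Z/5Z)^x is cyclic of order 4.
powers_of_2 = sorted({pow(2, k, n) for k in range(1, 5)})
print("units:", units, "| cyclic:", powers_of_2 == units)
print("H =", H, "| iota in H:", iota in H)   # H = [1, 4] | True
```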
More generally, let E be any totally imaginary number field such that the extension E/Q is Galois with abelian Galois group. Then E is a CM-field (any abelian extension over Q is either totally real or a CM field). We know that where q 1 , q 2 , · · · q m are powers of prime numbers (q 1 , q 2 , · · · q m are not necessarily distinct). Suppose further that the number "2" does not appear in q 1 , q 2 , · · · q m , i.e. each q i is either 2 ki for k i ≥ 2 or a power of an odd prime. Then each component Z/q i Z such that q i is 2 ki (k i ≥ 2) contains a unique subgroup H i of index 2 and a unique element σ i of order 2, and σ i ∈ H i . Let ι be the nontrivial element of Gal(E/F ) ⊂ Gal(E/Q), where F is the maximal real subfield of E. Then ι ∈ H for any subgroup H of Gal(E/Q) of index 2. Therefore, E contains no complex quadratic subfields. Since the averaged Colmez conjecture is already proved, we can also prove averaged analogues of the theorems above. Theorem 1.6. For any g ∈ Z ≥1 , there exists effectively computable constants C 5 (g) > 0, C 6 (g) ∈ R depending only on g such that log |disc(K 2 )| + C 6 (g), for any pair A 1 , A 2 of dimension-g abelian varieties defined over number fields K 1 , K 2 respectively, such that the following holds: • There exists a CM-field E of degree [E : Q] = 2g and embeddings i 1 : such that E does not contain any complex quadratic subfields and the CM-type Φ 1 of (A 1 , i 1 ) and the CM-type Φ 2 of (A 2 , i 2 ) satisfy: Theorem 1.7. Let g be a positive integer. Suppose that there exists some effectively computable constant C zero (g) ∈ R ≥4 depending only on g such that for any CM-field E with maximal totally real subfield F such that [F : Q] = g, the function L(s, χ E/F ) has no zeros in the region i.e. the conjecture No generalized 1 Og (1) -Siegel zero of L(s, χ E/F ) holds for g. Then there exist effectively computable constants C 7 (g) > 0, C 8 (g) ∈ R depending only on g such that for any pair A 1 , A 2 of dimension-g abelian varieties defined over number fields K 1 , K 2 respectively, such that the following holds: • There exists a CM-field E of degree [E : Q] = 2g and embeddings i 1 : such that the CM-type Φ 1 of (A 1 , i 1 ) and the CM-type Φ 2 of (A 2 , i 2 ) satisfy: It might be interesting to know that if we only make use of the (proved) averaged Colmez conjecture, then we cannot obtain results stronger than Theorem 1.6 and Theorem 1.7 (i.e. the "average" condition in these theorems cannot be dropped), even if we further assume that the abelian variety over the number field has everywhere good reduction. In particular, we prove the following theorems, which show that the logarithm of the root discriminant of the field of everywhere good reduction of CM abelian varieties can be "small". Theorem 1.8. Assume the Generalized Riemann Hypothesis. For any g ∈ Z ≥1 , there exist effectively computable constants C 13 (g) > 0, C 14 (g) ∈ R, such that for any CM-field E with [E : Q] = 2g, for any CM-type Φ of E, there exists a number field K ′ and a CM abelian variety (A, i : O E ֒→ End K ′ (A)) over K ′ of CM-type Φ such that the abelian variety A over K ′ has everywhere good reduction and (1) Theorem 1.9. 
For any g ∈ Z ≥1 , there exist effectively computable constants C 15 (g) > 0, C 16 (g) ∈ R, such that for any CM-field E with [E : Q] = 2g, for any CM-type Φ of E, there exists a number field K ′ and a CM abelian variety (A, i : O E ֒→ End K ′ (A)) over K ′ of CM-type Φ such that the abelian variety A over K ′ has everywhere good reduction and This will be discussed in detail in section 6. In Theorem 6(ii) of [Col98], Colmez has proved that there exist effectively computable absolute constants C Col,1 > 0, C Col,2 ∈ R such that for any CM-field E of degree [E : Q] = 2g and any CM-type Φ of E such that the following hold: The proofs of Theorem 1.3 and Theorem 1.4 show that that we can actually remove the second hypothesis that the Artin L-functions involved satisfy the Artin conjecture. Moreover, in the remark after Theorem 6 of [Col98], Colmez asked whether it is possible to remove the third hypothesis that the Artin Lfunctions involved have no zeros on the ball of radius 1 4 centered at 0 and making use of "no Siegel zeros" instead. The proofs of Theorem 1.3 and Theorem 1.4 is more or less a positive answer to this question. Acknowledgements The author is deeply grateful to Professor Wei Zhang for suggesting this problem to the author, supervising the author on this project, and teaching, mentoring and guiding the author all along. For many times the author would have been in distress had it not been for the kind and generous help of Professor Zhang. The author also thanks the Undergraduate Research Opportunities Program of MIT for providing this opportunity for the author to do undergraduate research. The Faltings height Let A be a dimension-g abelian variety defined over a number field K. Let π : A → Spec(O K ) be the Néron model of A, and take ω to be any global section of L := π * Ω g A/SpecOK . We define the unstable Faltings height of A as follows: This definition is independent of the choice of ω ∈ H 0 (SpecO K , L). We define the stable Faltings height of A to be where K ′ is a finite extension of K such that A K ′ /K ′ has everywhere semistable reduction. This definition does not depend on the choice of the finite extension K ′ /K. Unlike the unstable Faltings height, the stable Faltings height does not depend on the field of definition of the abelian variety. The following is a theorem of Bost ([Bos96]). Theorem 2.1. There exists an effectively computable absolute constant C lower > 0 such that for any dimension-g abelian variety A over a number field, we have As we have mentioned in section 1, it is proved by Colmez in [Col93] that for any CM-field E and any CM- ) are CM abelian varieties over number fields K 1 and K 2 , both with CM-type Φ, then . We denote this stable Faltings height as h Falt (E,Φ) . The Colmez conjecture revisited Throughout this section g is an arbitrary positive integer, E is an arbitrary CM-field of degree [E : Q] = 2g and Φ is an arbitrary CM-type of E. We denote as E * Φ the reflex field of (E, Φ). where we denote as E * Φ the Galois closure of the extension E * Φ /Q. For the following we view the function A 0 (E,Φ) as a (class) function from We fix an embedding Q ֒→ C and let ι be the element of Gal(Q/Q) induced by complex conjugation. Let χ be an irreducible Artin character. We say that χ is odd if χ(ι) = −χ(1). Some computations show that for the trivial character χ = 1, we have m (E,Φ) (1) = 1 2 g; and for any nontrivial irreducible Artin character χ, m (E,Φ) (χ) = 0 unless χ is odd. 
( This implies that for any irreducible Artin character χ such that m (E,Φ) (χ) = 0, the Artin L-function L(s, χ, Q) is defined and nonzero at s = 0. Let Z (E.Φ) and µ (E,Φ) be as in section 1. Since we can deduce that and The zero of the Artin L-function near 1 4.1 Relation between the zero near 1 and the logarithmic derivative at 0 of the Artin L-function Throughout this subsection g is an arbitrary positive integer, E is an arbitrary CM-field of degree [E : Q] = 2g and Φ is an arbitrary CM-type of E. We denote as E * Φ the reflex field of (E, Φ). We denote as E * Φ the Galois closure of the extension E * Φ /Q. By Chapter 2, Section 5 of [MM97], for any nontrivial irreducible character χ of are both holomorphic except for a simple pole at s = 1. By Lemma 3 of [Sta74], for any number field K such that K = Q, the function ζ K (s) has at most one zero in the region If such a zero exists, it is real and simple. Therefore, the function L(s, χ, Q) has at most one zero in the region If such a zero exists, it is real and simple. Proposition 4.1. Let χ be a nontrivial odd irreducible character of Gal( E * Φ /Q). Denote as β 0 the (necessarily real and simple) zero of L(s, χ, Q) in the region Let δ χ be 1 if β 0 exists, and let δ χ be 0 otherwise. We have Proof. We define the function Λ(s, χ, Q) to be We have the functional equation for some W (χ) ∈ C with absolute value 1. We define the function ξ E * Φ to be We have the functional equation First consider the function f 1 (s) := (ξ E * Φ (s)) 2 Λ(s, χ, Q)Λ(s, χ, Q). It is entire and satisfies the functional equation Since f 1 (s) is real for s real, for any ρ ∈ C the order of the zero of f 1 (s) at s = ρ is equal to that at s = ρ. Moreover, all zeros of f 1 (s) lie in the critical strip 0 < Re(s) < 1. Therefore, by logarithmically differentiating the Hadamard product formula for f 1 (s) at s = 1 we get Let − δ χ is equal to 0 or 1 and the order of the zero of f 1 (s) at s = β 0 is equal to Since the function is holomorphic on 0 < Re(s) < 1, for any ρ ∈ C such that 0 < Re(ρ) < 1, the order of the zero at s = ρ of the function f 1 (s) is less than or equal to 4 times the order of the zero at s = ρ of the function ζ E * Φ (s). In view of the fact that all zeros of f 1 (s) lie in the critical strip Then consider the function f 2 (s) := and By logarithmically differentiating the functional equation of Λ(s, χ, Q)Λ(s, χ, Q) at s = 1, we have The result then follows from subtracting Equation (8) by Equation (10) and the following Lemma 4.2. Lemma 4.2. Let K be a number field such that K = Q. Denote as β 0 the (necessarily real and simple) zero of ζ K (s) in the region for any s real with 1 < s < 2. Taking s = 1 + 1 log |disc(K)| in Equation (12), we get: Corollary 4.3. Let c be a real number such that 0 < c ≤ 1 4 . Suppose that for any nontrivial odd irreducible character χ of where c ′ is defined to be 1 c if c < 1 4 , and 0 if c = 1 4 . By the definition of A 0 (E,Φ) we have Since for any χ ∈ Irr(Gal( E * Φ /Q)), m (E,Φ) (χ) is a non-negative real number and χ(1) ≥ 1, we have By the following Equation (17) we have The reflex field E * Φ is contained in the Galois closure E of the extension E/Q, Hence, we get our claim. Lemma 4.4. Let K 1 and K 2 be number fields. Let K 1 K 2 be the compositum of K 1 and K 2 . Then we have and In particular, let K be a number field and let K be the Galois closure of the extension K/Q. Then and Proof. This is Lemma 7 of [Sta74]. 
Sufficient conditions for the nonexistence of the zero near 1 of the Artin L-function By Theorem 3 of [Sta74], we have the following theorem. Theorem 4.5. Let L/K be a finite Galois extension of number fields. Let s 0 ∈ C be a simple zero of ζ L (s). (1) For any irreducible character χ of Gal(L/K), L(s, χ, K) is defined at s = s 0 . There is a (unique) irreducible character X s0,L/K of Gal(L/K) such that for any irreducible character χ of Gal(L/K), L(s 0 , χ, K) = 0 if and only if χ = X s0,L/K . X s0,L/K is a linear character of Gal(L/K) (so X s0,L/K is a group homomorphism from Gal(L/K) to C × ). (2) There is a (unique) subfield K s0,L/K of L containing K such that for any field K ′ containing K and contained in L, ζ K ′ (s 0 ) = 0 if and only if K ′ contains K s0,L/K . The extension K s0,L/K /K is cyclic. (3) K s0,L/K is the fixed field of the kernel of X s0,L/K . (4) Suppose further that s 0 is real. Then exactly one of the following holds: 1. K s0,L/K is equal to K and X s0,L/K is the trivial character. 2. K s0,L/K is quadratic over K and X s0,L/K is the group homomorphism from Gal(L/K) to C × with kernel Gal(L/K s0,L/K ) and image {±1}. In particular, X s0,L/K is a nontrivial real linear character. For the rest of this subsection E is an arbitrary CM-field and Φ is an arbitrary CM-type of E. We denote as E * Φ the reflex field of (E, Φ). We denote as E * Φ the Galois closure of the extension E * Φ /Q. Corollary 4.6. Suppose that one (or two, or all) of the following conditions hold: 1. The Galois closure E of the extension E/Q does not contain any complex quadratic subfields. 2. E * Φ does not contain any complex quadratic subfields. 3. There does not exist a nontrivial irreducible real linear character χ of (Note that Condition 1 implies Condition 2 since E * Φ ⊂ E, and Condition 2 implies Condition 3.) Then for any nontrivial odd irreducible character χ of Gal( E * Φ /Q), there is no zero of L(s, χ, Q) in the region Proof. Let χ be a nontrivial odd irreducible character of Gal( E * Φ /Q) such that such a zero exists. Denote this zero as β 0 . Then β 0 must be real and β 0 is also a simple zero of ζ E * Φ (s). Therefore, by Theorem 4.5, χ is a real linear character of Gal( E * Φ /Q), and the homomorphism χ from Gal( E * Φ /Q) to C × has image {±1} and kernel Since χ is an odd character, we have χ(ι) = −χ(1), where ι is the element in Gal( E * Φ /Q) induced by complex conjugation, and so K/Q must be a complex quadratic extension. Therefore, our claim follows. Since the compositum of two CM-fields is also a CM-field, the Galois closure of a CM-field (viewed as an extension over Q) is also a CM-field. We know that the reflex field E * Φ of (E, Φ) is a CM-field. Therefore, E * Φ is also a CM-field. We denote as ( E * Φ ) + the maximal totally real subfield of E * Φ . Proposition 4.7. Let c be a real number such that 0 < c ≤ 1 4 . Suppose that the function L(s, χ E * has no zero in the region Then for any nontrivial odd irreducible character χ of Gal( E * Φ /Q), there is no zero of L(s, χ, Q) in the above region either. Proof. Let χ be a nontrivial odd irreducible character of Gal( E * Φ /Q) such that such a zero exists. Denote this zero as β 0 . Then β 0 must be real and β 0 is also a simple zero of ζ E * Φ (s). By our assumption on L(s, . Therefore, the field K β0, E * Φ /Q in Theorem 4.5 must be contained in the field ( E * Φ ) + , and so K β0, E * Φ /Q is a real quadratic field. 
By Theorem 4.5, since L(β 0 , χ, Q) = 0, χ is a group homomorphism from Gal( E * Φ /Q) to C × with kernel Gal( E * Φ /K β0, E * Φ /Q ), and so χ(ι) = χ(1) = 1, where ι is the element in Gal( E * Φ /Q) induced by complex conjugation. This is a contradiction since the character χ is assumed to be odd.

Proofs of Theorem 1.3 and Theorem 1.4

Proof of Theorem 1.3. Let g be a positive integer. Let E be a CM-field with maximal totally real subfield F of degree [F : Q] = g. Let (A, i : O E ↪ End K (A)) be a CM abelian variety over a number field K and let Φ be the CM-type of (A, i). Then the field K contains the reflex field E * Φ . Thus, we have

where the last inequality follows from the fact that the reflex field E * Φ is contained in the Galois closure E of the extension E/Q. By Lemma 8 and Lemma 9 of [Sta74], suppose that there is a (necessarily real and simple) zero β 0 of L(s, χ E * Φ /( E * Φ ) + ) in the region in question; then there exists a complex quadratic subfield K of E * Φ such that ζ K (β 0 ) = 0 also. Since the Riemann zeta function ζ Q (s) has no real zeros in the range 0 < s < 1, this means that β 0 is a zero of the function L(s, χ K/Q ) = ζ K (s)/ζ Q (s). Since K is contained in E * Φ , we have |disc( E * Φ )| ≥ |disc(K)|. Therefore, β 0 is a Siegel zero of L(s, χ K/Q ). The result then follows from Proposition 4.7 and Corollary 4.3.

It is proved by Colmez ([Col93]) and Obus ([Obu13]) that the Colmez conjecture is true when the CM-field is abelian:

Theorem 4.8. Let E be a CM-field such that the extension E/Q is Galois with abelian Galois group. Then we have

for any CM-type Φ of E.

As a corollary, we can prove an unconditional analogue of Theorem 1.3.

Proof of Theorem 1.4. Similar to the above proof of Theorem 1.3, the statement follows from the above-mentioned Lemma 8 and Lemma 9 of [Sta74], Corollary 4.6, Corollary 4.3, and Theorem 4.8.

5 The (proved) averaged Colmez conjecture

Although the formula −Z (E,Φ) − (1/2)µ (E,Φ) in the Colmez conjecture appears very complicated, the average over all CM-types Φ of a CM-field E is much simpler. As is conjectured on Page 634 of [Col93] and proved in [YZ18] and [AGHMP18], we have the following proposition.

Proposition 5.1. Let E be a CM-field with maximal totally real subfield F . Then we have

where the sum on the left-hand-side is over all CM-types Φ of E. In other words, the Colmez conjecture implies the (proved) averaged Colmez conjecture stated below.

Theorem 5.2 ((Proved) averaged Colmez conjecture). Let E be a CM-field with maximal totally real subfield F . Then we have

where the sum on the left-hand-side is over all CM-types Φ of E.

This is proved independently by Yuan-Zhang [YZ18] and Andreatta-Goren-Howard-Madapusi-Pera [AGHMP18]. In the following, we use the proved averaged Colmez conjecture to prove averaged analogues of Theorem 1.3 and Theorem 1.4.

Proposition 5.3. Let g be a positive integer. Suppose that there exists some effectively computable constant C zero (g) ∈ R ≥4 depending only on g such that for any CM-field E with maximal totally real subfield F such that [F : Q] = g, the function L(s, χ E/F ) has no zeros in the region concerned; then there exist effectively computable constants C 9 (g) > 0, C 10 (g) ∈ R depending only on g such that

Proof. Denote as β 0 the (necessarily real and simple) zero of L(s, χ E/F ) in the above region, if it exists. We define δ χ E/F to be 1 if β 0 exists, and we define δ χ E/F to be 0 otherwise. By an argument similar to the proof of Proposition 4.1, we have

By our assumption, we then have

Let (A, i : O E ↪ End K (A)) be any CM abelian variety over a number field K. Let Φ 0 be the CM-type of (A, i).
By Theorem 5.2, we have Let C lower > 0 be as in Theorem 2.1. Then by Theorem 2.1 we have Proposition 5.4. For any g ∈ Z ≥1 , there exist constants C 11 (g) > 0, C 12 (g) ∈ R depending only on g such that h st Falt (A) ≤ C 11 (g) log |disc(E)| + C 12 (g) for any CM-field E of degree [E : Q] = 2g such that E has no complex quadratic subfields and for any abelian variety A over a number field with complex multiplication by O E . Proof. Let g be a positive integer. Let E be a CM-field with maximal totally real subfield F with degree [F : Q] = g. By Lemma 9 of [Sta74], suppose that there exists a (necessarily real and simple) zero β 0 of L(s, χ E/F ) in the range then there exists a complex quadratic subfield K of E such that ζ K (β 0 ) = 0 as well. So if E does not contain any complex quadratic fields, then there is no such zero. The rest of the proof is similar to that of Proposition 5.3. Lemma 5.5. Let E be a CM-field with maximal totally real subfield F of degree Let ϕ 0 be the unique element in Hom Q (F, R) such that the element φ 1 in Φ 1 lying above ϕ 0 is not equal to the element φ 2 in Φ 2 lying above ϕ 0 . We have φ 1 = φ 2 • ι, where ι is the nontrivial element of Gal(E/F ). It is easy to see that the subfield φ 1 (E) of C is equal to the subfield φ 2 (E) of C. Let E * Φ1 , E * Φ2 be the reflex fields of (E, Φ 1 ), (E, Φ 2 ), respectively. Then the compositum of fields E * Φ1 E * Φ2 contains the field φ 1 (E) = φ 2 (E). Proof. Since E is a totally complex quadratic extension of the totally real field F , we can write . By our assumption on Φ 1 , Φ 2 and ϕ 0 , we have Therefore, the compositum of fields E * Φ1 E * Φ2 contains the element φ 1 ( Let α F be an element of F such that F = Q[α F ]. Then similar to above, since Combined with above, we have: the compositum of fields E * Φ1 E * Φ2 contains the element ϕ 0 (α F ) and the element φ 1 ( √ −α E ) = −φ 2 ( √ −α E ), and so it contains the field φ 1 (E) = φ 2 (E). Proof of Theorem 1.6. This follows from Corollary 5.7, Proposition 5.4, and the fact that the field of definition of any CM abelian variety contains the reflex field. Proof of Theorem 1.7. This follows from Corollary 5.7, Proposition 5.3, and the fact that the field of definition of any CM abelian variety contains the reflex field. 6 Field of everywhere good reduction of CM abelian varieties We know that any abelian variety over a number field with complex multiplication by a CM-field has potential good reduction everywhere. In this section, we show that the logarithm of the root discriminant of the field of everywhere good reduction can be small compared with the logarithm of the discriminant of the CM-field. Lemma 6.1. Let A be an abelian variety over a number field K. Let L 1 , L 2 be number fields containing K. If the abelian variety A L1 /L 1 and the abelian variety A L2 /L 2 both have everywhere good reduction, then the abelian variety A L1∩L2 /L 1 ∩ L 2 has everywhere good reduction. Proof. This follows from the Neron-Ogg-Shafarevich criterion. By part (b) of Corollary 2 to Theorem 2 of [ST68], we have the following theorem: Theorem 6.2. Let A be an abelian variety over a number field K. Let p be a prime ideal of O K . Let p be the characteristic of the residue field O K /p. Suppose that A/K has potential good reduction at p. Let m be any integer ≥ 3 and prime to p. (b) The abelian variety A/K has good reduction at p. Corollary 6.3. Let K be a number field. Let A be an abelian variety over K with potential good reduction everywhere. 
Let S A/K be the set of all prime ideals of O K where the abelian variety A over K does not have good reduction. There exists a finite Galois extension L/K, L/K unramified at all primes p of O K with p / ∈ S A/K , such that the abelian variety A L /L has good reduction everywhere. Proof. We first fix a prime p 1 such that the abelian variety A/K has good reduction at every prime ideal p 1 of O K above p 1 . Let L 1 := K(A[p 2 ]) be the minimal field of definition of the set of p 1 -torsion points A[p 1 ] of A. By Theorem 6.2, we can show that L 1 /K is a finite Galois extension unramified at any prime ideal p of O K such that p / ∈ S A/K and the characteristic of the residue field O K /p is not equal to p 1 , and the abelian variety A L1 /L 1 has everywhere good reduction. Next, we fix a prime p 2 not equal to p 1 such that the abelian variety A/K has good reduction at every prime ideal p 2 of O K above p 2 . Let L 2 := K(A[p 2 ]) be the minimal field of definition of the set of p 2 -torsion points A[p 2 ] of A. Again by Theorem 6.2, we can show that L 2 /K is a finite Galois extension unramified at any prime ideal p of O K such that p / ∈ S A/K and the characteristic of the residue field O K /p is not equal to p 2 , and the abelian variety A L2 /L 2 has everywhere good reduction. Now consider the extension L 1 ∩ L 2 of K. It is a finite Galois extension unramified at any prime ideal p of O K such that p / ∈ S A/K (since p 1 = p 2 ). Since the abelian varieties A L1 /L 1 and A L2 /L 2 both have everywhere good reduction, by Lemma 6.1, the abelian variety A L1∩L2 /L 1 ∩ L 2 also has everywhere good reduction. Taking L = L 1 ∩ L 2 , we get our claim. The following lemma shows that in terms of unramifiedness, the extension L/K in Corollary 6.3 is the "best possible". Lemma 6.4. Let K be a number field. Let K ′ /K be a finite extension. Let p ′ be a prime ideal of O K ′ , lying above a prime ideal p of O K . Let A be an abelian variety over K. Suppose that the extension K ′ /K is unramified at p ′ , and the abelian variety A K ′ /K ′ has good reduction at p ′ , then the abelian variety A/K has good reduction at p. Proof. This follows from the Neron-Ogg-Shafarevich criterion. By Theorem 7 and the remarks before Theorem 7 of [ST68], we have the following theorem: Theorem 6.5. Let K be a number field. Let E be a CM-field. Let A be an abelian variety over K with complex multiplication by E. Let µ(E) be the group of all roots of unity in E. There exists a cyclic extension C of K of degree [C : K] ≤ 2 · #µ(E), such that the abelian variety A C over C has everywhere good reduction. Corollary 6.6. Let K be a number field. Let E be a CM-field. Let A be an abelian variety over K with complex multiplication by E. Let µ(E) be the group of all roots of unity in E. Let S A/K be the set of all prime ideals of O K where the abelian variety A over K does not have good reduction. There exists a cyclic extension K ′ of K of degree [K ′ : K] ≤ 2 · #µ(E), K ′ /K unramified at any prime ideal p of O K such that p / ∈ S A/K , such that the abelian variety A K ′ over K ′ has everywhere good reduction. Proof. Let C/K be the finite cyclic extension in Theorem 6.5 and let L/K be the finite Galois extension in Corollary 6.3. Let K ′ = C ∩ L. Then K ′ /K is a cyclic extension of degree [K ′ : K] ≤ 2 · #µ(E) and K ′ /K is unramified at any prime ideal p of O K such that p / ∈ S A/K . By Lemma 6.1, the abelian variety A K ′ /K ′ has everywhere good reduction. Hence we get our claim. 
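The proofs of Lemma 6.1 and Lemma 6.4 above appeal to the Neron-Ogg-Shafarevich criterion without stating it. The standard formulation, quoted from the general literature rather than from this paper, is:

\[
A/K \text{ has good reduction at } \mathfrak{p}
\iff
I_{\mathfrak{p}} \text{ acts trivially on } T_{\ell}(A)
\]

for one (equivalently, every) prime \(\ell\) different from the residue characteristic of \(\mathfrak{p}\), where \(I_{\mathfrak{p}}\) is an inertia group at \(\mathfrak{p}\) and \(T_{\ell}(A)\) is the \(\ell\)-adic Tate module. Both lemmas then become statements about the unramifiedness of the Galois action on \(T_{\ell}(A)\).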
In order to prove Theorem 1.8 and Theorem 1.9, we will also need the following theorem, which is a combination of Corollary A.4.6.5, Theorem A.4.5.1 and Remark A.4.5.2 of [CCO14]. Theorem 6.7. Let E be a CM-field and let Φ be a CM-type of E. Let E * Φ be the reflex field of (E, Φ) and let M be the field of moduli for the reflex norm of (E, Φ) (M is an everywhere unramified finite abelian extension of E * Φ ). There exists a prime p and a CM abelian variety (A, i : O E ֒→ End M (A)) over M of CM-type Φ such that A has good reduction at every prime ideal of O M outside p. Moreover, we can choose p such that where C prime is an effectively computable absolute constant in R >0 , for any positive integer n µ n denotes a primitive n-th root of unity, m is the order of the group µ(E) of all roots of unity in E, and p 1 , p 2 , · · · , p s are the distinct prime divisors of m. Assuming the Generalized Riemann Hypothesis, the above bound on p can be improved to Proof of Theorem 1.8. Assume the Generalized Riemann Hypothesis. Let g be a positive integer. Let E be a CM-field such that [E : Q] = 2g. Let Φ be a CM-type of E. Let E * Φ be the reflex field of (E, Φ) and let the field M , the abelian variety A over M and the prime p be as in Theorem 6.7, such that the upper bound on p is given by Equation (21). Denote K := M . As in Corollary 6.6, let S A/K be the set of all prime ideals of O K where the abelian variety A over K does not have good reduction. By our choice of A, for any p ∈ S A/K , p lies above the prime p. Let K ′ be the cyclic extension of K in Corollary 6.6 of degree [K ′ : K] ≤ 2·#µ(E), K ′ /K unramified at any prime ideal p of O K such that p / ∈ S A/K , such that the abelian variety A K ′ over K ′ has everywhere good reduction. Therefore, the extension K ′ /K is ramified only at the prime ideals q of O K ′ such that q lies above the prime p. Let D K ′ /K be the different of the extension K ′ /K. By Chapter 3, Section 6 of [Ser79], we have for any prime ideal q of O K ′ lying above a prime ideal p of O K , where e q/p is the ramification index of q|p. This means that we have where e q/p is the ramification index of the prime ideal q of O K ′ lying above the prime ideal (p) of Z. Therefore, we have where the product is over the prime ideals q of O K ′ above p. Thus, we have where f q/p is the residue degree of the prime ideal q of O K ′ lying above the prime ideal (p) of Z. Since the extension K/E * Φ is unramified, the different D K/E * Φ of the extension K/E * Φ is equal to the unit ideal of O K . Thus, we have where the last inequality follows from Equation (22). By the following Lemma 6.8, we have Plugging Equation (24) and Equation (25) into Equation (23), we get our claim. Lemma 6.8. Let K be a number field of degree [K : Q] = n. Let µ(K) be the group of all roots of unity in K. Then µ(K) is a finite cyclic group of order less than or equal to (2n) 2 . Lemma 6.9. Let K be a number field of degree [K : Q] = n. Then we have where for any positive integer k µ k denotes a primitive k-th root of unity, m is the order of the group µ(K) of all roots of unity in K, and p 1 , p 2 , · · · , p s are the distinct prime divisors of m. Then it is easy to see that for any g ∈ Z ≥1 , for any ǫ > 0, there exists a constant c(g, ǫ) > 0 depending only on g and ǫ such that for any CM-field E with maximal totally real subfield F such that [E : Q] = 2g and |disc(E)| ≥ c(g, ǫ). Remark 6.11. One might wonder whether there is a lower bound for |disc(E * Φ )| in terms of |disc(E)| and [E : Q]. 
The following example shows that the answer is no: Let F be any totally real number field. Let −d ∈ Z ≤−2 be any fundamental discriminant (so disc(Q( √ −d)) = d) such that −d is prime to disc(F ). (For any totally real number field F , there are infinitely many such −d.) Let E be the compositum of the fields F and Q( √ −d). Then E is a CM-field with maximal totally real subfield F . Let Φ be the CMtype defined as follows: For any ϕ 0 ∈ Hom Q (F, R), the element φ : E → C in Φ lying above ϕ 0 always sends √ −d to √ −d. Since disc(Q( √ −d)) = d is coprime to disc(F ), by Theorem 4.26 of [Nar90], for example, we have |disc(E)| = d [F :Q] |disc(F )| 2 . Therefore, for any fixed g ∈ Z ≥2 , the quotient where E is a CM-field of degree [E : Q] = 2g and Φ is a CM-type of E, can be arbitrarily small. Combining Remark 6.11 with Theorem 1.8, we have shown the following: Proposition 6.12. Assume the Generalized Riemann Hypothesis. For any g ∈ Z such that g ≥ 2, for any ǫ > 0, there exists a CM-field E with [E : Q] = 2g, a CM-type Φ of E, a number field K ′ and a CM abelian variety (A, i : O E ֒→ End K ′ (A)) over K ′ of CM-type Φ such that the abelian variety A over K ′ has everywhere good reduction and log |disc(K ′ )| ≤ ǫ log |disc(E)|. Remark 6.13. In view of Remark 6.10 and Proposition 6.12, we cannot remove the "average" condition in Theorem 1.6 and Theorem 1.7-Using only the (Proved) averaged Colmez conjecture, we can only prove averaged analogues of Theorem 1.3 and Theorem 1.4. Remark 6.14. In Theorem 6(i) of [Col98], Colmez has proved that there exist effectively computable absolute constants C Col,3 > 0, C Col,4 ∈ R such that for any CM-field E of degree [E : Q] = 2g and any CM-type Φ of E such that the following hold: Let E be a CM-field of degree [E : Q] = 2g and let Φ be a CM-type of E. It is easy to see that for the function A 0 (E,Φ) from Gal( E * Φ /Q) to C, for any σ ∈ Gal( E * Φ /Q), A 0 (E,Φ) (σ) = g if and only if σ = 1. Therefore, some calculations using the definition of the Artin conductor of Artin characters show that for any g ∈ Z ≥1 , there exist effectively computable constants C µ,1 (g) > 0, C µ,2 (g) ∈ R such that µ (E,Φ) ≥ C µ,1 (g) 1 [E * Φ : Q] log |disc(E * Φ )| + C µ,2 (g) for any CM-field E of degree [E : Q] = 2g and any CM-type Φ of E. We can compare this to Theorem 1.8.
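Two of the displays in the proof of Theorem 1.8 above were lost to extraction. The standard facts from [Ser79, Chapter 3, Section 6] that the argument appears to rely on are the following; this is a reconstruction of the usual statements, not the paper's exact displays:

\[
v_{\mathfrak{q}}\big(\mathfrak{D}_{K'/K}\big) \;\le\; e_{\mathfrak{q}/\mathfrak{p}} - 1 + v_{\mathfrak{q}}\big(e_{\mathfrak{q}/\mathfrak{p}}\big),
\qquad
|\mathrm{disc}(K')| \;=\; |\mathrm{disc}(K)|^{[K':K]} \cdot N_{K/\mathbb{Q}}\big(N_{K'/K}\,\mathfrak{D}_{K'/K}\big),
\]

with equality \(e_{\mathfrak{q}/\mathfrak{p}} - 1\) in the first bound exactly when \(\mathfrak{q} \mid \mathfrak{p}\) is tamely ramified. Since K'/K is ramified only at primes above the single prime p in that proof, the second identity bounds log|disc(K')| by [K' : K] log|disc(K)| plus a ramified contribution of at most about [K' : Q](log p + log[K' : Q]), which is the shape of bound the proof needs.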
2021-11-02T01:15:39.990Z
2021-10-31T00:00:00.000
{ "year": 2022, "sha1": "8e8ae02b89183d05eb88e9622eea11baf7261bf3", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "8e8ae02b89183d05eb88e9622eea11baf7261bf3", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
258909134
pes2o/s2orc
v3-fos-license
Genome assembly of the pioneer species Plantago major L. (Plantaginaceae) provides insight into its global distribution and adaptation to metal-contaminated soil Abstract Plantago is a major genus belonging to the Plantaginaceae family and is used in herbal medicine, functional food, and pastures. Several Plantago species are also characterized by their global distribution, but the mechanism underpinning this is not known. Here, we present a high-quality, chromosome-level genome assembly of Plantago major L., a species of Plantago, by incorporating Oxford Nanopore sequencing and Hi-C technologies. The genome assembly size was approximately 671.27 Mb with a contig N50 length of 31.30 Mb. 31,654 protein-coding genes were identified from the genome. Evolutionary analysis showed that P. major diverged from other Lamiales species at ~62.18 Mya and experienced two rounds of WGD events. Notably, many gene families related to plant acclimation and adaptation expanded. We also found that many polyphenol biosynthesis genes showed high expression patterns in roots. Some amino acid biosynthesis genes, such as those involved in histidine synthesis, were highly induced under metal (Ni) stress that led to the accumulation of corresponding metabolites. These results suggest persuasive arguments for the global distribution of P. major through multiscale analysis. Decoding the P. major genome provides a valuable genomic resource for research on dissecting biological function, molecular evolution, taxonomy, and breeding. Introduction Plantago is a large genus within the Plantaginaceae family, including over 250 species, with broad geographic distributions in temperate and high-elevation tropical regions. 1,2 Although the Plantago genus is well characterized from a taxonomic perspective, its intrageneric classification is still controversial and inadequate, especially within the subgenus Plantago, largely due to the plesiomorphic characters, low morphological variation, and lack of a reference genome. 1,3 Some Plantago species have cosmopolitan distributions and several are used as herbal medicine, food ingredients, and in pastures, such as Plantago major. 4,5 Plantago major, broadleaf plantain or greater plantain, is diploid (2n = 12), wind-pollinated, and self-compatible. 1,6 It is a perennial herbaceous plant with a fibrous root system, a rosette of oval-shaped leaves, and several long spike inflorescences. It can be found in soils with a wide range of fertility, pH, temperature, and moisture and shows outstanding tolerance to diseases, pests, radiation, and chemical pollution. 7,8 P. major is native to central, northern and southwest Asia, and Europe and naturalized worldwide. 9 It grows in various habitats, including meadows, wastelands, roadsides, and other sites of anthropogenic disturbance. 9 To develop resistance and adaptation, plants have evolved with a series of responding mechanisms such as DNA repair, plant-pathogen interaction, and metabolic changes. Metabolic changes include synthesis and accumulation of metabolites. Such specialized metabolism may confer plants with stress resistance and involve the production of primary metabolites such as amino acids and nucleic acids and secondary metabolites such as phenolics and flavonoids. 10 These specialized metabolites may not directly play a part in plant growth and development, but they are essential in interacting with the environment for adaptation and defense. In the case of primary metabolites, some amino acids (e.g. 
proline, histidine, and glutamic acid) can act either as signaling molecules or as chelators dealing with stresses such as drought, salt, and metal stresses or as precursors in the biosynthesis of secondary metabolites. 11,12 Even more so with secondary metabolites, in general, ecological and environmental disturbance can induce their accumulation. 13,14 Polyphenols are an excellent example of secondary metabolites elevating plant adaptation: because these molecules are involved in drought, high soil salinity, extreme temperature, UV-irradiation, nutrient deficiencies, metals, and pathogen attacks. 15 Plantago contains numerous secondary metabolites, such as phenolic compounds and flavonoids. 11,16 Some free amino acids and polyphenols can activate plant tolerance to drought and extreme temperature, defense against pathogen attack, inhibition of DNA damage, and chelation of metal ions. 15,17,18 P. major has the potential as a pioneer in phytoremediation of contaminated soil by accumulating metal ions (e.g. Cu, Pb, Zn, Cd, Cr, and Ni) 19,20 that may be based on its production of certain specialized metabolites. P. major is also a model plant for vasculature biology study since vascular tissue can be isolated easily and intactly from mature leaves. 21 A handy and efficient transformation method for P. major was also developed, 22 which can implement functional verification in situ. Based on these applications, the transport of both nutrient and information molecules was well characterized, such as the transporters of sucrose, 23 the responses to salinity, 24 and the responses under phosphate (Pi) deficiency. 25 To date, the chemical compounds, ecology, and population genetics of P. major have been widely studied. [26][27][28] However, the molecular mechanisms underlying its high pollution tolerance and broad fitness in diverse environments are largely unknown. A high-quality genome assembly is necessary to address these questions. We conducted a chromosome-level genome assembly of P. major and found that genes accounting for the biosynthesis of specialized metabolites, such as free amino acids and polyphenols, were expanded. The genes related to polyphenol biosynthesis are more highly expressed in roots. Another expanded gene family, the histidine biosynthetic (HISN) gene family, which correlates strongly with Ni tolerance, has been characterized. The expression patterns of PmHISN genes provided clues for the tolerance of this species to Ni. These genomic data will provide clues for elucidating the molecular mechanism underlying the robust adaptation of P. major to diverse environments. The reference genome will be a valuable resource for genetic studies and improvement of Plantago, such as genome-assisted breeding of novel cultivars with low-level heavy metal ions. . Sequencing, genome size estimation, and assembly One individual of P. major (PlanMa1, Fig. 1a) was provided by the Shennong Caotang Museum of Traditional Chinese Medicine in Guangzhou, China (113.3445 E, 23.2029 N). It is an inbred line that descended from a single seed for six generations. P. major and its seeds as Chinese herbs have the effect of clearing heat, diuretic and laxative. The total genomic DNA was isolated from fresh leaves using a DNA extraction kit (QIAGEN, Hilden, Germany). Three Nanopore libraries with insert sizes larger than 20 kb were constructed according to a standard protocol (Oxford Nanopore Technology, Oxford, UK), followed by single-molecule DNA sequencing. 
The libraries were sequenced with flow cells on the PromethION platform (Oxford Nanopore Technology, Oxford, UK). A total of 130.60 Gb (~180× coverage of the estimated genome size) of read bases were generated. Adapters and low-quality reads (Q ≤ 15) were removed from the datasets. Before genome assembly, we estimated the genome size using the K-mer method. The number of 17-mer sequences was counted by KmerFreq as included in the SOAPdenovo package v2.04. 29 The P. major genome size was estimated by the following formula: G = K_num/K_depth, where K_num refers to the total number of K-mers and K_depth is the K-mer depth at the highest peak of the K-mer frequency distribution. Sequencing data were assembled using NextDenovo v1.0 (read_cutoff = 1k and seed_cutoff = 20k, blocksize = 8g) and corrected by NextPolish v1.0.1 with default parameters. 30 The genome was assembled using the following parameters: nextgraph_options = -n 83 -Q 6 -I 0.64 -S 0.27 -N 2 -r 0.48 -m 3.81 -C 1180183 -z 20. The quality of the genome assembly was assessed by BUSCO v5.3.2. 31 The genome assembly was also assessed using next-generation sequencing reads: the read mapping rate was 99.02%, the coverage rate was 90.83%, and the final accuracy of the genome was 99.99%.

Hi-C assembly

Fresh leaf material was fixed in formaldehyde to form DNA-protein crosslinks. The restriction enzyme DpnII (New England Biolabs, Hitchin, UK) was used to digest the chromatin. The 5ʹ overhang ends were filled in with biotinylated residues. After re-ligation, DNA was sheared into ~350 bp fragments by sonication. The Hi-C library was prepared following a standard procedure and sequenced on the Illumina NovaSeq 6000 platform with PE150 mode (Illumina, San Diego, USA). A total of 903.68 million clean Hi-C paired-end reads were mapped to the genome assembly using Bowtie2 v2.3.2 (--end-to-end mode; parameters: --very-sensitive, -L 30). 32,33 The number of pseudo-chromosomes was set to six according to previous karyotyping studies. 34 Then the genome was divided into 100 kb bins. A matrix was constructed based on the pairwise comparison by Hi-C-Pro v2.11.1, 35 and a contact map was plotted to estimate the quality of the pseudo-chromosomes using the ggplot2 v3.3.6 package as implemented in R. 36

Gene structural annotation of P. major was performed following three strategies: (i) de novo prediction performed by AUGUSTUS v3.3.3, 40 (ii) homology-based prediction, and (iii) RNA-Seq-based prediction; evidence from the three strategies was integrated into a consensus gene set.

A monocot genome (Setaria viridis) was used as an outgroup for the phylogenetic analysis. OrthoFinder v2.5.4 [57] was utilized to detect orthologous groups of the P. major genome using an e-value threshold of 1e-10. 209 orthogroups with a minimum of 75% of species having single-copy genes were used for phylogeny reconstruction. The protein sequences were aligned using MAFFT v7.508, 45 after which the resulting multiple sequence alignment (MSA) datasets were converted to coding DNA sequence (CDS) format using PAL2NAL version 14.1. 46 Sites of poor alignment quality were removed by Gblocks v0.91b. 47 The final dataset was generated by concatenating the alignments. The phylogenetic tree was constructed by RAxML v8.2.11 with the GTR model and gamma distribution. 48 RelTime-ML, implemented in the MEGA X software, was used for the evolutionary time estimates. 49 Fossil records from the Timetree of Life 50 were used to calibrate the inferred tree. The CAFÉ v5.0.0 package was used to investigate the expansion and contraction of gene families. 51 Enrichment analysis of KEGG was performed on unique genes and expanded gene families.
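As a concrete illustration of the K-mer genome-size estimate described earlier in this section (G = K_num/K_depth), the computation reduces to a few lines once the 17-mer frequency histogram is available. A minimal sketch, assuming a hypothetical whitespace-separated histogram file "kmer17.histo" with one "depth count" pair per line (KmerFreq/jellyfish-style output); the file name and the min_depth error cutoff are illustrative, not taken from the paper:

def estimate_genome_size(histo_path, min_depth=5):
    """Estimate genome size as G = K_num / K_depth from a k-mer histogram."""
    hist = []
    with open(histo_path) as fh:
        for line in fh:
            depth, count = line.split()[:2]
            hist.append((int(depth), int(count)))
    # K_num: total number of k-mers observed, summed over all depths.
    k_num = sum(d * c for d, c in hist)
    # K_depth: the modal depth, skipping the low-depth sequencing-error peak.
    k_depth = max((dc for dc in hist if dc[0] >= min_depth),
                  key=lambda dc: dc[1])[0]
    return k_num / k_depth

size_bp = estimate_genome_size("kmer17.histo")
print(f"estimated genome size: {size_bp / 1e6:.1f} Mb")

Dividing a total k-mer count of roughly the ~130 Gb of read bases reported above by a homozygous peak depth near 180× lands close to the ~701 Mb estimate quoted later in the Results.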
The 4DTv (four-fold synonymous third-codon transversion) method was used to assess the WGD (Whole-genome duplication) event of P. major as well as two Lamiales (O. europaea and S. indicum). An all-to-all search was performed by blastp (e-value < 1e-10). Collinear regions were identified by MCscan among these three genomes. 52 Also, the synonymous substitution rates (Ks) of collinear regions were calculated using CodeML in the PAML package v4.8. 53 Paralogous gene pairs, together with the gene density, gene expression pattern (leaf, root, and seed), and GC content were visualized using the R package 'circlize'. 54 Distribution and ecological ranges For analyzing the distribution and ecological ranges of P. major and three other globally distributed species (I. nil, M. micrantha, and S. viridis), we first obtained geo-referenced locality data for each species from the GBIF (Global Biodiversity Information Facility) (GBIF, https://www.gbif.org) in R, and the occ_download function in the rgbif package (https:// CRAN.R-project.org/package=rgbif). We checked each species' occurrence to ensure the database was representative of their distributions. For each locality, we extracted the aridity index (AI = mean annual precipitation/potential evapotranspiration; MAP/PET) using the Global-Aridity dataset (https:// cgiarcsi.community/data/global-aridity-and-pet-database/). Next, we extracted soil nitrogen concentration and soil bulk density for the 0-20 cm soil depth from the World Soil Database 55 at 1 × 1 degree resolution. To compare different ecological ranges among species, we used ANOVA and multiple comparisons (Tukey HSD) based on the mean and variance of each environmental factor across the whole range. Transcriptome sequencing The PlanMa1 plants were cultivated in pots filled with a substrate mixture of peat moss and perlite in a 3:1 ratio. The plants were maintained under constant conditions of 26°C temperature and a photoperiod of 16 h of light followed by 8 h of darkness. Total RNA of 40-day-old seedlings was extracted from three replicates of fresh leaves, roots, and seeds and treated with DNase I (QIAGEN Genomic). The RNA integrity was validated using the NanoDrop One UV-Vis spectrophotometer. Then mRNAs were enriched by Oligo (dT)-attached magnetic beads and random hexamers were used for cDNA synthesis. RNA-sequencing libraries were subsequently sequenced on the Illumina HiSeq platform with PE150 mode. Raw data were filtered by fastp v0.12.6. 56 The FPKM (Fragments per kilobase per million mapped reads) was used to estimate the expression level of transcripts. In this study, we focus on the expression of polyphenol synthesis genes, and FPKM values were calculated by StringTie v2.1.7. 57 Also, differential gene expression analysis was conducted using DESeq2 with fold change ≥ 2 and FDR-adjusted P-value ≤ 0.05. 58 PmHISNs identification and expression pattern analysis Arabidopsis HISN protein sequences were used as queries to perform BLASTP searches to the P. major database with the e-value<1e-10. Only those with e-value<1e-100 were kept as candidates. A phylogenetic tree was drawn by MEGA X 49 with the Maximum Likelihood method and the bootstrap value of 1,000 using the protein sequences of AtHISNs and PmHISNs. The P. major (PlanMa1) seeds were sown in soil in pots and subsequently maintained in a controlled greenhouse with 16 h light and 8 h darkness periods at 25°C. After 35 days, the experimental P. 
major seedlings were transplanted into a half-strength Hoagland solution, which was replaced on alternate days. Following an additional week, the seedlings were transferred to fresh half-strength Hoagland solutions that were supplemented with varying concentrations of NiSO 4 (0 μM, 200 μM, and 500 μM) for a duration of 24 h. Plants were harvested and split into roots and shoots, with the root material being washed with double-distilled water to remove any residual Ni ions. All samples intended for qRT-PCR were snap-frozen in liquid nitrogen and subsequently stored at -80°C until required. RNAs were extracted as previously mentioned and reverse-transcribed using an oligo (dT) primer in combination with SuperScript II reverse transcriptase (Vazyme). qRT-PCR was performed in the Quantagene™ q225 Detection System, using SYBR Green Master Mix reagent (Vazyme) according to the manufacturer's instructions. PmACTIN2 served as the internal control, and gene expression levels were determined using the 2^(−ΔΔCt) method. 59 A list of the primers used is provided in Supplementary Table S1. The organ-specific expression patterns of AtHISN1A and AtHISN1B were obtained from the TAIR database (https://www.arabidopsis.org/).

Free amino acids measurement

For the analysis of free amino acids (FAA), all samples, including leaves and roots, were subjected to drying in an oven set to 80°C for a duration of 24 h. 0.50 g of each sample was extracted using 25 ml of 0.01 M HCl for 30 min at ambient temperature. Following centrifugation, 2 ml of supernatant was transferred into new tubes and combined with an equal volume of an 8% (v/v) sulfosalicylic acid solution. The resultant mixtures were centrifuged at 12,000 rcf for 5 min. Finally, the supernatant was analyzed by an Amino Acid Analyzer (Sykam S433, Eresing, Germany).

De novo assembly of P. major genome

A total of 96.66 Gb (Gigabases) of Nanopore sequencing data were generated and used for further analysis (Supplementary Table S2). The genome of P. major consists of six pairs of chromosomes (2n = 12, n = 6), and its size is approximately 690 Mb (Megabases). 6 In this study, the genome size was estimated to be ~701 Mb based on K-mer analysis (Supplementary Figure S1a), and the final assembly was 671.27 Mb with a contig N50 size of 31.30 Mb (Table 1). The longest contig reached 72.24 Mb. The quality of the genome assembly was assessed by BUSCO (Benchmarking Universal Single-Copy Orthologs). 31 We successfully detected 95.49% of the complete BUSCOs (S + D) (Supplementary Figure S1b and Table 1). Based on Hi-C assembly with the agglomerative hierarchical clustering algorithm, 157 contigs containing 592.23 Mb of Hi-C data were arranged and placed on six pseudochromosomes, representing 88.23% of the total bases (Table 1). The size of the chromosomes ranged from 74.69 to 113.78 Mb. A contact map was plotted to validate the correctness of the Hi-C assembly; the six assembled pseudochromosomes (named LG01-LG06) corresponded to the chromosome number of P. major (n = 6) (Fig. 1b).

Annotation of P. major genome

Repetitive elements were annotated and masked in P. major before gene prediction. A total of 3.90 million SSRs were detected (Supplementary Table S3). The repeat elements of the P. major genome were estimated to be 469.83 Mb, corresponding to 69.99% of the genomic assembly (Supplementary Fig. S1c and Supplementary Table S4). Non-coding RNAs (ncRNAs) were predicted in the genome as well (Supplementary Table S5).
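Returning briefly to the qRT-PCR methods above: the 2^(−ΔΔCt) relative-expression calculation is a one-line computation once the Ct values are in hand. A minimal sketch; the Ct numbers below are hypothetical placeholders, not measurements from this study:

def fold_change(ct_target_trt, ct_ref_trt, ct_target_ctl, ct_ref_ctl):
    """Livak 2^(-ddCt): target normalized to a reference gene, treated vs control."""
    d_ct_trt = ct_target_trt - ct_ref_trt   # dCt in the treated sample
    d_ct_ctl = ct_target_ctl - ct_ref_ctl   # dCt in the control sample
    dd_ct = d_ct_trt - d_ct_ctl
    return 2.0 ** (-dd_ct)

# Hypothetical example: a PmHISN gene vs the PmACTIN2 internal control,
# Ni-treated root vs untreated root.
print(fold_change(24.1, 18.0, 26.8, 18.2))  # ~5.7-fold induction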
Protein-coding genes were annotated by integrating the de novo, homology-based, and RNA-Seq-based results. The generated consensus P. major gene set included 31,654 protein-coding genes, and the average length of the coding DNA sequences (CDS) was 1,184.4 bp. The mapping rate of RNA-Seq reads was 99.02% and the coverage rate of annotated genes was 90.83%. Functions of 28,390 genes were annotated, corresponding to 89.69% of the predicted genes (Table 2). The quality of the annotation was assessed by BUSCO (Benchmarking Universal Single-Copy Orthologs). 31 We successfully detected 1,350 BUSCOs in the embryophyta_odb10 database, corresponding to 98.18% of P. major genes (Supplementary Fig. S1b and Table 2). After gene annotation, we analyzed gene density, GC content, organ-specific gene expression patterns, and paralogous genes (Fig. 1c). There were 2048 (12.93%) pairs of tandem genes and 4685 (14.80%) collinear genes (Supplementary Tables S6 and S7).

Evolutionary analysis

The genome of P. major was compared with that of other angiosperms, including Striga asiatica, Salvia splendens, Erythranthe guttata, Handroanthus impetiginosus, Sesamum indicum, Genlisea aurea, Dorcoceras hygrometricum, and Olea europaea, which are all Lamiales and thus phylogenetically closely related to P. major, and that of Ipomoea nil, Mikania micrantha, Ricinus communis, Arabidopsis thaliana, and Setaria viridis (Supplementary Table S8). Whole-genome duplication (WGD) is a prevalent phenomenon in angiosperm plants, providing a vast source of raw genetic material for the genesis of new genes. In this study, we investigated genome expansion in P. major by analyzing WGD events. We estimated 4DTv and Ks values based on paralogous gene pairs within collinear regions identified in P. major, O. europaea, and S. indicum. A genome-wide doubling event generates numerous paralogous gene pairs whose Ks values cluster tightly, so peaks in the Ks distribution mark past doubling events. Likewise, a larger number of gene pairs at a given 4DTv value suggests greater genomic diversity or an increased number of redundant genes, possibly indicating species differentiation or ongoing genome duplication. Intra-genome collinearity analysis revealed sharp peaks in both 4DTv and Ks, confirming the occurrence of WGD events in P. major, O. europaea, and S. indicum (Fig. 2b and c). Furthermore, the P. major 4DTv results exhibited two peaks, signifying two WGD events: one occurred during the early evolutionary stage of Lamiales (Peak A, 4DTv ≈ 0.55 and Ks ≈ 1.55), and the other was a more recent genome duplication, potentially occurring within Plantaginaceae (Peak B, 4DTv ≈ 0.35 and Ks ≈ 0.07) (Fig. 2b and c). In contrast, O. europaea and S. indicum had only one pronounced WGD event.

3.4. Unique and expanded genes enriched in metabolite biosynthesis and defense in P. major

A comparison of gene families was made among P. major, O. europaea, and S. indicum (Fig. 2d). The three species shared 8,881 out of the 15,930 orthologous gene families. There were 1,048 unique gene families in P. major, which shared more gene families with S. indicum (3,866) than with O. europaea (2,135) (Fig. 2d). KEGG enrichment analysis indicated that the unique families were enriched in primary metabolite pathways such as histidine metabolism (map00340) and in secondary metabolite pathways such as the phenylpropanoid biosynthesis pathway (map00940), the isoflavonoid and flavonoid biosynthesis pathways (map00943 and map00941), and glutathione metabolism (map00480).
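The unique-family KEGG enrichment just described is, at bottom, a per-pathway hypergeometric test. A minimal sketch; all counts below are hypothetical placeholders (a real analysis would also apply a multiple-testing correction such as Benjamini-Hochberg):

from scipy.stats import hypergeom

def kegg_enrichment_p(n_background, n_pathway, n_selected, n_overlap):
    """P(X >= n_overlap) when n_selected genes are drawn from a background
    of n_background genes, n_pathway of which belong to the pathway."""
    return hypergeom.sf(n_overlap - 1, n_background, n_pathway, n_selected)

# Hypothetical counts: 28,390 annotated genes, 120 mapped to map00340
# (histidine metabolism), 1,048 unique-family genes, 12 of them in map00340.
print(kegg_enrichment_p(28390, 120, 1048, 12))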
Free histidine (His) can chelate Ni in plants and is responsible for nickel transport and tolerance. 60 Phenylpropanoids, isoflavonoids, and flavonoids can improve plants' tolerance to environmental stresses. 14 Glutathione is essential for PCs (phytochelatins) synthesis, a principal class of metal chelators (Yadav 2010). Unique genes in DNA repair (nucleotide excision repair and mismatch repair, i.e. ko03420 and ko03430) and plant defense activities (plant-pathogen interaction, i.e. ko04626) were also enriched in P. major (Fig. 2e). Compared with other angiosperms, 1,486/4,449 gene families manifested expanded/contracted patterns in P. major. The KEGG enrichment analysis of expansion showed similar results for unique families. Some expanded gene families were related to the biosynthesis of amino acids (map01230), histidine metabolism (map00340), the phenylpropanoid biosynthesis pathway (map00940), isoflavonoid and flavonoid biosynthesis pathways (map00943 and map00941), and glutathione metabolism (map00480) (Fig. 2f). These results indicate that those resistance genes retained after WGD events may allow P. major to exhibit a global distribution and adaptations, and the specialized metabolites may explain P. major's repertoire of habitat types and environmental conditions. Adaptation strategy of P. major Plantago major has widely naturalized throughout much of the world (Fig. 3a). Compared with three globally distributed species (I. nil, M. micrantha, and S. viridis), P. major occupies much larger ranges of climatic conditions and soil environments (the widest interquartile ranges in Fig. 3b-d). Specifically, P. major grows over a wide climatic range, but moderately in arid environments (intermediate aridity index values, Fig. 3b) and acidic or infertile soils (the most significant values of soil nitrogen concentration and the lowest values of soil bulk density; Fig. 3c and d). The resistance genes mentioned above may contribute to P. major's adaptation to harsh conditions. Given its wide adaptation, P. major can be used as a pioneer plant in new assarts or disturbed lands. To assess differences in gene expression patterns of phenylpropanoid biosynthesis genes among organs, we analyzed 78 differentially expressed genes (DEGs). The correlation of the expression patterns in different organs was relatively low, while replicates of the same organ showed a similar expression pattern (Fig. 4a). Moreover, the heatmap of the DEGs indicated that leaves, seeds, and roots elegantly exhibited quite different transcriptome profiles (Fig. 4b). Most genes showed a higher expression level in roots than in leaves and seeds, except for the K05350 bglB. Organspecific analysis of DEG indicated that the expression level of phenylpropanoid biosynthesis genes was significantly higher in roots than in leaves and seeds (Fig. 4c-e). The numbers of up-and down-regulated genes in roots and leaves were 49 and 19, respectively. For root/seed and leaf/seed comparisons, the numbers of up-and down-regulated genes were 62/21 and 22/17. The results of our analysis suggest that the expression level of phenylpropanoid biosynthesis genes is significantly higher in roots compared to leaves and seeds in P. major. Our findings indicate that the plant would activate polyphenol synthesis in its roots, which may contribute to its global distribution. Highly expressed histidine biosynthesis genes confer P. 
major high tolerance to Ni stress

Plantago major is a pioneer known to grow in polluted areas with high concentrations of metals, such as Ni. 20 As free His is a detoxicant of Ni in plants 60 and the genes related to His metabolism were expanded in P. major, we further explored the His biosynthesis (HISN) gene family. Ten PmHISNs were identified based on Arabidopsis orthologs. All PmHISNs were designated according to their Arabidopsis orthologs (Supplementary Table S9). HISN1A and HISN1B catalyze the first and also the rate-limiting step in the His biosynthetic pathway, and overexpression of HISN1A/B leads to significant accumulation of His. 62 When compared with Arabidopsis, the expression levels of PmHISN1A and PmHISN1B were much higher than those of the Arabidopsis homologs (Fig. 5b). More importantly, AtHISN1B was barely expressed in roots, while PmHISN1B was expressed at similarly high levels in leaves and roots (Fig. 5b). To explain the difference, we explored the gene and protein structures. PmHISN1A/B share similar gene structures with AtHISN1A/B, which are organized into 11 exons and 10 introns (Supplementary Fig. S3a). The AtHISN1A/B and PmHISN1A/B proteins all encompass HisG and HisG_C (HisG, C-terminal) domains, whereby HisG is the ATP phosphoribosyltransferase domain participating in histidine metabolism (Supplementary Fig. S3b). But the prominent promoter elements of PmHISN1B were far more numerous than those of AtHISN1B (Supplementary Fig. S3c). The accumulation of His is positively correlated with the mRNA levels of HISN1A and HISN1B, which suggests that, in comparison with Arabidopsis, a higher concentration of His and greater Ni tolerance is expected in P. major. In addition, the larger number of HISN genes in related species (within Lamiales) of P. major partially explains its high Ni resistance (Supplementary Fig. S4). To analyze the response of the PmHISNs to Ni stress, we treated P. major with different concentrations of Ni 2+ (200 µM and 500 µM NiSO 4 in half-strength Hoagland solution). Almost all the PmHISNs were induced by the Ni treatment, and a higher concentration of Ni led to higher PmHISN expression (Fig. 5b and c). Moreover, since Ni uptake occurs in the roots, the PmHISNs in roots were more significantly induced. To confirm whether the concentration of His was influenced, we measured the levels of free amino acids. Although the concentrations of His in the leaves remained about the same, the His concentrations in roots were significantly increased (Fig. 5e). This result suggests that Ni can activate the His biosynthetic pathway in P. major, promote His synthesis, and subsequently mediate Ni chelation. Based on these results, we surmise that the expanded genes and shifted expression patterns related to His metabolism also allowed P. major to exhibit a global distribution.

Discussion

Due to a dearth of valid taxonomic characters, the intrageneric classification of the genus Plantago is controversial and inadequate. Several morphological features, including trichomes and seeds, as well as chemotaxonomic analyses, [63][64][65] have been employed in attempts to identify and classify the species. However, none of these methods have yielded a conclusive result that is considered satisfactory, not even with 91 mainly morphological and embryological characters. 66 Recently, high-throughput sequencing was applied to update the taxonomy of Plantago.
1 Nevertheless, owing to the paucity of the reference genome, they just assembled the sequencing reads to the only published plastome of P. media L. 67 As our manuscript was nearing completion, the genome of Plantago ovata was published, highlighting the increasing recognition of the significance of Plantago species in genomic research. 68 Research on the genome of herbal medicines is becoming increasingly popular. The rapid development of next-generation sequencing and chromosome-level assembly technologies makes it possible to produce de novo genome assemblies. 69 In this study, long reads of the P. major genome were generated by Nanopore sequencing, and a well-resolved genome was assembled by Hi-C technology. The final assembled contigs comprised six chromosomes, corresponding to 88% of the assembly. The 671.27 Mb P. major genome included 70% repeat sequences and 31,654 genes. The genome size and gene number of P. major are at a medium level among all compared species (Supplementary Table S8). The genome assembly presented in this study represents a robust resource for clarifying the taxonomic relationships within the genus Plantago. Although previous research in the genus has relied heavily on molecular phylogenetic analyses based on ITS (Internal transcribed spacer), [70][71][72][73] chloroplast, 73,74 and mitochondrial 73 makers or sequences, these methods have proven to be insufficient. ITS sequences are typically shorter than 500 bp and therefore possess a limited number of informative variants. In addition, ITS copies within the genome can exhibit high homogeneity due to concerted evolution, 75 a limitation that also applies to chloroplast and mitochondrial markers. The use of ITS, chloroplast, and mitochondrial sequences in taxonomic studies is limited by several factors. As mentioned previously, these molecular markers can be subject to limited variation due to factors such as maternal inheritance and selective sweeps, respectively. These limitations can hinder the ability of these molecular markers to accurately resolve taxonomic relationships, particularly in cases of recent or rapid speciation events. Consequently, genomic data offers a more comprehensive approach that can overcome these limitations and provide a more accurate and detailed understanding of the taxonomic relationships among species. In summary, the genomic data assembled in this study offers a powerful tool for accurately resolving the taxonomic relationships within Plantago. The genome of P. major was compared with that of other Lamiales, a species-rich and highly diverse order. 76 Table S8). Multiple polyploidization events occurred during the evolution of Lamiales. 77 In this study, both 4DTV and Ks results revealed two rounds of WGD events in P. major: one occurred early in the evolution of Lamiales, and the other more recently (Fig. 2). WGD events lead to a rapid increase in the genome size and expansion of gene families. 78 Gene duplication/expansion may enhance plant disease resistance and adaptation to stress. 78 In this case, gene expansion following WGD enriched gene families associated with adaptation to stress (e.g. phenylpropanoid biosynthesis, isoflavonoid and flavonoid biosynthesis, glutathione metabolism, histidine metabolism, nucleotide excision repair, and plant-pathogen interactions). Plantago species synthesize multiple polyphenols, for example, lignin, iridoid, and caffeoyl phenylethanoid glucosides. 
11,63,79 Polyphenols confer plant tolerance to biotic and abiotic stresses such as pathogen attacks, oxidants, and ultraviolet radiation. 61,80 As a result, polyphenols enhance the survival of plants in various environments. 81 In this study, we identified genes involved in polyphenol synthesis (Fig. 4). Most gene families involved in polyphenol synthesis were expanded, for example, cinnamoyl-CoA reductase (EC:1.2.1.44) and 4-coumarate-CoA ligase (EC:6.2.1.12). We also investigated the expression pattern of polyphenol synthesis genes in different organs of P. major (Fig. 4). The expression levels of phenylpropanoid biosynthesis genes were significantly higher in roots than in aerial parts, indicating high resistance to belowground biotic and abiotic stresses. Corresponding with the above adaptive mechanisms, we noted the wide ecological distribution of P. major across climatic and edaphic conditions (especially drought and soil infertility; Fig. 3). A previous study indicated that polyphenol-rich plants are adapted to arid and infertile habitats; polyphenols affect root growth and reduce the toxic effects of metal ions. 82

We conjecture that the simultaneous high expression of both HISN1A and HISN1B, in contrast to the organ-restricted expression of the Arabidopsis homologs, enhances the Ni tolerance of P. major. One possible explanation would be that the prominent promoter elements of PmHISN1B, such as the CAAT-box, TATA-box, MYB, and MYC motifs, were far more numerous than those of AtHISN1B, and the total number of cis-elements identified upstream of PmHISN1B was also greater than that of AtHISN1B (Supplementary Fig. S3c). Ni treatment induced the His biosynthesis pathway and enhanced the concentrations of some stress-related free amino acids, such as glycine, arginine, serine, leucine, lysine, and isoleucine (Supplementary Fig. S3d). Additionally, evidence is accumulating that P. major is a suitable species for phytoremediation of metal-polluted soils contaminated by Cu, Fe, Pb, and even radioactive U. 83,84 To gain further comprehension regarding the function of HISNs, a comparative analysis was conducted among HISNs obtained from 16 distinct species, including those that are considered related (within Lamiales). The results of this analysis revealed that the related species tend to possess a greater number of HISN genes, particularly with respect to HISN1, which encodes the enzyme responsible for catalyzing both the initial and rate-limiting step in the His biosynthesis pathway (Supplementary Fig. S4). It should be noted that under no circumstances should P. major that has been cultivated within contaminated regions be employed for medicinal purposes.

Figure 5e. The histidine content in P. major leaves and roots under Ni treatments, respectively. CK: the control without Ni treatment. Data represent the mean ± SD of three biological replicates. The Student's t-test is evaluated with respect to the control; *P < 0.05, **P < 0.01.

In conclusion, we have successfully assembled a high-quality, chromosome-level genome of P. major and have provided an annotation for it. This genome can serve as a reference for the investigation of gene functions, genome evolution of Lamiales, and Plantago taxonomy. Based on our analysis, we determined that P. major diverged from other Lamiales species at ~62.18 Mya and underwent two distinct rounds of WGD events. Furthermore, we observed an expansion in the genes responsible for secondary metabolism, with polyphenol biosynthesis and amino acid biosynthesis genes being significantly expressed.
Notably, we observed a strong induction of His synthesis as a consequence of Ni exposure. These results may serve to explain the global distribution of P. major. We suggest that P. major can be used as a pioneer plant in a harsh environment, as well as for phytoremediation of metals. Conflict of interests The authors declare no conflict of interest. Data availability statement The P. major genome assembly was submitted to NCBI GenBank (accession number JAIFAC000000000). The raw sequencing reads and Hi-C data were deposited in the NCBI Sequence Read Archive (SRA) under the BioSample SAMN20255407. Also, nine files of transcriptome raw reads were deposited in SRA under the BioSample SAMN20959326-SAMN20959334. The annotation file is available at figshare (https://figshare.com/articles/dataset/ Plantago_major_evm_gff/15149097). Supplementary Data Supplementary data are available at DNARES online. Supplementary Figure S1. Genome assessment and annotation of P. major. promoters identified by plantCARE. (e)The contents of some stress-related free amino acids. Gly, Glycine; Arg, Arginine; Ser, Serine; Leu, Leucine; Lys, Lysine; Iso, Isoleucine. Data represent the mean±SD of three biological replicates. Supplementary Figure S4. The summary of HISNs in 16 species. The branches of Lamiales species are indicated as red. The evolutionary time scale is displayed at the top of tree.
2023-05-27T06:17:42.890Z
2023-05-25T00:00:00.000
{ "year": 2023, "sha1": "e5a8e6f69cd5dd8ea525d342793032f88dfe7b7f", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1093/dnares/dsad013", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "03db3ab0e6009324cfe35c6c1f051bbf67874b8b", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
208275405
pes2o/s2orc
v3-fos-license
Understanding and treating different patient archetypes in aesthetic medicine

Abstract

Background: Factors that motivate the treatment goals and expectations of the aesthetic patient reflect evolving social, cultural, and commercial influences. The aesthetic practitioner may often be faced with the challenge of first decoding the underlying motives that drive the patient to pursue their specific goals. The challenge for clinicians is further compounded by an increase in patient diversity with respect to race, ethnicity, age, and gender.

Aims: Simplify the path to patient interpretation with identification of primary patient archetypes.

Methods: The "Going Beyond Beauty" (GBB) initiative, consisting of 27 market research projects, was conducted to survey the primary goals and motives for seeking aesthetic treatment. The results were stratified into predominant patient archetypes using segmentation analysis and then validated through online surveys, 1-to-1 interviews, and focus groups conducted with patients. An advisory board of internationally based aesthetic clinicians integrated the data with their own insights to further characterize each archetype.

Results: Data from over 54 000 participants in 17 different countries were distilled into four distinct patient archetypes based on motivating factors, aesthetic goals, initial treatment requests, and treatment opportunities and challenges. These archetypes were named Beautification, Positive Aging, Transformation, and Correction.

Conclusion: The clinician's ability to recognize these four primary archetypes may provide a useful frame of reference to understand patient motives better, anticipate and manage their expectations, and provide the appropriate treatment guidance that best serves the long-term goals of their patients.

| INTRODUCTION

The practice of aesthetic medicine is a combination of interpreting patient goals and skillfully tailoring a treatment approach to provide an optimal aesthetic outcome, and in the process, foster a trusting clinician-patient relationship. In designing the best treatment approach, the aesthetic practitioner may often be faced with the challenge of decoding the underlying motives that drive the patient to pursue their specific goals. While one patient may be very receptive and embrace a proposed treatment approach, another patient with a completely different mindset may reject the same suggestion and withdraw altogether. This challenge is compounded by the fact that clinicians are increasingly encountering greater patient diversity with respect to race, ethnicity, age, and gender as societal comfort with the pursuit of aesthetic treatment grows. Workshops and training sessions are widely available to hone the clinician's technical competence, but there is nothing that directly facilitates the maintenance and practice of patient understanding. Clinicians need a strategy to help them streamline the challenge of patient interpretation, which ideally highlights why there is no one treatment approach that will serve the needs of all. One way to simplify the path to patient interpretation is to identify patients by their primary type or archetype. The Oxford online dictionary defines archetype as "a very typical example of a certain person or thing; an original which has been imitated, a prototype." 1
By using commonly observed goals and motivating factors to identify patients by archetype, clinicians may have a better frame of reference to understand the patient more holistically and select those treatment approaches that will best suit their needs. Although this strategy is not absolute, as patients may fit into more than one archetype or evolve from one archetype to another, these characterizations may aid the clinician's initial ability to understand the goals and motives of individual patients in an increasingly diversifying population. Once a patient's archetype has been identified, the clinician may choose to modify their consultation approach, including the language used, the tone, the pace, and the type of initial treatment suggestion, to reassure the patient that their motivation is understood.

To characterize the aesthetic patient archetypes, a global consumer research study was conducted, which explored the motivations and barriers associated with pursuing aesthetic treatment. The "Going Beyond Beauty" (GBB) initiative was conducted by Allergan from 2014 to 2017 and consisted of 27 market research projects that captured the insights of over 54 000 participants in 17 different countries. The primary goals and motives for seeking treatment were stratified into "types" using segmentation analysis, which was then validated through qualitative methods, including online surveys, 1-to-1 interviews, and focus groups conducted with patients. Through these analyses, four distinct patient archetypes were identified, namely the Beautification, Positive Aging, Transformation, and Correction archetypes.

This overview aims to enhance the clinician's ability to recognize these four primary archetypes of the aesthetic patient by integrating the results of the GBB initiative with the peer-to-peer insight from an advisory board of internationally based aesthetic clinicians. The common motivating factors, goals, and initial treatment requests, as well as treatment opportunities and challenges, were distilled into a profile of each patient archetype. The authors hope to provide aesthetic clinicians with a means to better identify individual patient needs among a diversifying patient population and a key to cultivating a trusting clinician-patient relationship.

| Beautification archetype

The Beautification archetype is characterized by the patient who is innately focused on aesthetics, well-groomed, on-trend with beauty and fashion accessories, and actively pursues maximizing their attractiveness potential. Highly influenced by trends, fashion, social media, and the treatment outcomes of their peer groups, this archetype tends to align their aesthetic goals with a glamorous "look" (eg, celebrity persona). [3][4][5] Common requests of this archetype include "I want to look more attractive" and "I want to look like a certain celebrity," or they want specific features similar to that of a particular celebrity.

FIGURE 1 The Beautification archetype. Pretreatment (A, C) and post-treatment (B, D). The treatment approach designed for the patient involved primary management of her lower temple and lateral suborbicularis oculi fat (SOOF) using VYC a -17.5L b and deep malar fat pad using VYC-20L to increase the maxillary projection and support the orbital retaining ligament. Secondary stage involved direct management to inferior orbital rim (lateral and central) using VYC-15L onto periosteum. Patient photographs provided by Jonquille Chantrey. a VYC, Vycross; b L, Lidocaine
Treatment goals usually include enhancement of individual features such as fuller lips, a slimmer nose, more defined cheeks and jawline, and glowing skin. Figure 1 provides a treatment example of a Beautification patient.

FIGURE 1 The Beautification archetype. Pretreatment (A, C) and post-treatment (B, D). The treatment approach designed for the patient involved primary management of her lower temple and lateral suborbicularis oculi fat (SOOF) using VYC-17.5L and of the deep malar fat pad using VYC-20L to increase the maxillary projection and support the orbital retaining ligament. The secondary stage involved direct management of the inferior orbital rim (lateral and central) using VYC-15L onto periosteum. Patient photographs provided by Jonquille Chantrey. VYC, Vycross; L, Lidocaine.

| Treatment opportunities and challenges

The Beautification archetype is usually open to a range of different treatments; however, they may not be loyal to a single practice, as they may be inclined to shop around and be price-conscious. Their motivation may stem from wanting to look good on social media websites or in "selfies", even if, in the clinician's opinion, their requests might reduce their overall aesthetics. Some may seek exaggerated results that do not match the clinician's aesthetic ideals, or feel they are well-informed but lack insight into their own realistic outcomes. By maintaining focus on 1 or 2 aspects of a certain look, they may not consider the overall harmony of their facial features post-treatment. Some may not be concerned that results are incongruent with their racial/ethnic identity, or with how the treatment of 1-2 areas may impact the potential for long-term treatment planning. Notably, a subset of patients, such as models or actresses, may already be very attractive, have a strong sense of ownership of their beauty, and pose a technical challenge for some clinicians; this may lead to conflict during the consultation and result in the patient feeling misunderstood.

| Treatment considerations

Because this archetype tends to be highly influenced by fashion, social media, and high beauty expectations, the clinician needs to understand the trends that influence this archetype and be able to speak their language at their level. The authors agree that primary treatment goals focus on enhancing volume, defining and projecting features, and improving skin quality. Racial influences play an important role: while fuller lips and reshaped cheeks are high priorities for Western patients, Asian patients tend to focus on facial slimming and on nose, cheek, and chin definition. Within this archetype, there is a marked difference between the 18-20-year-old and the 25-year-old patient, as younger patients (<25 years) may require a more attentive assessment of needs because emotional development, and the confidence that comes with it, may not yet be complete. 6 In addition, there is a greater potential to encounter patients with body dysmorphic disorder (BDD) in this archetype. 7 A comprehensive consultation and a cooling-off period between consultation and treatment can help clarify goals and identify any red flags.

| Transformation archetype

The Transformation archetype is characterized by the patient who wants to improve their social status or competitive edge in the workplace by achieving a specific beauty ideal. In some cases, this may reflect pressure imposed by a culturally defined beauty ideal. This element of treatment motivation differentiates the Transformation archetype from the Beautification archetype. Most are driven to achieve a specific societal or gender ideal with a basis in their specific social culture. [8][9][10] Some of the trends in Korean cosmetic surgery exemplify this archetype. The Korean terms "kyŏrhon sŏnghyŏng" (marriage cosmetic surgery) and "chigŏp sŏnghyŏng" (employment cosmetic surgery) are widely accepted concepts that refer to pursuit of the "right face" to elevate one's chances of success with a specific goal or aspiration. 11,12
While Asian cultures are known for their pursuit of certain aesthetic cultural ideals, these are also shared by South American and Middle Eastern cultures. This archetype is not race-specific but rather based on social culture. Common treatment requests include reshaping of the nose and chin, and treatments that accentuate and enlarge the appearance of the eyes. Figure 2 provides a treatment example of a Transformation patient.

| Treatment opportunities and challenges

The Transformation archetype is usually realistic in their expectations, with a high potential for patient satisfaction. This archetype may also be potentially easier to treat because they have clear objectives, are open to education, and tend to accept the clinician's professional opinion to achieve their goals. Once they take time to consider suggestions, they follow through with treatments.

| Treatment considerations

Because this archetype is focused on transformation, the most suitable treatments will be those that contribute to shaping, projecting, and defining features.

| Correction archetype

The Correction archetype is characterized by the patient who is motivated by a feature they perceive as having a negative impact on their life. They can be continually bothered by a particular feature or flaw, which may or may not be noticeable to others. The feature or flaw can create ongoing embarrassment and may even contribute to social withdrawal and loneliness. There is less focus on a specific aesthetic ideal and more of a desire to rebalance or re-proportion features to simply feel more comfortable in their own skin. The range of bothersome features varies widely and can be congenital or acquired.

FIGURE 3 The Correction archetype. Pretreatment (A, C) and post-treatment (B, D). A detailed assessment of the patient in animation was necessary to address the expressive asymmetry due to congenital hemifacial microsomia. A complex treatment approach was required, which included VYC-20L to the right zygomatic arch and chin menton, VYC-17.5L to the left piriform fossa, VYC-15L to the orbicularis oris, and HYC-24L to the vermillion border. Patient photographs provided by Jonquille Chantrey. VYC, Vycross; L, Lidocaine; HYC, Hylacross.

| Treatment opportunities and challenges

The Correction archetype is usually very focused and specific with a treatment request. Though they are focused on finding the best solution for their issue, they are also open to additional treatments that may enhance treatment results. This archetype is extremely loyal and grateful once they sense the clinician's empathy for their concerns. This archetype can be very satisfying to treat because treatment results have the potential to alleviate the significant emotional burden associated with a long-standing problem. In some cases, the patient's motivation will change following their initial correction. They may become receptive to pursuing other treatments, and their motivations start to become more aligned with a different archetype. But in most cases, when a specific issue is permanently resolved, the motivation for this patient archetype to pursue ongoing or additional treatment is gone, and the patient will likely not return (particularly men).

| Treatment considerations

Because these patients may already be coping with long-term low self-esteem, they tend to exhibit less confidence in the potential success of their treatment and may be concerned about the need for further treatment.
Furthermore, they tend to be more concerned about treatment recovery (eg, pain, time away from work/school, potential complications), as well as more worried about what people will think about their pursuit of treatment. Ideally, once the initial correction is made, additional treatments can be offered to further lessen the impact of the defect and foster comfort with different treatment modalities.

| Positive Aging archetype

The Positive Aging archetype is characterized by the patient who is motivated to minimize the signs of facial aging. These patients want to beautify subtly without changing "who" they are. They want to look like a better version of themselves and take steps to prevent further signs of aging. These patients tend to want natural, subtle results, whether they have short-term goals (eg, a wedding or reunion) or are following a long-term treatment plan. Often, new patients are hesitant about treatment because of their fear of looking unnatural, and initiating the pursuit of treatment can be a significant barrier to overcome. 14,15 Common requests of this archetype include the following: "I look tired," "I look sad," "I want to look the way I feel," and "I want to look good for my age." This archetype is often motivated by a primary desire to age gracefully and eliminate the negative emotional expressions (eg, sad, tired, angry) that can result from facial aging. Correspondingly, the goals of these patients tend to be aligned with gradual, subtle treatments, with initial requests that may include improving skin quality, treatment of upper facial lines (forehead, glabellar, and crow's feet lines), and addressing sagging skin, jowls, and marionette lines.

| Treatment opportunities and challenges

Because this archetype is concerned about treatment results that may be perceived as looking unnatural, trust must be nurtured, particularly when advocating the use of dermal fillers. The first step toward treatment can be very tentative for these patients, and many are "considerers" for years before making their first appointment. However, once trust is developed, these patients are loyal to their practitioner and become more open to a range of treatments. This archetype is more receptive to the clinician's aesthetic ideals and to explanations regarding treatment that will provide results that are natural and congruent with the patient's existing features.

| Treatment considerations

For this archetype, subtle, gradual results achieved with minimally invasive techniques are key to cultivating a long-term trusting relationship and helping them feel more comfortable with the possibility of repeat treatments. Educating patients with models or teaching aids that demonstrate the physical effects of aging will enable them to explore treatment options from a well-informed viewpoint and help them consider treatment as a regimen in a life-long holistic approach. Judicious use of (and conservative doses of) neuromodulators and a focus on treatments for skin quality improvement are generally recommended as a starting point before moving on to fillers and other treatments.

| CONCLUSIONS

In varying degrees, improvement of facial aesthetics is all about self-empowerment, and most patients pursue treatment with the anticipation that it will improve their self-confidence and psychosocial well-being. 16,17
While the desire to improve physical appearance is nothing new, some of the factors that motivate patients and shape treatment expectations have evolved to reflect changing cultural and environmental influences. Undoubtedly, there will always be a desire to amplify beauty and minimize the signs of aging. However, the desire to improve academic, social, and economic status through improved facial aesthetics is becoming a growing impetus for treatment. 8,9 Not surprisingly, aesthetic consumer trends are also strongly influenced by commercial and social media content. The constant flow of visual stimuli through television, movies, and social media ultimately intensifies the focus on everyday aesthetics. [18][19][20][21] Paired with the plethora of information the Internet has to offer, the average patient is now a self-educated consumer who is eager to participate in treatment, but who may also have heightened and sometimes misaligned expectations. 22

The primary patient archetypes are shaped by patterns in treatment goals, motives, and sometimes demographics. Just as there is no single product or treatment approach that will meet the needs of all, it is possible that not every patient will be defined by a single archetype. However, elements of a prevailing archetype will likely emerge with closer evaluation and may even transition with the patient's journey. The utility of the patient archetype may better equip clinicians not only to understand patient motives but also to anticipate and manage expectations, so they can guide patients toward treatments that best serve their long-term goals. Ultimately, a greater multidimensional understanding of the patient has the potential to enhance both clinician-patient communication and the treatment approach, and provides a more holistic form of patient care.

ACKNOWLEDGMENTS

The authors would like to acknowledge the additional advi-

CONFLICT OF INTEREST

This study was funded by Allergan, Inc. Writing and editorial support for this article was provided by Erika von Grote, PhD, Allergan plc, Irvine, CA. S Liew serves as an investigator, speaker, and consultant for Allergan plc. J Chantrey serves as an investigator, speaker, and consultant for Allergan plc. M Silberberg is an employee of Allergan plc and may own stock/options in the company. The opinions expressed in this article are those of the authors. The authors received no honoraria related to the development of this article.

INFORMED CONSENT

All patients have consented to the use of their photographs.
Variation in VIP latrine sludge contents

This study investigated variations in the characteristics of the sludge content from different ventilated improved pit (VIP) latrines and variation in these characteristics at specific depths within each pit. Faecal sludge from 16 VIP latrines within the eThekwini Municipality was collected, and laboratory characterisation including moisture content, total and volatile solids, chemical oxygen demand, and aerobic biodegradability was performed. Sludge samples were collected from 4 specific depths within each pit investigated. The laboratory characterisation showed that none of the VIP latrines investigated had the same sludge characteristics, and that within a pit the sludge characteristics varied with increasing depth. This supports the motivating hypothesis that, depending on household habits and local environmental conditions, there should be considerable variation in the organic content, moisture content, non-biodegradable content and microbial population between different pits. The variation with increasing depth within a pit is expected, since fresh material is constantly being added to the pit, overlaying older material which may have undergone a certain degree of stabilisation.

Introduction

In South Africa at present, a considerable number of ventilated improved pit (VIP) latrines and conventional pit latrines in rural and peri-urban settlements around the country and, in particular, within the eThekwini Municipality, are full and require immediate emptying. The challenge is finding an appropriate and sustainable disposal route for sludge evacuated from these full pits. It is therefore important to determine the characteristics of the sludge present in the pits.

VIPs are used as an anaerobic accumulation system for stabilising faecal matter, urine and other added materials, depending on household habits (Chaggu, 2004); they function as containment for the digestion of fresh faeces and the storage of the digested faeces, and are designed primarily for the storage of the digested solids (Mara, 1996). The content of any particular VIP latrine consists of a wide range of materials. It is impossible to predict the composition of the material present in any particular pit without physically observing the contents of the pit or digging it out, since many households make use of the pit either for their basic sanitation needs or for both sanitation needs and the dumping of household solid refuse. In addition to faecal matter, a large variety of other material, such as newspaper, magazines, broken glass, bottles, rags, plastic bags and other household waste, can be found in a pit (Fig. 1).

The objective of this study was to investigate the variation in the characteristics of VIP latrine sludge content and the degree of stabilisation of the sludge content with increasing depth as one excavates the pit. The laboratory results of samples collected from 16 VIP latrines at 4 specific depths within each pit latrine are described in this paper.

Buckley et al. (2008) proposed that the faecal sludge portion within any pit latrine comprises 4 theoretical categories, as shown in Fig. 2:
• The first category (i) contains sludge in which readily biodegradable components are still present and in which rapid aerobic degradation is taking place.
• The second category (ii) is the layer in which aerobic degradation of hydrolysable organic material takes place at a rate limited by the aerobic hydrolysis of complex organic molecules to simpler compounds.
• The third category (iii) is suggested to be an anaerobic layer, due to the occlusion of oxygen by covering material; anaerobic degradation in this layer is controlled by the rate of anaerobic hydrolysis of complex organic molecules to simpler molecules.
• The fourth category (iv) is the lowest, bottom layer of the pit; the sludge in this layer has attained a significant degree of stabilisation, and no further stabilisation of organic material occurs within the remaining life span of the pit.

This hypothesis applies when there is relatively little movement of material in the pit after original addition, such that the age of the material in the pit (the amount of time since it was deposited) increases with increasing depth; it is therefore probably limited to relatively dry pits (no free liquid surface). In this case, the amount of biodegradable solids as a fraction of total solids should decrease with increasing depth for samples collected from the surface layer, Layer (i), through to Layer (iii), and should remain constant in Layer (iv). This would be observed as decreases in chemical oxygen demand (COD), volatile solids (VS) and biodegradability of the pit latrine sludge content as a function of total solids as one digs from the surface layer down to the bottom layer of the pit. It should also be noted that, depending on household habits, local environmental conditions, and the history of these factors, there will be considerable temporal variations in the moisture content, organic content, non-biodegradable content and microbial population of new material as it is added to the pit; variations will therefore occur within the pit, and similarly large variations will occur between different pit latrines.

Sampling techniques

Based on this proposition, and in order to achieve the study objective, samples were collected from 4 different depths within each pit:
• Top level (surface material)
• After 0.5 m emptying depth
• After 1.0 m emptying depth
• Bottom level

Each pit was emptied manually using a shovel, bucket and waste skips. The emptying process was carried out by Fukamela contractors, with general assistance from the eThekwini Water and Sanitation unit. Samples were collected at each location within the pit as the digging process was carried out. Each sample was collected in a plastic bag and placed in the collection bucket. Plastic bags were used so as to limit the amount of air the sample came in contact with, after which the samples were taken to the laboratory and stored in the cool room at 4°C before testing. Figure 3 shows how samples were collected.
Laboratory characterisation of samples

Laboratory characterisation included moisture content, solids (total and volatile solids), chemical oxygen demand (COD), and aerobic biodegradability. The moisture content, solids, and COD analyses were performed using standard methods (APHA, 1998). The aerobic biodegradability tests involved suspending 50 g of well-mixed sample in 1 ℓ of tap water in a large Erlenmeyer flask; the mass of the suspension was recorded. The suspension was then analysed for total COD and aerated with saturated air for 5 days. The mass of the suspension was again recorded, after which samples were taken and analysed for total COD. The biodegradable COD content of the sample was calculated as the ratio of the amount of COD removed by the aeration process to the original COD content of the suspension, with corrections made for moisture loss through evaporation. The principle of the method is that vigorous aeration of sludge samples suspended in water for an extended period will result in biological oxidation of all the organic material in the sludge sample that is inherently biologically oxidisable. Thus, the difference in COD content before and after aeration is the biodegradable COD of the sample (g biodegradable COD/g COD).

Each analysis was carried out in triplicate on each of the samples collected, and the average of each analysis was computed for the final results. Accuracy checks conducted on each analysis confirmed that the overall coefficient of variance was less than 10%.
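For illustration, the triplicate averaging and the coefficient-of-variance acceptance check can be sketched as follows; this is a minimal Python sketch, and the data structure and readings below are hypothetical rather than taken from the study records.

```python
# Minimal sketch (hypothetical readings): average triplicate analyses and
# flag any triplicate whose coefficient of variance (CV = std/mean) exceeds
# the 10% acceptance limit quoted in the text.
import numpy as np

# Hypothetical triplicates, keyed by (pit, depth, analyte) -- illustrative only.
triplicates = {
    ("pit_01", "surface", "moisture_%"): [76.8, 77.5, 76.1],
    ("pit_01", "0.5_m",   "moisture_%"): [71.2, 69.8, 70.5],
}

for (pit, depth, analyte), reps in triplicates.items():
    reps = np.asarray(reps, dtype=float)
    mean = reps.mean()
    cv = reps.std(ddof=1) / mean          # sample standard deviation over mean
    status = "accept" if cv < 0.10 else "re-analyse"
    print(f"{pit} {depth} {analyte}: mean = {mean:.1f}, CV = {cv:.1%} -> {status}")
```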
Results

The moisture content results are shown in Fig. 4. The moisture content of the pit materials can influence microbial activity. As shown in Fig. 4a, within each pit there was considerable variation (p < 0.05) in the moisture content at different layers of the pit. The moisture content showed a general decrease with increasing depth. This suggests that most of the pit latrines investigated were located in areas where most of the pit volume was above the level where free groundwater can be found at the time that the pit was sampled, implying a net movement of water out of the pit. As shown in Fig. 4c, the average total moisture content within each pit analysed was about 60%; this falls within the range reported in the literature (50 to 60% of the total weight) to be adequate for microbial activity (Peavy et al., 1985; EPA, 1995). Hence, biological activity in most of the pits would not have ceased due to low moisture content.

The general trend in the moisture content results for all pits was a decrease from the surface to 1 m depth, and little to no change from 1 m to 1.5 m. An atypical result was observed for Pit 16, where there was a gradual increase in the moisture content of the material from the surface of the pit to the bottom. This suggests that there might be water ingress from elsewhere, for example from groundwater or a leaking tap nearby. On average, the mean moisture content of the surface layer was found to be 77% and that of the bottom layer 67%, as shown in Fig. 4b. In eight of the pit latrines investigated, the moisture content at the bottom was substantially higher than that of the 1 m depth sludge samples. These pit latrines may have been located such that the water table was higher than the bottom of the pit. The average moisture content for all of the 16 pits analysed decreased down the pit, with an increase at the bottom layer, as shown in Fig. 4b.

Regression/correlation analysis was performed using SPSS15 and CurveExpert 1.3 and showed that there was not a significant linear relationship between the average moisture content and depth within the pit. This supports the earlier statement that most of the pit latrines investigated were located in areas where most of the pit volume was above the level where free groundwater can be found at the time of sampling, such that there might be a net movement of water out of the pit. Univariate analysis of variance was carried out using SPSS15 with a post-hoc Scheffe test to compare the mean moisture values of the samples collected at different depths. It was found that only the moisture contents from the top surface and the bottom layer of the pit were significantly different from each other.

The volatile solids characterisation results are presented in Fig. 5. The most important feature observed, as shown in Fig. 5a, is that for each of the 16 pits investigated the volatile solids as a fraction of the total solids decreases, although not in a regular manner, with increasing depth down the pit. This trend is reversed in Pit 16, although the apparent upward trend in the volatile solids fraction there is not statistically significant. Figure 5b shows a decreasing trend in the average volatile solids content as a fraction of total solids for each of the 16 pits, from the top surface to the bottom layer. This suggests that the degree of stabilisation in the pit increases from the top surface to the bottom layer, leaving only non-volatile (ash-like) components. Figure 5c shows that there was significant variation in the pit-average volatile solids values across all 16 pits analysed.

Regression/correlation analysis was undertaken to investigate the relationship between volatile solids as a fraction of total solids and the depth from which samples were collected, using SPSS15 and CurveExpert 1.3. The results showed that there is a significant (p < 0.05) linear relationship between volatile solids composition and the different layers from which the samples were collected. Univariate analysis of variance was also performed using SPSS15 with a post-hoc Scheffe test to compare the mean volatile solids values of the samples collected at different depths. It was found that there was a significant difference in volatile solids between the top layer, the 0.5 m depth and the 1 m depth, for all samples collected from these depths. There was no significant difference between the values for the 1 m depth and the bottom layer.

Figure 6 presents the COD characterisation results (as g COD/g dried sample). Chemical oxygen demand (COD) is a measure of the oxidisable organic matter present in samples and can be used as an indication of the degree of degradation of the pit contents. As shown in Fig. 6a, the COD concentration (on a dry basis) at the surface of the pits analysed is significantly higher than that of the bottom layer (except for Pits 5 and 11: in Pit 5 the bottom value is similar to the surface value, and in Pit 11 it is greater). Figure 6b presents the average COD for the 16 pits at different depths. The COD in g/g dry sample follows a decreasing trend from the surface layer of the pit down to the bottom layer, implying that additional degradation/stabilisation occurs down the depth of the pit. It can be seen from Fig. 6c that the average COD in g/g dry sample varies widely between pits.
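The same regression-plus-ANOVA workflow recurs for each analyte (moisture, VS/TS, COD and, below, biodegradability). As an illustration only, the following is a minimal Python sketch of an equivalent open-source analysis, assuming hypothetical VS/TS data and using Tukey's HSD as a stand-in for the Scheffe post-hoc test that was run in SPSS15; the two post-hoc tests are similar but not identical, so borderline pairwise conclusions may differ.

```python
# Minimal sketch (hypothetical data): linear regression of an analyte against
# depth, one-way ANOVA across the four sampling depths, and pairwise post-hoc
# comparisons. Tukey's HSD stands in for the Scheffe test used in the study.
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

depths_m = np.array([0.0, 0.5, 1.0, 1.5])      # surface, 0.5 m, 1.0 m, bottom
# Hypothetical VS/TS fractions; rows = pits, columns = depths.
vs_ts = np.array([
    [0.62, 0.55, 0.47, 0.45],
    [0.58, 0.52, 0.44, 0.43],
    [0.65, 0.57, 0.49, 0.48],
    [0.60, 0.51, 0.45, 0.44],
])

# Linear regression of VS/TS on depth, pooled over pits.
reg = stats.linregress(np.tile(depths_m, vs_ts.shape[0]), vs_ts.ravel())
print(f"regression: slope = {reg.slope:.3f} per m, p = {reg.pvalue:.4f}")

# One-way ANOVA across depths.
f_stat, p_anova = stats.f_oneway(*[vs_ts[:, j] for j in range(vs_ts.shape[1])])
print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.4f}")

# Pairwise post-hoc comparisons between depths.
labels = np.repeat(["surface", "0.5 m", "1.0 m", "bottom"], vs_ts.shape[0])
print(pairwise_tukeyhsd(vs_ts.T.ravel(), labels, alpha=0.05))
```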
Regression/correlation analysis was performed to investigate the relationship between COD concentration and depth, using SPSS15 and CurveExpert 1.3. The results indicated a linear relationship between COD concentrations and the different layers from which samples were collected. A univariate analysis of variance was also performed using SPSS15 with a post-hoc Scheffe test to compare the mean COD values of the samples collected at different depths. It was found that there was a significant difference (p < 0.05) in COD between all samples collected from different depths, except between the 1 m depth and the bottom layer. These results support the Buckley et al. (2008) hypothesis that biological stabilisation is complete after a period of time: sufficiently old material does not degrade further.

Figure 7 presents the aerobic biodegradability results. The aerobic biodegradability test gives an estimate of the amount of biodegradable material present in each sample. A low value indicates that the sample contains little biodegradable material and has therefore undergone a significant degree of stabilisation. Due to time and equipment constraints, only half of the total number of samples collected could be analysed, because the delay between sampling and analysis would otherwise have been too great for the results to be valid. The biodegradability results for all of the 8 pits analysed followed the same trend. Figure 7a, which presents the biodegradability results at different depths for each of the 8 pits, shows a decreasing trend from the surface layer to the bottom layer of each pit. This suggests that the degree of stabilisation increases from the surface layer to the bottom layer of the pit. The average biodegradability of each layer, for the 8 pits analysed (Fig. 7b), likewise showed a decreasing trend from the surface layer to the bottom layer. This supports the motivating hypothesis that the degree of stabilisation within the pit increases with increasing depth.

Figure 7c shows that none of the 8 pits had the same degree of stabilisation, and that the average biodegradability within each of the 8 pits was below 50%. Regression/correlation analysis showed a linear relationship between biodegradability and the different layers from which samples were collected. A univariate analysis of variance was also performed using SPSS15 with a post-hoc Scheffe test to compare the mean biodegradability values of the samples collected at different depths. It was found that there was a significant difference (p < 0.05) in biodegradability between all samples collected from different depths, except between the 1 m depth and the bottom layer (1.5 m depth), where there was no significant difference.
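To make the biodegradability measure used above concrete, the following is a minimal sketch of the calculation described in the laboratory characterisation, using hypothetical readings: the biodegradable COD fraction is the COD removed over the 5 days of aeration divided by the initial COD, with the recorded suspension masses used to correct for evaporative water loss.

```python
# Minimal sketch (hypothetical readings) of the aerobic biodegradability
# calculation: compare total COD mass before and after 5 days of aeration,
# using the recorded suspension masses to correct for evaporation.
def biodegradable_cod_fraction(cod0_g_per_kg, mass0_kg, cod5_g_per_kg, mass5_kg):
    """Return g biodegradable COD per g initial COD."""
    total_cod0 = cod0_g_per_kg * mass0_kg      # total COD before aeration (g)
    total_cod5 = cod5_g_per_kg * mass5_kg      # total COD after aeration (g);
                                               # the mass term corrects for the
                                               # concentrating effect of
                                               # evaporative water loss
    return (total_cod0 - total_cod5) / total_cod0

# Hypothetical example: 50 g sludge in 1 liter of water, ~5% mass lost to
# evaporation over the 5 days of aeration.
print(f"{biodegradable_cod_fraction(12.0, 1.05, 6.3, 1.00):.0%} biodegradable")
```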
Discussion

The study was carried out in the eThekwini Municipality, where pit conditions are predominantly fairly dry, i.e., there is usually no free liquid on the top surface of the pits. It should be noted that researchers with experience of pit latrines in Asia and other parts of Africa consider those found in eThekwini to be unusually dry. Thus, the degree of stratification in the pit (and therefore the limited mixing between layers) may not necessarily be found under different conditions, especially under wet conditions. With that stipulation in mind, it was found that all analytes correlated with biodegradable material, i.e. COD, volatile solids fraction and biodegradable COD, decreased significantly between the surface layer sample and the third layer sample, taken from approximately 1 m below the surface. However, the difference between the 1 m sample and the bottom sample was not statistically significant. These results support the Buckley et al. (2008) hypothesis that biological stabilisation, otherwise described as the degradation of biodegradable components, occurs in a section of the pit contents that extends from the surface down to a point corresponding with material deposited some years previously, but that below this section the material has reached a composition that does not degrade further to any substantial degree with time. This result challenges the common assumption that pit latrines act as storage vessels in which little biodegradation occurs.

From these results, a picture of the life cycle of the pit can be developed. When a pit is first commissioned, or emptied, the material added to the pit is fairly fresh, and to begin with the pit material has undergone little stabilisation; it all corresponds to the upper layers of the Buckley et al. (2008) hypothesis. After a period of time, as material undergoes degradation and gets covered over with fresh material, the bottom layers become anaerobic and partially degraded (Layer (iii) of the Buckley et al. hypothesis), while the new top layer is the Buckley et al. Layer (ii). After a considerable amount of time (years), the bottom layers have undergone degradation to an extent that they cannot degrade further under pit conditions, and may be said to be fully stabilised (Layer (iv)). Once Layer (iv) has become established, and assuming that the material entering the pit is added at a fairly constant rate and composition, the rate at which the pit latrine contents accumulate is the rate at which Layer (iv) increases, since the layers above will move upward in a steady fashion. Thus, the rate at which the pit fills is approximately equal to the rate at which material that will ultimately end up as unbiodegradable residue is added to the pit. This is, of course, a much lower rate than the volume addition rate of fresh pit contents.

The important corollary of this outcome is that the only sustainable way to reduce the pit accumulation rate is to reduce the amount of material that will ultimately end up as unbiodegradable residue. Increasing the rate of degradation will only result in the thickness of the combined Buckley et al. (2008) Layers (ii) and (iii) being smaller, which would extend the life of the pit slightly by reducing the average accumulation rate.
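This corollary can be illustrated with a back-of-the-envelope calculation; every figure in the sketch below is hypothetical and chosen only to show the arithmetic, not taken from the study.

```python
# Back-of-the-envelope sketch (all figures hypothetical): once Layer (iv) is
# established, the long-run filling rate equals the rate at which
# ultimately-unbiodegradable residue is added, not the fresh-addition rate.
fresh_addition_L_per_day = 1.5        # wet sludge plus refuse added per day
unbiodegradable_fraction = 0.25       # fraction surviving as stable residue
pit_volume_L = 2000.0                 # effective pit capacity

residue_rate_L_per_day = fresh_addition_L_per_day * unbiodegradable_fraction
years_to_fill = pit_volume_L / residue_rate_L_per_day / 365.0
print(f"long-run accumulation: {residue_rate_L_per_day:.2f} L/day, "
      f"~{years_to_fill:.0f} years to fill")
```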
Alternatively, if it were possible to degrade the Layer (iv) contents further than occurs naturally (i.e. to change the yield of non-degradable residue from pit feed material), the amount of material that ultimately ends up as unbiodegradable residue would be a smaller proportion of what is originally added, with the same net effect. To date, there is no documented method of achieving either of these options.

These results do not indicate at what distance below the surface the interface between the Buckley et al. Layer (iii) and Layer (iv) exists. However, if one assumes that the rate of reduction of COD concentration, fraction of volatile solids and biodegradability is constant over the sludge residence time in the pit, a simple linear fit of the data suggests that Layer (iii) extends to approximately 1 m below the surface of the pit, and that the remainder of the material will not undergo significantly more degradation under the prevailing pit conditions.
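The simple linear fit referred to above can be sketched as follows; the depth profile is hypothetical and merely mimics the reported trend of a roughly linear decline over the upper metre followed by a plateau.

```python
# Minimal sketch (hypothetical profile): fit biodegradability against depth
# over the upper samples and find where the fit reaches the stable
# bottom-layer value, taken here as the Layer (iii)/(iv) interface.
import numpy as np

depth_m = np.array([0.0, 0.5, 1.0, 1.5])
biodeg = np.array([0.52, 0.38, 0.24, 0.23])   # hypothetical mean fractions

slope, intercept = np.polyfit(depth_m[:3], biodeg[:3], 1)  # upper profile only
stable_level = biodeg[-1]                                  # bottom-layer plateau
interface_depth = (stable_level - intercept) / slope
print(f"estimated Layer (iii)/(iv) interface at ~{interface_depth:.2f} m")
```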
Conclusions

The purpose of this paper was to investigate the variations in the characteristics of sludge content from different ventilated improved pit latrines and the variation in these characteristics at specific depths within each VIP latrine from which samples were collected. The measurements did not take into consideration general household waste found in the pit latrines sampled; for practical reasons, only the faecal sludge component of the pit was considered. The characterisation results have provided information on the variability of VIP latrine sludge content from one pit to another and at different layers within a pit. It was found that none of the pits from which samples were collected had the same sludge characteristics, despite the fact that all VIPs used in this study were located within similar geological/environmental conditions, and that the biodegradable material present in faecal sludge found in pit latrines changes with time.

The amount of biodegradable material in terms of COD and organic solids (volatile solids) content decreases down the pit from the surface layer to the bottom, suggesting that changes in sludge content take place with time within a pit. The average COD obtained for faecal material at the surface of the 16 pits investigated was found to be 0.603 g COD/g dry sample, which is significantly lower than the approximate value of 1.13 g COD/g dry sample obtained from the characterisation study of fresh faeces by Nwaneri (2009) and other values reported in the literature, such as those of Almeida (1992) and Lopez (2002). Also, there was a significant difference between the amount of volatile solids at the surface of the pit (58% gVS/gTS) and that of fresh faeces (84% gVS/gTS), and the average biodegradability obtained for the surface layer of the pit (52%) was found to be significantly lower than the value reported in the literature for fresh faeces (80%). This implies that the material present at the surface layer in the pits where samples were collected had already undergone a certain degree of stabilisation compared to fresh faeces. It also implies that, immediately after faeces are deposited in the pit, degradation of the readily biodegradable components takes place rapidly, if it is assumed that what goes into the pit is adequately represented by the values reported in the literature for the characteristics of fresh faeces. This study has indicated that, for relatively dry pit latrines (no free surface of water), physico-chemical analyses of pit latrine contents at different levels in the pit produce profiles for COD concentration, fraction of volatile solids and biodegradable COD that correspond well with the Buckley et al. (2008) hypothesis of processes in pit latrines, and may therefore be regarded as evidence in support of this hypothesis.

The logical consequence of this hypothesis is that the rate at which the pit fills is approximately equal to the rate at which material that will ultimately end up as unbiodegradable residue is added to the pit. This leads to the corollary that the only sustainable way to reduce the pit accumulation rate is to reduce the amount of such material added to the pit, i.e. by eliminating household solid waste from the pit latrine. It may therefore be concluded that considerable variation exists in the organic content, moisture content and degree of stabilisation of contents from different pits, and also that the degree of stabilisation within a pit increases from the surface layer of the pit down to the bottom layer.

Finally, it is estimated that the layer of material in the pit that is not fully degraded is approximately 1 m thick, although this will differ with feed addition rate, pit conditions and pit cross-sectional area.

Figure 1: Typical content of a pit latrine from 2 pits located in different communities within the eThekwini Municipality.
Figure 2: Diagram showing the different theoretical layers within a pit.
Figure 4: Moisture content characterisation results: (a) for each of the 16 pits from different layers within each pit; (b) average moisture content at each layer for the 16 pits; (c) average moisture content within each of the 16 pits. Error bars represent standard deviation.
Figure 5: Volatile solids characterisation results: (a) for each of the 16 pits from different layers within each pit; (b) average volatile solids at each layer for the 16 pits; (c) average volatile solids within each of the 16 pits. Error bars represent standard deviation.
Figure 6: COD characterisation results (g COD/g dry sample): (a) for each of the 16 pits from different layers within each pit; (b) average COD at each layer for the 16 pits; (c) average COD within each of the 16 pits. Error bars represent standard deviation.
Figure 7: Aerobic biodegradability results: (a) for each of the 8 pits analysed from different layers within each pit; (b) average biodegradability at each layer for the 8 pits; (c) average biodegradability within each of the 8 pits. Error bars represent standard deviation.
NF-κB in the crosshairs: Rethinking an old riddle

Constitutive NF-κB signalling has been implicated in the pathogenesis of most human malignancies and virtually all non-malignant pathologies. Accordingly, the NF-κB pathway has been aggressively pursued as an attractive therapeutic target for drug discovery. However, the severe on-target toxicities associated with systemic NF-κB inhibition have thus far precluded the development of a clinically useful, NF-κB-targeting medicine as a way to treat patients with either oncological or non-oncological diseases. This minireview discusses some of the more promising approaches currently being developed to circumvent the preclusive safety liabilities of global NF-κB blockade by selectively targeting pathogenic NF-κB signalling in cancer, while preserving the multiple physiological functions of NF-κB in host defence responses and tissue homeostasis.

Introduction

The anticancer arsenal has traditionally consisted of a limited number of broadly active cytotoxic chemotherapeutics characterised by a small therapeutic index and a minimal capacity to discriminate between malignant and normal cells. Over the past 25 years, fundamental advances in the field of molecular oncology and the understanding of many of the core mechanisms driving oncogenesis have enabled the generation of rationally designed, targeted therapies which selectively interfere with discrete oncogenic effectors, thereby opening the door to an era of stratified oncology and, consequently, revolutionising the clinical management of cancer patients. Indeed, the oncology field is currently undergoing a new revolution with the boom of anticancer immunotherapies capable of producing long-term remissions and even curative outcomes, breaking away from traditional paradigms by targeting the non-malignant, rather than malignant, cell components within tumours (Hodi et al., 2010; Brahmer et al., 2012).

The scientific breakthroughs of the past few decades have enabled the creation of a new generation of anticancer medicines, which couple greater specificity with reduced adverse effects, thus equipping the current anticancer armoury with multiple classes of new agents which selectively interfere with a wide spectrum of discrete drivers of oncogenesis. However, while an ever-growing number of cancer-driving mechanisms and signalling pathways have thus far been successfully pharmacologically targeted, leading to improved clinical outcomes in oncology, a select group of other pathways has proven defiant to therapeutic intervention. Among these, the NF-κB pathway stands out as perhaps the most illustrious example and arguably the one that has coalesced the greatest frustration and disappointment.

Ubiquitous NF-κB transcription factors are central coordinating regulators of the host defence responses to stress, injury and infection (Hayden and Ghosh, 2012; Zhang et al., 2017). In addition to fulfilling these elemental physiological roles, NF-κB contributes to the pathogenesis of most of the chief threats to global human health, including cancer, atherosclerosis, diabetes and chronic inflammatory diseases (Xia et al., 2014; DiDonato et al., 2012). Aberrant NF-κB signalling is a hallmark of the large majority of human cancers, where it drives oncogenesis, disease recurrence and therapy resistance, largely by regulating genes that suppress malignant cell apoptosis and govern inflammation in the tumour microenvironment (TME) (Xia et al., 2014; DiDonato et al., 2012).
Unsurprisingly, owing to these pivotal pathogenic roles of NF-κB, the targeting of the NF-κB pathway has been a paramount objective of the pharmaceutical industry and the focus of worldwide research efforts for the past 25 years, as a means to improve the clinical management of both oncological and non-oncological patients, especially within particularly refractory disease indications (Gilmore and Herscovitch, 2006; Begalli et al., 2017). However, as best illustrated by the ill-fated pursuit of a clinically useful inhibitor of IκBα kinase (IKK)β, the kinase responsible for phosphorylating IκB proteins and enabling nuclear NF-κB translocation (Hayden and Ghosh, 2012), achieving this goal has to this day proven an insurmountable problem, owing to the failure of traditional IKK/NF-κB-targeting strategies to preserve the pleiotropic and ubiquitous physiologic functions of NF-κB (Greten et al., 2007; Hsu et al., 2011). This minireview offers a glimpse into some of the more promising emerging approaches currently being considered to circumvent these inherent limitations of conventional NF-κB inhibitors, with a focus on oncology.

1.1. The futile pursuit of a specific NF-κB inhibitor: an historical perspective on an obstinate conundrum

Following its discovery by Baltimore and colleagues in 1986, as a nuclear factor binding to a conserved DNA enhancer region of the κ light-chain immunoglobulin gene in activated B cells, the NF-κB signalling pathway soon became the paradigm of the rapid response mechanisms governing the cellular adaptation to environmental or internal changes by regulating the expression of versatile, inducible genetic programmes (Hayden and Ghosh, 2012; Zhang et al., 2017). In mammals, NF-κB comprises a family of five proteins, known as RelA/p65, RelB, c-Rel, p50/NF-κB1 (p105), and p52/NF-κB2 (p100), which can form multiple combinations of distinct heterodimeric and homodimeric complexes, the most abundant of which is the RelA/p50 heterodimer (Fig. 1) (Begalli et al., 2017; Zhang et al., 2017). In cells, these complexes are normally held in a state of latency in the cytosol, where they are bound to IκB-family inhibitory proteins, and can be activated in response to inflammatory stimuli, microbial products and a broad spectrum of other signals, which cause the site-specific phosphorylation of IκBs by the IKK complex, leading to the sequential polyubiquitination and proteolysis of phosphorylated IκBs by the SCF-βTrCP E3 ubiquitin-protein ligase complex and the 26S proteasome, respectively (Hayden and Ghosh, 2012; Begalli et al., 2017; Winston et al., 1999; Spencer et al., 1999). Thereafter, the liberated NF-κB dimers enter the nucleus, where they bind to distinct DNA elements, known as κB sites, to coordinate the expression of a diverse array of inflammatory mediators, immunoregulators, apoptosis inhibitors, developmental signals and numerous other factors orchestrating the immune and inflammatory responses, through a process that is normally transient and self-limiting (Hayden and Ghosh, 2012; Zhang et al., 2017). Over the past three decades, ubiquitous NF-κB dimers have been shown to activate and repress hundreds of different target genes mediating these functions, bestowing upon this family of transcription factors a remarkable capability to inducibly alter cell physiology (Hayden and Ghosh, 2012; Zhang et al., 2017).
Interestingly, notwithstanding its ubiquitous nature, the NF-κB pathway has also been ascribed a sophisticated capacity to achieve a remarkably wide degree of contextual diversity in the transcriptional programmes it activates in any given cell upon nuclear translocation, dependent upon the tissue type and specific biological circumstances in which it is induced (Zhang et al., 2017). Given the multitude of stimuli that can activate NF-κB and the broad spectrum of functions that NF-κB plays in different tissues, it is unsurprising that several feedback mechanisms have evolved to ensure the tight control and timely termination of physiological NF-κB signalling, as a way to enable the prompt return to homeostasis and prevent excessive inflammation, tissue damage and the development of malignancy (Hayden and Ghosh, 2012; Zhang et al., 2017; Begalli et al., 2017). Indeed, excessive and stable IKK/NF-κB activation is a typifying feature of a wide range of pathological states, including cancer. Whereas in certain malignancies, such as multiple myeloma, diffuse large B-cell lymphoma (DLBCL), mucosa-associated lymphoid tissue (MALT) lymphoma and glioblastoma multiforme (GBM), NF-κB is often constitutively activated by recurrent genetic alterations targeting upstream components of the NF-κB pathway, in the large majority of solid tumours and certain haematological malignancies such alterations of the NF-κB pathway are relatively infrequent (DiDonato et al., 2012; Annunziata et al., 2007; Keats et al., 2007; Pasqualucci et al., 2001; Bredel et al., 2011). Accordingly, in these cancers, aberrant NF-κB activation generally stems from genetic abnormalities targeting conventional tumour-suppressor and oncogenic mechanisms, such as RAS and PTEN mutations, and/or the steady exposure of tumour cells to inflammatory stimuli and other cues emanating from the TME (DiDonato et al., 2012). These findings, and an accompanying extensive body of other genetic, biochemical and clinical evidence, provide a compelling rationale for therapeutically blocking constitutive NF-κB signalling in a wide range of human cancers in areas of current unmet need. Moreover, there is a strong rationale for developing NF-κB-targeting therapeutics to treat numerous non-malignant human pathologies, such as diabetes, autoimmune disorders, and chronic inflammatory diseases, owing to the central role of NF-κB signalling in governing inflammation, and the underlying low-grade inflammatory reaction that propagates the pathogenesis of these and virtually all other human illnesses (Xia et al., 2014; DiDonato et al., 2012).

Fig. 1. The cancer-selective strategy to target the NF-κB signalling pathway. Schematic representation of the canonical pathway of NF-κB activation. Depicted in black are the main conventional therapeutic strategies which have thus far been used to generate pharmacological NF-κB inhibitors. Depicted in red is one of the emerging approaches, aimed at developing a therapeutic inhibitor of a functionally critical and cancer cell-restricted downstream effector of the pathogenic survival axis of the NF-κB pathway. Also shown is the D-tripeptide inhibitor of the GADD45β/MKK7 complex, DTP3, which selectively targets this GADD45β-dependent survival axis of the NF-κB pathway, yielding cancer cell-selective therapeutic activity and thereby circumventing the preclusive limitations of global IKKβ/NF-κB inhibitors.
Notwithstanding, to this day, more than 30 years since the discovery of NF-κB and despite an aggressive and persevering effort by the pharmaceutical industry over the past 25 years, no specific NF-κB inhibitor has been clinically approved, due to the preclusive on-target toxicities associated with the systemic inhibition of NF-κB (Gilmore and Herscovitch, 2006; Begalli et al., 2017). Owing to its central role as the downstream signal-integration hub for the pathways of NF-κB activation, IKKβ has borne the brunt of the drug discovery effort to inhibit pathological NF-κB signalling since its discovery in 1996 (Fig. 1). Nonetheless, while the initial impetus did succeed in generating a large array of specific molecules and multiple candidate therapeutics, this effort eventually came to an inevitable abrupt end as soon as IKKβ inhibitors were evaluated in animal models and early-phase clinical trials (DiDonato et al., 2012; Greten et al., 2007; Hsu et al., 2011). In a seminal paper published in 2007, Karin and colleagues demonstrated that the pharmacological inhibition of IKKβ increases IL-1β secretion by myeloid cells, owing to an enhanced processing of pro-IL-1β by caspase 1, leading to overt systemic inflammation and increased animal lethality (Greten et al., 2007). In addition to this unanticipated, dose-limiting adverse effect, subsequently confirmed in human studies, global IKKβ/NF-κB inhibition produced a series of other adverse effects, including immunodeficiencies, hepatotoxicity and a potentially increased risk of malignancies arising from tissues such as the liver and the skin, reflecting the essential roles of NF-κB in innate and adaptive immune responses and tissue homeostasis (DiDonato et al., 2012; Greten et al., 2007; Hsu et al., 2011). Eventually, after the initial, short-lived enthusiasm, these findings irrevocably halted any further significant clinical development of IKKβ/NF-κB inhibitors, as demonstrated by the recent dramatic decline in new patent applications relating to these agents (Begalli et al., 2017).

Another class of drugs originally developed to therapeutically target pathological IKKβ/NF-κB signalling are the proteasome inhibitors, which stabilise IκB proteins, thereby preventing nuclear NF-κB translocation by interfering with the proteolytic activity of the proteasome (Fig. 1) (Zhang et al., 2017; Manasanch and Orlowski, 2017). These molecules, as well as immunomodulatory drugs (IMiDs), are known to impact upon NF-κB signalling and have found broad clinical indication in multiple myeloma and a handful of other malignant pathologies. However, both classes of drugs display broad biological activities, lack any specificity for NF-κB, and, importantly, afford clinical benefit in these indications via a mechanism unrelated to the NF-κB pathway (Manasanch and Orlowski, 2017; Richardson, 2010). Consequently, there remains an urgent need for a fresh and entirely different approach to safely targeting the NF-κB pathway in human diseases.

Embracing complexity as a path to achieve the safe therapeutic inhibition of the NF-κB pathway

Historically, the insurmountable problem with conventional NF-κB-targeting strategies has been to achieve the contextual, tissue-specific inhibition of the NF-κB pathogenic activity, while preserving the pleiotropic and ubiquitous physiological functions of NF-κB, including its functions in immunity and inflammation (DiDonato et al., 2012).
Since the best documented activity of NF-κB in oncogenesis is to upregulate genes that suppress cancer-cell apoptosis, and since, despite its ubiquitous nature, NF-κB signalling elicits transcriptional programmes that vary considerably depending upon the type of tissue and activating stimulus, we sought to target a non-redundant, cancer cell-specific downstream effector of this oncogenic NF-κB-mediated survival function, rather than NF-κB itself (Fig. 1) (Begalli et al., 2017; Annunziata et al., 2007; Keats et al., 2007; Bennett et al., 2013). We postulated that this strategy could provide a comparably effective, yet considerably safer alternative to conventional IKKβ/NF-κB-targeting drugs, thus circumventing the dose-limiting toxicities of systemic IKKβ/NF-κB inhibition.

Our group recently tested this hypothesis in the context of multiple myeloma, a malignancy of plasma cells (PCs) responsible for almost 2% of all cancer deaths and representing the paradigm of NF-κB-driven cancers (Annunziata et al., 2007; Keats et al., 2007). Despite the recent introduction of new treatments, almost all multiple myeloma patients eventually relapse and/or develop drug resistance. Consequently, the management of these patients remains a significant medical problem. Given its paramount importance in disease pathogenesis, the NF-κB pathway provides an attractive therapeutic target in multiple myeloma. Indeed, virtually all clinical cases of this neoplasia display constitutive NF-κB signalling and an elevated NF-κB target-gene signature, leading to malignant cell addiction to nuclear NF-κB activity for survival and to sensitivity to apoptosis upon IKKβ/NF-κB inhibition (Annunziata et al., 2007; Keats et al., 2007).

Our group, as well as others, previously reported that NF-κB inhibits apoptosis, at least in part, by suppressing the exaggerated activation of the JNK MAPK pathway through a mechanism that involves the transcriptional upregulation of Growth Arrest and DNA Damage 45B (GADD45B), a member of the GADD45 family of inducible genes, and of other downstream effectors, such as X chromosome-linked inhibitor of apoptosis protein (XIAP) (Jin et al., 2002; De Smaele et al., 2001; Lin and Karin, 2003). Subsequent studies demonstrated that prolonged JNK activation leads to apoptosis, in part, by causing the phosphorylation-dependent activation of the E3 ubiquitin ligase, Itch, which in turn promotes the polyubiquitination and subsequent proteasome-mediated degradation of the caspase 8/10 inhibitor, cellular FLICE (FADD-like IL-1β-converting enzyme)-inhibitory protein (c-FLIP), leading to caspase 8/10 activation and ultimately cell death (Bennett et al., 2013). Prolonged JNK activation has also been shown to enhance the activity of several proapoptotic members of the B-cell lymphoma (BCL)-2 family of proteins, such as Bim and Bmf, by promoting their release from sequestered cytoplasmic pools normally bound to dynein and myosin V motor complexes (Kuwana and Newmeyer, 2003). Recently, we identified the complex formed by GADD45β and the JNK kinase, MKK7, as an essential survival module dependent on constitutive NF-κB signalling and a novel therapeutic target in multiple myeloma (Fig. 1) (De Smaele et al., 2001; Papa et al., 2004; Tornatore et al., 2014; Tornatore et al., 2015; Papa et al., 2007).
We demonstrated that GADD45B is upregulated in multiple myeloma cells by constitutive NF-κB activation, promotes malignant cell survival by suppressing proapoptotic MKK7/JNK signalling through its direct binding to and inhibition of MKK7, and is associated with poor clinical outcome in multiple myeloma patients (Tornatore et al., 2014; Tornatore et al., 2015). Importantly, most healthy cells neither constitutively express GADD45β nor rely on GADD45β for their survival, and, unlike mice lacking IKKβ, any other IKK component, or the NF-κB subunit RelA, which all die during late embryogenesis, Gadd45β-deficient mice are viable, fertile, and die of old age (Lu et al., 2004). Accordingly, we hypothesised that, in contrast to systemic NF-κB blockade, pharmacological GADD45β inhibition would be well tolerated in vivo. Therefore, we sought to selectively target the NF-κB oncogenic function in multiple myeloma cells by inhibiting the GADD45β/MKK7 survival module downstream in the NF-κB pathway.

By screening a simplified combinatorial tetrapeptide library, followed by chemical optimisation, we developed the pharmacological D-tripeptide inhibitor, DTP3, which specifically binds to MKK7 with high affinity, disrupting the GADD45β/MKK7 interaction, and, as a result, selectively kills multiple myeloma cells by inducing MKK7/JNK-dependent apoptosis (Fig. 1) (Tornatore et al., 2014; Tornatore et al., 2015). We showed that, due to its target cell-specific mode of action, DTP3 displays potent and cancer-selective therapeutic activity against multiple myeloma cell lines and malignant PCs from multiple myeloma patients and, importantly, is not toxic to normal cells. Owing to these properties, DTP3 exhibited a more than 100-fold higher cancer-cell specificity than either proteasome or IKKβ inhibitors in primary human cells, ex vivo. Notably, as a result of this cancer cell-selective specificity, DTP3 caused a complete regression of established tumour xenografts and extended host survival in mouse models of multiple myeloma upon intravenous administration, with excellent tolerability and no adverse effects at the therapeutic dose levels (Tornatore et al., 2014; Tornatore et al., 2015). Further toxicology studies demonstrated that DTP3 was well tolerated in both rodent and non-rodent species upon daily repeated-dose administration at high doses for 28 days, exhibiting no target organs of toxicity and no significant adverse effects, resulting in a wide therapeutic index and presenting no risk for its clinical progression. Accordingly, we are currently conducting the first-in-human phase-I/IIa study of DTP3 in patients with refractory or relapsed multiple myeloma. Upon an initial evaluation, DTP3 demonstrated clinical safety and tolerability at all dose levels investigated thus far, alongside a cancer-selective pharmacodynamic response, in highly refractory oncological patients and as a single agent. Future, larger clinical studies will determine the long-term clinical safety and therapeutic efficacy of DTP3 in patients with multiple myeloma and, potentially, other types of cancer in which DTP3 is indicated.
DTP3 has thus far produced no significant adverse effects in preclinical models or multiple myeloma patients, and, unlike sensitive tumour cells, most normal cells neither constitutively express GADD45β nor display spontaneous MKK7/JNK activation upon GADD45β inhibition. Nevertheless, there remains a possibility that DTP3 administration will result in an exacerbation of MKK7/JNK signalling at sites of pre-existing inflammation, thus aggravating chronic inflammatory comorbidities and/or increasing the risk of autoimmune diseases. Further clinical studies will also consolidate the companion stratification strategy to select those patient subsets who will optimally respond to DTP3 and determine whether and, eventually, how rapidly responding tumours develop resistance to DTP3, for instance by acquiring MKK7 gene mutations or functionally redundant, GADD45β-independent antiapoptotic mechanisms. Notwithstanding, together with the compelling preclinical package, these highly encouraging initial clinical results introduce an unprecedented therapeutic mode of action, possessing none of the preclusive safety constraints of conventional IKKβ/NF-κB inhibitors, into clinical oncology and bode well for the ultimate clinical success of DTP3 as a safe and highly effective NF-κB-targeting therapeutic. These results also provide initial proof-of-concept for a safe and cancer-selective NF-κB-targeting strategy as a novel anticancer therapy, which promises to be of profound benefit for patients with multiple myeloma and, potentially, other cancers where NF-κB drives oncogenesis via GADD45β (Fig. 1) (Karin, 2014). Importantly, the same principle we developed of therapeutically inhibiting a cancer-restricted axis of the NF-κB pathway, rather than NF-κB globally, could also be applied to selectively targeting the NF-κB oncogenic function in GADD45β-independent malignancies and, plausibly, in the context of non-malignant NF-κB-driven diseases (Tornatore et al., 2014). Indeed, the NF-κB survival function is mediated by the upregulation of a diverse group of antiapoptotic target genes, which are independently transcriptionally regulated in a tissue- and stimulus-specific manner. Therefore, this NF-κB function is neither exclusively dependent upon GADD45β induction, nor is it necessarily dependent upon the suppression of JNK signalling (Xia et al., 2014; Bennett et al., 2013). For example, NF-κB has been shown to transcriptionally regulate the genes coding for several antiapoptotic members of the BCL-2 family, including B-cell lymphoma-extra large (Bcl-XL), myeloid cell leukaemia sequence 1 (MCL1), B-cell lymphoma 2-related protein A1 (BCL2-A1)/Bcl-2-related gene expressed in foetal liver (BFL-1), and, in certain biological contexts, BCL-2 itself. These proteins are involved in maintaining the outer mitochondrial membrane integrity, thereby preventing the release of cytochrome c and other proapoptotic mitochondrial factors, such as second mitochondria-derived activator of caspases (SMAC)/direct inhibitor of apoptosis protein (IAP)-binding protein with low pI (DIABLO), into the cytosol, ultimately inhibiting the onset of cell death (Vogler, 2014). These antiapoptotic members of the BCL-2 family are also known to promote cancer-cell survival in various types of haematological and solid malignancy (Catz and Johnson, 2001; Cang et al., 2015).
Notably, BCL-2-targeting drugs have been successfully developed outside the scope of blocking oncogenic NF-κB signalling, and drugs in this class, including the first-in-class BCL-2-family inhibitor, venetoclax, have been granted breakthrough status designation by the FDA for treating subsets of patients with relapsed or refractory chronic lymphoid leukaemia (CLL) (Cang et al., 2015; Levy and Claxton, 2017). Therefore, these agents could be additionally developed to therapeutically target pathogenic NF-κB signalling in oncological situations in which NF-κB promotes malignant cell survival through the upregulation of BCL-2-like factors. A potential alternative strategy to selectively target the NF-κB pathway in human cancer involves the inhibition of upstream signalling mechanisms that drive oncogenesis by virtue of their role in regulating NF-κB activation. For instance, the members of the IAP family of E3 ubiquitin ligases, c-IAP1 and c-IAP2, have been shown to contribute to NF-κB activation by tumour necrosis factor (TNF)α and other stimuli. They do so by binding TNF receptor-associated factor (TRAF)-family proteins through their baculovirus IAP repeat (BIR) domain, leading to the polyubiquitination of their signalling substrates, including TRAF proteins themselves, to enable the ubiquitin-mediated assembly of multimeric, receptor-specific protein scaffolds, in which the IKK complex is brought into physical proximity of the transforming growth factor β-activated kinase (TAK)1 kinase complex, thereby resulting in IKKβ activation by TAK1-mediated phosphorylation. As well as XIAP, which is also involved in the recruitment of the TAK1 complex to the NF-κB signalosome, c-IAP1/2 proteins have been found to be highly expressed in a subset of human cancers, where they can promote NF-κB activation and cancer therapy resistance. Therefore, since small-molecule dual antagonists of c-IAP1 and XIAP, such as ASTX660, have recently progressed into phase-II clinical studies in patients with various types of advanced haematological or solid cancer, including DLBCL, T-cell lymphoma, head and neck squamous cell carcinoma (HNSCC) and cervical carcinoma, these therapeutic molecules could be further developed to selectively target oncogenic NF-κB signalling in those clinical cases in which NF-κB is activated by the overexpression of c-IAP1 and/or XIAP proteins. This handful of examples underscores how recent advances in the understanding of the biological functions and regulation of the NF-κB pathway, and of the contextual make-up of the genetic programmes NF-κB selectively elicits in cancer cells, are currently providing tangible new opportunities for targeted therapeutic interventions in different areas of unmet need across the oncological landscape. Conclusions Owing to its central role in disease pathogenesis, the NF-κB pathway has been pursued for decades as an attractive target for therapeutic intervention. Yet, despite the clear need for a specific NF-κB inhibitor to treat a broad range of human diseases, developing such a molecule has so far presented an impenetrable riddle, due to the need to confront a ubiquitous signalling pathway that has many elemental physiological functions. This has resulted in the dismaying absence of an NF-κB-targeting drug from the current pharmacopoeia.
Despite this bleak reality, the past three decades have seen a succession of fundamental advances in the understanding of the intertwined signalling networks governing NF-κB activation, the myriad of cellular functions the NF-κB pathway embodies, and the diverse transcriptional programmes it contextually governs in any given cell, whether in normal or unhealthy tissues. Indeed, these advances are now providing important clues to finally untangle the NF-κB conundrum, and these clues, in turn, are beginning to translate into targeted and much safer alternatives to global NF-κB blockade as a way to effectively treat patients, both within and outside oncology. While the initial successes in experimental settings have yet to transform into a clear healthcare benefit, the conceptual revolution conveyed in the approach to therapeutically targeting the NF-κB pathway is already providing tangible opportunities for developing effective new treatments in refractory disease indications. Indeed, if there is a lesson to be learnt from these initial successes, it is that the deep-rooted complexity of the NF-κB pathway may well hold the key to unlocking the gateway to generating clinically useful NF-κB-targeting medicines. Therefore, embracing, rather than evading, this complexity appears to be the path to follow in order to finally seize the therapeutic potential still captured in the NF-κB pathway. Recent reports suggest that one way of achieving this goal would be to exploit the contextual diversity of the transcriptional programmes NF-κB elicits in different cell types and, accordingly, inhibit the non-redundant, tissue-restricted downstream effectors of the NF-κB pathogenic functions. Although further clinical evaluation will ultimately determine its safety and clinical benefit, this approach has already added a firm string to the bow of the promising new therapies being developed to selectively inhibit NF-κB signalling in cancer. Additional attractive strategies are also appearing on the horizon for realising the contextual, cancer cell-selective inhibition of the NF-κB pathway by exploiting, for instance, the tissue-specific signalling mechanisms governing contextual NF-κB activation in cells. Future research will tell whether NF-κB inhibitors will ever become part of the available anticancer arsenal. However, the significant advances recently made in this direction bode well for enabling this new reality in the near future. Conflict of interest The authors declare no conflict of interest.
2018-04-03T06:13:54.232Z
2018-02-01T00:00:00.000
{ "year": 2018, "sha1": "7d25c1ff209d6b03d33fc0a66c6195871c3e4ce1", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1016/j.biocel.2017.12.020", "oa_status": "HYBRID", "pdf_src": "MergedPDFExtraction", "pdf_hash": "c851f912d14cd981bec7d3f63d2f5d59f3c0bcfb", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
13927281
pes2o/s2orc
v3-fos-license
Biosynthesis and Characterization of Cross-Linked Fmoc Peptide-Based Hydrogels for Drug Delivery Applications Recently, scientific and technological interest in the synthesis of novel peptide-based hydrogel materials has grown dramatically. Applications of such materials mostly concern the biomedical field, with examples covering sectors such as drug delivery, tissue engineering, and the production of scaffolds for cell growth, thanks to their biocompatibility and biodegradability. In this work we synthesized Fmoc-Phe3-based hydrogels of different chirality by using a biocatalytic approach. Moreover, we investigated the possibility of employing a crosslinker during the biosynthetic process, and we studied and compared some chemico-physical features of both crosslinked and non-crosslinked hydrogels. In particular, we investigated the rheological properties of such materials, as well as their swelling ability, their stability in aqueous medium, and their structure by SEM and AFM analysis. Crosslinked and non-crosslinked hydrogels could be formed by this procedure with comparable yields but distinct chemico-physical features. We entrapped dexamethasone within nanopolymeric particles based on PLGA, coated or not with chitosan, and we embedded these nanoparticles into the hydrogels. Dexamethasone release from such a nanopolymer/hydrogel system was controlled and sustained and depended on the genipin crosslinking degree. The possibility of efficiently coupling a drug delivery system to hydrogel materials seems particularly promising for tissue engineering applications, where the hydrogel could provide cells the necessary support for their growth, while nanoparticles could favor cell growth or differentiation by providing them the necessary bioactive molecules. Introduction Tissue engineering and regenerative medicine are part of an emerging multi- and interdisciplinary field that applies the principles of engineering and life sciences towards the development of biological substitutes [1,2]. Such research fields have the potential to revolutionize the way health and quality of life are improved for millions of people worldwide by restoring, maintaining, or enhancing tissue and organ function. Different elements are believed to be crucial for successful tissue regeneration: stem cells, growth factors, and scaffolds. Cells provide the machinery for new tissue growth and differentiation, whereas growth factors and other molecules modulate the cellular activity and provide stimuli for cells to differentiate and support tissue neogenesis. A three-dimensional template structure for cell growth is provided by scaffolds able to support and facilitate the processes that are critical for tissue regeneration [3]. The nanotechnology approach to scaffold design and synthesis is an emerging area of research and study [4,5]; one of the biggest current challenges is to exploit self-assembly processes (the spontaneous organization of matter into specific arrangements) to obtain materials and devices with innovative characteristics and functions, especially for biomedical and biotechnological use [6,7]. The aim is to achieve pre-defined specific, ordered or disordered, structures via the rational design of elementary "building blocks". In this "bottom-up approach", the effort is made in the direction of a rational design of the elementary components of the requested structure.
Despite the large emphasis on the importance of the bottom-up approach in the production of new materials, up to now research has mostly focused on the synthesis and characterization of novel nanoparticles or of new macromolecules with the potential to self-assemble and, less frequently, on the study of collective structures (micelles, fibers, sheets, or three-dimensional networks, gels) arising from their self-assembly. Peptide hydrogels are interesting materials that are currently studied for their potential use in biomedical applications [8,9]. Recently, we have reported the lipase-supported synthesis of Fmoc-tripeptides, which occurs in an aqueous phase through a reverse hydrolysis reaction [10]. These materials are biocompatible as well as biodegradable, and they possess a very interesting feature, which is their injectability, since the precursors used for their synthesis are liquid at room temperature. The possibility to use such biomaterials as drug delivery vehicles induced many scientists to investigate the possibility of modulating the crosslinking degree of the macromolecular 3D structures by using different crosslinking agents. Genipin is a natural compound found in Gardenia jasminoides fruit extracts. It has traditionally been used in herbal medicine and as a food dye [11,12]. Genipin is known to be able to act as a crosslinking agent for proteins and amino acids, affording stable cross-linked products. In particular, peptidic hydrogels can exhibit, as a function of their crosslinking degree, different mechanical properties in comparison to non-crosslinked ones. Moreover, such chemical modification may be able to influence their in vivo stability. The mechanism of the genipin crosslinking reaction is still not fully understood; however, it involves the formation of genipin dimers that bind amine groups on adjacent proteins [13,14], giving a blue-colored reaction product. Genipin is a particularly interesting cross-linking agent because of its low cytotoxicity, especially if compared with traditional crosslinkers such as glutaraldehyde and epoxy compounds [15]. Moreover, it has been recently reported that the presence of genipin may favor cell adhesion to artificial matrices [16]. For the above reasons, the use of genipin in the preparation of new materials for biomedical applications is highly attractive. So far, it has been used to crosslink polymeric hydrogel-forming materials such as gelatin and fibrin [16,17] or polypeptide hydrogels [18]. In this work, we used genipin for the first time as a crosslinker for Fmoc-tripeptide hydrogels of different chirality, synthesized by a lipase-supported reaction in aqueous phase that we developed in the past [10,19]. We characterized the rheological and chemico-physical properties of the obtained materials and we compared them with those of non-crosslinked ones in order to assess whether genipin-mediated crosslinking could provide attractive features to the hydrogels in view of their use in tissue engineering approaches. In fact, such materials may be used as artificial scaffolds for cell growth, an approach that may lead to future applications in tissue engineering. With this objective, we loaded the hydrogels with a model drug, dexamethasone (DXM), and we studied its release kinetics from the different hydrogel materials, also by using nanopolymeric vectors based on poly(lactic-co-glycolic acid) (PLGA) loaded with DXM, with the aim of modulating drug release.
Hydrogel Biosynthesis Both FmocF and FmocF* were used in lipase-catalyzed reversed hydrolysis reactions using respectively FF and F*F* dipeptides, with the formation of a peptide bond between the Fmoc-aminoacid and the dipeptide (Figure 1). The reaction products are FmocFFF and FmocF*F*F* tripeptides. The reaction conditions for such bioconversions were optimized in previous works [19] and were employed both for non-crosslinked as well as for genipin-crosslinked hydrogels. As far as the crosslinked gel preparation is concerned, different genipin concentrations were used, corresponding to values ranging from 1/2000 to 1/20 with respect to the amount of Fmoc-aminoacid used. All the bioconversions, with and without genipin, afforded self-supporting hydrogels in the employed reaction conditions in about 20 min, as evidenced by the inversion test. Chemically crosslinked gels were firmer to the touch and more easily removable from the glass vials in which they were formed, while gels fabricated without genipin were more easily torn during handling. For genipin-crosslinked hydrogels, a blue color, evidence of the ongoing crosslinking reaction, appeared within a few hours of hydrogel preparation. The reaction yields for all bioconversions were calculated, affording the results shown in Table 1. Such results, obtained for non-crosslinked gels and for gels crosslinked with the highest genipin concentration used, indicate that the presence of genipin did not affect the tripeptide reaction yield.

Table 1. Reaction yields of the bioconversions.

    Material                          Reaction yield (%)
    FmocF*F*F*                        50 ± 2
    Genipin-crosslinked FmocF*F*F*    53 ± 3
    FmocFFF                           27 ± 2
    Genipin-crosslinked FmocFFF       32 ± 2

Preliminary investigations showed that genipin can react both with the dipeptide, which possesses an -NH2 group, and with the Fmoc-tripeptide, which possesses three -NH groups, but not with the Fmoc-aminoacid. The crosslinking reaction starts while the enzymatic reaction occurs; therefore, genipin most probably reacts both with the dipeptide and with the Fmoc-tripeptide, whose formation triggers self-assembly and hydrogel formation [10]. Although the presence of genipin in the reaction medium does not influence the reaction yield of tripeptide formation, it could significantly influence the self-assembly and three-dimensional organization of the final product. Rheological Measurements As previously reported by the authors [20], the storage modulus, G', is remarkably sensitive to the chirality of the Fmoc-peptide. As shown in Figure 2A, when the polymer network formation involves the D isomer FmocF*F*F*, the value of G' at equilibrium is about twice the value of the storage modulus obtained when the L isomer is used. Such a result can be explained on the basis of an increase in the Fmoc-peptide reaction yield, as well as of the different structure and size of the fibers obtained by using substrates with D-chirality. When genipin is added, the mechanical behavior of the gel is reversed. The mechanical spectra of the genipin-treated hydrogels (Figure 2B) show that the L isomer provides a firmer hydrogel than the D. G' values are known to be directly proportional to the cross-linking density. An evaluation of the initial rate of the cross-linking reaction can provide information on the difference in the formation of the crosslinks in the presence of the two isomers.
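One simple way to perform such an evaluation is to fit a straight line to the early, approximately linear portion of the measured G'(t) curve; the slope then estimates the initial crosslinking rate. A minimal sketch of this fit follows (Python with NumPy; the time points and moduli below are placeholder values, not data from this study):

    import numpy as np

    # Hypothetical early-time gelation kinetics: time in seconds,
    # storage modulus G' in Pa (placeholder values, not measured data).
    t = np.array([0.0, 60.0, 120.0, 180.0, 240.0, 300.0])
    g_prime = np.array([5.0, 40.0, 78.0, 112.0, 150.0, 185.0])

    # Fit a line to the early, approximately linear part of the curve;
    # the slope estimates the initial rate of G' growth.
    slope, intercept = np.polyfit(t, g_prime, 1)
    print(f"initial G' growth rate: {slope:.2f} Pa/s")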
In the initial part of the kinetics of Fmoc-tripeptides without genipin, where a linear trend is expected, the slope of the curve obtained with the D isomer is about double the slope registered in the presence of the L isomer. This finding corroborates the results highlighted in the mechanical spectra of Figure 3A. The presence of genipin lowers the initial growth rate of G' for both isomers. However, consistently with the results of Figure 3B, genipin enhances the G' growth of the L-isomer with respect to the D-isomer. This may reflect a different microscopic organization of the reaction products. Swelling Ratio and Weight Loss Ratio Measurements Hydrogel materials designed for biomedical applications will come in contact with biological fluids in vivo. Studying in vitro the behavior of such materials in the presence of aqueous-based solutions that simulate the in vivo environment, such as buffered solutions or Ringer solution, is therefore important and may afford valuable information on the in vivo interactions of the hydrogels with the surrounding environment. Figure 4 shows the swelling ratios of Fmoc-based hydrogels with different chirality, crosslinked with different genipin concentrations. As reported previously [20], hydrogels with L chirality show lower swelling abilities than their D counterparts, and this is confirmed also for the genipin-crosslinked materials. Crosslinked hydrogels have lower swelling ratios than non-crosslinked ones (which are around 100%), with no significant differences among the different genipin concentrations used. This behavior may indicate that crosslinked Fmoc-tripeptides preferably interact with the crosslinker or with other tripeptide molecules, rather than with water molecules; therefore, their swelling is lower, with values between 27% and 50%. The effect of a Ringer solution on the stability of the hydrogels, evaluated through their weight loss ratios, was also studied after an incubation of 30 days, a period of time that can be considered long for such materials. Results are reported in Figure 5. Non-crosslinked hydrogels have a weight loss ratio of about 18%, with no significant differences between the two enantiomers. Cross-linked hydrogels show higher weight loss ratios, between 24% and 38%. There is no clear relation between the amount of crosslinker used and the weight loss ratio of the crosslinked material. Comparing these results with those obtained with different hydrogel materials, such as an agar-kappa-carrageenan blend cross-linked with genipin [21], which shows weight loss ratios of 15%-40% in approximately two days, we can affirm that all our materials are quite stable. DXM in Vitro Release Studies Peptide hydrogels are considered promising materials for biomedical applications, i.e., tissue engineering and tissue regeneration applications. Currently, three elements are considered to be crucial for successful tissue regeneration: stem cells, scaffold, and growth factors or other chemicals used for in vitro cell differentiation, such as DXM. Therefore, the possibility of employing a hydrogel scaffold as a drug delivery system is key to the application of such materials in regenerative medicine. With the aim of evaluating the potential of Fmoc-tripeptide hydrogels in this sense, we studied the release kinetics of a model drug into a buffered medium. Figure 6 shows DXM release kinetics from the peptidic hydrogel matrices of different chirality, both crosslinked with genipin at two different concentrations and non-crosslinked.
DXM is released at a higher rate from hydrogels with L chirality, both crosslinked and non-crosslinked, reaching values between 30% and 40% of the total DXM amount in approximately one week, providing a slow and sustained drug release over time. On the other hand, hydrogels with D chirality release 20%-25% of DXM in the same time frame. This may be due to the different microscopic structure of the hydrogels, which is closely linked to their chirality [20]. Overall, in accordance with the respective yields of tripeptide formation, D-amino acid-based hydrogels are more "dense" materials, which retain entrapped drugs better than their L counterparts. The presence of genipin-based crosslinking in the hydrogels did not affect significantly the total amount of released DXM, but had an effect on the release kinetics. All the tested materials were able to release the drug in a sustained manner and showed no burst effects, but rather an almost constant release over time. Moreover, we evaluated DXM release kinetics from polymeric NPs entrapped within FmocFFF and FmocF*F*F* hydrogels (Figure 7). Such hydrogels were chosen because their mechanical properties were the most promising among the materials studied in this work. Also in this case, chirality had an influence on drug release, which, as observed for the release of free DXM from Fmoc-based hydrogels discussed above, reached higher values for hydrogels with L chirality. Moreover, for both gels a higher release rate was evidenced for uncoated PLGA NPs in comparison with chitosan (CS)-coated NPs. Overall, both formulations afforded a significantly slow and sustained DXM release over time, reaching values between 8% and 20% of released DXM in approximately seven days. Previous studies on NPs alone have already evidenced that DXM is entirely released from such formulations within a couple of hours [22]; therefore, the presence of the hydrogel matrix is responsible for obtaining a controlled drug release system. Both the gel and the PLGA-based NPs seem to be able to interact with DXM, and the synergy between such interactions affords its release in a sustained way. In conclusion, the direct encapsulation of DXM into the hydrogel seems to provide a more efficient and sustained release over time, making the system appealing for drug delivery approaches. On the other hand, the slower DXM release provided by the use of NPs, ensuring higher DXM concentrations within the hydrogel matrix, could be interesting in tissue engineering applications, i.e., for cells grown within the hydrogel. SEM and AFM Measurements SEM and AFM were employed for the investigation of the morphology of fibers and hydrogels, also in the presence of genipin. In these studies, we chose to focus on D-peptide-based hydrogels, on the basis of their promising characteristics, also evidenced by our previous works [20]. Figure 8 shows the morphology of FmocF*F*F* alone and crosslinked with genipin, obtained by SEM and AFM. Such investigations revealed that all the peptidic hydrogels self-assemble with similar features, also in the presence of chemical crosslinking, giving rise to nanofibers with similar structure. However, the presence of genipin seems to afford a different three-dimensional arrangement of the fibers, which results in an increase of fiber density. Also, in such conditions, the number of interconnections and knots between different fibers appears to increase, thus contributing to a more entangled organization of the hydrogel scaffold.
In all the selected images, the fibers seem to be rather uniform in morphology and highly interconnected with knots. AFM analysis of the size of the fibers, both in the presence and in the absence of genipin, gives a narrow size distribution, with the same size, measured in the vertical direction of the AFM images, of approximately 8 nm. Interestingly, in both hydrogels the right-handed twist repeating along the fiber length is visible (see, for example, the fibers marked by an arrow in panels B and E). The fiber pitch measured for FmocF*F*F* appears larger than that of the same hydrogel crosslinked with genipin (around 50 nm and 30 nm, respectively, as determined from the longitudinal profiles shown in the corresponding panels H and I). This structural feature has to be further investigated in genipin-crosslinked FmocF*F*F* because it is observed with less clarity than in the non-crosslinked hydrogel, probably due to a variation of the imaging quality during the AFM scan. Conclusions Fmoc-Phe3-based hydrogels of different chirality prepared by using a biocatalytic approach have been chemically crosslinked with genipin. SEM and AFM investigations revealed that the peptidic hydrogels crosslinked with genipin are porous with highly entangled fibers. We also studied and compared some chemico-physical features of both crosslinked and non-crosslinked hydrogels, obtaining biomaterials with different elastic modulus G'. Moreover, DXM encapsulation into the hydrogel seems to provide a more efficient and sustained release over time, making the system appealing for drug delivery approaches. Overall, the results of these studies indicate that FmocF*F*F*-genipin hydrogels may be a useful scaffold for a variety of tissue engineering applications. We are currently attempting to discern the mechanisms of genipin crosslinking and to determine the in vitro and in vivo cell attachment and degradation rate of genipin-crosslinked hydrogels. Biosynthesis of Peptide Hydrogels Fmoc-tripeptide hydrogels of different chirality (FmocFFF and FmocF*F*F*) were prepared as previously reported [20]. Briefly, 40 μmol of each substrate, an Fmoc-aminoacid and a dipeptide, were suspended in a mixture of 1 mL of H2O and 420 μL of 0.5 M NaOH and magnetically stirred until a homogeneous dispersion was obtained. Then, 0.1 M HCl was added to reach a final pH value of 7. A fixed amount (100 μL) of enzymatic solution (50 mg/mL) was then added to the substrate suspension and the mixture was placed in a thermostated bath at 30 °C for 30 min. Crosslinked hydrogels were prepared by following a similar procedure, adding to the substrate suspension at pH 7, before the enzymatic solution, a fixed amount (100 μL) of genipin solution at the selected concentration (0.1, 0.25, 0.5, 1, 5, or 10 mM). Tripeptide final reaction yields were calculated indirectly by measuring Fmoc-Phe amino acid disappearance in the reaction medium. The reaction products obtained from the biosynthetic process were analyzed 24 h after their preparation. Samples were dissolved in a fixed volume of organic solution (60% ACN, 40% H2O, 0.1% TFA). 0.5 M NaOH was also added to a final pH value of 8. The solution was then centrifuged at 14,000 rpm for 20 min at constant temperature (25 °C). Naphthalene was added to the supernatant as the internal standard.
HPLC measurements of the final Fmoc-Phe amino acid in the reaction mixture were performed by using the following experimental conditions: C-18 silica column; eluent: 60% ACN, 40% H2O, 0.1% TFA; flow rate: 0.9 mL/min; λ = 256 nm. Rheological Measurements The viscoelastic behaviour of hydrogels was studied by monitoring the storage and loss moduli, G' and G", using an AR2000 rheometer (Waters-TA Instruments, Milan, Italy) equipped with a parallel-plate geometry (diameter 20 mm, gap 1 mm) with a fixed plate equilibrated at 25 °C. The mechanical spectra were obtained recording G' and G" in oscillatory mode, from 0.01 to 100 Hz, at a constant strain of 1% (the limit of the linear viscoelastic strain was about 10%). Kinetics of hydrogel formation were carried out by monitoring the time dependence of G' at 30 °C, at the constant frequency of 1 Hz. Swelling and Stability Studies The swelling ratios of Fmoc hydrogels in PBS (pH 7.4) were measured by adding to each hydrogel sample 3 mL of PBS and incubating it overnight at 30 °C. Fully swollen hydrogels were weighed (Ws) immediately after the removal of excess water. Then, the hydrogels were freeze-dried and weighed again (Wd). The swelling behavior was expressed, according to Equation (1), as the swelling ratio q, that is, the ratio between the weight of the swollen sample (Ws) and the weight of the freeze-dried hydrogel (Wd): q = Ws/Wd (1). Each experiment was performed in triplicate. Hydrogel in vitro degradation was evaluated by adding to previously synthesized Fmoc hydrogels a fixed amount (8.5 mL) of Ringer solution (NaCl 8.6 mg/mL, KCl 0.3 mg/mL, CaCl2 0.33 mg/mL) and placing the system in a thermostated bath at 37 °C for 30 days. Hydrogels were weighed before the addition of the Ringer solution (W0) and after its removal (Wt), and the weight loss ratio (ΔW) was calculated as: ΔW (%) = (W0 − Wt)/W0 × 100 (2). DXM in Vitro Release Experiments DXM-loaded hydrogels, containing 6 mg of drug, were prepared by adding DXM-loaded PLGA or CS-coated PLGA NPs, or the corresponding amount of free DXM, during hydrogel formation. DXM-loaded PLGA or CS-coated PLGA NPs were prepared by a patented methodology [23], as described in previous works [24]. The mixture containing hydrogel precursors and free or entrapped DXM was then incubated at 30 °C for 1 h for gelation. Self-supporting hydrogels, entrapping DXM in their network, were formed in such conditions. After their preparation, DXM-loaded hydrogels were incubated in 2 mL of PBS (pH 7.4, 0.1 M) at 37 °C using a thermostated bath. At fixed time intervals, the supernatant was withdrawn and substituted with an equal amount of fresh PBS. The DXM concentration in the supernatants collected at different time points was determined by HPLC using the following experimental conditions: C-18 silica column; eluent: 60% ACN, 40% H2O, 0.1% TFA; flow rate: 0.9 mL/min; λ = 243 nm. SEM and AFM Measurements Hydrogel morphology was investigated by SEM and AFM microscopy. SEM images were collected by using a Zeiss Auriga 405 microscope at low extracting voltage (1.5-4 kV) and current (7.5 μm aperture), in order to reduce the significant charging of the substrate and avoid radiation damage and artifacts [25], and at a very low working distance (≈1 mm) to improve the quality of the signal received by the in-lens detector. Hydrogel samples were cryo-fractured. Internal fragments were collected, freeze-dried, and mounted on an aluminum stub using double-sided carbon tape.
AFM images of the peptidic hydrogels were acquired in air at room temperature using a Dimension Icon (Bruker AXS, Billerica, MA, USA) instrument operating in ScanAsyst™ mode, with dedicated probes and using an ultra-sharp silicon tip (nominal radius of curvature 2 nm). This imaging mode allows the application of lower forces than standard tapping mode. Sample preparation for AFM measurements was performed according to the protocol described in a previous paper [20]. Aliquots of 10-20 μL were removed from the peptide hydrogel sample at the end of the gelation process and deposited onto a freshly cleaved mica surface. To optimize the amount of peptide adsorbed, each aliquot was left on mica for 5 min and then gently washed with 200 μL of Milli-Q water. The mica surface with the adsorbed peptide was then flushed with a stream of nitrogen for drying and analyzed after 30 min. Images were analyzed using the free software Gwyddion and are presented as raw data, except for flattening. No further image processing was carried out.
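As a minimal computational illustration of Equations (1) and (2) above (Python; the weights are purely illustrative, and the normalization follows the equations exactly as stated in the text):

    def swelling_ratio(w_swollen, w_dry):
        """Equation (1): q = Ws / Wd."""
        return w_swollen / w_dry

    def weight_loss_percent(w_initial, w_final):
        """Equation (2): weight loss as a percentage of the initial weight."""
        return (w_initial - w_final) / w_initial * 100.0

    # Illustrative values only (grams), not data from this study.
    print(swelling_ratio(0.52, 0.26))        # q = 2.0
    print(weight_loss_percent(1.00, 0.82))   # 18.0% weight loss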
2016-03-14T22:51:50.573Z
2015-10-16T00:00:00.000
{ "year": 2015, "sha1": "5671102448b52a16bfc1704a71aac44719a3f9ec", "oa_license": "CCBY", "oa_url": "https://doi.org/10.3390/gels1020179", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "5671102448b52a16bfc1704a71aac44719a3f9ec", "s2fieldsofstudy": [ "Materials Science", "Biology" ], "extfieldsofstudy": [ "Materials Science", "Medicine" ] }
251067437
pes2o/s2orc
v3-fos-license
Automating reflectance confocal microscopy image analysis for dermatological research: a review Abstract. Significance Reflectance confocal microscopy (RCM) is a noninvasive, in vivo technology that offers near-histopathological resolution at the cellular level. It is useful in the study of phenomena for which obtaining a biopsy is impractical or would cause unnecessary tissue damage and trauma to the patient. Aim This review covers the use of RCM in the study of skin and the use of machine learning to automate information extraction. It has two goals: (1) an overview of the information provided by RCM on skin structure and how it changes over time in response to stimuli and in disease and (2) an overview of machine learning approaches developed to automate the extraction of key morphological features from RCM images. Approach A PubMed search was conducted, with additional literature obtained from reference lists. Results The application of RCM as an in vivo tool in dermatological research and the biologically relevant information derived from it are presented. Algorithms for image classification into epidermal layers, delineation of the dermal–epidermal junction, classification of skin lesions, and demarcation of individual cells within an image, all important factors in the makeup of the skin barrier, were reviewed. Application of image analysis methods in RCM is hindered by low image quality due to noise and/or poor contrast. Use of supervised machine learning is limited by the time-consuming manual labeling of RCM images. Conclusions RCM has great potential in the study of skin structures. The use of artificial intelligence could enable an easier, more reproducible, precise, and rigorous study of RCM images for the understanding of skin structures, the skin barrier, and skin inflammation and lesions. Although several attempts have been made, further work is still needed to provide a definite gold standard and to overcome issues related to image quality, limited labeled datasets, and lack of phenotype variability in available databases. Introduction to Reflectance Confocal Microscopy For a long time, biopsies followed by histological and microscopic analysis were the gold standard in studying skin structure. Unfortunately, biopsies are invasive and lead to local inflammation due to the damaged cells, which may alter the studied sample and hinder the study of healthy skin. They can also be traumatizing for the patient, particularly when done repeatedly, e.g., for skin cancer monitoring. In certain cases, biopsies can raise ethical questions, such as for cosmetic testing 1 or for the study of healthy infant skin. Therefore, alternatives to biopsies have been developed; these include magnetic resonance, optical coherence tomography (OCT), and reflectance confocal microscopy (RCM). 2 RCM allows for real-time in vivo visualization of the epidermis and the upper parts of the dermis at the cellular level. 3,4 It provides information on the geometrical and topological properties of the observed tissue. 5 It is noninvasive, thus enabling repetitive sampling of the same area without damage. This makes it a technique of choice when studying dynamic changes of the upper parts of skin over time. 6,7 In addition, RCM allows for the quantitative study of skin cellular structures involved in the makeup of the skin barrier. 8
Unfortunately, RCM is limited by the maximum depth beyond which the signal-to-noise ratio of the image becomes too low to extract any information, but it provides information faster than microscopic analysis of a biopsy sample. The confocal microscope was invented by Minsky in 1957. 9,10 Two versions are currently in use for clinical studies: a handheld in vivo skin imaging microscope 11 and a wide-probe RCM. 12 They offer a horizontal resolution of 0.5 to 1 μm and a vertical resolution (optical section thickness) of 3 to 5 μm, to a depth of 150 to 200 μm depending on the observed site. 12 RCM is based on the collection of signals arising from light reflections at the interface of microstructures with different indices of refraction. Such microstructures are cell membranes, melanosomes, collagen fibers, lipidic layers, keratin fibers, and intracellular organelles. 13 The closer the size of the organelle is to the wavelength of the light source and/or the higher its refractive index is compared with its surroundings, the brighter it appears. 2 RCM is an appropriate technology to study structures in vivo, as the energy of the incident light, although sufficient to produce a signal, is too weak to initiate a photobiological process, hence allowing for a visualization of living cells without disturbing or changing their structure or function. By contrast, the collection of a biopsy is accompanied by a local inflammatory reaction of the tissue. RCM stacks are gray-scale images acquired at sequential depths from the skin surface. Their orientation is orthogonal to the vertical sections typical in histopathology [Fig. 1(a)]. An in vivo reflectance confocal microscope is composed of a light source (commonly a near-infrared laser), pinhole apertures, lenses, a detector, and a mechanism for beam scanning 14 (Fig. 1). In the reflectance configuration, the objective lens is used to focus the illumination to a spot on the sample and to collect the reflectance signal. The detection pinhole only passes light reflected from the illuminated point in the sample to the detector and significantly rejects scattered light. By scanning the illumination spot on the sample, an image is reconstructed. RCM Imaging of the Epidermis The epidermis is an avascular keratinized stratified squamous epithelium generally comprising four distinct layers. From most superficial to deepest, these layers are called stratum corneum (SC), stratum granulosum, stratum spinosum, and stratum basale. In the soles and palms, a thicker epidermis is observed, with an additional fifth layer between the cornified and granular layers called the stratum lucidum. The majority of cells in all layers below the SC are referred to as keratinocytes, named for their involvement in manufacturing and storing keratin intermediate filaments. In contrast to the viable keratinocytes, the SC is made of dead but enzymatically active keratinocytes called corneocytes. 15 Throughout the lifetime of a person, these cells are shed and replaced by others from the lower layers. The process starts in the basal layer, where cells are continuously produced (by stem cells and transient amplifying cells), lose their attachment to the basal membrane, and migrate toward the upper layers, while undergoing differentiation toward final cell death in a process called cornification.
RCM can be used to observe the epidermal layers, the dermal-epidermal junction (DEJ), and the upper layers of the dermis, 7 thus allowing for the computation of several quantitative descriptors of skin structure, such as keratinocyte density, number of basal keratinocytes around each dermal papilla, length of the DEJ, and circumference of dermal papillae. Measuring such parameters on RCM images enables the quantitative study of skin structures and their evolution over time, for example, as a response to different stimuli. In addition to the geometrical parameters, we can also extract information about the topological organization of the epithelium, for example, the distribution of the number of nearest neighbors to each cell, an important factor in determining molecular exchange rates between neighboring cells. 5 The top slices of RCM stacks represent the SC, which appears as large bright areas forming islands surrounded by dark empty areas [Fig. 2(a)]. These dark areas are due to grooves called skin microrelief lines, 3 whereas the bright signal in the island structure is due to the high reflectance of keratin. The cells are anucleated dead corneocytes, made primarily of aggregated keratin filaments embedded in a lipid matrix, 15 polygonal in shape, and 10 to 30 μm in size. 2 SC thickness is an important factor involved in skin barrier function. 8,15 The thicker the SC is, the more difficult it is for a noxious substance to penetrate into the viable parts of the epidermis (or, equivalently, for water to traverse the epidermis and evaporate, potentially leading to tissue desiccation). It is 12 to 208 μm thick depending on the body site. 16 Moreover, corneocytes provide mechanical strength to the skin surface and are involved in protecting the lower layers against UV radiation, and the lipid matrix is important in maintaining skin permeability. [17][18][19] SC thickness can be calculated as the depth difference between the uppermost and lowest optical sections that contain SC structures. The stratum granulosum [Fig. 2(b)] and stratum spinosum [Fig. 2(c)] are the second and third layers of the epidermis from the skin surface, respectively. They are composed of keratinocytes arranged in a honeycomb pattern in minimally pigmented skin and a cobblestone pattern in heavily pigmented skin 7 [Figs. 3(b) and 3(c)]. In minimally pigmented skin, the cells are characterized by a dark center and grainy cytoplasm due to organelles and microstructures 6 and are surrounded by bright membranes. 2,7 In heavily pigmented skin types, due to the high melanin content of melanosomes, which gives a strong reflectance signal, we observe bright keratinocytes separated by a dark contour. 20 Viable keratinocytes are found at depths of 20 to 100 μm 21 and are about 10 to 15 μm in size. 2 Cells are typically larger in the granular layer than in the spinous layer, 5 where they have a higher density. Indeed, as the keratinocytes further differentiate while climbing toward the surface, they get wider and flatter. Toward the basal layer of the epidermis, the cells appear similar in shape but smaller in size compared with those of the two previous layers [Fig. 2(d)]. In contrast to the other layers, the basal keratinocytes make a monolayer. These cells are precursors of the keratinocytes in the upper layers and appear brighter than them due to the presence of melanin, which has a high reflectance. 22,23 Melanin is made by melanocytes scattered through the basal layer and then transferred to the keratinocytes. 24
The cells of this layer are adherent to a collagenous membrane that separates the epidermis from the dermis, called the basement membrane. The thickness of the viable epidermis is calculated as the depth difference between the optical section at which we observe discernible viable keratinocytes in the stratum granulosum and that at which the top of the DEJ appears in the stratum basale. The undulating DEJ [Fig. 1(a)] separates the epidermis and dermis and is located 50 to 100 μm below the skin surface. Sometimes, bright areas can be observed on RCM images at various layers. They may arise from the keratin in hair shafts [Fig. 4(a)] or from clustered keratinocytes, also called mottled pigmentation [Fig. 4(b)]. 1 Skin maturation and aging Skin function and structure change throughout our lifetime, 25,26 whether this refers to skin maturation during childhood or to skin aging during adulthood. RCM allows us to visualize such changes by imaging the skin of different age groups. Infant skin is structurally different from that of an adult, from the appearance of the skin surface to the thickness of the epidermal layers and the extracellular matrix structures in the dermis. On the surface, infant skin has thinner, more abundant microrelief lines compared with adult skin. 5,8 This is an important point because the spaces in the microrelief lines may act as reservoirs of topically applied substances, affecting their permeability kinetics. In addition, when comparing RCM images of adults and children, we observe that the SC is 30% thinner in children and the suprapapillary epidermis is 20% thinner. 8,27 The structural differences between infant and adult epidermis translate into functional differences; e.g., trans-epidermal water loss rates are significantly higher in infants and decrease throughout childhood toward the values recorded in adults. 8 We can also observe on RCM images that, due to the higher cell turnover 27 in infant epidermis, keratinocytes and corneocytes are smaller; therefore, the cell density decreases with age in both the stratum granulosum and the stratum spinosum. 5 Both cell area and cell perimeter, which can be measured on RCM images, increase with age, 5 as do overall epidermal thickness and individual layer thickness. 27 Changes in skin structure do not stop in adulthood, as has been documented and quantified in studies using RCM. 28 Early signs of skin aging can be observed, 1 and the effects of cosmetic products can be evaluated. Indeed, in subjects over 70 years of age, Longo et al. observed more irregularly shaped keratinocytes, an increased compactness of collagen fibers under the DEJ, a thinning of the epidermis, and an increase in the presence of clustered keratinocytes appearing as bright spots in RCM images. 1 In addition, although the overall epidermal thickness decreases after 50 years of age, the SC thickness increases. 29 In addition, the number of dermal papillae decreases in aged skin, 30 as does their size, compared with young skin. 29 These age-related changes in skin epidermal structure become more obvious in sun-exposed areas, where physiological aging related to biochemical processes is accelerated by photo-aging caused by continuous exposure to UV radiation. We can therefore connect structural changes of the epidermis observed with RCM to clinical manifestations of aging, e.g., wrinkles, thinning of the skin, hyperpigmentation spots, and loss of elasticity.
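As noted earlier in this section, both SC thickness and viable-epidermis thickness are obtained as depth differences between bounding optical sections. A minimal sketch of this arithmetic follows (Python; the axial step and slice indices are hypothetical, chosen only for illustration):

    # Hypothetical axial step between optical sections, in micrometres.
    z_step_um = 3.0

    # Manually identified bounding slice indices (illustrative values).
    sc_first, sc_last = 0, 5      # uppermost and lowest sections containing SC
    ve_first, ve_last = 6, 22     # first viable keratinocytes to top of the DEJ

    sc_thickness_um = (sc_last - sc_first) * z_step_um
    viable_epidermis_thickness_um = (ve_last - ve_first) * z_step_um
    print(sc_thickness_um, viable_epidermis_thickness_um)  # 15.0 48.0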
As previously mentioned, RCM can be used in studies requiring repeated assessment of skin structures in healthy as well as diseased skin. Photoaging A known risk factor for skin damage, premature skin aging, and even skin cancer is exposure to UV radiation. 31 Because RCM allows us to visualize noninvasively different elements of skin structure at different epidermal layers, it makes longitudinal studies of skin responses to stimuli such as UV irradiation possible. Therefore, RCM is a reliable technology to assess solar damage. 32 Using RCM images, it has been observed that the SC may appear brighter in sun-exposed areas than in sun-protected areas. Overall epidermal thickness and keratinocyte density are greater in sun-exposed areas and on the face, 6 and the honeycomb pattern organization of keratinocytes is often disturbed in sun-damaged regions. 32 Skin inflammatory diseases RCM has been used to study inflammatory skin conditions, such as psoriasis 33 and allergic contact dermatitis, 17 to evaluate descriptive features of skin inflammation in vivo noninvasively, and to support diagnosis. Psoriasis is characterized by a thickening of the SC and viable epidermis; both features are quantifiable by RCM. RCM has been used in patients with psoriasis to document thinning of the granular layer, 2 an increase in the number and size of dermal papillae, and an increase in keratinocyte size and brightness. Diagnosis of allergic contact dermatitis can be guided by RCM. Some of its characteristics observable in RCM are a disrupted SC, vasodilatation, increased epidermal thickness, and detached corneocytes. 17,34 RCM is limited by the maximum depth beyond which the signal-to-noise ratio of the image becomes too low to extract any information. Nevertheless, it provides information faster than microscopic analysis of a biopsy sample and therefore can be integrated as an initial step in clinical diagnosis, 35 e.g., guiding biopsies and determining lesion borders. 36 Skin cancer Finally, 18 RCM features have been identified as useful in skin cancer diagnosis; two of them, atypical cells and DEJ disarray, are specific for malignant melanoma (MM) diagnosis. 37 With increasing MM incidence in Europe, RCM-aided MM diagnosis can help with early detection and thus increase survival rates. 38 Using RCM in MM diagnosis may also reduce the number of unnecessary biopsies and benign skin lesion biopsies. 12 In addition to diagnosis, RCM can also be used in the examination of MM and nonmelanocytic skin tumors. Indeed, the large field of view provided by RCM can be used to determine lesion margins, as it can cover a much larger area than classical histopathological approaches, such as biopsies. 2,12 Therefore, it enables the monitoring of tumor growth and response to nonsurgical treatment and can potentially guide surgical excision. Cosmetology Biopsies and other invasive methods are rarely used in the study of cosmetic product effects on skin due to ethical considerations. RCM provides a noninvasive alternative that can help to link changes in structure to changes in appearance, e.g., aging. RCM is useful for assessing the impact of topical formulations on the cellular structure of the skin, e.g., retinoic acid and retinol, 39 and for measuring the in vivo kinetics of the skin after application of moisturizers, 40-43 e.g., by measuring epidermis thickness and the width of skin folds. Assessment of moisturizer photoprotection efficacy 44 and cleanser efficacy 45,46 is also possible with RCM.
RCM Limitations All of the previously discussed studies are limited by the maximum depth for RCM imaging, which varies depending on the observed tissue type. RCM use is limited to the epidermis and the upper layer of the dermis. Although refractive microstructures are abundant in the deeper dermis (e.g., collagen fibers, inflammatory cells), light intensity and coherence drop exponentially with depth. In addition, most studies to date require manual analysis of RCM images, which calls for training that takes 4 to 6 months. 12 The process is tedious and time-consuming and is subject to human error and interexpert variability. Feature extraction could be facilitated by automated image analysis methods. RCM use could then become more widespread 47 in skin research, training time could be reduced, and image analysis could be standardized. Automatic Identification of Epidermal Layers Attempts at automatically labeling the four epidermal layers have been made using machine learning approaches. [48][49][50][51][52] The maximum accuracy obtained by these algorithms, i.e., the percentage of correctly identified layers against ground truth, was reported to be 88%. 48 Somoza et al. 51 used an unsupervised texton-based approach to achieve 54% classification accuracy. To do so, they generated a library of microstructures called textons by convolving the RCM images with 10 of the Leung-Malik filters, whose size matched that of a keratinocyte. They then applied principal component analysis (PCA) to the generated filter space to reduce the texton-space dimensionality to three, followed by K-means clustering with 15 clusters. This texton library was applied to test RCM images, and the results were projected onto the three PCA axes. Each pixel of the RCM images was classified as one of the 15 textons, based on Euclidean distance, and represented as a 15-dimensional texton histogram. This histogram dimension was reduced to three by applying a second PCA. Finally, by applying K-means clustering with five clusters, for the four layers of the epidermis and the dermis, they obtained the classification of each pixel (Fig. 5). This texton-based approach could be adapted for other classification problems, such as assessing the impact of a treatment on different skin diseases or studying skin maturation and aging. This could be achieved by extending the texton library to include more features. This method, however, could also be improved by including higher-level information and features. Indeed, it does not include cellular characteristics or the presence of reflective or darker surfaces, which are taken into account by experts manually identifying epidermal layers on RCM images. In addition, this was only a pilot study conducted on three stacks, and it considered that each image contains only a single epidermal layer, which is often not true.
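A rough computational sketch of such a texton pipeline is given below (Python with SciPy and scikit-learn). It is a simplified, image-level variant for illustration only: the Leung-Malik bank is approximated with Gaussian and gradient-magnitude filters, the input slices are random placeholders, and only the cluster counts follow the description above.

    import numpy as np
    from scipy import ndimage
    from sklearn.decomposition import PCA
    from sklearn.cluster import KMeans

    def filter_responses(img, sigmas=(2, 4, 6, 8, 10)):
        """Crude stand-in for the Leung-Malik bank: Gaussian and
        gradient-magnitude responses at several scales, giving one
        feature vector per pixel."""
        feats = []
        for s in sigmas:
            feats.append(ndimage.gaussian_filter(img, s))
            feats.append(ndimage.gaussian_gradient_magnitude(img, s))
        return np.stack(feats, axis=-1).reshape(-1, 2 * len(sigmas))

    # Placeholder "RCM" slices; real stacks would be loaded from file.
    rng = np.random.default_rng(0)
    imgs = [rng.random((64, 64)) for _ in range(10)]

    # 1. Texton library: PCA to 3 dimensions, then K-means with 15 clusters.
    X = np.vstack([filter_responses(im) for im in imgs])
    pca1 = PCA(n_components=3).fit(X)
    textons = KMeans(n_clusters=15, n_init=10, random_state=0).fit(pca1.transform(X))

    # 2. Each image becomes a normalized 15-bin texton histogram.
    def texton_histogram(img):
        lab = textons.predict(pca1.transform(filter_responses(img)))
        return np.bincount(lab, minlength=15) / lab.size

    H = np.array([texton_histogram(im) for im in imgs])

    # 3. Second PCA to 3 dimensions, then K-means with 5 clusters
    #    (four epidermal layers plus the dermis).
    pca2 = PCA(n_components=3).fit(H)
    layer_clusters = KMeans(n_clusters=5, n_init=10, random_state=0)
    print(layer_clusters.fit_predict(pca2.transform(H)))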
Hames et al. 50 used a bag-of-features approach to classify RCM images into four categories: SC, viable epidermis, DEJ, and papillary dermis. They established four features, inspired by prior knowledge of RCM images: (1) the presence of a visible honeycomb pattern of viable keratinocytes, indicative of the viable epidermis; (2) the presence of bands of basal cells/dermal papillae, indicative of the DEJ; (3) the absence of stratum basale features; and (4) visible papillae, indicative of the papillary dermis. Any image before the first visible viable keratinocyte is considered to belong to the SC. Using these features, they built a feature dictionary from small image patches, which they used to represent each test image as a histogram of counts of visible features, and they then classified this histogram with an L1-regularized logistic regression into one of the four categories (a schematic sketch of this recipe is given at the end of this subsection). They obtained a classification accuracy between 62.9% and 95.6% depending on epidermal layer, body site, and phenotype. Not all phenotypes were included, nor were all body sites, and the study did not include diseased skin. The method assumed the presence of a single epidermal layer per image, which is rarely true. In addition, there is no clear pattern explaining the differences in accuracies. Finally, this method does not take advantage of the three-dimensional (3-D) organization of skin to improve the results. Kaur et al. 49 developed a hybrid deep learning approach for the classification of RCM images into five categories: the four epidermal layers and the dermis. First, each RCM image was convolved with a 48-filter bank. Then, using a prebuilt texton library, each pixel was represented by a patch centered around it and labeled. Each pixel was then associated with its eight nearest neighbors; thus, each image was represented by eight texton maps. These maps were pooled together by weight while the dark pixels were ignored. Finally, the obtained histograms for each image in the training set were used to train a convolutional neural network (CNN), the parameters of which were determined empirically. They achieved 82% accuracy but only tested the algorithm on three RCM image stacks. In addition, the results are limited by the features in the texton library. Using a multiresolution, multiorientation filter bank to build the library does give more features than if they were determined manually, but it complicates the interpretation of each feature. Finally, Bozkurt et al. proposed a method to automatically classify RCM images into epidermal layers based on a recurrent CNN. 48 They introduced a mechanism called a Toeplitz structure that helps in the interpretation of the model by informing on which image the model's decision was based. This model is an encoder-decoder, with bidirectional recurrent units and Inception V3 53 networks. This approach achieved 88% accuracy in classifying RCM images into epidermis, DEJ, and dermis. This approach was tested on a much bigger dataset than the methods described above. Moreover, it includes higher-level information by considering three surrounding images to make a prediction, which is frequently done when manually identifying epidermal layers, but using a CNN leads to a loss of feature interpretability. Overall, neural network-based approaches 48,49 were more successful in correctly classifying images into epidermal layers than algorithms based on texture analysis, 50,51 but comparison between methods is not straightforward, as the datasets used for each method vary in size, from 15 to 504 stacks, i.e., from 1500 to 21,412 images. The methods also vary in the types of included samples, with some containing only normal healthy skin and others also including lesional skin. Finally, not all RCM stacks were taken at the same sites, but all were captured using a VivaScope 1500. Table 1 summarizes the methods described for automated epidermal layer classification.
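The bag-of-features recipe used by Hames et al. can be sketched in a few lines (Python with scikit-learn). The patches, labels, and dictionary size below are placeholders; only the overall recipe (quantized local features, per-image histograms, and an L1-regularized logistic regression) follows the description above.

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.linear_model import LogisticRegression
    from sklearn.feature_extraction.image import extract_patches_2d

    rng = np.random.default_rng(0)

    def patch_vectors(img, patch=8, n=200):
        """Sample small patches and flatten them into feature vectors."""
        P = extract_patches_2d(img, (patch, patch), max_patches=n, random_state=0)
        return P.reshape(len(P), -1)

    # Placeholder slices and labels (0 = SC, 1 = viable epidermis,
    # 2 = DEJ, 3 = papillary dermis).
    imgs = [rng.random((64, 64)) for _ in range(40)]
    y = rng.integers(0, 4, size=40)

    # 1. Visual dictionary: cluster patch vectors into "visual words".
    codebook = KMeans(n_clusters=32, n_init=10, random_state=0)
    codebook.fit(np.vstack([patch_vectors(im) for im in imgs[:30]]))

    # 2. Represent each image as a histogram of visual-word counts.
    def bof_histogram(img):
        words = codebook.predict(patch_vectors(img))
        return np.bincount(words, minlength=32) / words.size

    X = np.array([bof_histogram(im) for im in imgs])

    # 3. L1-regularized logistic regression over the histograms.
    clf = LogisticRegression(penalty="l1", solver="saga", max_iter=5000)
    clf.fit(X[:30], y[:30])
    print("held-out accuracy:", clf.score(X[30:], y[30:]))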
Automatic Identification of the DEJ
The DEJ plays a fundamental role in wound healing and molecular exchanges in the skin. Skin cancer lesions are often characterized by DEJ structural changes. In addition to applications related to skin cancer, rapid localization of the DEJ would be helpful in the study of healthy skin structure, for example, in relation to aging. Several approaches have been tested to delimit the DEJ in RCM images, with varying levels of accuracy; indeed, the DEJ is harder to delineate in lighter pigmented skin due to the lack of a strong melanin signal. Some researchers have considered DEJ identification to be a segmentation problem, whereas others view it as a classification problem.
Kurugol et al. 52 achieved 88% accuracy in more pigmented skin and 60% accuracy in minimally pigmented skin, with an average distance from the expert labels of 7.9 and 8.0 μm, respectively, using a bag of features approach with support vector machine (SVM) algorithms. Kurugol et al. started by applying to each image a skin type detection algorithm based on the detection of bright cells in the stratum basale, which is characteristic of heavily pigmented skin. On RCM images of heavily pigmented skin, they clearly detected the DEJ, but for images of minimally pigmented skin, where melanin contrast is lower, a transition zone between dermis and epidermis was detected instead. For RCM images of heavily pigmented skin, where the image contained bright basal cells and had one strong intensity peak, they relied on intensity information to determine the DEJ; if the image had more than one intensity peak, they used a texture-based detection algorithm. For minimally pigmented skin, Kurugol et al. applied an SVM classifier trained on features from manually labeled images. These features were automatically selected from a list of 170 computed parameters as the most discriminative and least redundant in the training set (Fig. 6). This approach showed better results on heavily pigmented than on minimally pigmented skin. On minimally pigmented skin, the approach failed to identify the DEJ where wrinkles are present, limiting its use. In addition, Kurugol et al. mentioned a cross-expert correlation of 81% in DEJ delineation on minimally pigmented skin, highlighting the difficulty of this task but also showing that their automated method is not on par with manual identification of the DEJ by experts.
Robic et al. 54,55 used a random forest classification combined with spatial regularization based on a 3-D conditional random field (CRF) and achieved 54% accuracy in identifying the DEJ, along with 90% and 75% accuracy in identifying the epidermis and dermis, respectively. For each pixel of an RCM image of lightly pigmented skin, they predicted the probability of it belonging to one of three categories, i.e., dermis, epidermis, and uncertain, with a random forest classifier. These results were fed to the CRF to predict the labels of the pixel neighbors by imposing the transition order of the epidermal layers (Fig. 7). This method was not tested on RCM images of heavily pigmented skin, but it performed on par with state-of-the-art methods [48][49][50]52 on minimally pigmented skin. It took into account the possibility of different layers per image, which other methods do not, as they performed classification at the image level and not at the pixel level.
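The pixel-level strategy of Robic et al. can be sketched as below. A simple majority-vote filter stands in for the 3-D conditional random field (it enforces local consistency but not the epidermis-DEJ-dermis transition order), and the per-pixel features are left abstract, so this is an illustration of the idea rather than their exact model.

```python
import numpy as np
from scipy.ndimage import generic_filter
from sklearn.ensemble import RandomForestClassifier

LABELS = {0: "epidermis", 1: "uncertain", 2: "dermis"}

def classify_image(train_features, train_labels, test_features, shape):
    """Per-pixel random forest followed by local label smoothing.

    train_features: (n_pixels, n_features) training matrix
    train_labels:   (n_pixels,) labels in {0, 1, 2}
    test_features:  (h*w, n_features) features of one test image
    shape:          (h, w) of the test image
    """
    rf = RandomForestClassifier(n_estimators=100, random_state=0)
    rf.fit(train_features, train_labels)
    raw = rf.predict(test_features).reshape(shape)

    # Majority vote in a 5x5 neighborhood: a crude stand-in for the CRF
    # regularization that imposes spatial consistency on the labels.
    def majority(values):
        return np.bincount(values.astype(int)).argmax()
    return generic_filter(raw, majority, size=5)
```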
Bozkurt et al. proposed a method 47 to automatically delineate the DEJ based on a recurrent CNN with attention, to aid cancer diagnosis. This approach achieved 88% accuracy. It is based on the knowledge that skin maintains a strict sequential ordering, epidermis, DEJ, and dermis, so recurrent CNNs are appropriate for automatically identifying epidermal layers on RCM images. Bozkurt et al. first trained a deep CNN and then augmented it with recurrent layers, so the model would take into consideration information from the image and its surrounding slices. The CNN model used was an Inception v3 network. 53 This method was tested on a much bigger dataset than other approaches, which allowed the authors to take into consideration the dependencies between sections, i.e., the order of the layers, accounting for the 3-D organization of the skin; this helps circumvent the difficulty of identifying the DEJ by giving information on the DEJ shape across different images. However, the complexity of the model and the size of the training set made training computationally expensive.
Finally, a Poisson point process 56 approach managed to identify the DEJ with an average error of 5.4 μm in heavily pigmented skin and of 12.1 μm in lightly pigmented skin. It is an unsupervised generative framework that segments and detects complex objects based on prior knowledge of their shape. Here, the DEJ was modeled as a succession of an unknown number of hills at random locations determined by a marked Poisson process. This method is based on the assumption that prior shape information improves object detection and thus provides an explicit model of the boundary, whereas other methods rely on extracted features. It requires parametrization of the Poisson point process based on prior knowledge, which in turn requires detailed labeling of the images. In return, the parameters in this approach can easily be interpreted, which is often not the case with deep learning-based approaches. 47,55 On the other hand, this method relies on a small number of structural and physiological features that are meaningful to the expert, which makes it reliable but could lead to a loss of information related to other characteristics, sometimes even characteristics determined by the machine but unknown to the expert. All of the methods used images acquired with a VivaScope 1500 microscope, but not all images were captured at the same body site. Table 2 summarizes the methods described for DEJ delineation.
Automatic Identification of Pigmented Skin Lesions
Automated identification of lesions from RCM images has also been investigated. We distinguish two types of applications for these algorithms: (1) finding melanoma patterns and (2) distinguishing nonmelanocytic lesions from melanoma. For the first application, an algorithm to identify melanocytic lesions on RCM images based on the wavelet transform obtained moderate success, with 55% of the melanomas and 47% of the benign nevi being correctly identified. 57 Another approach aimed to reproduce clinician analysis of an RCM image to determine the presence of melanoma 58 by identifying patterns in the DEJ mosaics and classifying them into melanoma or nonmelanoma, with a sensitivity of 55% to 81% and a specificity of 81% to 89%. A more recent approach by Bozkurt et al. 59 used a multiresolution convolutional neural network to identify patterns similar to those of Kose et al. 58 It achieved 95% average specificity and 77% average sensitivity.
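As a rough indication of what such pattern classifiers involve, below is a minimal single-resolution CNN for classifying DEJ image patches into melanoma-pattern versus non-melanoma classes, written with the Keras API. The architecture, patch size, and class count are illustrative and far simpler than the multiresolution network of Bozkurt et al.

```python
from tensorflow import keras
from tensorflow.keras import layers

def build_patch_classifier(patch=64, n_classes=2):
    """A deliberately small CNN for grayscale RCM patches."""
    model = keras.Sequential([
        layers.Conv2D(16, 3, activation="relu",
                      input_shape=(patch, patch, 1)),
        layers.MaxPooling2D(),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        layers.Dense(n_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Hypothetical usage: patches scaled to [0, 1], integer labels in {0, 1}.
# model = build_patch_classifier()
# model.fit(train_patches[..., None], train_labels,
#           epochs=10, validation_split=0.2)
```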
For the second application of distinguishing between melanomas and nonmelanocytic lesions, Halimi et al. 60 proposed a Bayesian model to quantify the reflectivity of RCM images and classify them into two categories, healthy and lentigo patients, based on their reflectivity distribution. They obtained an accuracy of 98%. Zorgui et al. 61 obtained similar results, with an accuracy of 98% with a CNN. The CNN was trained on normalized, resized RCM images with a pretrained Inception V3 model; 53 transfer learning was then used to apply the model to skin RCM images. 62 Table 3 summarizes the methods described for skin lesion identification.
Finally, Bozkurt et al. 59 proposed a CNN inspired by the U-net 64 architecture to identify six classes: nonlesion, artifact, meshwork pattern, ring pattern, nested pattern, and aspecific/patternless. This model was built on a dataset containing RCM images of both lesional and nonlesional skin. It slides a window with 75% overlap across the RCM images and applies three consecutive nested U-nets, generating segmentations at different resolution levels. Each U-net model generates a probability map; the deepest U-net takes only a sliding window as input, whereas the others use a concatenation of the upsampled probability map from the higher level and the sliding window. This model achieved 73% classification accuracy. The goal here is not to compare the performance of these approaches, as they do not all identify the same lesion, but to give an idea of what is achievable with RCM images.
Automatic Identification of Cells
Individual cells in RCM images provide important information in the assessment of skin health, but their manual identification is tedious, time-consuming, and subject to interexpert variability. Very few attempts have been made to automatically identify individual cells or nuclei in skin RCM images. Harris et al. 65 proposed a pulse coupled neural network to automatically segment nuclei in oral mucosa RCM images; the nuclear-to-cytoplasm area ratio is a useful indicator in the early diagnosis of cancer. Unfortunately, RCM images tend to be noisy and nonuniform, which complicates the development of an accurate segmentation algorithm. 66,67 RCM images in which the background had been removed were filtered, and a spiking cortical model 68 was applied to them, followed by an artificial neural network classifier that outputs a segmented image. The approach had 90% accuracy (trained using eight images and tested on 28). The small size of the training set may impact the accuracy of the model. It would be difficult to apply this approach to skin RCM images, which are noisier than images of the oral mucosa and riddled with small organelles within the cytoplasm and membranes of similar shape and size to a nucleus.
Gareau 69 attempted an automated identification of keratinocytes. An error-function reflectance profile was trained on labeled RCM images and then tested on other images to identify keratinocyte coordinates; all images belonged to the same stack. The obtained keratinocyte density matched prior knowledge based on manual counts. The model assumed that the keratinocyte center is darker than the rim. This assumption fails on basal cells due to brightly scattering melanosome caps over the nuclei. The method might be improved by training two separate models for the granular and spinous layers, as their cells differ in size. It is also unclear how it behaves when tested on RCM images of other people of different ages, as keratinocyte size changes with age.
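For cell or nucleus localization of the kind attempted by Harris et al. and Gareau, a simple classical starting point is multiscale blob detection. The sketch below uses the Laplacian-of-Gaussian detector from scikit-image and inverts the image to exploit the "dark center, bright rim" assumption discussed above; the sigma range and threshold are arbitrary placeholders, and the approach is far simpler than the trained models in those studies.

```python
import numpy as np
from skimage import exposure, io
from skimage.feature import blob_log

def detect_nuclei(path, min_sigma=3, max_sigma=8, threshold=0.1):
    """Return (row, col, radius) candidates for dark, roughly round nuclei."""
    img = io.imread(path, as_gray=True).astype(float)
    img = exposure.rescale_intensity(img, out_range=(0.0, 1.0))
    # blob_log finds bright blobs, so invert the image to target the
    # dark nucleus centers.
    blobs = blob_log(1.0 - img, min_sigma=min_sigma,
                     max_sigma=max_sigma, threshold=threshold)
    blobs[:, 2] *= np.sqrt(2)  # convert sigma to an approximate radius
    return blobs
```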
In all the models described above, results differed between minimally and heavily pigmented skin or were not tested in both cases. Any algorithm developed for the analysis of RCM images should be tested on skin types with various degrees of pigmentation.
Discussion
Various attempts have been made at automating the analysis of RCM images in dermatology, from automatic epidermal layer classification and DEJ identification to lesion detection and cell identification. The published work to date shows that there is high potential in the application of machine learning and/or image analysis algorithms to RCM images. The presented algorithms have various levels of success and accuracy, but a direct comparison is difficult, as the datasets used varied in size and the skin samples varied in sampled body site, volunteer age, and phototype. Overall, the application of computational methods to RCM is made difficult by poor image quality, high noise, and low contrast. In addition, training any supervised model requires manual labeling of the images to obtain a ground truth, which is time-consuming and tedious and, given the variability between experts, further highlights the need for automation in RCM image analysis. Furthermore, the datasets used often lack variety in terms of subject age and skin phototype, which introduces bias into the models and limits their generalization to all populations.
Another noninvasive in vivo technology used in the study of skin is optical coherence tomography (OCT). OCT has been used in the study of different skin lesions, especially carcinomas and inflammatory skin diseases. OCT has a greater imaging depth than RCM, of up to 2 mm, but its resolution is limited and does not allow for the identification of individual cells. Attempts at automating epidermal layer classification, hair follicle identification, lesion detection, and skin inflammation assessment have been made for OCT images, but, as with RCM, a generally accepted gold standard does not exist.
Conclusion
RCM can provide a quantitative evaluation of skin barrier physiology and how it changes with age or in response to certain stimuli, with near-histological resolution. However, the study of RCM images is currently done mainly manually and is therefore tedious, time-consuming, and subject to human interpretation. An automated approach to extract quantitative descriptors from confocal images would enable an easier, more reproducible, precise, and rigorous study of these images. Although attempts at automating descriptor extraction have been made, a globally accepted gold standard that combines all approaches and can be used by biologists and clinicians remains an open problem. Future research should therefore focus on methods allowing for an easier translation of images into relevant quantifiable parameters and on making RCM easier, faster, and more accessible to use.
Disclosures
The authors have no relevant financial interests in this article and no potential conflicts of interest to disclose.
France. She received her MEng degree in bioinformatics and modeling from INSA Lyon, France, in 2019. Her research focuses on applying image analysis, machine learning, biostatistics, and omics analysis to study skin. Georgios Stamatas is a research associate director, Translational Science, at Johnson & Johnson. His research focuses on method development and applications for understanding skin physiology and topical product effects.
He works on the differences between pediatric and adult skin and has transformed our understanding of newborn and baby skin maturation. He received his PhD in chemical/biomedical engineering from Rice University, has coauthored close to 100 scientific publications, and holds several patents. Xavier Descombes received his PhD in computer science in 1993, his master's degree in mathematics in 1990, and his engineering diploma in 1989. He is currently heading the Morpheme team. He pioneered the marked point process approach in image processing and currently focuses on image processing for biological applications. He received the "Prix de la Recherche, Catégorie Santé" in 2008 and has published more than 50 papers in international journals (H-index 39).
GENERATION OF ENERGY IN CONSOLE PIEZOELECTRIC ENERGY HARVESTERS
In this work, the oscillations of a cantilever unimorph energy harvester under harmonic loads are investigated. A unimorph console consisting of a brass base and a rectangular piezoelectric element with electroded flat surfaces, without and with a tip mass, is considered. The characteristic equation for the bending oscillations of the beam is derived, and the wave numbers, circular frequencies, and natural frequencies are determined. The material characteristics are averaged over the cross-sectional area. Eigenforms of the oscillations are constructed, and the dependence of the natural frequencies on body size and tip mass is analyzed. Forced oscillations of the harvester with a tip mass at the end are then studied for given oscillations of the base: the equation of the elastic line of the console is formed, and the maximum deflections and angles of rotation are determined. The voltage generated on the plates of the piezo element is determined taking the electrical resistance into account; from the voltage and the resistance of the conducting line, the power of the energy harvester is obtained. Curves of the voltage and power as functions of the load frequency and the external resistance are constructed. It is established that the voltage and power of the element change in proportion to R. The maximum power of the energy collector occurs in the vicinity of the resonances, while before the first resonance the power is almost zero; between the first and second resonances the power is approximately 1,5 mW, and on transition to the ultrasonic zone it increases significantly. Analysis of the harvester operation at resonant frequencies requires consideration of the damping of oscillations in the material.
Introduction. Harvesting the energy of mechanical oscillations and converting it into electricity for accumulation and further use (energy harvesting) has already taken an important place in mechanical engineering (damping of oscillations with conversion of the excess energy into electricity) as well as in construction and environmental applications, in the form of autonomous systems for monitoring and controlling the state of an object [2]. Piezo-based devices are one of the most common types of energy harvesters [9]. Under harmonic oscillations, piezoelectric elements produce an alternating electric current and show the greatest efficiency at resonant frequencies. Piezoceramic elements working in bending give a much higher yield of potential difference than axially loaded ones, because the displacements are much larger. Among the most common designs are cantilever unimorph or bimorph energy converters, consisting of a passive layer and one or two thin, symmetrically placed piezoceramic plates [10].
A tip mass at the end is often used to tune the operating frequency of the element to the external excitation. One of the applications of cantilever piezoelectric elements is the conversion of unnecessary or undesirable oscillations of a structure into electrical energy and its subsequent use for the autonomous operation of a monitoring device or its storage in accumulators or batteries [6]. The operation of the element at resonant frequencies is the most effective, so an important characteristic of the piezoelectric element is the width of the range of operating (natural) frequencies [11]. There are commercially available cantilever bimorphs with the following dimensions: thickness 0.3-0.35 mm, length 4-100 mm, width 1.6-22 mm. Electrodes made of silver (6-10 μm thick) or nickel (1-3 μm thick) are applied to the piezoceramic plate. After applying the electrodes, the piezoelectric element is polarized in a strong constant electric field [10]. The main rod can be made of bronze, brass, stainless steel, nickel foil, graphite, composites, etc. For products with high sensitivity, a piezoelectric element is also used as the main rod. The passive layer increases the mechanical strength but reduces the amount of displacement. The use of a stainless steel base provides 25% greater strength of the element and is used in cases with a high blocking force, such as implanted pacemakers. Epoxy or acrylate glue, which provides a strong connection, is used for gluing the layers; the thickness of the adhesive layer is 10-15 μm.
A unimorph or bimorph operating in generator mode is often used as a flexible sensor [8]. A generator-type sensor does not require an external power supply to operate. It is designed to convert dynamic deformations into electrical signals with further processing and recording by various devices, including energy collection. A unimorph can be used as a stand-alone converter of mechanical energy into electric current or be part of a more complex device. It can be connected to the control and management system in two main ways: through a voltage registration circuit or through a charge registration circuit.
This work is devoted to the study of the oscillations of a cantilever energy harvester under monoharmonic loads. Earlier, in [5], the resonant oscillations of piezoceramic cylinders with energy dissipation were studied. Multilayer piezoceramic elements are considered in [7]. The fundamental theory of vibrations is described in [4]. The most up-to-date overview of piezoelectric energy harvesting is found in [9]. Works describing the use of cantilever energy harvesters in bridge structures [1], in pavements [12], and in sound energy harvesting [3] should also be noted.
Formulation of the problem. For the element to work together with the structure, its natural frequencies must be matched to the operating frequency of the structure. This is done by varying the mass and its position at the end of the rod. For the most efficient operation of the converter, a piezoelectric material with a high coefficient of electromechanical conversion is used. The console consists of a metal rod (steel or brass) of rectangular cross section with relatively low rigidity in the direction of oscillation (Fig. 1). A rectangular, thickness-polarized piezoelectric element is attached as a pad to the part of the beam that undergoes the maximum deformation. At the end, an additional mass is attached in the form of a steel cylinder, which reduces the operating frequency of the element.
The calculation is performed in several stages: determination of the natural frequencies for different design options; analysis of the oscillation forms to determine the operating frequencies; study of the harmonic oscillations of the console at the operating frequencies; determination of the potential difference generated at the electrodes of the piezoelectric element; and determination of the power of the energy harvester.
1. Natural oscillations of the cantilever beam. We consider the transverse oscillations of a rod whose stiffness in the vertical direction is much lower than in the horizontal one. This allows the influence of gravity to be used most effectively and causes significant deflections of the rod. The section width is several times less than the length to provide torsional rigidity, since torsional modes are undesirable.
a) Cantilever beam loaded by its own weight. Given: length $l$, density $\rho$, Young's modulus $E$, cross-sectional area $A$, moment of inertia $I$. The differential equation of the transverse oscillations of the rod is
$EI\,\frac{\partial^4 w}{\partial x^4} + \rho A\,\frac{\partial^2 w}{\partial t^2} = 0. \quad (1)$
We use the procedure of separation of variables in (1), $w(x,t) = X(x)T(t)$, and obtain two differential equations with the corresponding solutions:
$\ddot{T} + \omega^2 T = 0, \qquad X^{\rm IV} - k^4 X = 0, \qquad k^4 = \frac{\rho A\,\omega^2}{EI}, \quad (2),(3)$
$X(x) = C_1\cos kx + C_2\sin kx + C_3\cosh kx + C_4\sinh kx, \quad (4)$
$T(t) = a\cos\omega t + b\sin\omega t, \quad (5)$
where $\omega$ is the natural circular frequency of the body oscillations and $k$ is the wave number. The coefficients in (4) and (5) are determined from the boundary and initial conditions. The clamped-end conditions $X(0)=0$, $X'(0)=0$ give (6), (7); together with the free-end conditions, they lead to the characteristic equation
$1 + \cos r\,\cosh r = 0, \qquad r = kl, \quad (9)$
so the wave numbers can be obtained as $k_i = r_i/l$ and the natural circular frequencies as $\omega_i = k_i^2\sqrt{EI/(\rho A)}$.
b) Natural oscillations of a mass at the end of a cantilever rod. If the ratio of the mass of the beam to the attached mass is small, the mass of the beam can be neglected. The oscillations of the mass $m$ at the end of the cantilever rod are then described by the differential equation $m\ddot{w} + kw = 0$, where $k = 3EI/l^3$ is the stiffness factor. The natural frequency is $\omega = \sqrt{3EI/(ml^3)}$.
c) Natural oscillations of the cantilever beam with a mass at the end, taking the mass of the beam into account. The deflection for all values of $x$, except the point of application of the load, satisfies equation (3) with the solution in the form (4) [4]. Attaching the mass at the end of the beam gives the boundary condition that the shear force balances the inertia of the tip mass,
$EI\,X'''(l) = -m\omega^2 X(l). \quad (14)$
Supplementing (6), (7) with condition (14) and writing out the determinant, we obtain the characteristic equation in $r = kl$,
$1 + \cos r\,\cosh r + \alpha r\,(\cos r\,\sinh r - \sin r\,\cosh r) = 0, \quad (15)$
where $\alpha = m/(\rho A l)$ is the ratio of the attached mass to the mass of the beam. For the considered configuration the computed roots are $r_i = \{1{,}67;\ 4{,}33;\ 7{,}38;\ 10{,}46;\ 13{,}56;\ 16{,}67;\ 19{,}78;\ 22{,}91;\ \ldots\}$. Fig. 2 shows the dependence of the roots of equation (15) on α; at α = 0, (15) reduces to (9). With increasing α the values of the roots change little, and at α = 5 we have $r_i = \{0{,}8807;\ 3{,}9512;\ 7{,}0833\}$. A larger mass ratio is physically improbable.
2. Forced oscillations of the converter during oscillations of the base. Consider the oscillations of the base with amplitude A and frequency ω. The coefficients $C_i$ in the general solution are obtained from (16).
3. Characteristics of the unimorph. We consider a two-layer cantilever beam of length $l$, consisting of a metal layer with Young's modulus $E_1$, density $\rho_1$, and cross section $b_1 \times h_1$, and a piezoceramic layer with Young's modulus $E_2$, density $\rho_2$, and cross section $b_2 \times h_2$. The material characteristics are averaged over the cross section, and the coordinate of the neutral layer is found from the modulus-weighted static moment of the section. Equations (18), (19) form the set of characteristics required for the application of the above formulas.
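A short numerical sketch of this modal analysis is given below: it brackets and refines the roots of the characteristic equation (15) and converts them into natural frequencies. The equation form is the standard one for a clamped-free beam with tip-mass ratio α; the effective stiffness and mass values are rough placeholders, not the exact averaged characteristics (18), (19) of the paper.

```python
import numpy as np
from scipy.optimize import brentq

def char_eq(r, alpha):
    """Characteristic equation (15) of a clamped-free beam with a tip mass."""
    return (1.0 + np.cos(r) * np.cosh(r)
            + alpha * r * (np.cos(r) * np.sinh(r) - np.sin(r) * np.cosh(r)))

def beam_roots(alpha, n_roots=3, r_max=25.0, step=0.01):
    """Bracket sign changes of (15) on a grid and refine them with brentq."""
    grid = np.arange(step, r_max, step)
    vals = char_eq(grid, alpha)
    roots = []
    for i in range(len(grid) - 1):
        if vals[i] * vals[i + 1] < 0:
            roots.append(brentq(char_eq, grid[i], grid[i + 1], args=(alpha,)))
            if len(roots) == n_roots:
                break
    return np.array(roots)

# Rough placeholder parameters for a 40 mm brass/piezoceramic unimorph:
# effective bending stiffness EI [N*m^2], mass per unit length rhoA [kg/m],
# length L [m], tip mass m [kg].
EI, rhoA, L, m = 0.09, 0.108, 0.04, 6.2e-3
alpha = m / (rhoA * L)                  # tip-mass to beam-mass ratio
r = beam_roots(alpha)
f = (r / L) ** 2 * np.sqrt(EI / rhoA) / (2.0 * np.pi)  # natural freqs, Hz
print(alpha, r, f)
```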
4. Determination of the electromotive force and power of the harvester. Electrical boundary conditions are applied at the electrodes located on the upper and lower surfaces of the piezoceramic element. For thin plates we assume [5] that the electric potential varies linearly across the thickness of the body, so that the potential difference between the electrodes is $V(t)$, the required electromotive force of the transducer. We use the hypothesis of flat sections and assume that the deformations in the cross-sectional plane are small; the linear deformation in the direction of the rod axis is proportional to the curvature and to the distance from the neutral layer. The total charge on the lower electrode (22) follows by integrating the electric displacement over the electrode area, and the generated current (23) can be written through Ohm's law. Substituting (22) into (23), we find the generated potential difference. All analytical formulas were duplicated using finite-difference approximations of the second order of accuracy; the deviation between the results was 1% at $n = 200$ breakpoints along the length of the rod.
Figure 3 shows the dependence of the resonant frequencies on the length of the rod for a 10×1 mm cross section of the brass base and a 10×0,3 mm cross section of the piezoelectric element, with an attached mass m = 6,2 g, which corresponds to a steel cylinder of size 10×10 mm. The first resonance lies in the range from 2,5 kHz at L = 4 mm to 57 Hz at L = 60 mm. With increasing length, the coefficient α varies from 13,9 to 0,88. For L ≤ 15 mm the second and third resonances lie in the ultrasonic range. Increasing the length decreases the resonant frequency along a curve close to a quadratic hyperbola, which corresponds to the physical laws.
Figure 4 illustrates the dependence of the resonant frequencies on the value of the attached mass for a rod with a 10×1 mm brass base, a 10×0,3 mm piezoelectric element, and length L = 40 mm. In this case, the coefficient α varies from 0 to 2,66. The first resonance varies from 307 Hz at m = 0 to 89 Hz at m = 12 g. The second and third resonances lie in the sound range. Therefore, the attached mass can reduce the first resonant frequency by 70% and the second and third by 30%.
Let us analyze the forms of oscillations of the element described above when the base oscillates with an amplitude A = 1 mm. Graphs of the amplitude functions are shown in Fig. 5. The corresponding natural frequencies are $f_i = \{153;\ 1812;\ 5724;\ 11860;\ 20227\}$ Hz. The number of extremum points corresponds to the mode number plus one. The frequencies beyond the third are ultrasonic. Considering that the oscillations of building structures and mechanisms are mostly low-frequency, we conclude that harvesting energy from a structure is possible only at the first resonance.
Let us analyze the forced oscillations of the element described above at an electrical resistance R = 1 Ohm. Fig. 6 shows the deflection curves of the rod end, the corresponding potential difference generated on the electrodes, and the power of the element. At f = 0 the deflections are equal to 1 mm, which corresponds to the perturbation of the base. From 0 to 300 Hz the displacement grows, with a change of the vibration phase, which corresponds to the first resonance. From 500 to 1500 Hz the displacements are near 0,15 mm. The maximum stresses in the piezoelement occur in the bottom fibers near the fixed end.
Let us analyze the dependence of the generated voltage and power of the element on the electrical resistance R, which varies from 0 to 10 Ohms at f = 160 Hz (Fig. 7). The dependence is almost linear, which allows us to say that at a higher resistance we get a higher generated voltage and power of the element. At the same time, however, the losses in the circuit increase, and a considerable part of the energy is converted into heat; the optimal parameters of the electrical circuit therefore require special study. The real electromechanical state of the element at an electrical resistance R = 50 Ohms near the first resonance can be seen in Fig. 8.
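The near-linear growth of voltage and power with R at a fixed frequency can be reproduced with a common lumped-parameter approximation of a piezoelectric harvester, in which a current source proportional to the tip velocity feeds the piezo capacitance in parallel with the load resistor. The sketch below uses this simplified model with placeholder coupling and capacitance values; it is not the beam model of the paper.

```python
import numpy as np

def harvester_response(f, R, theta=1e-4, Cp=20e-9, w0=1e-3):
    """Voltage amplitude and mean power for harmonic tip motion.

    Assumed lumped model: the piezo layer acts as a current source
    i(t) = theta * dw/dt in parallel with its capacitance Cp and the
    load R, so V = theta * (j*omega*w0) / (j*omega*Cp + 1/R).
    theta [C/m] is the coupling, w0 [m] the tip displacement amplitude.
    """
    omega = 2.0 * np.pi * f
    V = theta * (1j * omega * w0) / (1j * omega * Cp + 1.0 / R)
    P = np.abs(V) ** 2 / (2.0 * R)   # mean power dissipated in the load
    return np.abs(V), P

# Voltage and power versus load resistance at a fixed drive frequency:
# while omega*R*Cp << 1, both grow almost linearly with R, as in Fig. 7.
for R in (1.0, 5.0, 10.0, 50.0):
    V, P = harvester_response(160.0, R)
    print(f"R = {R:5.1f} Ohm: |V| = {V * 1e3:7.3f} mV, P = {P * 1e6:8.4f} uW")
```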
To get the real parameters of the harvester at the resonance frequency, we take into account the dissipation of energy in the body through loss tangents. All material characteristics are considered complex with a small imaginary part, $\tilde{c}_{ij} = c_{ij}(1 + i\tan\delta_c)$ and so on, where we take, for example, $\tan\delta_c = 1\% = 0{,}01$. In the nonresonance range the potential difference and the deflections are not zero, but the power is small. At f = 115 Hz we have P = 0,5 mW and $\sigma_{\max}$ = 22 MPa, and both increase near the resonance.
Conclusion. The proposed approach makes it possible to calculate cantilever unimorph energy harvesters for which h/b ≤ 0,2 and b/L ≤ 0,2, since for such dimensions the hypothesis of flat sections and the linearity of the potential difference across the thickness of the element can be assumed, and the long-beam bending relations can be used. The generated voltage is proportional to the angle of rotation of the beam end and to the distance of the most remote fiber from the neutral axis. The denominator contains two terms, one of which is responsible for the electrical conductivity of the element and is proportional to the length, while the other is inversely proportional to the resistance and frequency. At low resistance and frequency the second term is much larger than the first, and the voltage is not high. As the resistance increases, the voltage and power of the element increase proportionally to R. At f = 115 Hz the power of the energy harvester is P = 0,5 mW, and in the vicinity of the resonances it increases. Between the first and second resonances the power is approximately 3,7 mW. On transition to the ultrasonic zone the power of the energy collector increases significantly. The cantilever harvester is thus a resonant device and works at defined frequencies. A detailed analysis of oscillations in the resonant mode should include the damping of oscillations in the material.
Keywords: cantilever energy harvester, passive layer, piezoceramic overlay, characteristic equation, amplitude function, forced oscillations, energy generation, energy harvesting, potential difference on the plates, power of the energy collector.
On the Two-Moment Approximation of the Discrete-Time GI/G/1 Queue with a Single Vacation
We consider a discrete-time GI/G/1 queue in which the server takes exactly one vacation each time the system becomes empty. The interarrival times of arriving customers, the service times, and the vacation times are all generic discrete random variables. We derive an exact transform-free expression for the stationary system size distribution through the modified supplementary variable technique. Utilizing the obtained results, we introduce a simple two-moment approximation for the system size distribution, from which approximations for the mean system size along with the system size distribution can be obtained. Finally, some numerical examples are given to validate the proposed approximation method.
Introduction
For several decades, discrete-time queues with various vacation policies have been receiving growing attention due to their applications in a variety of time-slotted digital communication systems and other synchronous systems. This is because discrete-time queues are more suitable for modeling systems whose basic operational units are bits, packets, and cells. On the other hand, server vacation models are characterized by utilizing the idle time of the server to do other work, such as maintenance, servicing secondary customers, machine repair, or just taking a break. Doshi [1] gives a large number of examples, as does Takagi [2].
Discrete-time queues in which the interarrival times, the service times, and the vacation times have general distributions are an interesting subject for which to develop practical analysis methods. Generally, however, the analysis of such a queue is notoriously difficult due to the limited information on these distributions. While several approximations have been proposed, they are often computationally demanding. Moreover, most approximation methods have been applied to continuous-time queues.
In this paper, we consider the discrete-time GI/G/1 queue in which the server takes exactly one vacation each time the system empties. Upon returning to the system after a vacation, the server begins to serve a customer if any has arrived during the vacation; if none has arrived, the server waits until a customer arrives. For this discrete-time GI/G/1/SV queue under an exhaustive first-in first-out service discipline, where SV stands for single vacation, we derive an exact transform-free expression for the steady state system size (or queue length) distribution through a modified supplementary variable technique (modified SVT). The first step of the modified SVT is to define a discrete-time Markov chain by including appropriate supplementary variables in the state vector. The second step is to construct the steady state system balance equations. The last step is to solve these equations by directly summing each equation after multiplying by a supplementary variate. The result thus obtained is the steady state system size distribution expressed not through probability generating functions but through conditional expectations; in other words, we derive an exact transform-free expression for the steady state system size distribution.
There are several studies on discrete-time queues with a single vacation. A discrete-time Geo/G/1/SV queue was considered by Takagi [3], who obtained the probability generating functions (PGFs) of the system size distribution at an arbitrary epoch and of the waiting time distribution using a stochastic decomposition property. For a discrete-time GI/Geo/1/GSV queue, where GSV stands for geometric single vacation, Chae et al. [4] derived the PGFs of the system size distribution at arrival epochs, at departure epochs, and at arbitrary epochs through an embedded Markov chain and a semi-Markov process; they also verified the decomposition structure of the PGF of the waiting time distribution. Fiems and Bruneel [5] considered a discrete-time GI/G/1 queue with modified multiple vacations, called timed vacations, in which vacations occur whenever the queue becomes empty or whenever a geometrically distributed timer expires. Using the PGF approach, a variety of performance measures were derived.
Addressing continuous-time queues, Kempa [6] analyzed a continuous-time GI/G/1 queue with batch arrivals of customers and a single exponential vacation, applying a technique of integral equations to obtain the Laplace transform (LT) of the joint distribution of three random variables: the first busy period, the first vacation period, and the number of customers served during the first busy period.
All of the aforementioned studies give transformed results, because the main analysis approaches are based on transformation techniques such as the PGF and the LT. The only study that obtains transform-free results is that of Chae et al. [7], who first proposed a modified SVT and applied it to a continuous-time GI/G/1/K queue with multiple vacations. However, to the best of our knowledge, studies of a discrete-time queue with general interarrival times, general service times, and general vacation times cannot be found. This motivates us to analyze the GI/G/1/SV queue in a discrete-time environment.
This paper is organized as follows. In Section 2, we introduce the modified SVT briefly and present the transform-free system size distribution for the discrete-time GI/G/1/SV queue. In Section 3, we propose a simple approximation, termed here a two-moment approximation, for the system size distribution. A two-moment approximation for a continuous-time queue has been reported in the literature [8,9], but there is no precedent for a discrete-time queue. In Section 4, numerical experiments are conducted to demonstrate that our approximations are remarkably simple yet provide fairly good performance.
The Steady State Queue Length Distribution of a GI/G/1 Queue with a Single Vacation
We adopt the late arrival system with delayed access (LAS-DA) model [3]. Let the time axis be marked by $t = 0, 1, 2, \ldots$. According to the LAS-DA model, a potential customer arrival takes place during the interval $(t^-, t)$ and a potential service completion occurs during the interval $(t, t^+)$, where $t^+$ and $t^-$ represent $\lim_{\Delta \to 0}(t + |\Delta|)$ and $\lim_{\Delta \to 0}(t - |\Delta|)$, respectively. The GI/G/1/SV queue is assumed to operate as follows. Suppose that a customer departs the system during the interval $(t, t^+)$, leaving behind no customers in the system. The single server then begins to take a single vacation at $t^+$. Suppose that the length of the vacation that begins at $t^+$ is equal to $v$, $v = 1, 2, \ldots$
, and that this vacation ends at $s$, where $s = t + v$. Upon returning to the system at $s$, the server will begin to serve a customer at $s^-$ if any has arrived during the interval $(t^+, s^-)$; if none has arrived, the server will wait until a customer arrives in the system, without taking another vacation. Interarrival times are independent and identically distributed (iid) discrete random variables (RVs) with distribution $\Pr\{A = i\} = a_i$, $i = 1, 2, \ldots$; vacation times are iid discrete RVs with distribution $\Pr\{V = i\} = v_i$, $i = 1, 2, \ldots$; and service times are iid discrete RVs with distribution $\Pr\{S = i\} = s_i$, $i = 1, 2, \ldots$. We assume that the interarrival, vacation, and service times are mutually independent. Let $N(t^-)$ denote the number of customers in the system at $t^-$, and let $p_n$ denote the probability that $n$ customers are in the system at an arbitrary time. Considering the mutually exclusive events that can occur during one slot, the system balance equations (3a)-(3c) for the discrete-time GI/G/1/SV queue are constructed. The left-hand sides of (3a), (3b), and (3c) represent the probabilities of the system state at $(t+1)^-$ in a steady state; the right-hand sides express them in terms of the probabilities of the system state at $t^-$, together with the probabilities of all potential queueing activities that can occur during $(t^-, t^+)$. Note that the boundary probabilities at state $(0, 0)$ have positive values, since an arrival and a departure can occur simultaneously (and, likewise, an arrival and a vacation termination) in the discrete-time setting.
In the modified SVT, we first sum each of (3a), (3b), and (3c) over all values of its supplementary variables; secondly, we multiply both sides of (3a), (3b), and (3c) by one of the supplementary variates plus one and sum over all indices; finally, we repeat this with the remaining supplementary variates for (3a) and (3c). Applying this procedure to our queueing model and simplifying the results (for a more detailed derivation, see Appendix A), we obtain relation (4). Clearly, since we do not consider situations where customers balk or abandon their service, the departure rate (output rate) of customers is identical to the arrival rate (input rate) $\lambda = 1/E[A]$. Since the system is equipped with a single server, the server utilization (the probability that the server is busy), denoted by $\rho$, is equal to $\rho = \lambda E[S]$. For the system to be stable, we assume that $\rho < 1$. Let the probabilities that an arriving customer sees $n$ customers when the server is on vacation and when the server is available be defined as in (5a)-(5c).
Remark 1. In (5a), the sum of the boundary-state probabilities can be interpreted as the rate (the expected frequency per unit time) at which an arriving customer sees $n$ customers while the server is on vacation. Since $\lambda$ is the expected number of arrivals per unit time, we have the concrete result that this rate equals $\lambda$ times the probability that an arriving customer sees $n$ customers when the server is on vacation. The remaining relations in (5a), (5b), and (5c) can be obtained in the same manner.
Let us remark that (15a), (15b), (15c), and (15d) involve the unknown conditional expectations of the supplementary variables. In general, these conditional expectations are not easy to compute, except in some special cases such as a Bernoulli arrival process, geometric service times, or geometric vacation times. However, the availability of such expressions provides a basic idea for developing approximations for various performance measures of practical interest. This is discussed in Sections 3 and 4.
The Two-Moment Approximations and Numerical Results
Making use of the exact results for the system size distribution given in Section 2, we introduce its two-moment approximation. From this, approximations of various mean performance measures, including the mean system size, the mean sojourn time, and the mean waiting time, can be carried out. Among these, we focus on the mean system size, which is of great practical importance. We employ the approximation scheme (17a)-(17d), in which $E[X^2]$ denotes the second moment of a discrete RV $X$ and the mean of the equilibrium distribution of $X$ is used in place of the corresponding conditional expectation.
Remark 6. In our setting, the remaining interarrival time of a customer at a service completion epoch and at a vacation termination epoch does not contain 0. In contrast, both the remaining service time and the remaining vacation time at a customer arrival epoch contain 0. Therefore, from the discrete-time inspection paradox, the equilibrium-distribution quantities can approximate the corresponding conditional expectations of the remaining interarrival, service, and vacation times.
Remark 7. Note that the approximation scheme in (17a)-(17d) is exact for Bernoulli arrivals, geometric service times, and geometric vacation times under the LAS-DA model, owing to the memoryless property of the geometric distribution. Therefore, our approximations lead to exact results for the discrete-time Geo/Geo/1/GSV queue. However, for queues with non-Bernoulli arrivals, general service times, and general vacation times, the conditional expectations in (17a)-(17d) cannot be easily calculated, because the interarrival, service, and vacation time distributions are not fully known. Hence, a quick and simple approximation is to replace all the conditional expectations with their unconditional counterparts as in (17a)-(17d). This approximation scheme works fairly well, and numerical examples are given in Section 4. A similar approximation scheme was used for the continuous-time GI/G/1/K queue by Kim and Chae [9] and for the continuous-time GI/G/c/K queue by Choi et al. [8].
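Since the paper validates the approximation against exact PGF-based results, a reader without those transforms can instead sanity-check a two-moment approximation by direct simulation. Below is a minimal slot-by-slot simulation of a discrete-time GI/G/1 queue that takes a single vacation each time it empties, estimating the time-average system size distribution; the samplers, run length, and within-slot event ordering (a simplification of the LAS-DA conventions) are illustrative.

```python
import math
import random
from collections import Counter

def geometric(p):
    """Sampler for Pr{X = k} = p * (1 - p)**(k - 1), k = 1, 2, ..."""
    def sample(rng):
        return 1 + int(math.log(rng.random()) / math.log(1.0 - p))
    return sample

def simulate_gi_g1_sv(sample_A, sample_S, sample_V, n_slots=10**6, seed=1):
    """Single-server queue with exactly one vacation after each emptying."""
    rng = random.Random(seed)
    hist = Counter()
    n = 0                          # customers in the system
    ta = sample_A(rng)             # slots until the next arrival
    mode, rs, rv = "vacation", 0, sample_V(rng)
    for _ in range(n_slots):
        hist[n] += 1
        ta -= 1
        if ta == 0:                # an arrival occurs in this slot
            n += 1
            ta = sample_A(rng)
            if mode == "idle":     # server was waiting after its vacation
                mode, rs = "busy", sample_S(rng)
        if mode == "vacation":
            rv -= 1
            if rv == 0:
                if n > 0:
                    mode, rs = "busy", sample_S(rng)
                else:
                    mode = "idle"  # wait for an arrival, no second vacation
        elif mode == "busy":
            rs -= 1
            if rs == 0:            # a service completion (departure)
                n -= 1
                if n > 0:
                    rs = sample_S(rng)
                else:
                    mode, rv = "vacation", sample_V(rng)
    total = sum(hist.values())
    return {k: v / total for k, v in sorted(hist.items())}

# Example: the Geo/Geo/1/GSV case, where the two-moment scheme is exact.
# pi = simulate_gi_g1_sv(geometric(0.2), geometric(0.5), geometric(0.25))
```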
Numerical Example
In this section, numerical examples are presented to evaluate the performance of our approximation. We apply the results obtained in Section 3 to queues with a variety of interarrival times, service times, and vacation times, but only a few representative cases are presented in Figures 1-3. In each figure, the horizontal axis represents the system size and the vertical axis its probability. In all cases, exact values are calculated by differentiating the PGFs of each system size distribution. MGeo2, NB, and Pois denote a mixture of two different geometric distributions, the negative binomial distribution, and the Poisson distribution, respectively. We compare our approximate results for the system size distribution with exact results for several Bernoulli arrival queues with a GSV in low (ρ = 0.25), moderate (ρ = 0.50), and high (ρ = 0.75) traffic, as presented in Figure 1. Extensive numerical investigation shows that our results are in good agreement with the exact results regardless of the traffic intensity. Figure 2 gives results for Bernoulli arrival queues with a general single vacation. Interestingly, our approximation functions well even though the vacation times do not follow geometric distributions.
The approximate and exact values of the system size probability for the non-Bernoulli arrival queues are depicted in Figure 3. In this case, however, the approximation can deteriorate. Thus, one should use our approximation method cautiously for non-Bernoulli arrival queues. Note that our approximations require only the first two moments of the interarrival times, service times, and vacation times; it is thus not essential to identify these distributions, and the first two moments alone lead to quick and simple approximate results. We anticipate that our two-moment approximation will be beneficial to practitioners who seek simple and quick practical answers to queueing systems with a single vacation and other systems.
Concluding Remarks
For a discrete-time GI/G/1/SV queue, this work presented exact transform-free expressions for the system size distribution. We then proposed a simple two-moment approximation of the system size distribution and the mean system size. It is worth noting that the modified SVT is basically the same as the conventional SVT except for the last step, in which the system equations are solved: we multiply by each supplementary variate plus one and then sum over all indices. As a result, we obtained simultaneous equations for the system size distribution expressed in terms of the conditional expectations of the supplementary variables. We believe that our approach will help readers better understand discrete-time queueing systems and gain new insight into their analysis.
The left-hand side of (A.1) is split into four terms, and the right-hand side is simplified accordingly. Applying the same procedure to the rest of (3a), (3b), and (3c), we obtain the corresponding relations. From (5c), substituting the boundary-probability sum into (B.1) completes the proof. Proofs of the other quantities in (11a), (11b), (11c), (11d), (11e), and (11f) follow by the same procedure.
SVM-Based Sea-Surface Small Target Detection: A False-Alarm-Rate-Controllable Approach
In this letter, we consider varying detection environments to address the problem of detecting small targets within sea clutter. We first extract three simple yet practically discriminative features from the returned signals in the time and frequency domains and then fuse them into a 3-D feature space. Based on the constructed space, we adopt and elegantly modify the support vector machine (SVM) to design a learning-based detector that enfolds the false alarm rate (FAR). Most importantly, our proposed detector can flexibly control the FAR by simply adjusting two introduced parameters, which makes it convenient to regulate the detector's sensitivity to the outliers incurred by sea spikes and to fairly evaluate the performance of different detection algorithms. Experimental results demonstrate that our proposed detector significantly improves the detection probability over several existing classical detectors in both low signal-to-clutter ratio (SCR) (up to 58%) and low FAR (up to 40%) cases.
I. INTRODUCTION
Accurate detection of small targets on the sea surface is an important problem in remote sensing and radar signal processing applications [1]. When detecting, however, the radar returns from small targets are severely obscured by the backscatter from the sea surface, which is referred to as sea clutter [1]. To identify small targets within sea clutter, a promising approach is to seek certain features of the returned signals that can depict the intrinsic differences between these two classes and then design a feature-based detector. However, the extracted features usually become ineffective when the detection environment changes, as the characteristics of sea clutter are highly dependent on the sea state and the radar's parameter configuration. Therefore, extracting robust features from the returned radar signals that adapt to varying environments is crucial for target detection.
There has been extensive work on designing potentially discriminative features for detecting small targets within sea clutter. In [2], the authors utilized a Doppler spectrum feature to describe the differences between sea clutter and target signals, where the detector's decision was made by simply comparing the feature's value with a predefined threshold. However, such a single-feature-based detector exploits only limited information from the returned signals, and its detection performance is thus likely to be affected by varying detection environments. Considering this, a potential solution for improving detection performance is to integrate more features to construct multi-dimensional feature spaces, since this provides additional information from the returned signals. Following this insight, Xu in [3] extracted two temporal fractal features to devise a 2-D convex-hull learning algorithm for detection. Further, Shui et al. in [4] introduced three features, i.e., the RAA, RPH, and RVE, to construct a 3-D feature space, under which the detection accuracy is improved in both high and low signal-to-clutter ratio (SCR) scenarios compared with several single-feature-based detectors.
(The authors are with the School of Electronic Information and Communications, Huazhong University of Science and Technology, Wuhan, 430074, P. R. China; e-mail: yuzhouli, xiepengcheng, zeshentang, taojiang@hust.edu.cn.)
Nevertheless, it should be noted that the detection performance in [3] and [4] is still poor in low SCR scenarios, e.g., lower than 57% when SCR = -2 dB. To further promote the robustness of the detectors, the following two ideas could be considered. First, seek more discriminative features. It has been observed that some features, such as the widely adopted amplitude, become ineffective in low SCR scenarios [5]. On the contrary, we find that concepts from other research fields can be used to define features that remain effective even in low SCR situations, e.g., the information entropy from communication theory. Second, establish more advanced detection frameworks. Several recent works have shown that machine-learning-based techniques exhibit excellent potential for target detection compared with conventional approaches [6]-[8]. One of their main advantages is that they can adaptively adjust the involved parameters and decision regions according to the collected radar returns, whereas these are usually predefined in existing popular frameworks, e.g., the constant false alarm rate (CFAR) detector [9]. In this way, learning-based detectors may be less sensitive to variations of the detection environment. In view of this, this letter is devoted to exploring discriminative features for feature space construction and to designing a learning-based detector for accurate small target detection. The main contributions of this work are as follows:
• We exploit concepts from other research fields to define three features, i.e., the temporal information entropy (TIE), the temporal Hurst exponent (THE), and the frequency peak to average ratio (FPAR), from the perspectives of the time and frequency domains. The three defined features are quite simple yet practically discriminative under varying detection environments, even in low SCR and false alarm rate (FAR) cases.
• We adopt and elegantly modify the support vector machine (SVM), a classical binary classifier, to design a learning-based detector. Significantly different from existing learning-based detectors, our proposed detector enfolds the FAR and can flexibly control it by simply tuning two introduced parameters. This makes it convenient to fairly evaluate the performance of different detection algorithms and to flexibly regulate the sensitivity of the detector to the outliers incurred by factors such as sea spikes, so as to meet the requirements of different applications.
• Experimental results show that, compared with several classical detectors, our proposed detector significantly improves the detection probability in both low SCR (up to 58%) and low FAR (up to 40%) cases.
II. FEATURE SPACE CONSTRUCTION
In this section, we adopt the Intelligent PIxel Processing X-band (IPIX) database, a widely used database for sea-surface small target detection, to extract features. The IPIX database contains a number of sea clutter datasets collected by the IPIX radar on the east coast of Canada in November 1993 [10]. In this database, each dataset is composed of 14 spatial range cells, and each cell has $2^{17}$ samples with a sampling rate of 1000 Hz. For each dataset, the cell with the target returns is labeled as the primary cell, the adjacent cells affected by the target are labeled as the secondary cells, and the remaining cells are clutter-only cells.
In addition, each dataset contains four kinds of data, referred to as the HH, VV, HV, and VH data, since the transmitter and receiver of the IPIX radar each have two channels with H and V polarizations. Throughout this letter, we use 10 datasets in the IPIX database, namely #54, #30, #31, #310, #311, #320, #40, #26, #280, and #17. For notational simplicity, we refer to the samples from the primary cell as target signals and to those from the clutter-only cells as sea clutter signals in the following. Based on the 10 selected datasets, this section extracts three simple yet practically discriminative features from the returned radar signals in the time and frequency domains and then uses them to construct a 3-D feature space.
A. Temporal Information Entropy
We first utilize the concept of information entropy from communication theory to define a feature in the time domain. Let $x = \{x_i, i = 1, 2, \cdots, N\}$ be a time sequence composed of the amplitudes of the returned signals. Divide the amplitude range covered by $x$ into $K$ ($K \in \mathbb{N}^+$) independent segments of equal length and let $N_k$ denote the number of elements falling into the $k$-th segment. Then, the probability that the amplitude of a returned signal falls into the $k$-th segment, denoted by $P(N_k)$, can be calculated as
$P(N_k) = N_k / N. \quad (1)$
Accordingly, the information entropy of such a time sequence, referred to as the temporal information entropy (TIE) in this letter, is expressed as
$\mathrm{TIE} = -\sum_{k=1}^{K} P(N_k)\log_2 P(N_k). \quad (2)$
To avoid invalid calculation, we set $P(N_k)\log_2 P(N_k) = 0$ when $P(N_k) = 0$. From this definition, the TIE reflects the temporal variation, or randomness, of the amplitudes of the returned signals. To yield more samples for evaluating the performance of the proposed features, we segment each cell's data of length $2^{17}$ into multiple small-scale signals of length $D$, given by
$x^{(m)} = \{x_{(m-1)d+1}, x_{(m-1)d+2}, \cdots, x_{(m-1)d+D}\}, \quad m = 1, 2, \cdots, \quad (3)$
where $d$ is a constant that tunes the overlapping length among adjacent vectors.
Figs. 1(a) and 2(a) exhibit the discriminability of the TIE on #54 under the HH mode through the histogram and scatter distribution, respectively. In both figures, $d$ and $D$ are set to 64 and 4096 (i.e., the observation time is 4096 ms), respectively. It can be seen that the TIE can indeed be used to distinguish target signals from sea clutter signals, as the TIEs of most target signals are larger than those of sea clutter signals. However, these two figures also show that effective detection cannot be achieved by adopting the TIE alone, as the target and sea clutter signals are highly entangled in some regions. This is because the sea clutter contains spiky pulses at high sea states or low radar grazing angles, which enlarge the TIEs.
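A direct implementation of the TIE and of the overlapping segmentation might look as follows. The window parameters d = 64 and D = 4096 match the values quoted above, while the number of amplitude segments K is an assumption, since its value is not restated here.

```python
import numpy as np

def tie(amplitudes, n_bins=64):
    """Temporal information entropy of an amplitude sequence (eq. (2))."""
    counts, _ = np.histogram(amplitudes, bins=n_bins)
    p = counts / counts.sum()
    p = p[p > 0]                      # drop empty bins: 0 * log(0) := 0
    return -np.sum(p * np.log2(p))

def sliding_windows(x, D=4096, d=64):
    """Overlapping windows of length D with starting points offset by d."""
    starts = range(0, len(x) - D + 1, d)
    return np.array([x[s:s + D] for s in starts])

# features = [tie(w) for w in sliding_windows(cell_amplitudes)]
```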
B. Temporal Hurst Exponent

From [7], the temporal Hurst exponent (THE), a widely-used feature characterizing the fractal property of sea clutter, presents satisfactory discriminability when distinguishing the target from sea clutter. Inspired by this, we adopt the THE as another feature in our feature space; its calculation procedure is described as follows. Firstly, divide x into L adjacent sub-periods with the same length τ = ⌊N/L⌋ and denote the amplitude set of the l-th (l = 1, 2, ..., L) sub-period by {x_{l,1}, x_{l,2}, ..., x_{l,τ}}. Secondly, compute the average amplitude and standard deviation of each sub-period, denoted by Ī_l and S_l for sub-period l, respectively. Let Y_l = {Y_{l,1}, Y_{l,2}, ..., Y_{l,τ}} denote the accumulated deviation set of sub-period l, where Y_{l,t} is calculated as

Y_{l,t} = Σ_{u=1}^{t} (x_{l,u} - Ī_l),

and let R_l denote the difference between the maximum and minimum values of Y_l, i.e., R_l = max(Y_{l,t}) - min(Y_{l,t}). Thirdly, calculate R_l/S_l for all l ∈ {1, 2, ..., L} for a given τ and denote their mean value by (R/S)_τ. From [11], (R/S)_τ shows the fractal feature over a certain range of time scales τ, e.g., from 0.1 s to 4 s for the IPIX datasets. In particular, (R/S)_τ is related to the THE, denoted by H, by the following equation

(R/S)_τ = c · τ^H,   (4)

where c is a constant independent of τ. Finally, to conveniently calculate H, the logarithm operation is taken on both sides of (4), yielding

log2 (R/S)_τ = log2 c + H log2(τ).   (5)

From (5), log2 (R/S)_τ is linearly dependent on log2(τ), and thus H can be readily obtained by first-order least-squares polynomial approximation. The THEs of the 14 range cells on #54 under the HH mode are plotted in Fig. 1(b), from which we can observe that the primary cell has a larger THE than the clutter-only cells. Furthermore, we combine the TIE and THE to construct a 2-D feature space in Fig. 2(b). Compared with the 1-D feature space (see Fig. 2(a)), the 2-D feature space exhibits better separability. Nevertheless, there are still some overlaps between the target and sea clutter signals. As a consequence, it is still necessary to extract additional features for small target detection, as described in the next subsection.

C. Frequency Peak to Average Ratio

To further enhance the discriminability of the feature space, we introduce a frequency-domain feature into it, inspired by the fact that it can embed additional spectral information of the returned signals that possibly cannot be reflected by the time-domain features (e.g., the TIE and THE). Interestingly, when conducting the Fourier transform on the received signals, we find that the spectrum difference between the target and sea clutter signals exhibits potential discriminability that can be used for detection, as the spectrum of the former mainly distributes over a fluctuant and rough surface while that of the latter concentrates more around a peak. To quantify this difference, we introduce the frequency peak to average ratio (FPAR) feature, defined as

FPAR = max_k |X(k)| / ( (1/N) Σ_{k=1}^{N} |X(k)| ),   (6)

where X(k) is the Fourier transform of the time sequence x, given by

X(k) = Σ_{n=1}^{N} x_n e^{-j(2π/N)nk}, k = 1, 2, ..., N.   (7)

Figs. 1(c) and 1(d) exhibit the FPAR of the target and sea clutter signals; the results validate that the simple FPAR is indeed effective, because the two histograms overlap only slightly. Furthermore, we combine the FPAR with the TIE and THE to construct a 3-D feature space, and examine its discriminability through the scatter distribution on #54 in Fig. 2(c). Compared with the 2-D feature space (see Fig. 2(b)), the 3-D feature space becomes more prominently separable. However, it is worthwhile to note that some datasets are possibly linearly non-separable in our constructed 3-D feature space, e.g., #30 (see Fig. 2(d)), which indicates that extracting more features does not always result in better separability. Hence, this uncertainty of linear separability should be considered when designing the learning-based detector based on these features, the details of which are described in the next section.
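The THE and FPAR computations described above admit an equally short sketch. The set of time scales below (powers of two between 128 and 4096 samples, roughly the 0.1 s to 4 s range at the 1000 Hz sampling rate) is our choice and an assumption, and the FPAR expression follows definition (6) as reconstructed above.

```python
# A sketch of the THE (rescaled-range analysis) and FPAR features.
import numpy as np

def temporal_hurst_exponent(x, taus=(128, 256, 512, 1024, 2048, 4096)):
    """Estimate H from the rescaled-range law (R/S)_tau = c * tau**H in (4)."""
    log_rs = []
    for tau in taus:
        L = len(x) // tau
        rs = []
        for l in range(L):
            seg = x[l * tau:(l + 1) * tau]
            dev = np.cumsum(seg - seg.mean())   # accumulated deviations Y_{l,t}
            R = dev.max() - dev.min()           # range R_l
            S = seg.std()                       # standard deviation S_l
            if S > 0:
                rs.append(R / S)
        log_rs.append(np.log2(np.mean(rs)))
    # First-order least-squares fit of log2(R/S) against log2(tau); slope = H.
    H, _ = np.polyfit(np.log2(taus), log_rs, 1)
    return H

def frequency_peak_to_average_ratio(x):
    """Ratio of the spectral peak to the mean spectral magnitude, as in (6)."""
    mag = np.abs(np.fft.fft(x))
    return mag.max() / mag.mean()
```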
III. FALSE-ALARM-RATE-CONTROLLABLE SUPPORT VECTOR MACHINE BASED DETECTOR

Back to the detection problem itself, identifying an object from sea clutter can naturally be regarded as a classification problem. Based on this fact, this section adopts and elegantly modifies the SVM, a classical and widely-used learning-based binary classifier, to design a detector. Although SVM-based detectors have been utilized in some existing works to distinguish targets from sea clutter [6]-[8], almost all of them applied the SVM directly and did not consider the FAR therein. However, making the FAR controllable makes it convenient to regulate the detector's sensitivity to the outliers incurred by factors such as sea spikes, and also facilitates the evaluation of different detection algorithms. It is thus interesting to design a FAR-controllable SVM-based detector for identifying small targets within sea clutter. For a sample i in the training dataset, we construct a 3-D feature vector F_i = [TIE_i, THE_i, FPAR_i]^T. To handle the possible linear non-separability noted in Section II, the feature vectors are mapped by a kernel function into a high-dimensional space where the originally linearly non-separable dataset is shifted to a linearly separable one. In this letter, we take the radial basis function (RBF) as the kernel function, a prominent choice in SVM-based detectors, defined as

K(F_i, F_j) = exp(-‖F_i - F_j‖² / (2σ²)).

After mapping, the next step is to find the hyperplane, i.e., ω^T F - b = 0, that separates the target and sea clutter data in the mapped, linearly separable high-dimensional feature space according to the max-margin principle. To determine ω and b, the original SVM, referred to as the β-SVM in this letter, solves the following quadratic program

min_{ω,b,ξ} (1/2)‖ω‖² + β Σ_i ξ_i
s.t. y_i(ω^T F_i - b) ≥ 1 - ξ_i, ξ_i ≥ 0, ∀i,   (8)

where ξ_i is the slack variable and β is the penalty parameter used to balance the maximization of the margin and the minimization of the error. Observe that the sea clutter and target signals share the same β in the β-SVM, which implies an assumption that the two classes have the same degree of tolerance to the outliers incurred by factors such as sea spikes. However, this assumption is possibly not reasonable in practice because the impacts of the outliers on the target and sea clutter signals are usually different. To deal with this problem, we elegantly modify the β-SVM into an alternative yet mathematically equivalent version, referred to as the FAR-controllable SVM (P_f-SVM) in this letter. Specifically, in the P_f-SVM, we introduce two penalty parameters, β_0 and β_1, for the sea clutter and the target signals, respectively, replacing β in (8), to control their individual error weights in the quadratic program. By this, problem (8) is recast as

min_{ω,b,ξ} (1/2)‖ω‖² + β_0 Σ_{i: clutter} ξ_i + β_1 Σ_{i: target} ξ_i
s.t. y_i(ω^T F_i - b) ≥ 1 - ξ_i, ξ_i ≥ 0, ∀i.   (9)

From (9), increasing β_0 reduces the FAR for a given β_1, as the obtained hyperplane then tilts toward the target signals and thus fewer sea clutter signals are misclassified. On the other hand, enlarging β_1 increases the FAR for a given β_0, as the hyperplane becomes more partial to the sea clutter signals in this case. Therefore, the modification exploited here not only enfolds the FAR into the SVM-based detector but also makes it possible to flexibly control the FAR by simply adjusting β_0 and β_1. In what follows, according to the theory of the SVM, problem (9) can be solved by the sequential minimal optimization (SMO) algorithm in the dual domain [12].

Algorithm 1 FAR-Controllable SVM-Based Detector (excerpt).
4: Determine the class of the training data by (10).
5: Calculate the FAR, defined as P_F = (number of misclassified sea clutter samples) / (total number of sea clutter samples in the training dataset) × 100%.
6: if P_F = P_f then
7:   Break.
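A sketch of the P_f-SVM training step is given below. It leans on scikit-learn's SVC, whose per-class class_weight values multiply the penalty C for each class and thus play the role of β_0 and β_1 in (9); the underlying libsvm solver is SMO-based, consistent with the text. The feature standardisation is our own addition, not part of the letter.

```python
# A hedged sketch of P_f-SVM training: class_weight={0: beta0, 1: beta1}
# gives the sea clutter (label 0) and target (label 1) classes separate
# error weights, mirroring problem (9). The RBF kernel matches the letter.
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler

def train_pf_svm(F_train, y_train, beta0, beta1, gamma="scale"):
    """F_train: (n, 3) array of [TIE, THE, FPAR] vectors; y_train in {0, 1}."""
    scaler = StandardScaler().fit(F_train)
    clf = SVC(kernel="rbf", gamma=gamma, C=1.0,
              class_weight={0: beta0, 1: beta1})   # separate penalties
    clf.fit(scaler.transform(F_train), y_train)
    return scaler, clf

def false_alarm_rate(clf, scaler, F_clutter):
    """Fraction (not percentage) of clutter-only samples classified as target."""
    pred = clf.predict(scaler.transform(F_clutter))
    return float(pred.mean())
```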
With the obtained hyperplane, i.e., ω^T F - b = 0, the class of an incoming test data point F_j can be decided according to the following principle: F_j is declared a target if ω^T F_j - b ≥ 0, and sea clutter otherwise. (10)

Based on the above discussion, the detailed procedure of our proposed detector is summarized in Algorithm 1, in which β_h and β_l denote the upper and lower bounds of β_0, respectively. The algorithm runs in two stages. In the first stage (Lines 3-5), obtain the hyperplane with the given parameters; then use this hyperplane to classify the training data and calculate the actual FAR P_F. In the second stage (Lines 6-17), adopt the bisection method to adjust β_0 by comparing P_F with the user-defined FAR P_f. These two stages are executed iteratively until the difference between P_F and P_f is lower than the predefined threshold η.

IV. EXPERIMENTAL RESULTS

In this section, we use the 10 datasets mentioned in Section II to evaluate the performance of our proposed detector. Considering that sufficient signal samples are needed to train the learning-based detector, the overlapped segmentation is adopted under the partition rule presented in (3), with the parameters set to d = 64 and D = 4096, respectively. By this, we obtain 1984 target samples and more than 20000 sea clutter samples for each dataset. Then, we divide the obtained samples into two groups: one for training, composed of half of the target samples and all the clutter-only samples, and the other for testing, composed of the rest of the target samples. To verify whether our proposed detector can flexibly tune the FAR, we test its performance on #17 under the VV mode. Fig. 3 illustrates how the two introduced penalty parameters β_0 and β_1 impact the FAR. From the figure, we observe that a higher β_0 corresponds to a lower FAR for a given target penalty parameter β_1, and a higher β_1 results in a larger FAR for a given β_0. Hence, our proposed detector can flexibly control the FAR by simply adjusting β_0 and β_1. Furthermore, to evaluate the performance of our proposed detector under varying detection environments, we compare it with two classical detectors, the tri-feature detector [4] and the fractal-based detector [11]. Firstly, we compare their detection performance under different SCR situations in Table I.

TABLE I. Detection probability P_d (%) under the HH mode.
Method                        SCR = -2 dB    SCR = 17 dB
Proposed detector                 76             99
Tri-feature detector [4]          57             99
Fractal-based detector [11]       18             79

It can be observed that our proposed detector attains better detection performance than the other two in both the high and low SCR cases. For example, our proposed detector improves the detection probability by 58% and 19% compared with the fractal-based and tri-feature detectors, respectively, in the case of SCR = -2 dB. Secondly, we compare their detection performance at different FARs in Fig. 4, where the detection probability is obtained by first calculating the detection probabilities of all the datasets and then averaging them. It can be seen that, although the detection probabilities of the three detectors all increase with the FAR, our proposed detector always achieves better detection performance than the other two in both high and low FAR cases. For instance, our proposed detector improves the detection probability by 16% and 40% compared with the tri-feature detector and the fractal-based detector under the HH mode, respectively, when the FAR is 0.001.
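The FAR control just verified corresponds to the second stage of Algorithm 1 (Lines 6-17). A sketch of that bisection loop, building on the hypothetical train_pf_svm and false_alarm_rate helpers above, could look as follows; the bounds β_l and β_h, the tolerance η and the iteration cap are user-set values, not taken from the letter.

```python
# A sketch of Algorithm 1's bisection stage: adjust beta0 within
# [beta_l, beta_h] until the training FAR P_F is within eta of the
# user-defined target P_f. Increasing beta0 lowers P_F, per (9).
def calibrate_far(F_train, y_train, F_clutter, P_f,
                  beta1=1.0, beta_l=0.1, beta_h=100.0, eta=1e-3, max_iter=30):
    for _ in range(max_iter):
        beta0 = 0.5 * (beta_l + beta_h)
        scaler, clf = train_pf_svm(F_train, y_train, beta0, beta1)
        P_F = false_alarm_rate(clf, scaler, F_clutter)
        if abs(P_F - P_f) < eta:
            break
        if P_F > P_f:        # too many false alarms: penalise clutter errors more
            beta_l = beta0
        else:
            beta_h = beta0
    return scaler, clf, beta0, P_F
```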
[Figure 4: detection probability versus FAR (from 10^-3 to 10^-1) for the proposed detector, the tri-feature detector [4], and the fractal-based detector [11], shown in three panels.]

V. CONCLUSIONS

Taking the varying detection environments into account, this letter has investigated the problem of detecting small targets floating on the sea surface. To this end, we have first extracted three discriminative features and then designed an SVM-based detector that can flexibly tune the FAR. Experimental results have verified the superiority of our proposed detector over several existing detectors in both low SCR and low FAR cases.
Self-consistent autocorrelation for finite-area bias correction in roughness measurement

Scan line levelling, a ubiquitous and often necessary step in AFM data processing, can cause a severe bias in measured roughness parameters such as the mean square roughness or correlation length. Although bias estimates have been formulated, they aimed mainly at assessing the severity of the problem for individual measurements. Practical bias correction methods are still missing. This work exploits the observation that the bias of the autocorrelation function (ACF) can be expressed in terms of the function itself, permitting a self-consistent formulation. From this, two correction approaches are developed, both aiming to obtain convenient formulae which can be easily applied in practice. The first modifies standard analytical models of the ACF to incorporate, in expectation, the bias and thus actually match the data the models are used to fit. The second inverts the relation between the true and estimated ACF to realise a model-free correction. Both are tested using simulated and experimental data and found effective, reducing the total error of roughness parameters several times in typical cases.

Introduction

Recently, a couple of works drew attention to how roughness measurements by atomic force microscopy (AFM) are impacted by levelling/background subtraction [1,2], in particular line levelling, a ubiquitous and often necessary step in AFM data processing [3-6]. The classic results for the effect of mean value subtraction on statistical quantities [7-9] were generalised in a theoretical framework covering many common levelling methods. The mean square roughness σ, as well as many other quantities, becomes biased. For 1D data and 1D scan line levelling the bias of the estimate σ̂ can be written (in expectation)

E[σ̂²] = σ² − ∫₀^L G(u) C(u) du.   (1)

Function G is the true autocorrelation function (ACF) of the roughness and C a complicated function capturing the correlation/spectral properties of the specific levelling method. The second term expresses the measurement bias, which can often be severe [1,2,8]. Explicit expressions are known for several common levelling methods and autocorrelation function forms [1]. It should be noted that it is more correct to call G the autocovariance function and reserve the term autocorrelation for the function normalised to variance, but both are commonly used. The bias problem is not unique to AFM and profilometry data levelling. Similar problems occur for autocovariance function estimation from locally smoothed (detrended) data [10,11]. Ultimately, the bias and variance depend on the ratio α = T/L of the correlation length T and scan line length L.
The bias further increases with the 'aggressivity' of the levelling procedure [1,8]. The ratio α must be kept small for reliable results. If scan line levelling and similar 1D corrections are applied to images, the error is proportional to α (not α², as one might assume for 2D image data), which can be difficult to keep sufficiently small. Even when scan lines are not levelled explicitly, the computation of the 1D ACF imposes the condition of zero mean value on image rows, corresponding to degree-0 polynomial levelling. The length L is, sadly, also often not set deliberately but instead to what 'feels right' [2]. This then translates to an α which is way too large, sometimes far beyond instrumental constraints. Reported results are then unnecessarily skewed. Bias estimation procedures have been proposed, either simple and coarse [2] or more detailed [1], allowing one to check whether the bias is within reasonable bounds. The simplest (and coarsest) estimate of the relative bias of σ is −nα, where n is the number of terms in the scan line levelling polynomial, usually equal to its degree plus one. Unfortunately, all the estimates suffer from a chicken and egg problem. They require knowing the correlation length T, or even the form of the ACF, which are not known a priori. They should be the outputs of our measurement. Therefore, they must be estimated from experimental data, and these estimates are again biased. The experimental T (denoted T̂) is underestimated because the entire ACF is affected in a similar manner as σ², as illustrated in figure 1. Consequently, although such estimates can help with judging the bias for a particular measurement or guide towards a better choice of scanning parameters, they cannot be applied in a logically consistent manner. They are thus of limited use for actual correction of the biased results. Clearly, the problem is not yet satisfactorily solved. In order to deal with the bias pervading all the roughness parameters we need a self-consistent method which does not require a priori knowledge of the result. It should also be convenient to use, so that it can have practical impact and allow wide adoption. Here we aim to provide this missing piece. The overall plan is fairly straightforward. We begin from the observation that the value of the ACF at zero is σ², that is σ² = G(0). Formula (1) can thus also be written

E[σ̂²] = G(0) − ∫₀^L G(u) C(u) du.   (2)

The second (bias) term is linear in G. Suppose an expression of the same form could be obtained for G as a whole (we show later that it is indeed the case)

E[Ĝ(τ)] = G(τ) − (RG)(τ).   (3)

Figure 1. The effect of scan line polynomial levelling using polynomials of various degrees on the estimated ACF. The beginning of the curve (small distances) is biased, whereas for larger distances the ACF estimate is not converged and exhibits oscillations [8].
Here R is a linear operator expressing the bias, now of the entire ACF. It again captures the properties of the specific levelling procedure. Expression (3) ties together, self-consistently, the true and estimated ACF. We can say that the ACF knows about its own bias. The relation can be formally inverted, yielding the unbiased G from the biased estimate Ĝ (in expectation),

G(τ) = [(1 − R)⁻¹ Ĝ](τ).   (4)

This is the adventurous option: it is not immediately obvious that such an inversion would be numerically feasible. The conservative option is to employ expression (3) directly. Assume, for instance, that the roughness is Gaussian. The true ACF then has the form

G_Gauss(τ) = σ² exp(−τ²/T²).   (5)

Conventionally, we fit the experimental ACF Ĝ(τ) with the ideal model G_Gauss(τ), with σ and T as free parameters. But it is clearly the wrong model. It does not describe the experimental ACF, which never conforms to the theoretical form. The correct model is

G_Gauss^bias(τ) = G_Gauss(τ) − (R G_Gauss)(τ)   (6)

and can be obtained by applying R to G_Gauss(τ). The questions are what is the form of the operator R, whether R and (1 − R)⁻¹ can be reasonably evaluated, and how well the bias correction works in practice. They are answered in the following sections. The general expression for R is derived in section 2, which the reader can skip on first reading. Section 3 provides elementary formulae and procedures for practical bias correction, and section 4 tests their effectiveness using simulations and real AFM data.

Bias of ACF after levelling

The calculation of R follows the general scheme and notation introduced in Ref. 1 (sections 3.1 and 3.3), including treating the data as continuous functions. Since scan line levelling is the dominant source of bias even for image data [2], we consider the 1D case. Denote by φ_j the orthonormal basis functions used for background subtraction by linear fitting, with j distinguishing the functions. If the φ_j are polynomials then j ∈ {0, 1, 2, ..., n − 1} is their degree, but the index may not be a simple integer in other cases. Summations over j are, therefore, written below only formally. Levelled data are computed by subtracting the projection onto the span of the φ_j, with coefficients a_j equal to the dot products

a_j = ∫₀^L z(x) φ_j(x) dx,   (7)

so that the levelled profile is z̃(x) = z(x) − Σ_j a_j φ_j(x). (8)

The ACF is estimated as

Ĝ(τ) = (1/(L − τ)) ∫₀^{L−τ} z̃(x) z̃(x + τ) dx.   (9)

Substituting expressions (7) and (8) into (9) gives for Ĝ(τ)

Ĝ(τ) = (1/(L − τ)) ∫₀^{L−τ} [z(x) − Σ_j a_j φ_j(x)] [z(x + τ) − Σ_j a_j φ_j(x + τ)] dx,   (10)

which can be expanded into four terms corresponding to the combinations of z and φ,

Ĝ(τ) = Ĝ₁(τ) − Ĝ₂(τ) − Ĝ₃(τ) + Ĝ₄(τ),   (11)

where Ĝ₁ combines z with z, the cross terms Ĝ₂ and Ĝ₃ combine z with the φ_j, and Ĝ₄ combines the φ_j among themselves. Taking expectations,

E[Ĝ(τ)] = E[Ĝ₁(τ)] − E[Ĝ₂(τ)] − E[Ĝ₃(τ)] + E[Ĝ₄(τ)],   (12)

where we utilised the linearity of expectation and that for any a and b

E[z(a) z(b)] = G(b − a).   (13)

In a similar manner as in formula (1) for the bias of σ², one term (here E[Ĝ₁]) gives the unbiased G(τ) and the remaining terms combine to give the bias RG(τ).

Linear operator R

In principle, formulae (12) can already be considered a representation of the operator R. However, it is more natural (and useful) to write it explicitly as an integral operator with kernel R(τ, u),

(RG)(τ) = ∫₀^L R(τ, u) G(u) du.   (14)

This means that in E[Ĝ₂(τ)] we must set u = x′ − x + τ and transform the domain of integration (which splits the integral into three). The symmetry of G was utilised to ensure that its argument is always positive and thus from the interval [0, L]. Functions c_j again express the correlation properties of the φ_j, in analogy to Ref. 1.
However, as the various integrals are over different subintervals of [0, L], they are more complicated here; their definitions constitute equation (16). Finally, in order to transform the expression into the form (14), we replace the integration limits for u using the indicator function. The term in square brackets in the resulting expression is one piece of R(τ, u) in the form required by (14), the one corresponding to Ĝ₂. The second piece, corresponding to Ĝ₃, is obtained using the same steps. The last piece contains integrals combining φ_j and φ_k for j ≠ k that cannot be expressed using (16). If we define analogous functions for these mixed products, it too can be written in the required form. Combining the pieces gives the final expression (21) for R(τ, u).

Polynomial levelling

A polynomial basis φ_j has symmetries which can be used to simplify R(τ, u) somewhat. We first note that the expression is not unwieldy because we failed to express it more elegantly. The operator is inherently complicated, with a number of discontinuities in the derivative. Even for mean value subtraction, when the single basis function φ₀ is a constant, we get the kernel illustrated in figure 2. Although only function values for small τ and u are important, and some of the discontinuities do not affect expansions for small τ and u, R(τ, u) is not totally differentiable at (0, 0). A small-τ approximation of the entire integral in (14) is possible only because the integral is a smoother function than R(τ, u) itself. For general polynomials, we note that the Legendre polynomials P_n(x) on the interval [−1, 1] are either even or odd, P_n(−x) = (−1)ⁿ P_n(x). For the orthonormal basis functions on [0, L] this translates to

φ_j(L − x) = (−1)^j φ_j(x).

From this the behaviour of the c functions under the substitution u → L − u follows easily. Terms with j + k odd can be omitted as they mutually cancel, and for j + k even only terms with j < k need be kept, multiplied by 2. Together with the relation c_{j,j,b} = c_{j,[0,b]}, permitting the rewriting of terms with j = k, these rules eliminate most of the terms in the second summation in (21). In fact, for degree 1 no such term remains. A similar simplification is possible for other bases formed by even and odd functions φ_j, for instance sines and cosines, although the indexing by j may differ (and sines and cosines are more natural to handle in the frequency domain). However, the small-(τ, u) expansion for a specific basis is still tedious and better evaluated using symbolic algebra software. Maxima [12] was used to obtain the practical formulae summarised in the following section. The expansions were terminated at the α² terms. The first reason is that preliminary numerical experiments showed that the leading α¹ term is not always sufficient and that without the second term there is a tendency to overcorrection. The general form of the σ² bias for polynomial levelling contains only even-power terms after α² (equation (27) in Ref. 1). Therefore, there is no third order term in the expansion and higher powers are negligible. Finally, the low smoothness of R at zero means that more accurate expansions would not, in general, be Taylor-like and would have to include more complicated ACF-specific terms. For this reason it is advantageous to express analytical models of the ACF in terms of α = T/L and s = τ/T, as this makes them smoother functions. In model-free inversion there is no T; therefore, the formulation has to be done in terms of t = τ/L instead of s. For a polynomial with n terms (degree n − 1), the expansion up to the second order in α is given by (26), with the functionals appearing in it defined by the integrals (27) and (28). The expressions in the following section are obtained by evaluating (26) for particular G.
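The parity relation for the orthonormal basis functions can be verified mechanically with symbolic algebra (the paper used Maxima; the snippet below uses SymPy instead, and is only a sanity check, not a reproduction of the actual expansion).

```python
# Verify phi_j(L - x) = (-1)**j phi_j(x) for orthonormal shifted
# Legendre polynomials on [0, L]; phi_j is our explicit construction,
# normalised so that the L2 norm on [0, L] is 1.
import sympy as sp

x, L = sp.symbols("x L", positive=True)

def phi(j):
    # Legendre P_j mapped from [-1, 1] to [0, L]; int P_j^2 dt = 2/(2j+1)
    # gives the normalisation factor sqrt((2j+1)/L) after the change of variable.
    return sp.sqrt((2 * j + 1) / L) * sp.legendre(j, 2 * x / L - 1)

for j in range(4):
    assert sp.expand(phi(j).subs(x, L - x) - (-1) ** j * phi(j)) == 0
```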
Corrected models

The (biased) discrete ACF is estimated from the data values z_i as

Ĝ_k = (1/(N − k)) Σ_{i=1}^{N−k} z_i z_{i+k},   (29)

where τ = kΔx if Δx is the sampling step. It is fitted by an ACF model function. Simple models have only two free parameters, σ and T. The classic Gaussian ACF model (5) and the analogous exponential model

G_exp(τ) = σ² exp(−τ/T)   (30)

are replaced with the leading terms of (G − RG)(τ) expanded for small α and τ. In particular, the Gaussian model is replaced with the corrected model (31) and the exponential model with (32), where s = τ/T, α = T/L and erf denotes the error function (the antiderivative of the Gaussian). If the evaluation of special functions is not possible or desirable, erf can be replaced, for instance, by a Padé-style approximation, as it only occurs in the second order term. The intermediate Gaussian-exponential ACF model [1,8,13] is replaced with (33), where Γ denotes the gamma function and γ the lower incomplete gamma function. The superscript 'bias' indicates that the models are bias-corrected, i.e. take into account the bias of the data (29) they are used to fit. Models (31) and (32) should be fitted from zero to approximately the first zero crossing, i.e. up to the first k for which Ĝ_k < 0. The biased models do not have any additional free parameters. Nevertheless, they contain two additional inputs: the profile or scan line length L, and the number of terms n of the line levelling polynomial (which is one plus its degree). The full profile length must be entered as L, not the length of the ACF data, which are often cut to a shorter interval of τ.

Model-free inversion

For an unknown, but quickly decaying ACF, formulae (26)-(28) can be evaluated using the discrete values of the estimated Ĝ_k, leading to expressions of the form

Ĝ_k = Σ_{m=0}^{K−1} A_{k,m} G^c_m,   (35)

where G^c_m with m = 0, 1, 2, ..., K − 1 are the correct ACF values and the matrix A is given explicitly by (36). Although (35) can be read as expressing the measured Ĝ_k using the true ACF G^c_m, we interpret it as a set of K linear equations for the corrected ACF G^c_m, with A being the matrix of the system. The number K is the cut-off after which the function is assumed to be negligible or the data not usable, i.e. again around the first zero crossing. The symbol χ(j < m) appearing in (36) is 1 when j < m and 0 otherwise. Matrix A is the sum of four simple matrices: the identity matrix, two rank-1 matrices and a lower triangular matrix (in this order). The most efficient solution may be to first solve a system retaining only the first and last terms, as the corresponding matrix A′ is lower triangular and thus the equations are solved by back substitution. The Sherman-Morrison or Woodbury formula [14,15] is then used to perform low-rank updates of the solution to include the two rank-1 terms. However, the numerical stability of the update formulae is not well understood. Furthermore, the full system is well-conditioned and only moderately sized. Therefore, it can easily be solved using any standard linear algebra routine.

Simulated data: Gaussian ACF

We first compare the performance of the standard and biased Gaussian roughness models (5) and (31) using simulated data. Synthetic rough Gaussian surfaces were generated using the spectral method with T = 20 px. The correlation length is in the typical range for real AFM images, regardless of the physical dimensions of the scanned area. The mean square roughness σ was set to 1, as it is only a scaling parameter. The image size varied from 100 px to 2000 px, corresponding to α from 0.01 to 0.2 (in reverse order). The discrete ACF (29) was evaluated using the standard Fast Fourier Transform method, after levelling image rows using polynomials with degrees from 0 to 2.
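To make the evaluation route concrete, the following sketch estimates the discrete ACF (29) by FFT (with the unbiased 1/(N − k) normalisation and zero-mean, i.e. degree-0, levelling) and fits the plain Gaussian model (5) up to the first zero crossing. The bias-corrected model (31) itself is not reproduced here; as a crude stand-in, the snippet applies the coarse first-order −nα rule of thumb mentioned in the introduction, using the (itself biased) fitted T, which is exactly the chicken-and-egg limitation discussed there.

```python
# A minimal sketch of ACF estimation (29) plus Gaussian model fitting.
import numpy as np
from scipy.optimize import curve_fit

def acf_estimate(z):
    """G_hat_k = (1/(N-k)) * sum_i z_i z_{i+k}, computed via FFT."""
    z = z - z.mean()                        # degree-0 levelling (zero mean)
    N = len(z)
    f = np.fft.rfft(z, n=2 * N)             # zero-pad to avoid circular wrap
    raw = np.fft.irfft(f * np.conj(f))[:N]  # sum_i z_i z_{i+k} for k = 0..N-1
    return raw / (N - np.arange(N))

def fit_gaussian_acf(G, dx=1.0):
    K = int(np.argmax(G < 0)) or len(G)     # cut at the first zero crossing
    tau = dx * np.arange(K)
    model = lambda t, s2, T: s2 * np.exp(-(t / T) ** 2)   # model (5)
    (s2, T), _ = curve_fit(model, tau, G[:K], p0=(G[0], K * dx / 3))
    return np.sqrt(s2), T

z = np.random.randn(4096)                   # placeholder profile
sigma_hat, T_hat = fit_gaussian_acf(acf_estimate(z))
n = 1                                       # terms in the levelling polynomial
alpha = T_hat / len(z)                      # uses the biased T_hat estimate
sigma_coarse = sigma_hat / (1 - n * alpha)  # coarse -n*alpha correction of sigma
```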
The polynomial levelling was, of course, not actually necessary here because the simulated data were ideal and had neither tilt nor bow. It simulated the effect of preprocessing that would be applied to measured data. Tilt or bow could be added beforehand, but it would be pointless: the levelling would subtract them again, together with a part of the roughness, which is exactly the effect we are studying. The Marquardt-Levenberg algorithm was used for the non-linear least squares fitting to obtain σ and T. Both models were fitted to the data up to the first zero crossing. The entire procedure was repeated with randomly generated Gaussian surfaces hundreds of times (with more repetitions for smaller images, for which the variances are larger). The means and standard deviations are plotted in figure 3. The biased model (31) clearly succeeded at bias reduction. For both parameters and almost all image sizes the bias becomes so small that it is no longer an issue. The only exception is very small images which are only several correlation lengths large (α ≳ 1/10). Although the bias usually still decreases, it is at the cost of considerably increased variance. Too much roughness information is missing in such small areas. Using them for roughness evaluation is simply wrong and the correction cannot change that. The correction generally trades bias for variance, i.e. the parameters have larger variances than for the standard model. For reasonable T/L the trade-off is advantageous. The total error (variance + bias²)^{1/2} decreases, as illustrated in the bottom row of figure 3. The improvement is more marked for σ, where it can be an order of magnitude, whereas for T it ranges from about 2× to 5×. The improvement is larger for higher polynomial degrees. This is because the bias is larger in absolute value, but of the same functional form; hence, the same correction is able to deal with a larger bias. Full circles in figure 3 correspond to the worst case scenario of a single-image roughness measurement. Multiple scans reduce the variance, the dominant contribution for the improved model, but do not help with the bias, the dominant contribution for the standard one. This is illustrated in the plot of total error reduction for five-image evaluation (open circles).

Simulated data: inversion

For model-free correction (inversion), random pyramidal surfaces with an unknown ACF were generated using the Gwyddion [16] Objects function, which generates surfaces by sequential 'extrusion' [17]. The pyramids were randomly oriented and the pattern was large-scale isotropic. The generated images were 8000 × 8000 pixels, corresponding to approximately 700 correlation lengths (α ≈ 0.0015). A small (512 × 512) part of one such image is shown in figure 4. Smaller images of various sizes were again cut from the large base image and used to estimate the ACF. The corrected ACF was computed by cutting Ĝ_k slightly beyond the first zero crossing (10% farther) and solving the linear system (35) as described in section 3.2. Roughness parameters σ and T were again evaluated from both the biased and corrected ACF. In particular, σ was calculated from the relation σ² = G₀ and T as the distance at which the ACF first falls to G₀/e (e being Euler's number). The 1/e ≈ 0.368 threshold is consistent with the analytic models (5) and (30), although it should be noted that roughness measurement standards often set the threshold differently, 0.2 being a common choice [18].
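Once the matrix A of system (35) has been assembled from (36) (its entries are not reproduced in this excerpt), applying the model-free correction and reading off the roughness parameters reduces to a few lines. The linear interpolation used below for the 1/e crossing is our own choice of sub-sample refinement.

```python
# A sketch of the inversion step and parameter extraction; it assumes the
# ACF decays below G_0/e within the data range, as described above.
import numpy as np

def correct_acf(G_hat, A):
    """Solve the self-consistent K x K linear system for the corrected ACF."""
    return np.linalg.solve(A, G_hat)

def sigma_and_T(G, dx=1.0):
    sigma = np.sqrt(G[0])                  # sigma**2 = G_0
    thresh = G[0] / np.e                   # the 1/e ~ 0.368 threshold
    k = int(np.argmax(G < thresh))         # first sample below G_0 / e
    frac = (G[k - 1] - thresh) / (G[k - 1] - G[k])   # linear interpolation
    return sigma, dx * (k - 1 + frac)
```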
The comparison also requires the true values of σ and T. They were obtained using the angularly averaged 2D ACF, averaged over all generated images. The data were artificial and did not contain any tilt, bow, sample bending or other type of background. Therefore, the only preprocessing necessary before the computation of the 2D ACF was the subtraction of the mean value from the entire image. The relative bias introduced by this operation is of the order of α² [1,8], i.e. < 10⁻⁵, and thus negligible. The results are plotted in figure 5. The overall trends are similar to those for the modified Gaussian model fitting. Conclusions concerning polynomial degree and multi-image analysis remain unchanged. The dependency on image size (or α) is flatter, especially for σ. Furthermore, T is not improved at all for tiny images and degree 0. Unlike for the explicit model, the correction in fact decreases the accuracy in this case. It must, however, be emphasised that T/L ratios around 0.1 or larger are not recommended, whether with correction or without. We also tested how the correction depends on the ACF cut-off point by choosing the interval from 10% shorter to 40% longer than the first zero crossing. The effect can be assessed using the accuracy of σ and T, or the differences between the corrected and true ACF curves. All the dependencies are generally quite flat and often without any clear trend. This is a reassuring result because it means the correction is not sensitive to the precise location of the cut-off point. As expected, for very tiny images (large α) shortening the interval improves the accuracy somewhat. For large images (small α) the trend was sometimes slightly opposite. Overall, however, cutting at the first zero crossing or moderately beyond it appeared to work well.

Rough thin film: inversion

A test with a real rough surface would ideally be done with a sample whose ACF is precisely known. However, even standard rough samples do not have their ACF specified. Furthermore, the objective is to verify that the bias caused by the limited area can be corrected, meaning that the resulting ACF is close to the ACF which would be obtained by measuring a very large (or infinite) area. The same approach as in the previous section can thus be used. In fact, comparing measurements on small and huge areas allows us to study the effect in isolation, as opposed to comparison with a reference ACF, where any observed difference could have a variety of possible causes.
An SrO thin film, prepared by atomic layer deposition, with a large-scale uniformly and isotropically rough upper surface, was chosen for the demonstration (see figure 4). The texture is formed by nanocrystals and is clearly non-Gaussian. Images were acquired using a Bruker Dimension Icon atomic force microscope in ScanAsyst mode with a standard ScanAsyst-air probe and a scan rate of 0.2 Hz. In order to follow the 2D ACF route, a large image without scan line artefacts is necessary. The absence of scan line artefacts means that 2D polynomial levelling is sufficient, leaving only a bias proportional to α² (or higher powers). A scan of area 12 × 12 µm² with a pixel resolution of 3072 × 3072 was selected for the evaluation. The correlation length to scan size ratio was estimated as α ≈ 0.0037, meaning the relative bias following from background subtraction was < 10⁻³. The long scanning time resulted in drift, which was estimated from the acquired image using the Gwyddion Compensate drift function. Its primary effect on the ACF is a slight smearing along the abscissa, as distances in the xy plane are distorted, in particular along the slow scanning axis. The relative changes were estimated to be below 1.5 × 10⁻³ and thus negligible. The resulting ACF is plotted in figure 6 (each subplot) and separately also in figure 7. The large image was then cut into smaller images of various sizes and processed as above, assuming the subimages are reasonable approximations of measurements on smaller areas. The uncorrected (red) and corrected (green) ACF computed for each subimage are plotted in figure 6 for three selected sizes and all three polynomial degrees 0-2. The corrected ACF curves were extrapolated beyond the cut-off points by a simple subtraction of the last computed correction from all further data (cyan). The correction is clearly effective. The green (corrected) curves, although spread slightly more than the red (uncorrected), are centred on the best estimate ACF. Deviations are noticeable only for the highest degree and the far ends of the curves, where there is a tendency to overcorrection.

Discussion

We first remark on the normalisation factor in (29), which is sometimes taken to be 1/N instead of 1/(N − k) because of positive definiteness and/or variance [7,19,20]. It corresponds to dividing the integrals in section 2 by L instead of L − τ. However, the estimator with the N − k denominator is unbiased, or at least it would be without background subtraction. Furthermore, a constant denominator N does not generalise to irregular regions and other cases where a varying amount of data is available for different distances τ [16]. Therefore, in this context N − k is the appropriate choice.
Interpretation of results

Figure 6 almost looks too good to be true. One has to be careful with its interpretation. Everything was computed from the same large base image. The clustering of the green curves around the best estimate shows that we removed the bias tied to smaller scan areas. However, they do not necessarily cluster around the true ACF. In the examples with synthetic Gaussian and pyramidal data, the surfaces were uniform and could be made infinite for all practical purposes. But for real rough surfaces, the issues of uniformity, representativeness and the statistical character of roughness are much more tangled. It should also be noted that roughness measurement is also affected by tip sharpness and probe-sample interaction in general [21-24], sampling step [22,25], calibration, scanning speed and feedback loop settings [23], defects, and other effects not analysed here, as we attempt to isolate those related to the finite area. The measurement of a neighbouring region (somewhat smaller, 8 × 8 µm²) results in a slightly different ACF, as illustrated in figure 7. Subimages taken from this scan yield curves centred on its own best-estimate ACF. The bias estimates for the two images are approximately 3 × 10⁻⁴ and 6 × 10⁻⁴. The relative standard deviations of G(0) = σ², which are proportional to α [8], were estimated as 2 × 10⁻³ and 3 × 10⁻³. They are all too small to explain the difference of almost 6% between the two curves. The correlation length does not capture the scale at which real textures can be considered uniform. Although surface heights become uncorrelated for points considerably farther apart than T, the texture itself varies along the surface. The characteristic scale of these variations can be much longer than T, even if the texture is ultimately large-scale uniform. Scanning such large areas is seldom feasible and we have to rely on multiple independent scans.

Comparison with spectral density

Two other functions are commonly used to characterise the spatial properties of roughness: the height-height correlation function [8] (sometimes also called the structure function) and the power spectral density function (PSDF). The height-height correlation function H is directly related to the ACF by H(τ) + 2G(τ) = 2σ², so the results can be translated. The PSDF is the Fourier transform of the ACF and is probably the most commonly utilised function [22,26]. The effect of levelling is the suppression of low-frequency components [2]. The low-frequency components can be excluded from fitting, similarly to how the ends of the spectral range are avoided in PSDF stitching [19,22,27-29]. However, the peak around zero frequency is where almost all the spectral weight lies. It is also the region least affected by noise, discontinuities and smoothing effects such as tip convolution [22,23,26]. It is often critical in roughness analysis. However, it is the region worst affected by levelling, and possibly in a non-trivial manner. In the case of the ACF, the worst affected region is far from the origin and is never used for analysis. Around the origin, levelling manifests as the subtraction of a slowly varying function. An approach similar to the one developed here can perhaps be formulated also for the PSDF; Ref. 1, for instance, gives hints at a spectral reinterpretation. However, what would be the equivalent of model-free correction for the PSDF is not clear.
Zero crossing

The model-free correction procedure relies on the true ACF monotonically and quickly decaying to zero. In particular, sums of discrete ACF values must give good approximations of the integrals (27) (or similar integrals, but up to L instead of infinity). Even though this is true for many types of real roughness, at least approximately, some violate this condition. For instance, if the surface is locally periodic/corrugated, the true ACF crosses zero, possibly many times. It may be possible to modify the correction procedure for this case, but likely at the cost of reliability. And although the approach of fitting Ĝ_k with a biased model remains intact in principle, the first zero crossing may no longer be a good choice of fitting cut-off. All the procedures utilise the zero crossing for choosing the cut-off in some manner. Must there always be a zero crossing? By splitting the sum of z_m z_k over all m and k into triangular parts and correcting for the double-counted diagonal we obtain

0 = (Σ_m z_m)² = Σ_m z_m² + 2 Σ_{k>0} Σ_m z_m z_{m+k}.

The left hand side is zero since the mean value of z is zero. Therefore,

Σ_{k>0} Σ_m z_m z_{m+k} = −(1/2) Σ_m z_m² < 0,

and Ĝ_k must take both signs. As for the crossing location, the leading term approximation of the analytical models or (35) is a small constant (proportional to α). If the ACF decays quickly, the first zero crossing occurs when the true ACF is equal to this constant. And this is also when S₀[G] and S₁[G] can be assumed to give good approximations of the corresponding integrals. For biased model fitting, the heuristic zero-crossing rule is further supported by the following:
• The rule is simple and easy to implement both manually and in code.
• Fitting only the data of the ACF apex at the origin is an ill-conditioned problem. The optimal bias-variance trade-off invariably includes the side slopes in the fit. Shortening the interval too much cannot be beneficial.
• Although fitting beyond the zero crossing may be beneficial, the ACF is often not converged in this region, and telling where the useful data end is difficult.
• Numerical simulations support the zero crossing as a good choice (section 4.2).
Choosing the cut-off based on the zero crossing for each data set contributes to the increased variance of the bias-corrected results. When multiple ACF curves are evaluated, it may be preferable to choose a single cut-off based on all the data and use it for all curves.
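The sign-change argument above is easy to verify numerically: for any zero-mean sequence the lag sums satisfy Σ_{k≥1} (N − k) Ĝ_k = −(N/2) Ĝ_0, which is the discrete counterpart of the identity just derived, so the estimated ACF must dip below zero.

```python
# A quick numerical check of the zero-crossing argument for the
# discrete estimator (29) with the 1/(N - k) normalisation.
import numpy as np

z = np.random.randn(1000)
z -= z.mean()                              # levelling imposes a zero mean
N = len(z)
G = np.array([(z[:N - k] * z[k:]).sum() / (N - k) for k in range(N)])
lhs = ((N - np.arange(1, N)) * G[1:]).sum()
assert np.allclose(lhs, -N / 2 * G[0])     # the triangular-split identity
assert (G < 0).any() and (G > 0).any()     # hence both signs must occur
```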
Conclusion

The goal of this work was to correct the finite-area bias in autocorrelation function (ACF) evaluation in roughness measurements, which includes the correction of parameters like the mean square roughness and correlation length. Starting from the observation that it should be possible to express the bias of the measured ACF in terms of the ACF itself, we developed a self-consistent formulation and used it to propose two types of bias correction. One was a modification of standard analytical ACF models to take into account the bias of the data they are fitted to. The other was a model-free correction procedure based on inverting the self-consistent relations by solving a set of linear equations. Their effectiveness was tested using simulated and measured data. The two corrections behave similarly. They appear most helpful in the cases where they are also needed the most, that is, the common moderate scan line lengths, as data for too short scan lines are not salvageable and for very long profiles the bias may already be small. Furthermore, they are more beneficial for higher levelling polynomial degrees, for which the bias is worse. Both also trade bias for variance, and thus the accuracy improvement is larger when multiple scans are evaluated. Modified (biased) analytical ACF models do not require any fundamental changes to the evaluation and can even be used to re-analyse existing raw ACF data. Based on the numerical results for the measurement of Gaussian roughness, fitting the experimental ACF with a modified model has substantial advantages and few downsides, and can probably be recommended quite universally. The model-free correction (inversion) procedure proposed for an ACF of unknown form is computationally efficient and worked surprisingly well in the selected test cases. A simple zero-crossing based criterion was proposed for choosing the subset of discrete ACF data to use in the inversion. However, an open question remains regarding the application of the procedure to ACFs of more complicated forms, as the simple criterion may then no longer be suitable. The second correction method should thus currently be considered more an interesting concept to explore in further works.

Figure 3. Comparison of fitting the biased Gaussian ACF model (31) with the standard one (5) (for a correlation length of 20 px). Error bars represent single-image standard deviations. Results for different polynomial degrees are slightly offset horizontally for visual clarity.

Figure 5. Comparison of roughness parameters σ and T obtained from uncorrected and model-free corrected ACF curves for a random pyramidal surface. Error bars represent single-image standard deviations. Results for different polynomial degrees are slightly offset horizontally for visual clarity.

Figure 6. Autocorrelation functions obtained by model-free correction for a rough SrO film surface, compared to the uncorrected ACF and the best-estimate ACF. Each curve corresponds to one subimage cut from the large base image.

Figure 7. Comparison of best-estimate ACF obtained using independent scans of two different areas.
Prevalence and Characteristics of Obsessive-Compulsive Disorder Among Urban Residents in Wuhan During the Stage of Regular Control of the Coronavirus Disease-19 Epidemic

Background: Coronavirus disease-19 (COVID-19) is one of the most devastating epidemics of the 21st century, which has caused considerable damage to the physical and mental health of human beings. Although a few regions like China have controlled the epidemic trends, most countries are still under siege of COVID-19. As the emphasis on cleaning and hygiene has been increasing, problems related to obsessive-compulsive disorder (OCD) may appear.
Objective: This study was designed to investigate the prevalence of OCD in the urban population in Wuhan during the stage of regular epidemic control and prevention. Meanwhile, characteristics and risk factors for OCD were also explored.
Methods: Five hundred and seventy residents in urban areas of Wuhan were recruited using the snowball sampling method to complete questionnaires and an online interview from July 9 to July 19, 2020. The collected information encompassed socio-demographics, Yale-Brown Obsessive-Compulsive Scale (Y-BOCS) scores, Social Support Rating Scale (SSRS) scores, and Pittsburgh Sleep Quality Index (PSQI) values.
Results: Three months after lifting the quarantine in Wuhan, the prevalence of OCD was 17.93%. About 89% of OCD patients had both obsessions and compulsions, while 8% had only obsessions and 3% had only compulsions. The top 3 common dimensions of obsessions were miscellaneous (84.0%), aggressive (76.6%), and contamination (48.9%), and of compulsions were miscellaneous (64%), checking (51.7%), and cleaning/washing/repeating (31.5%). The unmarried were more vulnerable to OCD than the married (p < 0.05, odds ratio = 1.836). Students had 2.103 times the risk of developing OCD compared with health care workers (p < 0.05). Those with a positive family history of OCD and other mental disorders (p < 0.05, odds ratio = 2.497) and the presence of psychiatric comorbidity (p < 0.05, odds ratio = 4.213) were also at higher risk. Each level increase in sleep latency increased the risk of OCD to 1.646 times (p < 0.05).
Conclusion: In the background of regular epidemic control, the prevalence of OCD was high, and the symptoms were widely distributed. Obsessions often accompanied compulsions. Being single, being a student, a positive family history of OCD and other mental disorders, the presence of psychiatric comorbidity, and longer sleep latency were predictors of OCD. Early recognition and detection of these issues may help to intervene in OCD.

INTRODUCTION

The new coronavirus disease (COVID-19), which was first detected in December 2019, was declared a public health emergency of international concern (PHEIC) on January 30, 2020 (1). Due to the rapid spread of the infection and the paucity of available medical resources, the entire world was affected within a short time. The medical service system was once on the brink of collapse, facing the seemingly invincible "rival." As of August 8, 2020, the total number of confirmed cases had approached 19,295,350, among whom 719,805 people had lost their lives (2). As one of the first few countries heavily hit by the pandemic for a long time, mainland China has almost succeeded in managing the situation. Despite some slight increases in the numbers of contingent and sporadic cases, Wuhan was reopened on April 8, 2020, and economic and social activities were gradually brought back on track.
Notwithstanding, mental health seems to be a pending problem worthy of close attention. Apart from causing serious damage to the human body, infectious diseases tend to influence mental health (3); the same is the case with COVID-19. Since the outbreak of this unprecedented pandemic, a wave of studies across nations has indicated an increased prevalence of mental disorders. For example, a study from China found that 40.4% of the local youth were mentally distressed, among whom about one-third had symptoms of post-traumatic stress disorder (4). Another study of the adult population in Bangladesh found that 33.7% of the sample population was anxious and 57.9% was depressed (5). However, previous studies were largely based on statistics from the beginning or peak of the pandemic, and no studies have investigated the mental status of the population in the later stage, even though the quarantine has been lifted for months in Wuhan, China. According to an earlier review, the mental disorders attracting the most attention were anxiety, depression, post-traumatic stress disorder, and stress, and not much attention was paid to obsessive-compulsive disorder (OCD) (6). OCD, mainly characterized by recurrent intrusive thoughts (obsessions) and repetitive stereotyped behaviors (compulsions), is a common chronic mental disease which is often under-recognized (7). The estimated lifetime prevalence is usually believed to be 2-3% (8). As one of the top 10 diseases contributing to the Global Burden of Disease, it is also related to suicide (9,10). The fact that OCD can last for decades has also been mentioned in some clinical and community studies (11). Trauma, originally considered a cause of post-traumatic stress disorder, can influence OCD to some degree (12,13). A recent study conducted among OCD cases in Italy found a higher Y-BOCS score after 6 weeks of quarantine, indicating possible changes in OCD severity. However, studies have rarely discussed the occurrence of OCD among the general population (14). After all, psychological reconstruction is an upcoming challenge. Is the prevalence of OCD still high in the regular epidemic stage? Social support and sleep quality have been linked to mental health in many previous studies; adequate social support and good sleep quality are associated with better mood (15,16). However, the associations seem to be complicated. For example, family members face the challenge of offering support, which is helpful for patients with OCD, while not letting this support turn into family accommodation, which may lengthen the duration of OCD symptoms because it allows these patients to avoid anxiety (17). Jacob A. Nota found that delayed sleep phases were common in patients receiving intensive OCD treatment and that later bedtimes were associated with more severe OCD symptoms both during admission and after discharge; however, no evidence revealed the same prediction for sleep onset latency or duration (18). No exploration of the correlation between sleep quality and OCD in this particular situation was found. Accordingly, more studies are needed to elucidate the relationship between social support or sleep quality and OCD in the later stage of the epidemic. Currently, most countries are still under siege of COVID-19. The occurrence of mental problems may be delayed, and these problems can persist for a long time. Therefore, the mental health effects of the pandemic need to be investigated. Would people in reopened areas suffer from OCD in the background of regular epidemic prevention and control?
Therefore, we investigated urban residents in Wuhan, aiming to collect concrete evidence on OCD and its risk factors, which might in turn provide a valuable reference for other countries and help address this issue promptly. The hypotheses for this study were the following. Hypothesis 1: the prevalence of OCD in the regular epidemic stage is higher than it was pre-pandemic. Hypothesis 2: social support and sleep quality may help to predict OCD diagnosis in this background.

Participants

People from central areas of Wuhan, China, were recruited online through "Wenjuanxin" and "WeChat" using the snowball sampling method from July 9 to July 19, 2020, around 3 months after the quarantine had been lifted. The inclusion criteria were: (1) a resident of a central urban area in Wuhan, (2) aged 15 years or above, and (3) ability to understand the contents of the questionnaires. Eleven participants spent over 1 h filling in the questionnaires, 17 failed the "trap questions," and 1 quit midway. Thus, 29 invalid questionnaires were eliminated and 541 samples were included in the analysis; the valid response rate was 94.91%. All respondents participated voluntarily under the premise of written informed consent and could quit at any time. Ethical approval was obtained from Renmin Hospital of Wuhan University.

Measures

Demographics. Several socio-demographic characteristics, such as sex, age, income, marital status, educational level, and number of family members, were included in the questionnaire (Table 1). In particular, information on family history of mental disorders or comorbid mental disorders was acquired by items in the questionnaire asking "Have you ever been diagnosed with a mental illness such as schizophrenia, depressive disorder, manic disorder, bipolar disorder, anxiety disorders, post-traumatic stress disorder, or Tourette syndrome in a hospital and remained uncured?" and "Do you have family members diagnosed with OCD or mental disorders like the above?" We also reconfirmed this information orally in a brief online interview. Respondents who reported the presence of an additional mental disorder, as well as the context in which the diagnosis was given and the duration of the disorder, were confirmed as participants with psychiatric comorbidity. The same criteria were applied to ascertain a positive family history of a mental disorder.

Yale-Brown Obsessive-Compulsive Scale (Y-BOCS). The widely used Y-BOCS consists of a symptom checklist (58 items) and a severity scale (10 items, each scored from 0 to 4, with a total score ranging from 0 to 40). There is a moderate correlation in consistency and discrepancy between self-reported and clinician-rated Y-BOCS scores, and experience shows that patients tend to rate symptoms lower than clinicians (19,20). Given its availability in self-report format, the Y-BOCS was applied for the assessment of the diagnosis and manifestations of OCD as an online questionnaire and interview. A cut-off point of 6 was considered for the diagnosis of OCD (21). A 10-min oral online interview was conducted for all participants through the "WeChat" app (a worldwide communication application, similar to Facebook, Skype, etc.) to explain the purpose of this research, reconfirm participation, and interpret the colloquial definitions of obsessions and compulsions, thereby minimizing confusion as much as possible.
Social Support Rating Scale (SSRS). The Chinese version of the SSRS, designed by Shuiyuan Xiao, was used to evaluate the type and level of social support received from others. The questionnaire covers 3 aspects, namely subjective social support, objective social support, and the availability of social support, with a total score ranging from 7 to 56; higher scores indicate more social support (22).

Pittsburgh Sleep Quality Index (PSQI). The modified PSQI, with 4 dimensions (sleep satisfaction, sleep disturbance, sleep latency, and sleep duration), was applied to appraise sleep quality. Each dimension is scored between 0 and 3, with a total score ranging from 0 to 21; higher scores indicate poorer sleep quality (23).

Statistical Analysis and Data Processing

SPSS 24.0 software (IBM Corp., Armonk, NY) was used for the statistical analyses. The dependent variable in the current study, OCD or non-OCD, was categorical, while the independent variables consisted of both categorical and quantitative ones. Thereby, comparisons of group differences in categorical data were performed with the chi-square test. Quantitative variables with a normal distribution were processed using the t-test, while non-normally distributed variables were processed using non-parametric tests. A p < 0.05 indicated a significant difference. In addition, all factors significant in the univariate analysis, as well as those believed to be relevant, were introduced into a multifactorial stepwise logistic regression equation (LR, forward) for deeper insight into relatively independent risk factors of OCD; p < 0.05 indicated significance.

Description of Samples

Five hundred and seventy residents from all 7 central urban areas in Wuhan participated in the research, 29 of whom were excluded due to invalid responses to the questionnaire.

Distribution of OCD Symptoms

In total, 97 respondents were confirmed to have OCD according to the Y-BOCS, among whom 86 had both obsessions and compulsions; obsessions (n = 8) or compulsions (n = 3) presenting alone were rare. For a clearer understanding of the manifestations of symptoms, the Y-BOCS symptom checklist was introduced. As shown in Table 2, a wide distribution of manifestations of obsessions and compulsions was observed. The top 3 obsessions were miscellaneous (84.0%), aggressive (76.6%), and contamination (48.9%); the top 3 compulsions were miscellaneous (64%), checking (51.7%), and cleaning/washing/repeating (31.5%).

Group Differences of OCD in Socio-Demographics, Social Support, and Sleep Quality

Altogether, 97 respondents met the criterion for an OCD diagnosis, so the prevalence of OCD in the background of regular epidemic prevention and control was 17.93%. The prevalence of OCD increased as age decreased, with the highest being 22.66% in the young group aged 15 to 24 years (p < 0.05). Further, the univariate analysis indicated that the prevalence of an OCD diagnosis differed depending on socio-demographic variables such as marital status, occupation, and employment status (p < 0.05). Moreover, the prevalence among respondents who were asymptomatic cases, had comorbid mental disorders, a family history of OCD or other mental disorders, sleep disorders, or poor social support was higher than among those without these factors (p < 0.05).
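For readers who wish to reproduce this kind of analysis outside SPSS, the sketch below fits a single binary logistic regression with statsmodels and reads odds ratios off the exponentiated coefficients. All variable names and the simulated data are purely illustrative placeholders, and the forward stepwise selection used in the study is not reproduced.

```python
# A hedged sketch of a multivariate logistic regression for OCD predictors;
# the columns mirror the study's predictor themes but the data are random.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 541
df = pd.DataFrame({
    "single": rng.integers(0, 2, n),          # marital status (1 = unmarried)
    "student": rng.integers(0, 2, n),         # occupation dummy vs. HCWs
    "family_history": rng.integers(0, 2, n),  # OCD/mental disorder in family
    "comorbidity": rng.integers(0, 2, n),     # comorbid mental disorder
    "sleep_latency": rng.integers(0, 4, n),   # PSQI component, scored 0-3
})
df["ocd"] = (rng.random(n) < 0.18).astype(int)   # placeholder outcome (~18%)

X = sm.add_constant(df.drop(columns="ocd").astype(float))
fit = sm.Logit(df["ocd"], X).fit(disp=0)
print(np.exp(fit.params))                     # odds ratios for each predictor
```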
The significant variables from Table 1, together with factors that were non-significant but believed to be relevant from past experience (gender, education level) (24,25), were all included in the multivariate logistic regression model; finally, as listed in Table 3, several variables were identified as predictors of OCD. Compared with married respondents, those who were single were at 1.836 times the risk of having OCD (p < 0.05). Students were at 2.169 times the risk of an OCD diagnosis compared with health care workers (HCWs). The prevalence of OCD in people with comorbid mental disorders or a positive family history of OCD or other mental disorders was much higher than in those without these factors (p < 0.05). Notably, sleep latency, one of the sleep-quality components assessed in the current research, turned out to be an independent predictor of OCD; each unit increase in sleep latency was associated with a 64.6% higher risk of developing OCD (p < 0.05). DISCUSSION To the best of our knowledge, this is the first study on the prevalence of OCD and its possible influencing factors among central urban residents in Wuhan in the setting of regular epidemic control and prevention. As is well known, Wuhan, one of the first areas hit hard by COVID-19, has achieved great success in the battle against this pandemic through hard work and generous support from all circles. New cases have not been observed since March 18, 2020, and the lockdown policy was lifted on April 8, 2020, given the improved situation. Nevertheless, it deserves attention that people from this area might still suffer from certain mental disorders in the stage of regular epidemic prevention. As observed in this study, despite Wuhan being a relatively secure area compared with many countries where the pandemic was still progressing, 3 months after reopening, people in Wuhan were still affected by OCD, with a prevalence rate of 17.93%. Occupation, marital status, comorbid mental disorders, family history, and sleep latency were associated with OCD. To date, very limited studies have focused on OCD. An earlier study using the Symptom Checklist-90 indicated that the prevalence rates of OCD symptoms among HCWs and non-HCWs were 5.3 and 2.2%, respectively (26). Similar to other mental diseases, OCD was pervasive among participants in our study, with prevalence rates of 14.6% for HCWs, 29.2% for students, and 15.1% for others. Students had 2.169 times the risk of developing OCD compared with HCWs (p < 0.05), indicating that students were the vulnerable group. Indeed, students, who are under great pressure and often have poorly oriented coping skills, are prone to mental disorders (27). Hence, more attention from the education sector is warranted. It is also important to note that we did not classify students into more detailed categories according to major (e.g., medicine, art or music, computer science) or grade (e.g., freshman, sophomore, junior, senior). Hence, it would be too early to determine whether differences exist between medical students and HCWs or across subgroups. Further research in this direction would help to address this question. Marriage was another predictor of OCD; the risk of developing OCD in the unmarried population was 1.836 times that in the married population.
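A short worked example of how the odds ratios above relate to the reported percentages. The value 1.646 is our assumption, taken from the "0.646 times higher" phrasing and the 64.6% figure repeated in the Discussion, not a number stated as an OR in the paper.

```python
# Converting between a logistic-regression coefficient, its odds ratio,
# and the "percent higher risk" phrasing used in the text.
import math

or_sleep = 1.646                 # assumed OR per one-level increase
beta = math.log(or_sleep)        # the underlying regression coefficient
print(f"beta = {beta:.3f}; each level multiplies the odds by {or_sleep}")
print(f"two levels -> odds x {or_sleep**2:.2f}")
```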
Previous studies have shown that marital status contributes meaningfully to quality of life; meanwhile, nearly all domains of quality of life appear to deteriorate in patients with OCD (28)(29)(30). This may explain why marriage acted as a predictor of OCD in our study. Comorbid status is typical of OCD; as indicated in one study, in ∼80% of cases OCD occurred at some stage after a diagnosis of anxiety (31). In our research, people with other concurrent mental disorders were more prone to develop OCD (48.8 vs. 15.4%, p < 0.05, odds ratio = 4.213). A family history of OCD also increased the risk of developing OCD, with an odds ratio of 2.497. Previously, both twin studies and genome-wide association reports indicated the heritability of OCD (32,33). Regarding sleep quality, difficulty falling asleep was a predictor of OCD. The risk of developing OCD increased by 64.6% for every one-level increase in sleep latency. People were reported to have many sleep problems during the quarantine, which may have been associated with the risk of developing OCD (15,16). Consistent with this, in our study, many patients with OCD had disturbing thoughts and repetitive behaviors such as counting or making the bed, which in turn influenced the development of OCD. Several research articles and meta-analyses have found that depression and anxiety play a key role in the sleep disturbances of patients with OCD (34)(35)(36). Unfortunately, we did not recruit subjects with anxiety and depressive disorders, but the emerging idea of interactive effects among these mental diseases should be explored in future investigations. Investigations of the distribution of obsessions and compulsions remain insufficient. A study of a Chinese Han population in 2012 indicated that the most commonly detected obsessions were aggressive (42.4%), miscellaneous (42.2%), and contamination (21.6%), while the most common compulsions were checking (52.1%), miscellaneous (25.2%), and washing/cleaning (25.2%) (37). Compared with their results, our study showed a more wide-ranging distribution of overlapping symptoms, and the detection rates of both obsessions and compulsions related to hygiene were higher in our study than in theirs. The exact influencing factors are unclear at present, but, as pointed out, a distinction does exist in the distribution and detection rate of symptoms. Medical response refers to how we react to the pandemic medically, what measures we prefer to take to contain it, the speed with which we take medical-related actions, and so on. Further research should clarify whether symptom dimension/severity is associated with the ways people deal with COVID-19. In our study, some variables, such as age, employment status, asymptomatic status, and social support, which were significant in the univariate analysis, showed no significance in the multifactorial regression model. This could be explained by interactions between the variables and by differences in research periods, populations, or selected scales. The local government did provide much support for people in Wuhan, such as coupons and tax deductions; the resulting high baseline level of social support might be another reason this variable was not a significant factor. However, our study found a somewhat high prevalence of OCD even in the regular pandemic control stage, which might provide basic information or a reference for other countries.
Limitations and Prospects Despite the findings mentioned above, our study has some limitations. First, the cross-sectional design with a limited sample size makes it hard to establish a causal relationship between the factors and OCD; therefore, in future research, we will follow up on these residents and include as large a sample as possible. Second, considering that people had experienced a huge impact not long before, a more unbiased randomized sampling method was not applied; it should be adopted in future research when appropriate. Third, we did not compare differences among groups with different levels of OCD severity (mild, moderate, severe); future research should address this limitation. Finally, it remains to be seen to what extent, and how, people from other parts of the world experience OCD in this special situation. Conclusions The present cross-sectional study conducted among urban people in Wuhan indicated that OCD, with wide-ranging symptomatic dimensions, was very pervasive in the stage of regular epidemic control. In addition, it was observed that obsessions and compulsions rarely occurred independently. Being single or a student, a family history of OCD, comorbid status, and longer sleep latency appear to be potential predictors of OCD in this situation; therefore, more attention should be paid to these factors, allowing for early detection of and intervention in OCD. DATA AVAILABILITY STATEMENT The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation. ETHICS STATEMENT The studies involving human participants were reviewed and approved by the Clinical Research Ethics Committee of Renmin Hospital of Wuhan University. Written informed consent to participate in this study was provided by the participants' legal guardian/next of kin. AUTHOR CONTRIBUTIONS YZ, LX, and HW: design and concept. GW: supervision and management. YZ and LX: draft of manuscript. YZ and YX: processing, analysis of statistics, collection, acquisition, and verification of data. All authors contributed to the article and approved the submitted version.
2020-12-16T14:13:49.458Z
2020-12-16T00:00:00.000
{ "year": 2020, "sha1": "de94ded7a0a689fd9f1d35f5a52a814546dda304", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fpsyt.2020.594167/pdf", "oa_status": "GOLD", "pdf_src": "Frontier", "pdf_hash": "de94ded7a0a689fd9f1d35f5a52a814546dda304", "s2fieldsofstudy": [ "Psychology", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
259147952
pes2o/s2orc
v3-fos-license
DNA polymerase ε leading strand signature mutations result from defects in its proofreading activity The evidence that purified pol2-M644G DNA polymerase (Pol)ε exhibits a highly elevated bias for forming T:dTTP mispairs over A:dATP mispairs and that yeast cells harboring this Polε mutation accumulate A > T signature mutations in the leading strand has been used to assign a role for Polε in replicating the leading strand. Here, we determine whether A > T signature mutations result from defects in Polε proofreading activity by analyzing their rate in Polε proofreading defective pol2-4 and pol2-M644G cells. Since purified pol2-4 Polε exhibits no bias for T:dTTP mispair formation, A > T mutations are expected to occur at a much lower rate in pol2-4 than in pol2-M644G cells if Polε replicated the leading strand. Instead, we find that the rate of A > T signature mutations is as highly elevated in pol2-4 cells as in pol2-M644G cells; furthermore, the highly elevated rate of A > T signature mutations is severely curtailed in the absence of PCNA ubiquitination or Polζ in both the pol2-M644G and pol2-4 strains. Altogether, our evidence supports the conclusion that the leading strand A > T signature mutations derive from defects in Polε proofreading activity and not from the role of Polε as a leading strand replicase, and it conforms with the genetic evidence for a major role of Polδ in replication of both the DNA strands. The "division of labor" model and designation of DNA polymerase (Pol)ε as the leading strand replicase and of Polδ as the lagging strand replicase has been derived from studies involving mutator alleles of yeast Polε and Polδ and their effects on the distribution of leading or lagging strand mutations. For instance, yeast cells harboring the Polε pol2-M644G allele, whose encoded polymerase generates dTTP:T mispairs with a ~40-fold bias over dATP:A mispairs, exhibit an increased incidence of spontaneous A to T signature mutations in URA3 integrated near ARS306 (1) that can be ascribed to T:dTTP mispair formation in the leading strand.
A similar study with the Polδ pol3-L612M allele indicated the prevalence of lagging strand signature mutations consistent with the mispair formation bias exhibited by this Pol3 allele (2). However, in extensive genetic studies in different yeast strains, we subsequently provided evidence contradicting the "division of labor" replication model, wherein L612M-Polδ generated errors occur on both the leading and lagging DNA strands in pol3-L612M msh2Δ strains (3). We postulated that a more proficient removal of errors by mismatch repair (MMR) from the leading strand accounts for the lack of L612M-Polδ specific errors on this strand, and we concluded from these studies that Polδ replicates both the leading and lagging DNA strands (3). The four subunit yeast Polε holoenzyme is comprised of the Pol2 catalytic subunit and the Dpb2, Dpb3, and Dpb4 accessory subunits. While Dpb3 and Dpb4 are not essential (4,5), deletion of either the Pol2 or Dpb2 subunit leads to cell inviability (6,7). Within the Pol2 protein, the N-terminal half encompasses the active polymerase, and the extreme C-terminus harbors a zinc-finger motif that is involved in binding the Dpb2 subunit. Importantly, the essential role of Pol2 lies in its ability to bind Dpb2, whereas the N-terminal catalytic polymerase domain of Pol2 is dispensable, although cells grow slowly (8). The Dpb2 subunit also binds directly to GINS (9,10), a component of the CMG helicase that encircles and travels on the leading strand in the 3′→5′ direction, unwinding the replication fork. Thus, via assembly of the CMG complex, the Pol2 C-terminus plays an essential role in replication by promoting origin firing and DNA unwinding (9,11,12). Extrapolating from our genetic evidence that Polδ replicates both the leading and lagging DNA strands (3), we hypothesized that leading strand A > T signature mutations in pol2-M644G reflect Polδ misinsertions which escape proofreading by the Polε 3′→5′ exonuclease. To verify this hypothesis, in this study, we determine the rate of A > T signature mutations in Polε proofreading defective pol2-M644G and pol2-4 mutants, wherein the pol2-4 mutation abolishes Polε proofreading, and the pol2-M644G mutation impairs mispair recognition (13), rendering proofreading ineffective. However, compared to the highly elevated bias of purified pol2-M644G Polε for forming the T:dTTP mispair over the reciprocal A:dATP mispair, purified pol2-4 Polε exhibits no bias for T:dTTP mispair formation (14,15). Hence, if A > T signature mutations in the leading strand resulted from the role of Polε as a leading strand replicase, A > T signature mutations would occur at a much lower rate in pol2-4 cells than in pol2-M644G cells. However, if A > T signature mutations derived from a role of Polε proofreading activity, then these mutations would occur at nearly the same rate in the pol2-4 strain as in pol2-M644G. Furthermore, if A > T signature mutations were due to the Polε role in leading strand replication, then there would be no need for the PCNA ubiquitination-dependent recruitment of Polζ for their formation, given the very high proficiency of pol2-M644G Polε for promoting synthesis from T:dTMP mispairs.
Our evidence that A > T signature mutations in URA3 occur at the same rate in pol2-M644G and pol2-4 strains and that PCNA ubiquitination and Polζ are required for their formation supports the conclusion that the prevalence of leading strand-specific mutations does not arise from a role of Polε in replication of this strand; rather, it derives from the role of Polε proofreading activity in the removal of Polδ misinsertions on the leading strand. Results Leading strand signature mutations in pol2-M644G are dependent upon PCNA ubiquitination and Polζ In both lacZ and steady-state kinetic DNA polymerase fidelity assays, mutant pol2-M644G Polε has been shown to exhibit a ~40-fold higher bias for the misincorporation of dTTP opposite template T than for the complementary dATP opposite template A (1). Yeast cells that harbor the pol2-M644G mutation exhibit an elevated rate of spontaneously arising A > T hotspot mutations, namely A686T and A279T, in a URA3 reporter gene integrated in the antisense orientation (OR2) to the left of ARS306 (1-3) (Fig. 1); these A > T mutations have therefore been proposed to arise from T:T mispairs formed during replication of the leading strand by Polε. As shown in Table 1, the pol2-M644G strain exhibits a URA3 mutation rate 24-fold higher than WT cells. To examine the specific effect on the rates of the A686T and A279T signature mutations, we determined the rates of these mutations through sequence analysis of ura3 mutations arising in a large number of independent cultures. As shown in Table 2, the rate of A > T mutations is extremely elevated in the pol2-M644G strain compared to WT (~1100-fold increase). Since Polζ is involved in DNA damage-induced and spontaneous mutation generation (16), and since it is a very proficient extender of synthesis from mispaired termini (17), we next examined whether Polζ was required for spontaneous signature mutations generated in the pol2-M644G strain. We find that deletion of the catalytic subunit of Polζ (rev3Δ) in pol2-M644G cells results in a ~4-fold reduction in the URA3 spontaneous mutation rate compared to that in the pol2-M644G strain (Table 1). When examined for specific A > T signature mutations, rev3Δ reduces the rate of A686T mutations in pol2-M644G by 4-fold, matching the reduction in the overall A > T mutation rate (Table 2). Since PCNA ubiquitination is required for Polζ function in cells (16), we next examined the effect of the pol30-119 mutation, which harbors an arginine substitution at Lys164 and thus prevents PCNA ubiquitination (18,19). Although the overall drop in the spontaneous mutation rate of URA3 in pol2-M644G pol30-119 was similar to that found in the pol2-M644G rev3Δ strain (Table 1), there was a more pronounced effect on the signature A > T mutations. For instance, the A686T signature mutation rate in pol2-M644G pol30-119 dropped by nearly 8-fold, and the overall rate of A > T mutations was also reduced by 8-fold in pol2-M644G pol30-119 (Table 2). When we examined signature mutation rates in pol2-M644G cells harboring both the rev3Δ and pol30-119 mutations, the rates were similar to those in the pol2-M644G pol30-119 strain, indicating that rev3Δ and pol30-119 act epistatically in pol2-M644G dependent A > T hotspot mutation formation (Table 2). Altogether, we deduce from our data (Table 2) that the formation of leading strand signature mutations in URA3 in pol2-M644G entails a major PCNA ubiquitination and Polζ dependent pathway (Fig. 2), and we suggest that an alternative Polζ and PCNA ubiquitination independent pathway would account for the residual A > T signature mutations that remain in the absence of PCNA ubiquitination or Polζ. The exonuclease defective pol2-4 mutation confers a similar rate of signature mutations as pol2-M644G We and others have previously observed A686T and A279T hotspot mutations occurring in the URA3-OR2 reporter gene in strains harboring the pol2-4 mutation, defective in the Polε 3′→5′ proofreading exonuclease (3,20). This was unexpected since purified Pol2-4 Polε does not exhibit a bias for the generation of dTTP:T mispairs over dATP:A mispairs (14,15). To examine this further, we determined the rates of A > T signature mutations in the pol2-4 strain. The spontaneous forward mutation rate in URA3 in the pol2-4 strain was 44-fold higher than in the wild type strain (Table 3). Remarkably, the rate of specific A > T signature mutations was similar to that in the pol2-M644G strain. For instance, the rate of A686T formation was 15.8 × 10⁻⁸ in the pol2-M644G strain (Table 2) and 14.3 × 10⁻⁸ in the pol2-4 strain (Table 4). The A279T mutation rate in the pol2-M644G and pol2-4 strains was 4.0 × 10⁻⁸ and 6.0 × 10⁻⁸, respectively (Tables 2 and 4). Overall, compared to the WT strain, A > T mutations were elevated 1100-fold in the pol2-M644G strain, and 1300-fold in the pol2-4 strain (Tables 2 and 4). A > T signature mutations in pol2-4 are dependent upon PCNA ubiquitination and Polζ Since the formation of pol2-M644G dependent A > T signature mutations requires PCNA ubiquitination and Polζ, we next examined whether PCNA ubiquitination and Polζ were also required for pol2-4 dependent signature mutations. As shown in Table 3, the spontaneous URA3 forward mutation rate in pol2-4 was lowered 7- to 8-fold by either the rev3Δ, the pol30-119, or the rev3Δ pol30-119 double mutation. The overall rate of A > T mutations dropped by 13-fold in the pol2-4 rev3Δ pol30-119 strain, similar to that in the pol2-4 rev3Δ or in the pol2-4 pol30-119 strains (Table 4). Our results that the overall rate of A > T mutations in the pol2-4 rev3Δ pol30-119 strain is reduced to the same extent as in the pol2-4 rev3Δ or pol2-4 pol30-119 strains concur with an epistatic interaction of rev3Δ with pol30-119 in pol2-4 Polε dependent mutation generation (Table 4). Altogether, we infer from these data that the A > T signature mutation formation observed in the pol2-4 strain occurs via a pathway involving PCNA ubiquitination and Polζ (Fig. 2); another pathway that operates independently of PCNA ubiquitination and Polζ would account for the mutations that remain. The sequence data for the various strains are shown in Figures 3-6. Discussion Signature mutations in pol2-M644G do not signify a Polε role in leading strand replication Polε has been implicated as the leading strand replicase, in part from the evidence that the elevated rate of A > T signature mutations observed in pol2-M644G yeast strains correlates with an extreme bias of M644G Polε for the formation of dTTP:T mispairs that would occur in the leading strand. During replication, M644G Polε would therefore have a high propensity for dTTP:T mispair formation and for proficiently extending synthesis from those mispairs, rather than proofreading them. However, we find that these signature mutations are Polζ-dependent and that they require ubiquitination of PCNA.
If A > T mutations were generated by pol2-M644G Polε as the leading strand replicase via the formation and extension of synthesis from dTMP:T mispairs, then there would have been no need for Polζ. Thus, by that measure, i.e. the formation of leading strand signature mutations, the requirement of Polζ would suggest that it too is a major replicase for the leading strand, which it is not. Furthermore, the reduction in URA3 signature mutations by pol30-119 implies that their formation depends upon the ubiquitination of PCNA, a process not required for replication of the leading strand. Thus, the high incidence of spontaneously arising A > T signature mutations in the pol2-M644G yeast strain is not an indicator of the role of Polε as the major leading strand replicase. Leading strand signature mutations result from lack of removal of Polδ misinsertions in the absence of proofreading by Polε Remarkably, the yeast pol2-4 mutation confers a nearly identical increase in the rate of A > T signature mutations in the URA3 reporter gene as the pol2-M644G mutation. Thus, the A > T mutations in pol2-M644G cells which were thought to have resulted from the ~40-fold bias of M644G Polε for dTTP:T mispair formation (1) arise at the same high rate in pol2-4 cells, despite the fact that this exonuclease deficient polymerase exhibits no bias for generating dTTP:T mispairs (14,15). Hence, these pol2-4 dependent leading strand-specific A > T signature mutations in URA3 must derive from a process that is not dependent upon Polε mispair insertion, but rather upon the lack of removal of dTTP:T mispairs already present in the leading strand. The only way to explain these results is that A > T mutations in pol2-M644G and pol2-4 cells derive from a major role of Polδ in the replication of the leading strand (3), and that they reflect Polδ misinsertions which escape proofreading by its own 3′→5′ exonuclease and which are recalcitrant to removal by MMR (21). Thus, A > T signature mutations would accumulate on the leading strand in these Polε mutants because of the reduced ability of pol2-M644G Polε to recognize (13), and the inability of pol2-4 Polε to proofread, such Polδ generated T:T mispairs, and not because mutant Polε generates dTTP:T mispairs at a high rate during replication. Somatic Polε proofreading domain mutations in cancers The conclusions of this study imply that the high prevalence of mutations that occur in a large variety of cancers harboring somatic Polε proofreading domain mutations (22)(23)(24)(25)(26)(27)(28)(29) derives from PCNA ubiquitination and Polζ dependent extension of synthesis from Polδ generated mispairs on the leading strand that do not get removed in the absence of Polε proofreading function. Furthermore, the indispensability of Polδ for replication of both the DNA strands (3) explains the dearth of somatic Polδ proofreading domain mutations; and the requirement of Polε proofreading activity for the removal of specific Polδ generated mispairs on the leading strand explains the high prevalence of somatic Polε proofreading domain mutations that occur in cancer genomes (29).
Dispensability of Polε polymerase activity for viability In striking contrast to the indispensability of Polδ polymerase activity for viability (30)(31)(32)(33), the lack of the N-terminal Polε polymerase domain still supports viability, although cell growth is affected (8). Nevertheless, the observation that the lethality of pol2Δ cells is efficiently rescued by the pol2 mutation that is defective in its polymerase activity and in its PCNA binding PIP domain (34) reinforces the dispensability of Polε polymerase activity for cell survival. These results, the evidence that Polδ signature mutations occur on both DNA strands in pol3-L612M msh2Δ (3,35), and the finding that defects in Polε proofreading activity account for Polε leading strand signature mutations in pol2-M644G or pol2-4 cells (this study) can be explained only if Polδ replicated both the DNA strands and Polε contributed primarily to DNA repair roles on the leading strand. Yeast strains All genetic experiments were carried out in isogenic derivatives of the S288C-based yeast strain BY4741 (MATa his3Δ1 leu2Δ0 met15Δ0 ura3Δ0) (36). The pol2-4 and pol2-M644G mutations were integrated into the yeast genome by direct replacement of the wild-type POL2 gene using either pPOL550 or pPOL520, respectively (3). The pol2-pip (FF1199,1200AA) mutation was generated by PCR using mutagenic oligonucleotides, and the resulting PCR fragment was subcloned into the Pol2 direct replacement vector, generating pPOL551. The pol2-M644G, pip double mutant replacement plasmid, pPOL779, was constructed similarly. Yeast strains harboring the pol2-M644G, pol2 pip, and pol2-M644G pip mutations were generated by transformation with the respective plasmids digested with FspI/SwaI restriction endonucleases, and selected for growth on synthetic complete (SC)-uracil media. Excision of the URA3 selectable marker integrated into the 5′ UTR of pol2 was selected by plating on media containing 5-fluoro-orotic acid (FOA) and confirmed by PCR analysis of yeast genomic DNA. To generate yeast harboring the pol2-4 pip double mutation, the pol2 pip yeast strain YPO-861 was transformed with pPOL550 digested with EcoRI, which integrates the pol2-4 mutation while leaving the pol2 pip mutation intact. The rev3Δ mutation was generated by transformation with plasmid pRev3.75 digested with EcoRI/BamHI, and the pol30-119 mutation was integrated into the genome by gene replacement with plasmid pPCNA1.44 digested with Asp718/XbaI. Loss of the URA3 gene-blaster was selected by plating cells on 5-FOA media. All genomic mutations were confirmed by restriction enzyme digestion and/or by sequence analysis of PCR products amplified from yeast genomic DNA. URA3 forward mutation analysis To monitor spontaneous forward mutations of URA3 integrated near ARS306, the various yeast strains were transformed to URA3+ with pBJ2176 digested with XhoI/SalI, which targets the integration of the URA3 gene in the antisense orientation (OR2) 1100 bp to the left of ARS306, between the FUS1 and HBN1 genes, in chromosome 3. We previously showed that integration of URA3 at this genomic position in the yeast genome does not alter the firing of ARS306 (3). URA3 to ura3 mutation rates and spectra Spontaneous forward mutation rates of URA3 OR2 were determined for each yeast strain using the method of the median (37). For each strain, 9 to 15 independent cultures, each started from 100 URA3+ cells, were grown in 3 ml of YPD medium for 3 days.
Cells were sonicated, harvested by centrifugation, and then washed and resuspended in sterile water. To determine the median number of mutations arising in the cultures, appropriate cell numbers were plated on SC complete media containing 5-FOA. To determine cell culture viability, appropriate dilutions were plated on SC complete media (Sunrise Science Products). Experiments were repeated 3 to 4 times. For sequence analyses, additional independent cultures were grown as described above, washed, and plated on media containing 5-FOA. A single FOA-resistant colony arising from each culture was patched onto YPD, and genomic DNA was extracted. The ura3 gene was amplified via PCR, and the products were sequenced using oligos LP2221 and LP2222 (3). Data availability All of the study data are included in the article. Funding and additional information-This study was supported by National Institutes of Health (NIH) grant R01-GM129689 (to S. P.). The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.
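For readers who want to reproduce the mutation-rate analysis described above, the sketch below assumes that "the method of the median" refers to the Lea-Coulson (1949) median estimator, which is the standard fluctuation-analysis reading of that phrase; the mutant counts, cell numbers, and sequencing tally are invented, not the paper's data. The last step shows how a signature-specific rate can be obtained by scaling the overall rate by the fraction of sequenced mutants carrying the signature change, as done for the A686T and A279T rates in Tables 2 and 4.

```python
# Sketch of a "method of the median" mutation-rate estimate, assuming the
# Lea-Coulson (1949) median estimator is meant. All numbers are invented.
import math

def lea_coulson_m(r_median, lo=0.01, hi=1e6):
    """Solve r/m - ln(m) = 1.24 for m (expected mutations per culture)."""
    f = lambda m: r_median / m - math.log(m) - 1.24
    for _ in range(200):                      # bisection; f decreases in m
        mid = math.sqrt(lo * hi)
        if f(mid) > 0:
            lo = mid
        else:
            hi = mid
    return math.sqrt(lo * hi)

counts = sorted([3, 7, 2, 12, 5, 9, 4, 6, 8])  # FOA-resistant colonies/culture
r_med = counts[len(counts) // 2]
m = lea_coulson_m(r_med)
n_cells = 2.0e7                                # final cells/culture (invented)
rate = m / n_cells
print(f"mutations/culture m={m:.2f}, rate={rate:.2e} per cell per generation")

# Partition the overall rate by the mutation spectrum: overall rate times
# the fraction of sequenced ura3 mutants carrying the signature change.
sig_fraction = 40 / 120                        # invented sequencing tally
print(f"signature rate ~ {rate * sig_fraction:.2e}")
```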
2023-06-14T06:17:22.483Z
2023-06-01T00:00:00.000
{ "year": 2023, "sha1": "5356020e12e731d78203c230759a833bc00dfcaa", "oa_license": "CCBY", "oa_url": "http://www.jbc.org/article/S0021925823019415/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "bcf2402dfc494b29e883cd3eaafac5acb398b431", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
45752607
pes2o/s2orc
v3-fos-license
Multiple Functions of Jab1 Are Required for Early Embryonic Development and Growth Potential in Mice* Jab1 interacts with a variety of signaling molecules and regulates their stability in mammalian cells. As the fifth component of the COP9 signalosome (CSN) complex, Jab1 (CSN5) plays a central role in the deneddylation of the cullin subunit of the Skp1-Cullin-F box protein ubiquitin ligase complex. In addition, a CSN-independent function of Jab1 is suggested but is less well characterized. To elucidate the function of Jab1, we targeted the Jab1 locus by homologous recombination in mouse embryonic stem cells. Jab1-null embryos died soon after implantation. Jab1−/− embryonic cells, which lacked other CSN components, expressed higher levels of p27, p53, and cyclin E, resulting in impaired proliferation and accelerated apoptosis. Jab1 heterozygous mice were healthy and fertile but smaller than their wild-type littermates. Jab1+/− mouse embryonic fibroblast cells, in which the amount of the Jab1-containing small subcomplex, but not that of CSN, was selectively reduced, proliferated poorly, showed an inefficient down-regulation of p27 during G1, and were delayed in the progression from G0 to S phase by 3 h compared with the wild-type cells. Most interestingly, in Jab1+/− mouse embryonic fibroblasts, the levels of cyclin E and neddylated Cul1 were unchanged, and p53 was not induced. Thus, Jab1 controls cell cycle progression and cell survival by regulating multiple cell cycle signaling pathways. The successful identification and characterization of the COP9 signalosome (CSN) complex from yeast (1-3) to mammals (4) and higher plants (5,6) revealed that the function of CSN is not restricted to light/dark-mediated signal transduction in plants but is connected to divergent biological responses (7)(8)(9)(10)(11) such as development (12)(13)(14)(15), oogenesis (14,16,17), immune response (18,19), DNA metabolism (20), apoptosis (21), checkpoint control (22), DNA repair (23), and cell cycle control (1,24,25). The core CSN complex is composed of eight subunits (4), and disruption of one component results in the loss of the whole complex (13,26-28). Protein kinases capable of phosphorylating c-Jun, NF-κB, and p53 are associated with CSN (4,29,30). p53 is destabilized by CSN-mediated phosphorylation at Thr-155 in proliferating cells, and disruption of CSN leads to accumulation of p53 and eventual cell cycle arrest/apoptosis (31). CSN interacts with the Skp1-Cullin-F box protein (SCF) ubiquitin ligase and removes a ubiquitin-like polypeptide, Nedd8, from the Cul subunit (deneddylation) (32,33), thereby regulating the ligase activity (34,35). The JAMM domain within the Jab1/CSN5 subunit plays an essential role in this reaction (32), but the monomeric form of the Jab1/CSN5 polypeptide alone fails to manifest the activity. Disruption of the CSN complex in Drosophila results in accumulation of the hyperneddylated Cul subunit and of the cyclin E polypeptide (one of the substrates of SCF) and in failure of oogenesis (17). It is not clear whether all biological responses correlated with CSN are mediated by phosphorylation or deneddylation (27). Other biochemical functions associated with CSN include regulation of the subcellular localization of the target protein (24,36,37) and recruitment of the deubiquitination enzyme (38), but the mechanisms involved remain to be investigated.
In addition to being a component of the intact 450-kDa CSN complex, CSN subunits are found as a small complex or a monomeric form (12,13,26,28,39). Because disruption of one CSN subunit does not necessarily result in the same phenotype as nullification of the others (28), it seems likely that each CSN subunit has its own unique function (40) in addition to being a component of the CSN complex. Jab1 (also known as the fifth component of the CSN complex, CSN5) has been shown to interact with and control multiple intracellular signaling molecules (41), including c-Jun (42), p27 (24), LFA-1 (integrin) (43), MIF (18), HIF-1α (44), Smad4 (45), Bcl3, IκBα, p53 (31), and CUL1 (SCF) (33,46). Although Jab1 was shown to play a critical role in other organisms such as Caenorhabditis elegans (14) and Drosophila (13,16), some Jab1 targets are unique to mammalian cells, and it is important to know how these targets are regulated in living organisms. Jab1 was also found as a smaller form outside the CSN complex in various species (12,13,26,39). It was originally found as a monomeric form in Arabidopsis (26) and later as a smaller cytoplasmic complex in mammalian cells (39). Although Jab1 plays an essential role in phosphorylation, deneddylation, and translocation, it remains to be determined how these activities are regulated by the large CSN complex and possibly by the small complex. Furthermore, Jab1 was found to be highly expressed in human cancers (47)(48)(49)(50)(51)(52)(53)(54)(55)(56), which, in some cases, correlates with a poor prognosis and low-level expression of the CDK inhibitor p27. To understand better the function of Jab1 in development, cell proliferation, and oncogenesis, we targeted the Jab1 locus by homologous recombination in ES cells. Jab1-null embryos did not survive, whereas Jab1 heterozygous mice were viable and fertile but smaller than their wild-type littermates. Jab1−/− cells lacked other CSN components and expressed higher levels of p27, p53, and cyclin E, resulting in impaired proliferation and accelerated apoptosis. In contrast, Jab1+/− MEF cells, in which the amount of the small Jab1 subcomplex but not that of CSN was selectively reduced, were delayed in the progression from G0 to S phase by 3 h due to an inefficient down-regulation of p27 during G1. Most interestingly, in Jab1+/− MEFs, the levels of cyclin E and neddylated Cul1 were unchanged, and p53 was not induced. Thus, Jab1 controls cell proliferation and survival in mice through multiple cell cycle regulatory pathways in both CSN-dependent and -independent ways. EXPERIMENTAL PROCEDURES Targeted Disruption of the Mouse Jab1 Gene-The gene structure of mouse Jab1 was determined by PCR and DNA sequencing² and subsequently confirmed by Blast (NCBI) analysis of the complete cDNA sequence (AF068223) and the genomic sequence (NT039169). The Jab1 targeting vector was constructed by subcloning a 1-kb genomic DNA fragment containing the sequence upstream from the initial methionine and a 5-kb genomic DNA fragment downstream of exon 6, both of which had been amplified by genomic PCR and confirmed by sequencing, into the ploxPNT vector at the EcoRI and XhoI sites, respectively. The targeting vector was linearized with XhoI and was electroporated into mouse RF8 ES cells (57). ES clones selected in 200 µg/ml G418 and 0.2 µM FIAU were subjected to Southern blot analysis using probes external to both the 5′ and 3′ ends of the targeting construct (Fig. 1, a and b).
We did not detect truncated polypeptides from the putative open reading frame (corresponding to amino acids 286-334) in exons 7 and 8 by Western blotting using an antibody recognizing the C terminus of the Jab1 protein, indicating that the mutant allele is truly a null locus. Jab1+/− ES cells were microinjected into blastocyst stage C57BL/6 mouse embryos. Chimeric males were crossed to C57BL/6 females, and offspring were genotyped by genomic PCR using Jab1-specific primers as follows: a (5′-CTC TCT GTC CTG GGC TTT CAT TAC CAT TTC-3′), b (5′-GCT CTC CAC ACC CTT CAT CTC CCA CCC CTC-3′), and a neo gene-specific primer c (5′-CCT GCG TGC AAT CCA TCT TGT TCA CA-3′) (Fig. 1, a and c). p53+/− mice were purchased from Taconic Farms. p27+/− and p27−/− mice were generated basically according to the method described previously (58). Histology and Immunohistochemistry-Uteri from pregnant females were dissected, fixed overnight in 4% paraformaldehyde, embedded in paraffin, cut into 4-µm sections, and stained with hematoxylin and eosin. For antibody staining, sections were deparaffinized, rehydrated, and placed in a 3% solution of hydrogen peroxide for 20 min. This was followed by blocking in 5% bovine serum albumin for 30 min. After incubation with primary antibodies overnight at 4°C, peroxidase-conjugated secondary antibody was applied (Histofine Simple Stain MAX PO, Nichirei Co.). The staining was visualized with diaminobenzidine, and the sections were counterstained with hematoxylin. The antibodies used included rabbit polyclonal antibodies to Jab1/CSN5 (1:100) (24), CSN1 (1:50) (39), cyclin E (M-20, Santa Cruz Biotechnology, 1:50), and Cul1 (Zymed Laboratories Inc., 1:50), and mouse monoclonal antibodies to p27 (Transduction Laboratories, 1:50) and p53 (Oncogene Science, 1:50 and Calbiochem, 1:250). We incubated sections with either no primary or no secondary antibodies to control for nonspecific staining (data not shown). For detection of apoptosis, TUNEL staining of the sections was carried out according to the manufacturer's instructions (ApopTag Red, Intergen). Blastocyst Outgrowth and Immunofluorescence Analysis-Blastocysts were isolated from the uterus at embryonic day 3.5 (E3.5), cultured in ES medium in 5% CO2 at 37°C, and photographed. Cells cultured on a Lab-Tek II Chamber (Nalge Nunc) were fixed in 3% paraformaldehyde, permeabilized in 0.5% Triton X-100, stained with primary antibodies, and incubated with fluorescein isothiocyanate-linked anti-mouse and Texas Red-linked anti-rabbit IgG (Amersham Biosciences). For the determination of BrdUrd incorporation, cells were incubated in 10 µM bromodeoxyuridine for 24 h, stained with anti-Jab1 rabbit polyclonal antibody followed by Texas Red-linked anti-rabbit IgG, treated with 1.5 M HCl, and stained with anti-BrdUrd mouse monoclonal antibody (Amersham Biosciences) and fluorescein isothiocyanate-linked anti-mouse IgG. The TUNEL assay was performed with Jab1-stained cells according to the manufacturer's instructions (see above). The cell samples were viewed by phase-contrast or fluorescence microscopy. The genotype of the cultured embryos was determined by anti-Jab1 immunofluorescence staining and by genomic PCR using primers a, b, and c (Fig. 1, a and c). MEF Assays-Primary MEFs were isolated from E13.5 embryos and cultured in Dulbecco's modified Eagle's medium supplemented with 10% fetal bovine serum (FBS).
For growth curve assays, only early passage MEF cells (passage 2-4) were used; cells were seeded at 10⁵ cells per 6-cm plate and quantified at given time points. The 3T6 protocol was employed by plating 2 × 10⁶ cells on 10-cm plates and replating at the same cell density every 3 days. To analyze S phase entry, 2 × 10⁵ cells per 6-cm dish were starved in Dulbecco's modified Eagle's medium supplemented with 0.1% FBS for 48 h before being stimulated with 10% FBS for given periods. Collected cells were suspended in a 1-ml solution of 0.1% sodium citrate and 0.1% Triton X-100 containing 50 µg/ml of propidium iodide and treated with 1 µg/ml of RNase for 30 min at room temperature. Fluorescence from the propidium iodide-DNA complex was measured with a FACScan flow cytometer (BD Biosciences), and the percentages of cells in the G1, S, and G2/M phases of the cell cycle were determined with Cell Fit cell cycle software. Protein Analyses-Cell lysis, gel electrophoresis, and immunoblotting were performed using standard procedures (24,39,59). Developed films were quantitatively analyzed with a densitograph (ATTO, Japan). Rabbit polyclonal antibodies against γ-tubulin, p27, Cul1, cyclin E, Cdk4, Skp2, p21, p53, MDM2, and p16 and a mouse monoclonal antibody to cyclin D1 were purchased from Santa Cruz Biotechnology. A mouse monoclonal antibody to mouse p27 was obtained from Transduction Laboratories. For nondenaturing gel electrophoresis, cells were lysed in modified EBC buffer containing 0.1% digitonin as a detergent. Lysates were separated in a pre-made nondenaturing gel (Biocraft) without SDS and analyzed by immunoblotting. The fractions from the glycerol gradient centrifugation (39) containing only the large CSN complex or the small Jab1 complex were separated by nondenaturing gel electrophoresis, and the positions of each complex were determined. In this assay, the amount of the small Jab1 subcomplex was 10-20% of the total Jab1 protein (equivalent to the results of the glycerol gradient centrifugation analysis), and the CSN complex migrated more slowly and appeared as a broad band, which contained all eight CSN subunits (CSN1-8), suggesting multiplicity of the modified CSN complex.³ The in vitro kinase assay for Cdk2 and Cdk4 was performed as described (59) using a recombinant retinoblastoma protein as a substrate. The phosphorylated retinoblastoma protein was separated by SDS-PAGE, and ³²P incorporation was quantified with a Fuji BAS-2500 image analyzer. To obtain quantitative results, we routinely used several different MEF lines (usually five lines or more) prepared from different mice for each experiment. The averages and the standard deviations were calculated and are shown in the text. Only representative data are shown in the figures. RESULTS Targeted Disruption of the Jab1 Gene and Requirement of Jab1 in Early Embryogenesis-To examine the physiological requirement of Jab1 in development, cell proliferation, and cell survival, we attempted to disrupt the Jab1 gene in mice. The murine Jab1 gene spans about 13 kb, containing eight coding exons (Fig. 1a). Our strategy was to delete coding exons 1-6, including the start codon, and replace them with a neo marker gene through homologous recombination in ES cells. Correct targeting was confirmed in two independent ES clones by Southern blot hybridization analysis with 5′ and 3′ probes, respectively (Fig. 1b). One ES clone was injected into blastocysts from C57BL/6 mice to generate chimeras, and germ line transmission was established.
Jab1+/− heterozygous mice were fertile and were intercrossed to produce Jab1−/− mice. No nullizygous mice were born live among 186 offspring of different heterozygous intercrosses, and the ratio of heterozygous mice to wild-type mice was 2.0 to 1 (Table I), indicating that loss of Jab1 was embryonic lethal. Embryos were isolated from timed heterozygous intercrosses from embryonic day 8.5 to as early as E3.5 (Table I) and were genotyped by genomic PCR (Fig. 1c) or histochemical staining with antibody to Jab1 (Fig. 2). No viable Jab1−/− embryos were found at E8.5, and Jab1−/− embryos at E7.5 and E6.0 exhibited disrupted development compared with wild-type and heterozygous littermates (Fig. 2). Normal E7.5 embryos underwent gastrulation and were transformed into a multilayered, three-chambered conceptus containing mesoderm, whereas nullizygous embryos were severely growth-retarded, smaller, disorganized, and started to be resorbed, although extra-embryonic cells were retained. E6.5 Jab1−/− embryos were slightly smaller in size and abnormal in shape (Fig. 2), although trophoblast giant cells were seen. At E3.5, however, nullizygous embryos of normal appearance were evident (Table I and Fig. 3, see below). These results indicate that Jab1−/− embryos survived to the blastocyst stage and died soon after implantation without undergoing gastrulation. Loss of Jab1 Results in Accelerated Apoptotic Cell Death and Elevated Levels of Cyclin E, p27, and p53-To assess the effect of homozygous disruption of Jab1 on early embryos, an immunohistological analysis using specific antibodies was carried out on E6.5 embryos (Fig. 2). CSN components (CSN1 and CSN5/Jab1) were ubiquitously expressed in the early stage embryos, and their levels were markedly reduced in Jab1−/− embryos. Among putative targets of the Jab1/CSN pathway, the tumor suppressor p53 (31) and the Cdk inhibitor p27 (24) were completely absent in wild-type embryos and markedly induced in Jab1−/− embryonic cells. Cyclin E was expressed in wild-type cells, its level was higher in mutant embryos, and cyclin E up-regulation was more prominent in extra-embryonic cells. The total level of Cul1 expression was the same regardless of the Jab1 genotype. In addition, TUNEL staining of the sections showed that the apoptotic process was accelerated in nullizygous embryos. To investigate further the function of Jab1 by an alternative approach, we cultured blastocysts in vitro and examined their outgrowth (Fig. 3). Newly isolated Jab1−/− blastocysts were viable and morphologically indistinguishable from blastocysts of the wild-type and the heterozygous mice (Table I). Both wild-type and Jab1−/− blastocysts, hatched from the zona pellucida, attached onto the culture dish and produced apparently normal trophoblast giant cells. The inner cell mass (ICM) cells in Jab1−/− embryos grew similarly to those of the normal littermates for 3 days, but the number of mutant ICM cells was greatly reduced after 5 days of culture. The BrdUrd incorporation assay showed that vigorous DNA synthesis occurred in normal ICM and trophoblast cells throughout the entire outgrowth, whereas in Jab1−/− blastocysts, ICM and trophoblast giant cells ceased to proliferate by the 4th day. In addition, the TUNEL assay revealed that nuclear fragmentation was enhanced in Jab1−/− ICM cells after day 3 of culture. FIG. 2. Histological analysis of normal and mutant embryos. Sections of uteri from pregnant females at E7.5 (a and b) and E6.5 (c-r) were stained with hematoxylin and eosin (a-d) and immunostained with antibodies to Jab1 (e and f), CSN1 (g and h), cyclin E (i and j), p27 (k and l), p53 (m and n), and Cul1 (o and p). A TUNEL assay was performed on sections of E6.5 embryos (q and r). Only the magnified embryonic portions are shown (e-r). The signal around the rim of the embryo stained with antibodies to p27 (k and l) and p53 (m and n) is background staining due to nonspecific binding of the mouse monoclonal antibody. Up-regulation of cyclin E was more prominent in extraembryonic cells (j). Marked down-regulation of CSN3 and CSN8 was also seen in E6.5 Jab1−/− embryos (data not shown). Immunofluorescent staining of the cultured blastocysts indicated basically the same results as obtained in the immunohistochemical analysis of the embryonic sections in Fig. 2: a reduction in CSN1 and an increase in cyclin E, p27, and p53. Thus, loss of Jab1 resulted in a disruption of the CSN complex, an increase in the levels of cyclin E, p27, and p53, cell cycle arrest, and enhanced cell death. So far, a nullizygous genetic background at the p27 and p53 loci has failed to rescue the embryonic lethality induced by the loss of Jab1 (data not shown), suggesting that Jab1 functions through multiple regulators to support embryonic cell proliferation, survival, and development. Jab1 Heterozygous Mice Are Smaller Than Wild-type Littermates-Jab1+/− animals were born with the expected frequency and were viable and fertile. However, they were significantly smaller than their wild-type littermates (Fig. 4a). The difference was marginal during the embryonic stage and immediately after birth, but body weight was reduced ~15% on average by the 15th week after birth (Fig. 4b). No particular organ was missing or selectively reduced in size, and Jab1+/− cells were no smaller than wild-type cells, suggesting that each organ consisted of fewer cells due to impaired cell proliferation. Impaired Cell Growth of Jab1 Hetero-MEF Cells-To evaluate the growth potential of Jab1 heterozygous cells, we isolated and analyzed embryonic fibroblasts (MEF) from the wild-type and Jab1+/− mice. When cultured according to the 3T6 protocol (relatively high density, plated at 4 × 10⁴ cells/cm² every 3 days), both wild-type and Jab1-heterozygous MEF cells proliferated (3-fold per passage) and entered a quiescent state (after the ~7th passage) with a similar growth rate and kinetics (data not shown). However, when they were spread at a lower cell density (4 × 10³ cells/cm²), Jab1+/− cells grew significantly slower than wild-type cells (Fig. 4c). (Note that the growth rate of Jab1+/− cells was still fast enough for them to reach confluence every 3 days with the 3T6 protocol.) Western blotting analysis showed that the total Jab1 level was reduced by 22.7 ± 5.0% in Jab1+/− cells (Fig. 4d). Because Jab1 is present both in the CSN complex and in a smaller form (12,13,26,39), we separated these two forms by a native-PAGE method (see "Experimental Procedures" for details), and we found that the level of the CSN complex was reduced by only 19.1 ± 5.9%, whereas the amount of the Jab1-containing small complex was markedly reduced (by 59.4 ± 1.6%) (for representative data, see Fig. 4d, and for quantitative measurement, see "Experimental Procedures" for the details).
Among the cell cycle regulators, we observed no major detectable differences in the expression levels of Cul1, cyclin E, cyclin D1, Cdk4, Skp2, p21, MDM2, p16, and p53. Furthermore, the amount of the neddylated Cul1 subunit (the slower migrating form) was equivalent between wild-type and Jab1 heterozygous cells (Fig. 4d). One exception was that the Cdk inhibitor p27 was significantly up-regulated in Jab1+/− cells (2.66 ± 0.89-fold) (Fig. 4d). Consistent with this observation, the quantitative in vitro kinase assay revealed that Cdk2- and Cdk4-associated kinase activities in Jab1+/− MEF cells were reduced by 31.7 ± 3.1 and 29.7 ± 3.9%, respectively (for representative data, see Fig. 4d). Serum-starved Jab1+/− MEFs entered S phase with delayed kinetics (~3 h) compared with their wild-type counterparts after serum stimulation (Fig. 4e). In these cells, the down-regulation of p27 during G1 was markedly impaired (Fig. 4f). These results suggest that Jab1 specifically participates in the regulation of p27 in vivo. FIG. 3. Blastocyst analysis in vitro. Wild-type (a-e and k-q) and Jab1−/− (f-j and r-x) E3.5 blastocysts were cultured in vitro for 24 (a and f), 72 (b and g), and 120 h (c and h). Blastocysts were incubated in the presence of BrdUrd for 24, 96, and 120 h, fixed, and immunostained with anti-BrdUrd antibody (d and i). Seventy-two-hour cultures were fixed (k and r) and assayed for apoptosis (e and j) or immunostained with antibodies to Jab1 (l and s), CSN1 (m and t), cyclin E (n and u), p27 (o and v), p53 (p and w), and Cul1 (q and x). Genotypes were determined by PCR after 3-5 days of culture (Fig. 1c) and by immunostaining with antibody to Jab1 (l and s). FIG. 4. Impaired growth of Jab1+/− cells. a, photograph of a Jab1+/− mouse and a control littermate at 15 weeks of age. b, body weights of representative Jab1+/− mice and control littermates. Mouse genotypes were determined by genomic PCR as described in Fig. 1c. c, growth curves of primary wild-type (black circles) and Jab1+/− (red circles) MEFs. Cells (1 × 10⁵) were plated onto a 6-cm dish and enumerated at the indicated time points. Data shown are means ± S.D. derived from four independent clones. d, immunoblot analysis of wild-type and Jab1+/− MEFs with antibodies directed against Jab1 (total Jab1, CSN, and small complex), γ-tubulin (γ-Tub), p27, Cul1, cyclin E, p53, p21, MDM2, cyclin D1, Cdk4, Skp2, and p16. Cell lysates were separated by standard SDS-PAGE (for total Jab1, γ-tubulin, p27, Cul1, cyclin E, p53, p21, MDM2, cyclin D1, Cdk4, Skp2, and p16) and by nondenaturing PAGE (for CSN and small complex). Representative results of the in vitro kinase assay for Cdk2 and Cdk4 are also shown (Cdk2-kinase and Cdk4-kinase, respectively). e, kinetics of S phase entry after restimulation of serum-starved, wild-type (black circles) and Jab1+/− (red circles) MEFs. Data are means derived from three independent clones. f, immunoblot analysis of p27 in wild-type and Jab1+/− MEFs after restimulation of serum-starved cells. An antibody against γ-tubulin was used as a loading control. DISCUSSION In this study, we showed that Jab1 plays an important role in early embryonic development and cell proliferation in mice. Jab1/CSN5 is essential in other multicellular organisms such as Drosophila (12) and C.
elegans (14), and because other CSN subunits are also essential in these organisms (Drosophila (13), Arabidopsis (15), and mice (21,60)), one may presume that the whole CSN complex is required for the development and maintenance of multicellular organisms. In the case of Arabidopsis, the situation is more complicated because of the duplication of the CSN5/Jab1 gene (AJH1 and -2), and a double mutation is required to reveal the phenotype. Embryonic lethality at a similar developmental stage, with activation of p53 and/or dysregulation of cyclin E, is commonly seen in mice deficient in Uba3 (61) and Cul1 (62,63), in addition to CSN2 (60), CSN3 (21), and CSN5/Jab1, indicating that the integrity of the NEDD8-SCF-CSN pathway is critical during early embryogenesis. The cause of embryonic death is not fully uncovered. Up-regulation of cyclin E seems to accelerate cell proliferation rather than provoke cell death, and induction of p53 may be only partly involved in embryonic lethality, because the p53−/− genetic background did not rescue lethality in Jab1−/− embryos, and knockdown of Jab1 in human cancer-derived cell lines resulted in cell death regardless of the p53 genotypes.⁴ Furthermore, the cause of embryonic death could differ between mice lacking different components; no substantial increase in the TUNEL signal was observed in CSN2−/− mice, whereas a marked enhancement of apoptosis was seen in CSN3−/− and Jab1−/− mice. The loss of subunit-specific function may contribute to the difference. Expression of p53 was commonly seen in mice with a defective NEDD8-SCF-CSN pathway, but the inductive mechanism is not clear because the SCF complex is unlikely to be the ubiquitin ligase for p53. It is possible that embryos that failed to develop adequately may be discriminated by the induction of p53 expression. Alternatively, a recently discovered p53 ubiquitin ligase, COP1 (64), may participate in this process, and loss of the CSN subunit possibly inactivates COP1, resulting in activation of p53, analogous to the case in Arabidopsis in which loss of CSN precludes COP1 from entering the nucleus, thereby activating the transcription factor Hy5 in the nucleus (37). Embryonic lethality is common in nullizygous mice missing different CSN subunits (21,60), whereas the up-regulation of p27 in both Jab1−/− and Jab1+/− cells and the impaired cell proliferation in heterozygous mice are features unique to Jab1/CSN5. Among the CSN components, it is often observed that disruption of one subunit does not necessarily result in the same phenotype as the loss of other subunits (28). This could be because each subunit plays a slightly different role in CSN-mediated deneddylation and phosphorylation. Alternatively, each subunit may form unique complexes other than the CSN complex to exert its specific functions. Several researchers, including ourselves, have found that CSN subunits exist as a smaller form (a smaller complex or a monomeric form) outside the CSN complex in a variety of organisms (12,13,26,28,39). In the case of Jab1, a monomeric form was originally found in Arabidopsis, and most interestingly, its appearance was regulated by other gene products, COP1 and DET1 (26). We reported previously that Jab1 forms a smaller complex in mouse fibroblasts besides CSN, which contains only a subset of CSN components (39). We recently found that Jab1 forms multiple different subcomplexes, which do not necessarily contain other CSN subunits, in mammalian cells.⁴
The precise function of Jab1-containing subcomplexes, their relationship to the large CSN complex, and their regulation should be investigated in detail in the near future. The intracellular abundance of the Cdk inhibitor p27 is known to be regulated by cyclin E-Cdk2-mediated phosphorylation, Skp2-SCF-mediated ubiquitination, and proteasome-mediated degradation in the late G1-S phase in mammalian cells (65, 66). It is also suggested that there are additional mechanisms that regulate the intracellular abundance of p27, especially in early to mid-G1 (67). The participation of Jab1 (24) and CSN (25) in p27 regulation was shown previously, and both the small Jab1-containing subcomplex (39) and the large CSN complex (25) are suggested to be involved, but the precise mechanism has yet to be fully uncovered. In a mouse reverse genetic approach, Skp2−/− mice exhibit up-regulation of p27 (68), whereas Uba3−/− (61) and Cul1−/− (62, 63) embryos do not seem to contain higher levels of p27 (Uba3−/− embryos were shown to express higher levels of p57, a family member of p27-related Cdk inhibitors). This could be because these embryos died long before the effect on p27 became manifest. The results of this study showed that the Jab1 gene-targeted embryos and cells contained up-regulated p27, and this will help us to understand the overall mechanism of p27 regulation in the G1 phase. It is a tempting hypothesis that the Jab1-containing subcomplex is a part of the p27 regulatory mechanism in early to mid-G1, and CSN controls p27 through Skp2-SCF in late G1. Furthermore, high expression levels of Jab1 are observed in human cancers and are sometimes correlated with a poor prognosis and a low level of p27 (47-56). Jab1+/− cells and animals may help determine the role of Jab1 in the regulation of potential target molecules in cancer such as p27 and the pathologies that accompany their dysregulation.
2018-04-03T06:10:03.371Z
2004-10-08T00:00:00.000
{ "year": 2004, "sha1": "dd80a7c91399915ec077717f00b2d33fbba4e3a3", "oa_license": "CCBY", "oa_url": "http://www.jbc.org/content/279/41/43013.full.pdf", "oa_status": "HYBRID", "pdf_src": "Highwire", "pdf_hash": "0587d4da6d96de89dd06c4b14443880e8cd0ff10", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
248620661
pes2o/s2orc
v3-fos-license
Feasibility of Optical Bearing Fabrication Using Radiation Pressure A three-dimensional (3D) printer was used to create a model device to study the reduction of friction generated by rotation and to investigate the possibility of friction reduction in microelectromechanical systems (MEMSs) using light as a future technology. Experiments on this model showed that friction could be reduced using the light radiation pressure. In addition, the possibility of reducing the effect of the friction generated during rotation was demonstrated by adding a mechanism that reduces friction based on the radiation pressure to a rotating rotor mechanism. The effectiveness and associated problems of 3D printers as a fabrication technology for MEMSs were explored. Introduction Microelectromechanical systems (MEMSs) incorporate microelements on a single substrate using various nano/microfabrication technologies. Low power consumption, high performance, and low cost are achieved using wafer-level batch processing, together with new analysis and measurement technologies based on quantum mechanics [1][2][3]. Electrical phenomena, such as electrostatic forces, and piezoelectric technologies, such as piezoelectric elements, have been used as driving power sources for MEMSs [3]. However, electrical driving requires fabrication techniques that can precisely create microstructures, such as those for micro-wiring, and raises issues of electromagnetic noise. Furthermore, achieving the intended functions of MEMSs by simply downsizing conventional driving principles to the micro scale without changing the conventional structure is challenging [3]. A typical case is the approach to friction [3]. Dust suspended in the air has mass; under Newtonian mechanics alone, it should fall promptly under the influence of gravity. However, dust does not fall as readily as an apple owing to the friction with the air acting on its surface and the effect of viscosity. In addition, as a microstructure becomes finer, the Reynolds number decreases, and mechanical elements must operate in a low-Reynolds-number regime, even in air. The behavior of dust in the air and the condensation of water by capillary action are examples of peculiar behavior in the microscopic region [4]. Because many similar examples exist, MEMSs should be designed with attention not only to the difference in scale from the mechanical elements of daily life but also to phenomena that deviate from the driving principles of conventionally sized structures [3]. In particular, the friction phenomenon in the microscopic region differs from that in normal-size structures. In the early 1990s, the fabrication of micromotors based on semiconductor manufacturing was reported as a power source for MEMSs [5][6][7], which would be intensely affected by such friction. Without a sufficient friction reduction for the rotating body, fabricated motors could not function effectively over time owing to adhesive wear [8][9][10] between solid bodies. In conventional large-scale mechanical elements used in daily applications, the friction of the rotating shafts and other parts is reduced by ball bearings, thrust bearings, etc.
However, to use ball bearings or oil/air thrust bearings for mechanical elements with a rotating shaft in MEMSs, a microstructure must be fabricated in accordance with the physical phenomena of the microscopic region [11]. Although studies were conducted on the reduction of friction [12][13][14][15] using bearing balls at sizes of several hundred micrometers, only bearings with balls smaller than the MEMS body can realistically be used as microstructures in MEMSs. Furthermore, for MEMSs to be used under vacuum conditions in the future (e.g., in outer space), employing bearings that use oil and air based on the conventional driving principle will be challenging. In addition, a mechanism using repulsive electrostatic forces is typically considered in MEMSs [15]. However, the use of electrical phenomena requires fine design and fabrication of wiring and is affected by electromagnetic noise. In this study, we investigated the possibility of light-based friction reduction as a futuristic technology in MEMSs. Indeed, it differs from conventional technologies in that it does not use oil or air as a lubricant. Light exerts a force referred to as optical radiation pressure, as reported by Ashkin [16][17][18][19][20][21][22]. In this study, a model to examine the friction reduction by light was fabricated using a three-dimensional (3D) printer, and the possibility of friction reduction in MEMSs using light was examined experimentally. In addition, some problems are reported when a 3D printer is used to fabricate the models to examine the friction reduction by light. Radiant Pressure The concept of the radiation pressure of light, extensively investigated by Ashkin, was considered in the 17th century by J. Kepler in his discussion of the behavior of comet tails approaching the sun [16] and later by J. C. Maxwell in his electromagnetic field theory [23]. Moreover, its existence was experimentally demonstrated more than a century ago by P. N. Lebedev in Russia and by E. F. Nichols and G. F. Hull in the United States [16,24]. Under the concept of radiation pressure, a force is applied to a mirror when light is reflected off the mirror. In this case, according to a previous study [25], if the energy of the light is E and the speed of light is C, the momentum is |p| = E/C, while the force is F = dp/dt. If the power of the light is P and the angle of incidence is θ, the magnitude of the generated force can be expressed as F = 2rP cos θ/C, where r = R + (1 − R)α/2, R is the reflectivity of the mirror, and α is the fraction of the non-reflected light absorbed by the mirror. Furthermore, Ashkin used Stokes' law to relate the velocity of a sphere in a viscous fluid to the force owing to the radiation pressure, showing that the magnitude of the radiation pressure can be explained as a physical phenomenon [20]. In addition, the specific magnitude of the radiation pressure was measured. For example, when a platinum-coated silica sphere with a diameter of 10 µm is irradiated at 100 mW in water, as presented in the previous study [26], the sphere moves at a speed of 179.2 µm/s; according to Stokes' law, this corresponds to a force of 15.0 pN. When this radiation pressure is applied to an 8-tooth rotor with a diameter of 100 µm by irradiating light with a total power of 53 mW from two directions, the rotor rotates at a rotational velocity of 8.33 rpm (0.87 rad/s).
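As a quick plausibility check on these figures, the following minimal Python sketch (ours, not from the paper; the water viscosity is an assumed round value) evaluates the reconstructed mirror formula and the Stokes drag for the 10 µm sphere example:

import math

C = 2.998e8  # speed of light (m/s)

def radiation_force(P, theta, R, alpha):
    # Reconstructed mirror formula: F = 2*r*P*cos(theta)/C,
    # with r = R + (1 - R)*alpha/2 (R: reflectivity, alpha: fraction
    # of the non-reflected light absorbed by the mirror).
    r = R + (1.0 - R) * alpha / 2.0
    return 2.0 * r * P * math.cos(theta) / C

def stokes_drag(radius, velocity, eta=1.0e-3):
    # Stokes' law F = 6*pi*eta*r*v; eta ~ 1.0e-3 Pa*s for water near
    # room temperature (assumed value).
    return 6.0 * math.pi * eta * radius * velocity

# Ideal mirror (R = 1) at normal incidence under 100 mW: ~0.67 nN upper bound.
print(radiation_force(0.1, 0.0, 1.0, 0.0))
# Stokes drag on the 10-um sphere moving at 179.2 um/s in water: ~1.7e-11 N,
# i.e. ~17 pN, close to the 15.0 pN quoted from [26] (the small gap presumably
# reflects the viscosity/temperature assumed there).
print(stokes_drag(5.0e-6, 179.2e-6))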
When light is irradiated onto an object in this manner, a force is generated on the surface of the reflecting object. This phenomenon is used here to develop a new technology to reduce the friction between the contacting surfaces of MEMSs and achieve an optical MEMS [27][28][29]. Optical Bearing for Friction Reduction on the Side Wall of a Rotating Shaft In this study, we first examined the possibility of reducing friction using the light pressure for structures as large as 100 µm using the structure presented in Figure 1, which models the occurrence of friction. In the experiments, a 3D printer was used to fabricate a micromechanical element in which the outer wall of the shaft contacted the inner wall of a rotating ring that rotated around the shaft. The rotating ring generates friction with the base floor, as shown in Figure 1. As illustrated in Figure 1, a hollow ring (rotor) with a hole of diameter dh in a disk of diameter dr is inserted around a fixed shaft of diameter ds. Light is incident on the circumferential gap (ε1) between the inner wall of the hollow ring and the outer wall of the shaft. The structure is designed to reduce the friction between the two walls owing to the radiation pressure generated by multiple reflections within the gap. As for the friction on the contact surfaces in the vertical direction, the structure was designed to lift the bottom of the rotating ring from the base by generating a radiation pressure. To this end, light is irradiated to the vertical gap (ε2) between the bottom of the ring and the floor surface, and multiple reflections are implemented to reduce the friction in the vertical direction of the hollow ring. The cross-section of the fabricated structure, as shown in Figure 2, is used to illustrate the propagation paths of light. To reduce friction in the circumferential direction, light entering from the top of the shaft and bent downward by mirror-1 (3) is split to the left and right by triangular mirror-2 (5), as shown in Figure 2a. Each beam irradiates the inner wall of the ring through aperture-1 (4) after passing through the light path fabricated in the shaft. These beams are then multiply reflected in the circumferential gap (ε1) between the inner wall of the ring and the outer wall of the axis of rotation and spread over the circumference of the ring.
Moreover, from the floor surface of the base, as shown in Figure 2b, the light incident from the side of the base passing through the light path in the base is bent upward by mirror-3 (7) and irradiated through aperture-2 (6) on the floor surface to the vertical gap (ε2) between the bottom of the ring and the floor surface for multiple reflections. Light from the base was used to reduce the friction on the floor of the ring. In this case, light from the floor surface enters the ring from two entrances (9), as shown in Figure 2d. Ring (2) is supported upward by the radiation pressure from three mirrors-3 (7) placed at three points on the floor surface. We investigated the possibility of friction reduction by the radiation pressure using a model with such a structure in the order of approximately 100 µm. The evaluation method for the friction reduction state is as follows: A hand (4) is constructed in advance at point P0 on the circumference of the rotating ring (2), as shown in Figure 1, where a micro-torque wrench (3) is connected to point P0, and point P1 at the other end of the micro-torque wrench is moved tangentially by a piezoelectric element. The friction reduction owing to the radiant pressure was evaluated by measuring the deflection of the torque wrench at the moment when the ring began to rotate and comparing torques required for rotation with and without the radiation pressure. Therefore, the possibility of manufacturing bearings using the radiation pressure was investigated. The torque based on the static frictional force in the dry state, which occurs between the two contacting surfaces, was observed using this measurement method. Light-Guiding Structure to the Inner Wall of the Rotating Ring As shown in Figure 2a, light is guided downward parallel to the axis of rotation from mirror-1 (3) and emitted to the inner wall of the rotating ring (2) using mirror-2 (5) installed inside the shaft. Finally, as an angle was set between the emitted light and the inner wall of the rotating ring (2), the structure was designed to spread the radiated light around the circumference with multiple reflections.
With such multiple reflections, when the center of the rotating ring (2) and the axis of rotation are misaligned, the distance between the inner wall of the rotating ring (2) and the axis of rotation becomes larger on the outwardly displaced side and smaller on the opposite side. Consequently, the number of multiple reflections increases where the gap is narrower and decreases where it is wider. Thus, the radiation pressure is larger at narrower gaps than at wider gaps, which is considered to cause the self-alignment of the bearing (a toy model of this restoring force is sketched below). It is expected that a bearing with a self-alignment capability can be realized. As shown in Figure 2c, the cross-sectional structure of the model was designed and fabricated such that the rotating ring (2) was separated from the shaft (1) by support (8) at the time of fabrication. The inner wall of the rotating ring (2) and the outer wall of the shaft (1) are not in contact at the stage of fabrication using the 3D printer. When the rotating ring (2) is used, by removing the support (8), the ring (2) is inserted into the shaft (1), and the two surfaces contact each other. Light-Guiding Structure to the Bottom of the Rotating Ring As shown in Figure 2b, light is introduced from the side to the base of the friction-reduction confirmation mechanism and irradiated to the bottom of the rotating ring (2) by mirror-3 (7) on the base, where it is multiply reflected and spread. As indicated by the horizontal cross-section of the base in Figure 2d, the base has two light entrance points (9). The light incident from the two points (9) is designed to reach the bottom of the rotating ring (2) through three apertures (6) and (7) at intervals of 120°. The ring was designed such that the balanced radiation pressure from the three locations pushed the rotating ring (2) upward. Based on the above design strategy, the friction-reduction confirmation mechanism was designed in this study using Autodesk Inventor 3D computer-aided design (CAD) software. Fabrication of a Friction-Reduction Confirmation Mechanism In this study, a friction-reduction confirmation mechanism was fabricated using a Nanoscribe Photonic Professional GT with a fabrication resolution of 200 nm as the 3D printer and an IP-Dip resist as the structural material. Because the movable range of the 3D printer was 300 µm, the diameter of the model shaft was set to 80 µm, as shown in Figure 1, whereas the outer and inner diameters of the rotating ring (2) were set to 180 and 90 µm, respectively; thus, the experiment could be performed within the field of view of an optical microscope. The gap in the circumferential direction between the inner wall of the rotating ring (2) and the outer wall of the shaft (1) was set to 5 µm. However, as the 3D printer could not fabricate a gap of 2 µm or smaller, as shown in Figure 3a, the rotating ring (2) was fabricated with supports (8) in four directions such that it could be separated from the supports (8) after fabrication using the 3D printer.
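Returning to the self-alignment argument above, it can be made concrete with a toy model (ours, not from the paper): assume the radiation force on each side of the ring scales inversely with the local gap width, since a narrower gap sustains more reflections; the per-side force scale F0 is a hypothetical placeholder.

def restoring_force(delta, eps=5e-6, F0=1e-12):
    # delta: radial misalignment (m); eps: nominal circumferential gap
    # (5 um in this design); F0: force per side at the nominal gap
    # (hypothetical 1 pN scale). The 1/gap scaling is the modeling
    # assumption stated above, not a result from the paper.
    f_narrow = F0 * eps / (eps - delta)  # side where the gap closes
    f_wide = F0 * eps / (eps + delta)    # side where the gap opens
    return f_narrow - f_wide             # net force, back toward center

for d in (0.5e-6, 1.0e-6, 2.0e-6):
    print(f"offset {d * 1e6:.1f} um -> net restoring force {restoring_force(d) * 1e12:.2f} pN")

Under these assumptions the net force is positive for any offset and grows with it, i.e., it always pushes the ring back toward the centered position, which is the qualitative behavior expected of a self-aligning bearing.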
After the walls of the structure were processed to a mirror surface (aluminum film thickness = 100 nm) by aluminum sputtering, the supports (8) were removed, and the rotating ring (2) was assembled by dropping it onto the shaft (1). A gap width of 5 µm was consistently achieved using this fabrication process. The experimental apparatus was fabricated, as shown in Figure 3b, by CASTEM Inc. (Hiroshima, Japan). Scanning electron microscopy (SEM) images confirmed that the 3D printer fabricated the structure according to the CAD data presented in Figure 3a. Fabricated Friction-Reduction Confirmation Mechanism The experimental apparatus shown in Figure 3b was fabricated with a 3D printer from the 3D CAD data presented in Figure 3a. Figure 3b shows a top view of the friction-reduction confirmation mechanism. There are two light entrance points (5) on the base (10). Mirror-1 (5) is introduced on the top for light entrance and propagation in the circumferential direction of the ring (2). To prevent the rotating ring (2) from coming off after the support (8) is removed, a ring stopper (9) with a six-directional projection was introduced at the top. The experimental apparatus was mounted on the glass substrate of the 3D printer used in the fabrication process, and the base was fixed with an adhesive. In addition, to generate radiation pressure over the entire area of the device, propagate light, and increase the strength of the structure, the entire structure was sputtered with aluminum (film thickness of approximately 100 nm) to create a mirror finish. Moreover, the rotating ring (2) supported from four directions, as presented in Figure 3b, was manufactured according to the design. Subsequently, the supports were removed, the rotating ring (2) was dropped onto the shaft (1), and the device was assembled to form a structure that allowed rotation of only the rotating ring (2). After manual confirmation under a microscope, the rotating ring (2) was rotated, and the state of reduced friction with and without the radiation pressure was checked. Experimental Apparatus and Method for Confirmation of the Friction Reduction In the friction-reduction experiment, friction reduction with and without the light radiation pressure was observed using an observation device, as outlined in Figure 4. The friction-reduction confirmation mechanism was fixed on the table. The light was introduced from the upper and lower light inlets using a fiber (Lensed Tip Fiber Patch Cable; Thorlabs). Based on the camera images, as shown in Figure 5a, a micro-torque wrench (3) composed of an ultrafine platinum wire with a diameter of 625 nm (The Nilaco Corporation) was attached to the four-way protrusion (hand) (4) of the rotating ring (2) that was removed from the support, as shown in Figure 5b. Furthermore, Figure 5a shows that a piezoelectric actuator, manufactured by PI, was used to move the hand at a constant speed in the tangential direction. The speed was observed to be 27.5 µm/s because of the relationship between the frame rate and the pixel size of the camera used. Rotational torque was applied to the rotating ring (2) using the micro-torque wrench (3). In this case, the change in deflection of the micro-torque wrench at the start of the rotation of the rotating ring with and without incident light was recorded as a movie. Subsequently, the deflection of the torque wrench was determined by measuring the number of pixels related to the deflection of the micro-torque wrench on the image.
The torque applied to the device was determined from the deflection obtained. The friction-reduction status was confirmed by changes in torque with and without the optical radiation pressure. In the experiment, the micro-torque wrenches were broken several times; thus, we used microcantilevers with a length in the range of 185.6-237.8 µm to determine the contact point of the micro-torque wrench with the device. The length was measured by counting the pixels of the camera (Table 1) and by detailed observation of the contact point between the tip of the micro-torque wrench and the projection of the rotating ring.
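The deflection-to-torque conversion is not spelled out in the text; a minimal sketch under stated assumptions (the wrench is treated as an end-loaded Euler-Bernoulli cantilever, the Young's modulus of platinum is taken as roughly 168 GPa, and the 90 µm application radius is a guess based on the ring's inner diameter):

import math

def wrench_torque(deflection, length, wire_d=625e-9, E=168e9, r_apply=90e-6):
    # deflection, length: tip deflection and cantilever length measured
    # from camera pixels (m); wire_d: 625 nm platinum wire (from the paper);
    # E: Young's modulus of platinum, ~168 GPa (assumed);
    # r_apply: radius at which the wrench pushes the ring (assumed).
    I = math.pi * wire_d**4 / 64.0            # second moment of a round wire
    F = 3.0 * E * I * deflection / length**3  # end load from tip deflection
    return F * r_apply                        # torque about the shaft axis

# Example: a 20 um deflection of a 200-um wrench gives ~8.5e-13 N*m,
# i.e. the ~1 pNm scale of the values reported in Table 1.
print(wrench_torque(20e-6, 200e-6))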
In the experiment, a green laser (wavelength = 532 nm, output power = 5 W), whose wavelength is relatively easy to obtain and to capture with a camera, was branched using a bifurcated optical coupler and irradiated from the light entrances (5) at the upper and the three lower locations, as shown in Figure 3b. The fiber output at each entrance was measured using a power meter. The experiment was conducted in the air. The laser power supplied by the fiber was adjusted such that the optical output was 40 mW from the top entrance and 15 mW from each aperture of the three lower locations on the base. Initially, a laser power of approximately 5 mW was irradiated from each entrance. However, no change in torque related to the presence or absence of light was observed when using the micro-torque wrench. The torque remained approximately independent of the presence or absence of light up to approximately 20 mW, but the starting rotational torque changed with the presence or absence of light when more than 30 mW was irradiated in the circumferential direction. Therefore, we decided to conduct this experiment under an incident power of 40 mW. However, as the experiment was carried out in the air, it was not possible to carry out the experiment with light energy higher than 40 mW because the excessive laser input caused damage even to the aluminum-sputtered experimental apparatus. In the future, we will need to strengthen the device so that it can accept higher power, collect data on the relationship between light power and friction reduction, and go beyond the 40 mW case to determine the optimal light input power. However, in this study, under irradiation values of 40 mW in the circumferential direction and 15 mW (each) from the lower direction, a reduction in frictional torque could be confirmed. Experimental Results The experimental results are listed in Table 1. The micro-torque wrenches were damaged several times, and experiments were carefully numbered to avoid any data confusion. When a wrench was damaged, a new micro-torque wrench was fabricated, and five experiments were evaluated with and without light incidence. Although the amount of deflection generated varied with the length of the micro-torque wrench used in each experiment, the standard deviation of the rotational torque generated with and without light was measured to be 0.28 and 0.26 pNm, respectively, confirming similar variability. A test of variability using the F distribution confirmed that no difference existed in the variability between the two sets of data at a confidence level of 95%. Comparing the cases without and with light incidence, the ring began to rotate at an average applied torque of 2.22 pNm when no light was incident. This torque was considered to be generated by the static frictional force. In the case of light irradiation, the ring began to rotate when a rotary torque of 1.21 pNm was applied using the micro-torque wrench. The rotational torque required to overcome friction was thus reduced by approximately 45% with light irradiation. A t-distribution test of the difference in mean values with and without light confirmed at a confidence level of 95% that the rotational torque was lower with light incidence. We believe that more precise observations will be necessary in the future, and the surface roughness and the characteristics under vacuum should be analyzed in consideration of future applications.
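Both tests quoted above can be reproduced from the summary statistics alone; a sketch assuming n = 5 trials per condition (the per-trial torques of Table 1 are not reproduced here, and the pooled-variance t-test is our assumption about the exact variant used):

from scipy import stats

m_off, s_off, n = 2.22, 0.28, 5  # no light: mean starting torque (pNm), S.D., trials
m_on, s_on = 1.21, 0.26          # with light

# F-test of equal variances: F = s_off^2 / s_on^2 with (n-1, n-1) d.f.
F = (s_off / s_on) ** 2
p_F = 2.0 * min(stats.f.cdf(F, n - 1, n - 1), stats.f.sf(F, n - 1, n - 1))
print(f"F = {F:.2f}, two-sided p = {p_F:.2f}")  # p >> 0.05: similar variability

# Two-sample t-test on the means, computed from summary statistics.
t, p_t = stats.ttest_ind_from_stats(m_off, s_off, n, m_on, s_on, n)
print(f"t = {t:.2f}, p = {p_t:.4f}")  # p << 0.05: torque is lower with light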
We experimentally confirmed the possibility of improving the operating efficiency of MEMSs by reducing friction using the light radiation pressure. Application of the Friction-Reduction Phenomenon Using the Radiation Pressure to a Rotating Rotor We considered applying the friction reduction by light obtained in this study to the rotor rotation experiment using the radiation pressure, a future technology to supply the power source of MEMSs, as presented in a previous study [26]. Because this friction-reduction experiment was conducted on a rotating rotor, the reduction of dynamic friction, in addition to static friction, could be examined. In a previous study, an eight-bladed rotor (thickness = 10 µm) with a radius of 50 µm, as shown in Figure 6a, rotated with an angular velocity of 0.87 rad/s under light irradiation with a total power of 53 mW from two directions [26]. In this study, a system was fabricated in which light incident from the tangential direction of the shaft, as shown in Figure 6c, was multiply reflected at the gap (3 µm) between the inner wall of the cylinder that supports the shaft and the outer wall of the shaft, as shown schematically by the red arrow, to confirm the friction reduction. Using this experimental setup, the possibility of reducing the friction of a rotating rotor was investigated by comparing changes in the rotational angular velocity with and without light incidence in the tangential direction.
As shown in Figure 7a, the fabricated structure was an eight-blade rotor (rotor thickness = 10 µm, diameter = 70 µm) (2) that was irradiated with light from three directions to rotate the rotor. In this experimental apparatus, as shown in Figure 7a, a light entrance with a diameter of 15 µm was provided at point (3) (bearing section), where the rotating shaft was supported, and the light was incident on the gap (3 µm) between the inner wall of the cylinder that supports the rotating shaft and the outer wall of the rotating shaft. Figure 7b shows an SEM image of the experimental apparatus fabricated using a 3D printer, observed from the top. The structure was designed for light incidence from the direction indicated by a thick white arrow to generate a bearing effect in addition to the light to rotate the rotor.
In particular, the friction-reduction effect was investigated by examining the changes in the rotational angular velocity of the rotor when the rotor was rotated with a light power of 50 mW from three directions and light incident on the bearing section at a power of 3 mW, which was expected to have a bearing effect. The case must be analyzed when a high-power light is incident on the bearing area to determine the light intensity that could be expected to have a bearing effect. However, when the light was irradiated through a hole with a diameter of 15 µm in the gap between the shaft and bearing inner wall, the application of a power of 5 mW or higher damaged the bearing components. Thus, this study was limited to experiments that used a maximum power of 3 mW. However, even with this simple structure, as shown in Figure 7c, when the light was incident, a ring of light was observed around the shaft of the rotor at a position opposite to that of the incident light. This phenomenon is thought to be caused by the multiple reflections of the light incident through a hole in the gap with a diameter of 15 µm. A detailed study of the realization of multiple reflections in the gap will be conducted in the future to investigate more effective friction-reduction conditions. Under the above conditions, the difference in angular velocity of rotation with and without light was analyzed by movies (frame rate of 1/30 s) of the rotor rotating under a microscope, as shown in Figure 8. This experiment was conducted under conditions similar to those of a previous study [26]. To avoid a temperature increase in the device owing to light, the device was immersed in ethyl alcohol and cooled using a Peltier element such that the temperature exactly below the axis of rotation was always maintained below 0 °C during the experiment.
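The angular velocities reported below follow directly from frame counting on the 1/30 s movies; a minimal sketch of the conversion (the frame counts shown are illustrative, chosen to match the reported 0.40 s and 0.20 s intervals):

import math

def angular_velocity(angle_deg, n_frames, frame_rate=30.0):
    # Average angular velocity of the landmark: rotation angle over the
    # time spanned by n_frames at the given frame rate.
    dt = n_frames / frame_rate
    return math.radians(angle_deg) / dt

print(angular_velocity(25.0, 12))  # ~25 deg over 0.40 s -> ~1.09 rad/s
print(angular_velocity(35.0, 6))   # ~35 deg over 0.20 s -> ~3.05 rad/s
# Slightly below the reported 1.15 and 3.12 rad/s because the angles are
# quoted only to the nearest ~5 degrees in the text.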
All problems were caused owing to the fragility of structures fabricated 3D printer. In the future, therefore, it is necessary to improve the strength of fab structures, investigate means of reinforcement using sputtering technology, and d appropriate design methods. If the above problems are solved, we believe that 3D ers, which enable precision processing, will become an indispensable fundament nology for MEMS. In the case without light incidence, which was expected to have a bearing effect, as shown in Figure 8a, the rotor (3) of the landmark rotated by approximately 25 • in 0.40 s, as shown in Figure 8b. The rotor rotates at an average angular velocity of 1.15 rad/s. Furthermore, a different light beam (the blue arrow) from the driving light was irradiated at a power of 3 mW from the newly installed fiber-2 with a lens to the gap between the shaft and ring. As shown in Figure 8c, experiments were conducted to evaluate the bearing effect when the rotor was rotated with the light beam irradiated in three directions. As shown in Figure 8d, the landmark rotor (3) rotates by approximately 35 • in 0.20 s. The rotor rotates at an average angular velocity of 3.12 rad/s. The introduction of light, which is expected to have a bearing effect, roughly tripled the angular velocity of the rotation. However, the power of the light used for reducing the friction was limited to 3 mW. We could confirm the possibility of friction reduction, even with a simple structure and low input power. This result is significant for the future development of optical bearings. Therefore, we believe that it is possible to develop effective bearings that use light to reduce friction in MEMSs, which used to be difficult. However, a problem was identified when a 3D printer was used for fabricating the structures. The 3D-printed structures were fabricated using a 3D printer and then processed using aluminum sputtering to improve the mechanical strength of each element. However, during the experimental process, the structure was repeatedly damaged by the laser beam and deformed owing to the low strength of the components, and the friction reduction could not be studied when the light input was more than 3 mW during light irradiation. All problems were caused owing to the fragility of structures fabricated by the 3D printer. In the future, therefore, it is necessary to improve the strength of fabricated structures, investigate means of reinforcement using sputtering technology, and develop appropriate design methods. If the above problems are solved, we believe that 3D printers, which enable precision processing, will become an indispensable fundamental technology for MEMS. In addition to the development of MEMS power supplies using micro rotors, we plan to conduct further studies on the MEMS processing technology, particularly for the widespread use of 3D printers, which are effective in fabricating microstructures. Conclusions The results of this study can be summarized as follows: (1) To investigate the possibility of friction reduction in MEMSs using light as a future technology, a model was fabricated using a 3D printer to study the friction generated by rotation on a 100 µm square surface. It was confirmed that friction could be reduced using radiation pressure. (2) We fabricated an experimental apparatus that adds a device to promote the friction reduction based on the radiation pressure to a rotating rotor mechanism using light, which is currently being developed as a power source for MEMSs [26]. 
We used this apparatus to confirm that the effect of friction could be reduced by the radiation pressure, even in the case of the dynamic friction generated during rotation. (3) The effectiveness of 3D printers as a fabrication technology for MEMSs was demonstrated. However, to achieve stable application of structures fabricated by a 3D printer using resist materials, it is necessary to develop reinforcement methods: one that mimics the skeletal structure of crustaceans using the current aluminum sputtering technology (thickness = 100 nm), one that uses metals other than aluminum, and one that improves the strength of the structure itself.
2022-05-10T16:20:12.023Z
2022-05-01T00:00:00.000
{ "year": 2022, "sha1": "a1c24efbe2e55cc6f71248c598fe3ee34e13d3ad", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2072-666X/13/5/733/pdf?version=1652076126", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "fda0214cc3b77b4ab47f50d8f2924611c379b262", "s2fieldsofstudy": [ "Engineering", "Physics" ], "extfieldsofstudy": [ "Medicine" ] }
25166785
pes2o/s2orc
v3-fos-license
Cell Type-specific Transcription of the α1(VI) Collagen Gene Analysis of the chromatin of different cell types has identified four DNase I-hypersensitive sites in the 5′-flanking region of the α1(VI) collagen gene, mapping at −4.6, −4.4, −2.5, and −0.1 kilobase (kb) from the RNA start site. The site at −2.5 kb was independent from, whereas the other three sites could be related to, α1(VI) mRNA expression. The site at −0.1 kb was present in cells expressing (NIH3T3 and C2C12) but absent in cells not expressing (EL4) the mRNA; the remaining two sites were apparently related with high levels of mRNA. DNase I footprinting and gel-shift assays with NIH3T3 and C2C12 nuclear extracts have located a binding site for transcription factor AP1 (activator protein 1) between nucleotides −104 and −73. When nuclear extracts from EL4 lymphocytes were used, the AP1 site-containing sequence was bound by proteins not related to AP1. The existence of the hypersensitive site at −0.1 kb may be related to the binding of AP1 and of additional factors to the core promoter (Piccolo, S., Bonaldo, P., Vitale, P., Volpin, D., and Bressan, G. M. (1995) J. Biol. Chem. 270, 19583-19590). The function of the AP1 binding site and of the core promoter in the transcriptional regulation of the Col6a1 gene was investigated by expressing several promoter-reporter gene constructs in transgenic mice and in cell cultures. The results indicate that regulation of transcription of the Col6a1 gene by different cis-acting elements (core promoter, AP1 binding site, and enhancers) is not completely modular, but the final output depends on the specific interactions among the three elements in a defined cell type. Collagens are the most abundant extracellular matrix proteins of vertebrates (1). Nineteen types have been characterized so far, differing in structural features and tissue distribution. In addition to maintaining the structural integrity of organs, collagens endow tissues with peculiar mechanical and biological properties depending on the pattern and the levels of expression. For this reason the regulation of expression is a key issue in collagen biology. For most collagen genes, transcription is the major regulatory step, and analyses of cis- and trans-acting elements have been obtained mainly for α1(I), α2(I), and α1(II) genes (2-7 and references therein). Several types of regulatory regions necessary for high level transcription have been identified in collagen genes. As for other genes, these include the core promoter, which comprises sequence motifs usually within −40 and +40 nucleotides from the RNA start site and may or may not include a TATA box motif; the proximal upstream activating region, which extends from about −50 to −200 base pairs from the RNA start site and contains recognition sites for a subgroup of sequence-specific DNA-binding transcription factors; and enhancers, cis-acting DNA sequences that increase transcription in a manner that is independent of their orientation and distance relative to the RNA start site (8). Other important transcription control regions, such as the locus control region, have not been identified yet in collagen genes. The locus control region was recognized initially in the β-globin gene cluster (9) and has now been characterized in several other genes (8, 10). The locus control region is necessary to convert an inactive locus to a state competent for transcription, a condition detected by an increase in sensitivity of chromatin to digestion by DNase I.
Subsequent transcription ensues by additional specific regulatory sequences, which, when active, usually introduce additional DNase I-hypersensitive sites. For example, five hypersensitive sites have been detected in the β-globin locus control region (10), and additional hypersensitive sites are located close to the core promoter of transcribed genes (9). Although, as stated above, no locus control regions have been defined yet, a correspondence between hypersensitive sites and actual transcription has been found also for collagen genes, in particular α2(I) and α1(I) (5, 11). As for the manner in which the different cis-acting regulatory elements contribute to the transcriptional regulation of a collagen gene, the available data suggest that they act in a modular way (4, 5, 12, 13). As proposed recently by Arnone and Davidson (14), this means that each region contributes "a particular regulatory function that is a subfraction of the overall combined regulatory function executed by the complete system" independently from the other regions. A corollary of this view is that tissue specificity of transcription is contributed by enhancers and is independent of the core promoter, whose function is the assembly of the basal transcription apparatus; hence, in experiments with transgenic animals, promoter-reporter gene constructs are expected to give rise to the same temporal and spatial pattern of expression whether using the homologous or a heterologous promoter. The few experiments addressing this issue for collagen genes confirm the above prediction (5). We have recently undertaken a study of the regulation of transcription of the α1 chain of type VI collagen, a gene that has been linked to Bethlem myopathy in humans (15). These studies have identified several regulatory regions within the 7.5 kb1 of 5′-flanking sequence, including the basal promoter; module(s) activating expression at low levels in tendons and at high levels at the insertions of the superficial and muscular aponeurotic system within about 600 bases from the transcription start site; enhancer modules for transcription in articular cartilage, intervertebral discs, vibrissae, the peripheral nervous system, and subepidermal mesenchyme, located between about −5.4 and −4.0 kb; and region(s) stimulatory for transcription in articular cartilage, intervertebral discs, meninges, and skeletal muscle between −7.5 and −6.2 kb (12, 13, 16). In this paper we have identified several DNase I-hypersensitive sites in the 5′-flanking region of the gene. One of these sites, located at about −0.1 kb from the transcription initiation site, is detectable only in cells expressing collagen VI mRNA and contains a recognition motif for the transcription factor AP1. Analysis of the function of the AP1 site in vitro and in vivo in the context of the homologous and of a heterologous promoter indicates that both the AP1 site and the core promoter play an important role in the regulation of tissue-specific transcription of the Col6a1 gene.
Isolation of Nuclei and Analysis of DNase I-hypersensitive Sites in the Chromatin-NIH3T3 and C2C12 cell lines were propagated in Dulbecco's modified Eagle's medium supplemented with 10% fetal calf serum. EL4 lymphocytes were grown in RPMI 1640, 10% fetal calf serum, 4 mM glutamine, and 20 µM β-mercaptoethanol. Nuclei were prepared from six large plates (22.5 × 22.5 cm) of confluent NIH3T3 or C2C12 cells and from 600 ml of EL4 lymphocytes (44 × 10^5 cells/ml) as described (17) with minor modifications. Adherent cells were washed extensively with phosphate-buffered saline, scraped in the same buffer using a Cell-Lifter (Costar), and harvested by centrifugation for 10 min at 400 × g. EL4 cells were collected by centrifugation, resuspended twice in phosphate-buffered saline, and centrifuged. The packed cell volume was measured and cells resuspended in 10× packed cell volumes of buffer 1 (15 mM Tris-HCl, pH 7.5, 15 mM NaCl, 60 mM KCl, 1 mM EDTA, 0.5 mM EGTA, 1.9 M sucrose, 0.1% Triton X-100, 0.5 mM spermidine, 0.15 mM spermine, 1 mM dithiothreitol, 1 mM phenylmethylsulfonyl fluoride, and 10 µg/ml each pepstatin, leupeptin, and aprotinin). Cells were then lysed in a Dounce cell homogenizer with five strokes of a type B pestle, 10 packed cell volumes of buffer 2 (buffer 1 without Triton X-100) were added, and the refractive index of the suspension was adjusted to 1.40-1.42 with buffer 3 (buffer 1 without Triton X-100 and sucrose). The samples were centrifuged at 12,000 × g for 10 min and resuspended in 5 packed cell volumes of buffer 4 (buffer 1 lacking EDTA, EGTA, and Triton X-100 but containing 0.34 M sucrose). The absorbance of 5 µl of the suspended nuclei was measured at 260 nm. The nuclei were pelleted again at 12,000 × g for 10 min and resuspended in buffer 4 at an absorbance of about 0.2 A260/ml. All manipulations were carried out at 4°C. For each digestion, 5 A260 of nuclei were adjusted to a volume of 80 µl with buffer 4, and 10 µl of DNase I assay buffer (400 mM Tris-HCl, pH 7.5, 60 mM MgCl2) and 10 µl of DNase I (Sigma) diluted to a concentration of 0-6 units/µl were added. The samples were incubated at 37°C and the reaction interrupted with 200 µl of stop buffer (50 mM Tris-HCl, pH 8.0, 100 mM EDTA, 100 mM NaCl, 1% SDS). The nuclei were then treated with 3 µl of RNase (50 µg/ml) at 37°C for 1 h followed by the addition of 40 µl of proteinase K (20 mg/ml) at 37°C for one night under mild agitation. The DNA was extracted by phenol/chloroform and precipitated by adding 1 volume of isopropyl alcohol and 0.1 volume of 5 M NaClO4. After centrifugation, the DNA was suspended in 10 mM Tris-HCl, pH 8.0, and 1 mM EDTA. 5 µg of DNA was digested with the selected restriction endonuclease and run in a 0.8% agarose gel. The DNA fragments were transferred onto nylon filters (GeneScreen Plus, NEN Life Science Products) and hybridized with an appropriate 32P-labeled probe (18). DNA Constructs-The cloning of fragments −82/+41 and −215/+41, which include the indicated nucleotides from the transcription start site of the Col6a1 gene, into the pGEM3 vector and the fusion of the fragments into plasmid pBL6CAT to derive p82CAT and p215CAT was described previously (16). Both plasmids contain the core promoter, which extends from nucleotides −75 to +25. To obtain pEn82CAT and pEn215CAT, the BamHI-EcoRI fragment extending from −5.4 to −3.9 kb (19), which acts as a strong enhancer for expression in a specific set of tissues (12), was cloned into p82CAT and p215CAT upstream of the promoter region.
Similar fusion constructs with the Escherichia coli lacZ gene replacing the CAT gene were synthesized starting from the promoterless plasmid pNSlacZ, in which the β-galactosidase sequence is fused with the nuclear localization signal of SV40. First, the −82/+41 and −215/+41 fragments were inserted into pNSlacZ to give p82lacZ and p215lacZ, and then the −5.4/−3.9 enhancer region was cloned upstream of the promoter fragments to produce pEn82lacZ and pEn215lacZ. A homologous set of CAT and lacZ constructs was also synthesized, where the human β-globin promoter substitutes for the Col6a1 gene promoter. The steps in the synthesis of these vectors were the release of the fragment −215/+41 from pEn215CAT or pEn215lacZ and the cloning, in its place, of a fragment from pBGZA, which contains sequences from −37 to +12 of the human β-globin gene (20), thus obtaining pEnβGCAT and pEnβGlacZ. The AP1 binding site of the Col6a1 gene was added to these plasmids by amplifying the region from −71 to −124 of p215CAT by polymerase chain reaction and ligating it between the enhancer region and the β-globin promoter. The resulting vectors were identified as pEnAP1βGCAT and pEnAP1βGlacZ. Finally, the β-globin promoter from pBGZA and the fragment containing the AP1 site fused with the β-globin promoter from pEnAP1βGCAT were cloned into pBL6CAT to give pβGCAT and pAP1βGCAT. All plasmids were purified by CsCl gradient centrifugation and sequenced to verify correct cloning. Generation and Analysis of Transgenic Mice-lacZ constructs were microinjected into fertilized B6D2F1 × B6D2F1 mouse oocytes and the developing embryos analyzed at E14.0-E15.5. Transgenic embryos were identified by dot-blot assay of DNA purified from the yolk sac, and the transgene copy number analysis and histological examination for β-galactosidase expression were carried out exactly as described (12). Promoter Assays-NIH3T3 and C2C12 cells were grown as described above; 3 × 10^5 cells were plated into 10-cm Petri dishes and transfected the following day with the CAT plasmids using the calcium phosphate method (21). All subsequent manipulations and assays were performed as detailed previously (16). DNase I Footprinting-The fragment −215/+41 was labeled at either end with 32P-dNTPs and Klenow enzyme and purified by agarose gel electrophoresis (22). DNase I digestion and electrophoretic analysis of the products were carried out using established protocols (16). Identification of a DNase I-hypersensitive Site Proximal to the Basal Promoter of the Col6a1 Gene-Given the frequent association of DNase I-hypersensitive sites with regions that control transcriptional regulation (9, 10), the hypersensitive sites located within 7.5 kb of the 5′-flanking sequence of the Col6a1 gene were identified. Because the presence of hypersensitive sites is usually related to the state of transcriptional activity of a gene in a given cell type, mapping was carried out in three cell lines that express different levels of α1(VI) mRNA. These lines include NIH3T3 fibroblasts, in which the steady-state concentration of the mRNA is the highest; C2C12 myoblasts, which contain about 10-fold less mRNA; and the T cell line EL4, in which the mRNA is undetectable (data not shown). Isolated nuclei were treated with DNase I, and the purified DNA was digested with either SphI or BamHI and analyzed by Southern blotting. Fig. 1 shows the results obtained after digestion of DNA with SphI, but similar results were observed after treatment with BamHI (data not shown).
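The site positions reported below can be recovered from the sub-band sizes by indirect end-labeling arithmetic; a sketch, assuming the probe abuts the downstream SphI site at roughly +1.5 kb (a coordinate inferred from the mapped positions, not stated explicitly in the text):

# A DNase I cut at position x (kb, relative to the RNA start site) releases
# a sub-band of length (SPHI_SITE_KB - x) when the probe abuts the
# downstream SphI site of the 9-kb fragment.
SPHI_SITE_KB = 1.5  # assumed coordinate of the downstream SphI site

def hs_position(band_kb):
    return SPHI_SITE_KB - band_kb

for band in (1.6, 4.0, 6.0):
    print(f"{band:.1f}-kb sub-band -> hypersensitive site near {hs_position(band):+.1f} kb")
# 1.6 kb -> -0.1 kb (HS1); 4.0 kb -> -2.5 kb (*); 6.0 kb -> about -4.5 kb
# (the HS2/HS3 doublet at -4.4/-4.6 kb).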
In addition to the 9-kb SphI-SphI fragment, the probe hybridized with four other bands in NIH3T3 cells. One band of about 4 kb (labeled * in Fig. 1) was similarly present in C2C12 and EL4 cells and was therefore not related to the level of expression of the α1(VI) mRNA. The corresponding hypersensitive site, which maps at about −2.5 kb from the RNA start site, is probably caused by a region of chromatin constitutively susceptible to DNase I digestion and not dependent on the state of transcription of the gene, as described for some DNase I-hypersensitive sites in the Col1a1 gene (11). A broad band at about 6 kb was very strong in NIH3T3 fibroblasts, very faint in C2C12 cells, and absent in EL4 lymphocytes. Rehybridization of the filter with a probe located at the 5′-end of the SphI-SphI fragment revealed that the hybridization signal was composed of two bands (data not shown); the corresponding sites were therefore marked HS2 and HS3 in Fig. 1. The characterization of these two hypersensitive sites, which map at about −4.4 and −4.6 kb and were associated with high expression of α1(VI) mRNA, will be described in a separate report.² Finally, the band of about 1.6 kb was distinctive of cells expressing α1(VI) mRNA, because it was lacking in nonexpressing EL4 T cells. This band corresponded to a hypersensitive site located at about −0.1 kb (HS1 in Fig. 1). Characterization of the Region Corresponding to HS1—The hypersensitivity of chromatin to nucleases is caused by structural features of chromatin brought about by assembly of nuclear factors at defined sequence elements (9, 10). In a previous paper we showed that several nuclear factors bind to nucleotides −75 to +8 from the RNA start site (16), a region that partially overlaps with the Col6a1 core promoter (see "Discussion"). To locate other possible transcription factor binding sites close to the region where HS1 maps, DNase I footprinting assays were carried out with a probe spanning nucleotides −215 to +41 and nuclear extracts from NIH3T3 cells. One protected sequence was identified, extending from −104 to −73 (Fig. 2, upper panel). The sequence contained the core motif of the binding site for transcription factor AP1 (TGA(G/C)T(C/A)A) (Fig. 2, lower panel) (23). Actual binding of AP1 to the protected sequence was tested by gel-shift assay, in which a probe including the AP1 site of the Col6a1 gene gave rise to one retarded band in the presence of proteins isolated from NIH3T3 nuclei (Fig. 3A). The formation of the band was inhibited by the cold oligonucleotide (lanes labeled AP1-Col6a1 in Fig. 3A) and by an oligonucleotide with the consensus sequence of the AP1 binding site (22) (AP1-cons in Fig. 3A), but not by an oligonucleotide with a mutated version of the consensus motif (AP1-mut in Fig. 3A). Supershift assays with antibodies against the molecular components of the AP1 factor c-Fos, Fra-1, c-Jun, JunB, and JunD revealed that the complex contained JunD (Fig. 3B). A retarded band with similar characteristics was detected with nuclear extracts purified from C2C12 cells (data not shown). On the contrary, the retarded bands produced by EL4 nuclear extracts with the AP1-Col6a1 probe had completely different properties: they were not competed by the AP1-cons oligonucleotide (Fig. 3C), and none of the antibodies mentioned above induced supershifting (data not shown). Parallel gel-shift experiments using the AP1-cons oligonucleotide as probe were also performed.
These experiments established that the band retarded by incubation with NIH3T3 or C2C12 nuclear extracts was competed by both the AP1-cons and AP1-Col6a1 oligonucleotides and that the band was supershifted only by antibodies against JunD (data not shown). Incubation of the AP1-cons probe with EL4 nuclear proteins produced one major band that was competed by cold oligonucleotide AP1-cons and, unexpectedly, also by AP1-Col6a1 (Fig. 3D). The band was supershifted by antibodies to Fra-1 and JunD (data not shown). These results suggest that the AP1 recognition site of the Col6a1 gene has the potential to bind AP1 complexes of EL4 cells, although, as shown in Fig. 3C, direct binding could not be detected. Role of the AP1 Binding Site and of the Core Promoter in Tissue-specific Transcription in Vivo—In previous papers we have reported on transient transfections carried out with various CAT-Col6a1 promoter constructs (13, 16). A comparison of CAT expression from plasmids p215CAT and p82CAT, which contain and lack the AP1 binding site, respectively, suggested an activating role of the site. However, the same plasmids, or similar constructs carrying the E. coli lacZ instead of the CAT gene, were not expressed in mouse transgenic lines, so that the function of the AP1 site in vivo could not be determined (12, 13). To overcome this difficulty, the constructs of Fig. 4A were designed, with the rationale that the presence of the enhancer-containing region −5.4 to −3.9 (12) would overcome silencing of the basal promoter, with or without the AP1 site, in vivo. Moreover, to test whether or not the function of the AP1 site and of the enhancer region was dependent on the type of basal promoter, the constructs depicted in Fig. 4B were synthesized, in which the β-globin promoter, which contains a TATA box, replaced the core promoter of Col6a1, which lacks a TATA box. The four constructs were microinjected into fertilized oocytes, and β-galactosidase expression was examined in the founder transgenic embryos. The presence of the AP1 binding site increased the percentage of expressing mouse lines, and the effect was particularly relevant (3-fold) with the constructs containing the β-globin basal promoter (Fig. 4). Although the pattern of expression of the transgenes resembled that described previously (12), the histological analysis revealed interesting functional features of the AP1 binding site (Table I). The parameters considered to estimate the effect of the AP1 site were the percentage of lines expressing in one particular tissue over the total of expressing lines (frequency) and the average level of expression attained in expressing lines (intensity), evaluated by the relative amount of 5-bromo-4-chloro-3-indolyl-β-D-galactopyranoside-positive nuclei in a defined tissue on an arbitrary scale as defined previously (12). The presence of the AP1 site was particularly important for expression at high frequency and high intensity in subepidermal mesenchyme, insertions of the superficial and muscular aponeurotic system, and tendons, with both the β-globin and the α1(VI) promoter. A stimulating effect of the AP1 site on frequency and intensity was also noted in articular cartilage and intervertebral discs with both promoters. The site was also required for high-frequency expression in vibrissae. In the peripheral nervous system, expression from the Col6a1 gene promoter was increased slightly by the AP1 site; on the contrary, expression was apparently not different with or without the AP1 site for constructs carrying the β-globin promoter. The data of Table I also show that expression in different tissues was dependent on the core promoter. Thus, compared with the α1(VI) promoter, the β-globin promoter was less efficient in tendons, insertions of superficial and muscular aponeuroses, articular cartilage, and intervertebral discs; however, it induced a higher frequency of expression in the peripheral nervous system. Context and Cell Type Dependence of Function of the Core Promoter and of the AP1 Binding Site—The data reported in the preceding paragraph suggest that the AP1 binding site is absolutely required for transcription in some tissues in vivo and that expression in different tissues changes when the core promoter is replaced by a heterologous one. However, a quantitative evaluation of the stimulatory activity of each element was not possible. In addition, the results did not allow analysis of the function of the sequences in the absence of the −5.4/−3.9 enhancer region, which was necessary for expression in vivo. These issues were addressed by transient promoter assays in cultured cell lines. The constructs used were similar to those described in Fig. 4 but carried the CAT instead of the lacZ gene. Four additional constructs lacked the upstream enhancer region −5.4/−3.9 and contained only the β-globin or the α1(VI) basal promoter with or without the AP1 site (the constructs are defined under "Experimental Procedures"). The cell cultures chosen were NIH3T3, in which DNase I HS2 and HS3 were very strong (Fig. 1), and C2C12, in which HS2 and HS3 were barely detectable (Fig. 1). The results are shown in Table II.

FIG. 3. Analysis of nuclear factor binding using electrophoretic mobility shift assays. Double-stranded oligonucleotide AP1-Col6a1, which spans the protected sequence identified by DNase I footprinting in Fig. 2 (bases −104 to −73), was used as probe in panels A–C. Double-stranded oligonucleotide AP1-cons, which contains the consensus binding sequence of AP1 (23), was used as probe in panel D. 2–4 μg of nuclear extracts from NIH3T3 cells (panels A and B) or from EL4 lymphocytes (panels C and D) was employed in each reaction. Competition assays (panels A, C, and D) were carried out with cold oligonucleotide AP1-Col6a1 and with oligonucleotides AP1-cons and AP1-mut, the latter of which contains inactivating mutations of the consensus binding sequence for AP1. For supershift experiments (panel B), either preimmune Ig (p-Ig) or the indicated antibodies against the molecular components of AP1 were added to the reaction mixture.

FIG. 4. Constructs used to generate transgenic mouse lines to analyze the function of the core promoter and of the AP1 binding site of the Col6a1 gene in vivo. All of the constructs include the enhancer region of the Col6a1 gene (En) identified previously (12), which extends from about −5.4 to −3.9 kb from the RNA start site. Constructs in panel A contain sequences of the Col6a1 promoter indicated by the numbers; therefore both constructs include the core promoter (nucleotides −75 to +25), whereas the AP1 binding site (nucleotides −104 to −73) is present only in En215lacZ. Constructs in panel B contain the human β-globin core promoter (βG) (nucleotides −37 to +12); EnAP1βGlacZ contains, in addition, nucleotides −124 to −73 (AP1 box) of the Col6a1 promoter, which span the AP1 binding site. The fractions indicate the number of expressing lines over the total of transgenic mouse lines produced. The percentage is given in parentheses.
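The synergism test applied to these data in the next paragraphs reduces to a one-line comparison. Here is a minimal sketch of the fold-synergism computation as defined in the text; the numbers are illustrative placeholders, not the actual Table II entries:

```python
# Fold synergism: expression with all elements combined, divided by the sum
# of the expression levels obtained when the promoter is combined with each
# activating element separately (all relative to the promoter alone).
def fold_synergism(fold_combined, fold_separate):
    return fold_combined / sum(fold_separate)

# Placeholder values in the range described for NIH3T3 cells with the
# beta-globin promoter (~80-fold with enhancer plus AP1 site, ~20-fold with
# the enhancer alone, essentially no stimulation by the AP1 site alone):
print(fold_synergism(80, [20, 1]))   # ~3.8; the paper reports ~3.5-fold
```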
In constructs with the β-globin promoter, the AP1 site did not increase transcription in the absence of the enhancer region (compare pβGCAT with pAP1βGCAT) in both NIH3T3 and C2C12 cells. When the enhancer region was added, transcription was only slightly (2–3-fold) increased in C2C12 myoblasts with or without the AP1 site (compare pβGCAT with pEnβGCAT and pEnAP1βGCAT), suggesting that the only activating interaction was between enhancer and promoter. On the contrary, in NIH3T3 fibroblasts the enhancer region stimulated transcription about 20-fold in the absence (compare pβGCAT with pEnβGCAT) and 80-fold in the presence (compare pβGCAT with pEnAP1βGCAT) of the AP1 site. In this case the mutual interactions among the AP1 binding site, the enhancer region, and the β-globin promoter can be defined as synergistic, because transcription elements synergize when their combination produces a transcriptional rate that is greater than the sum of the effects produced by the individual elements. In our experiments, the amount of transcription reached in the presence of the three elements was 3.5-fold greater (fold synergism) than the sum of the effects produced when the β-globin promoter was combined separately with either the AP1 site or the enhancer region. The results were completely different with constructs containing the basal promoter of the Col6a1 gene. Transcription from enhancerless constructs was stimulated about 5–6-fold in both NIH3T3 and C2C12 cells by the AP1 site (compare p82CAT with p215CAT). The enhancer region increased transcription 7-fold in C2C12 myoblasts (compare p82CAT and pEn82CAT), and the presence of both elements, AP1 site and enhancer region, resulted in an additive stimulation of about 12-fold (compare p82CAT with pEn215CAT). On the other hand, expression of pEn82CAT and pEn215CAT was similar in NIH3T3 fibroblasts, suggesting that the stimulating function of the AP1 binding site was abolished in the presence of the −5.4/−3.9 enhancer region in these cells.

TABLE I (legend fragment). …[constructs of] Fig. 4 were injected into fertilized mouse oocytes, and the embryos were collected and stained with 5-bromo-4-chloro-3-indolyl-β-D-galactopyranoside. Dot-blot assays of the DNA purified from the yolk sacs were performed to identify transgenic embryos and to determine the transgene copy number (12). The intensity of β-galactosidase staining was evaluated by microscopic examination of serial sections on an arbitrary scale as described (12). Only tissues for which the presence of activating sequences in the 5′-flanking region of the Col6a1 gene was previously clearly established (12) …

Notes to Table II. (a) CAT activity of individual constructs is expressed as a percentage of that obtained with the pEn215CAT construct. The data represent the mean ± S.D. derived from at least four samples obtained in two separate experiments. (b) Fold induction was calculated taking as the unit the CAT activity of the construct containing only the core promoter. (c) Student's t test was used to compare the CAT activity expressed by constructs having the same design and differing only in the presence or absence of the AP1 binding site. This comparison allows evaluation of the contribution of the AP1 binding site to transcriptional activation in different core promoter–enhancer contexts. (d) The type of interaction was deduced by comparing the CAT activity of constructs with the same core promoter, with or without the AP1 binding site and/or the enhancer region, and considering a positive interaction between the regulatory elements to exist only when the difference in expression between constructs containing or lacking the activating element was statistically significant. See "Discussion" for the definition of the various types of interactions.

DISCUSSION

The results described in this paper contribute substantial information on the function of the proximal promoter region of the Col6a1 gene. A DNase I-hypersensitive site (identified as HS1) was localized in the chromatin at about −0.1 kb from the transcription initiation site. HS1 was detectable only in cell lines that express α1(VI) collagen mRNA, suggesting that a rearrangement of the chromatin structure in the proximal 5′-region is a necessary condition for transcriptional activation of the gene. The analysis of DNase I-hypersensitive sites also suggested that distinct levels of expression in different cell types were achieved by additional rearrangements of the chromatin at other sites.
Thus, high amounts of mRNA were detected in NIH3T3 fibroblasts, where three hypersensitive sites were easily detectable at −4.6, −4.4, and −0.1 kb, whereas 10-fold lower levels of α1(VI) mRNA were found in C2C12 myoblasts, where the site at −0.1 kb was strongly and the other two sites very weakly sensitive to DNase I. Regions containing a defined set of DNase I-hypersensitive sites in chromatin are usually required for position-independent transcription of transgenes in vivo (9, 10). When tested alone, sequences corresponding to HS1 were completely inadequate to overcome the constraints of chromatin structure. On the other hand, they improved the function of other sites, as indicated by the relative increase in mouse transgenic lines expressing the lacZ reporter gene (from 54 to 87% for constructs of Fig. 4A and from 23 to 75% for those of Fig. 4B) when the AP1 binding site was present. As indicated by the high percentage of expressing lines in Fig. 4, hypersensitive sites HS2 and HS3 are very efficient in making chromatin transcriptionally competent at the site of insertion of transgenes. However, the data also point out that the hypersensitive sites detected were not sufficient for complete independence of transgene expression from the insertion site. Therefore, additional regulatory sequences and DNase I-hypersensitive sites should be identified to understand fully the transcriptional regulation of the Col6a1 gene. DNase I footprinting and band-shift assays have located a recognition site for transcription factor AP1 at −104 to −73 base pairs, close to where HS1 maps, suggesting that this site and probably the GA box-containing sequences identified previously between −75 and +8 (16) play an important role in determining DNase I hypersensitivity of chromatin. An AP1 binding site proximal to the basal promoter is a conserved feature of the Col6a1 gene, since the site has been found also in chicken and in human (24, 25). In addition, a similar element was recognized in the chicken α2(VI) collagen gene (26), suggesting that an AP1 binding site may be a key element in the regulation of collagen VI genes. In NIH3T3 and C2C12 cells, which express the α1(VI) mRNA, the site was actually bound by an AP1 factor complex containing JunD. In contrast, in nuclear extracts from EL4 cells, which do not express the α1(VI) mRNA, the same sequence was recognized by factor(s) not related to AP1, although the cells contain various molecular forms of the AP1 transcription factor. An obvious speculation stimulated by these results is that the presence or absence of DNase I HS1 may be determined by the difference in nuclear factor binding at sequences including the AP1 site. One possibility is that the AP1 factors of EL4 lymphocytes bind with low affinity to the Col6a1 gene promoter, whereas the molecular form(s) present in NIH3T3 and C2C12 cells have high affinity for the site. Differences in the molecular composition of AP1 binding to distinct promoters have already been observed in various cell types (27). Alternatively, EL4 cells might contain peculiar transcription factors that are absent in the other cells and compete with AP1 protein for binding to the site. Future studies will elucidate this issue. Analysis of transgenic mice carrying promoter-lacZ constructs has shown that the frequency of expressing lines and the average level of expression in the lines are variously affected by the AP1 binding site in different tissues. Both parameters are particularly dependent on the presence of the AP1 site in subepidermal mesenchyme, at the insertion of the superficial muscular and aponeurotic system, and in tendons. The frequency parameter can be attributed to the capacity of a cis-acting region to make chromatin accessible to the transcriptional machinery, indicating that AP1 has an important structural role in these tissues. This function of the AP1 site is clearly evident also in vibrissae, where the frequency, but not the intensity, was greatly stimulated. The level of expression of a transgene probably depends on the activating potential of the cis-acting elements, i.e., the ability of the factors binding to DNA modules to recruit the transcription preinitiation complex (8). Our data lead us to conclude that AP1 is a strong activator of transcription in cells of subepidermal mesenchyme, at insertions of the superficial muscular and aponeurotic system, and in tendons. On the contrary, the AP1 site influences only marginally both the frequency and the intensity of expression of transgenes in cells of the peripheral nervous system. To explain the independence of frequency from the AP1 site, it may be hypothesized that, in the peripheral nervous system, either the function of the site is replaced by another site not active, and hence not detected, in the cell cultures we have used, or opening up of the chromatin is almost completely dependent on the upstream enhancer region. An intermediate situation is apparent in the remaining tissues, articular cartilage and intervertebral discs, where the AP1 site increases to some extent the frequency and intensity of expression. The in vivo data also suggest a role for the core promoter in tissue-specific transcriptional regulation of the Col6a1 gene, in a way similar to that of the AP1 binding site. In fact, expression of transgenes in tendons and at the insertions of the superficial muscular and aponeurotic system was more evident with the Col6a1 promoter than with the β-globin promoter. Conversely, the frequency of expression in the peripheral nervous system was higher with the β-globin promoter. The core promoter of the Col6a1 gene was partially characterized in previous work and was shown to exhibit several unusual features among TATA-less promoters (16, 19). The RNA start sites are spread over a sequence of more than 70 base pairs, and the most upstream site has been denoted as +1.
The major transcription initiation site is at base +21 and a second strong site at base +9. These sites resemble, but do not match exactly, the consensus sequence proposed for the initiator element (+21 site: 8Py C A +1 G C 3Py; +9 site: 9Py G +1 G C T 8Py; consensus sequence for the initiator: Py Py A +1 N T/A Py Py; where Py indicates a pyrimidine) (28). Because it has been noted that a large number of pyrimidines surrounding the start site can impart low levels of initiator activity in the absence of either the A at +1 or the T at +3 (29), it is very likely that the sequences around +21 and +9 constitute weak initiators. These initiator elements, however, do not drive transcription unless they are linked to an upstream sequence, containing repeated GA boxes, which extends from −75 to +8 (16). This region has intrinsic promoter activity, as suggested by the observation that the fragment −82 to +41 is equally active in both orientations,³ a property not shared by initiators (28). Considering both the putative initiator sites and the GA box-rich region, the core promoter of the Col6a1 gene extends from −75 to +25, a sequence that closely corresponds to that used to synthesize our Col6a1 core promoter constructs (−82/+41). In a previous report we located the region inducing transcription in tendons and at the insertions of the superficial muscular and aponeurotic system within 0.6 kb upstream from the RNA start site (12). The new results point at the AP1 binding site as an important element contributing to activation of transcription in these tissues. In the same paper, the modules responsible for transcription in the subepidermal mesenchyme were assigned to the −5.4/−3.9 enhancer region. The present data show that expression in this tissue is strongly dependent on the homologous promoter and on the presence of the AP1 binding site. Thus, it may be speculated that transcription in the subepidermal mesenchyme requires a synergistic action of the three regulatory elements: the core promoter, the AP1 site, and the enhancer region. The overall message coming from the in vivo experiments is, therefore, that transcription in different tissues depends on a peculiar interplay among the three regulatory elements. The complexity of the mechanisms of tissue-specific regulation of the Col6a1 gene observed in vivo was defined further in transfections in vitro. The quantitative analysis of the results leads to a conclusion similar to that of the in vivo data: the levels and the features of transcriptional activation in different cell types depend on the specific interactions among the core promoter, the proximal activating region, and the enhancer region.

³ S. Piccolo, unpublished observations.

FIG. 5. Representation of different types of core promoter, enhancer region, and AP1 binding site interactions identified in transfection experiments with different cells. The three regulatory elements are bound by specific protein complexes: the core promoter, either from β-globin (βG) or from the Col6a1 gene (Col6a1), is depicted in association with the basal transcription apparatus (BTA); the AP1 binding site is occupied by a molecular form of the AP1 transcription factor containing JunD; the enhancer region from −5.4 to −3.9 of the Col6a1 gene (En) is hypothesized to bind a cell type-specific enhanceosome (33). In panel D the two mutually exclusive interactions of the BTA are represented: when the enhancer region is inactive or absent, the AP1 factor binds to the BTA (dashed line); this interaction is disrupted when an active enhancer region binds to the BTA (solid line). For the definition of the various types of interactions, see "Discussion" and Table II.
Four distinct types of interaction could be identified from the data reported in Table II, as outlined in Fig. 5. In C2C12 cells the AP1 site did not interact positively with the −3.9/−5.4 kb enhancer region (Fig. 5, A and B). When the β-globin promoter was used, the only interaction was between the promoter and the enhancer (Fig. 5A). On the other hand, the homologous promoter was stimulated by both the AP1 site and the enhancer, and the final induction of transcription achieved with the three modules together was the sum of those obtained from the separate combinations of the promoter with the other modules (Fig. 5B). A completely different situation was apparent in NIH3T3 cells. The use of the β-globin promoter resulted in a synergistic activation of about 3.5-fold when all of the modules were present. The synergism can be explained by assuming that the protein complex assembled at each module interacted positively at the same time with those brought together by the other modules, as indicated in Fig. 5C. By replacing the TATA-containing β-globin promoter with the TATA-less promoter of the Col6a1 gene, synergism did not take place, and a fourth type of interaction of modules was observed, tentatively identified as competitive (Fig. 5D). Namely, although the AP1 site stimulated considerable transcription from the promoter, expression in the presence of the enhancer was similar with or without the AP1 site. This condition can be accounted for by hypothesizing that an activating interaction takes place between the homologous promoter and the AP1 site if the enhancer is inactive (or absent) and that this interaction is disrupted when the enhancer is turned on and binds to the core promoter. The model of Fig. 5 differs considerably from the view of DNA regulatory elements acting in a modular way to control transcription deduced from studies on expression of transgenes in vivo by several authors including ourselves (2-5, 12, 14). The modularity of the function of cis-acting elements in these reports only applies to the fact that, for genes expressed in several tissues, such as most collagens, different enhancer regions activate transcription in specific subsets of tissues. A closer look at the function of the regions involved, however, shows the existence of more complex interactions among regulatory elements, which may explain peculiar features of a gene's regulation. One example is enhancer-promoter selectivity, in which the activation of only one of multiple promoters by a nearby enhancer depends on cognate interactions between the two elements (8, 30). As for our experiments, these results provide evidence that different core promoters possess distinct regulatory activities. The model of Fig. 5 is also consistent with the present knowledge of the molecular mechanisms of transcription activation, in which the final result is the consequence of specific interactions of protein complexes bound to different cis-acting regulatory elements. In a simplified view, the core promoter associates with the general transcription factors (31), whereas activators are bound at proximal activating sequences or at enhancers.
Enhancers are usually made of specific clusters of binding sites for nuclear factors, which impose a precise alignment of the proteins on the DNA, resulting in the formation of a stable, highly stereospecific three-dimensional nucleoprotein complex called the enhanceosome (32, 33). The interaction between the general transcription factors and the enhanceosome (or single activators at isolated binding sites) then determines the recruitment of the RNA polymerase II holoenzyme and the formation of a stable preinitiation complex (33, 34). It is clearly apparent from this model that any change in the composition of the three types of protein complexes (general transcription factors, enhanceosome, and activators bound at the proximal activating region) could influence RNA polymerase II recruitment. The molecular analysis of the interactions among the cis-acting regulatory regions of the Col6a1 gene in different cell types will require the delineation of the binding of general transcription factors to the core promoter and the characterization of the protein complexes assembled at the enhancer regions. These studies are presently in progress.
2018-04-03T01:31:45.462Z
1999-01-15T00:00:00.000
{ "year": 1999, "sha1": "a6e06e794fc0992da63900068961710c60ea7349", "oa_license": "CCBY", "oa_url": "http://www.jbc.org/content/274/3/1759.full.pdf", "oa_status": "HYBRID", "pdf_src": "MergedPDFExtraction", "pdf_hash": "7c0ee840d5ba3609fa54599bf86c812c6c6cd982", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
225896765
pes2o/s2orc
v3-fos-license
Current Nutritional Statuses and Gastrointestinal Complications in Critically Ill Patients Admitted to ICUs in Iran: A Cross-Sectional Study

Background and Objectives: Adequate nutrition is closely linked to clinical outcomes. Therefore, this study was carried out to assess the nutritional statuses of ICU patients in Isfahan, Iran. Materials and Methods: In this cross-sectional study, 55 critically ill adult patients receiving enteral nutrition for a minimum of seven days participated. Nutritional screening, including the acute physiology and chronic health evaluation (APACHE) score and the nutrition risk in critically ill (NUTRIC) score, and nutritional assessment of laboratory data and of energy and protein balance were carried out. Moreover, gastrointestinal problems were assessed. Results: In total, 55 patients (35 men and 20 women) with a median [IQR] age of 49 [18–77] years and a median [IQR] weight of 75 [55–100] kg were included in this study. The average albumin concentration was 3 ± 0.7 g/dl in ICU inpatients, indicating decreased albumin levels compared to normal ranges (3.5–5 g/dl). During the inpatient period, nutrition screening showed a median [IQR] NUTRIC score of 3 [2–5] and APACHE score of 23 [18–27]. In addition, the median [IQR] weight decreased to 71 [50–96] kg. Median intakes of energy and protein over seven days appeared inadequate (1920 [1200–2740] kcal and 86 [49–129] g, respectively). After gastrointestinal assessment, 20% of the participants had nausea and vomiting, 10% had obstipation, 5% had diarrhea and 20% had enteral feeding intolerance (assessed by gastric residual volume > 250 ml at repeated regular 6-h measurements). Conclusions: The results suggest that although imbalanced energy intake, insufficient protein intake and gastrointestinal complications are common in ICU patients, especially in women, risk assessment of malnutrition showed no critical results. Therefore, designing and providing more sensitive methods for nutritional screening and the assessment of nutritional adequacy is essential to prevent malnutrition in societies.

Introduction
Malnutrition is a widespread problem in hospitalized patients; however, this public health problem remains widely unrecognized (1). Poor nutritional status can occur due to deficiencies in diet plans, increased nutritional requirements due to illness, complications of diseases, poor nutrient absorption, or a combination of these (2,3). Based on previous studies, prevalence rates of hospital malnutrition have been reported at 20–50%, depending on the patient population and the criteria of diagnosis (2-4). In Iran, the rate of hospital malnutrition is reported as nearly 43% (5). It is well known that insufficient nutritional intake can contribute significantly to increased risks of infectious and non-infectious complications, prolonged durations of stay at hospitals and intensive care units (ICUs), and more frequent readmissions and mortality (6-14). In critically ill patients, this condition can be induced by systemic inflammatory responses to critical illnesses or traumas, which increase metabolic demands and result in the development of malnutrition and further increases in the risks of infectious complications, multiple organ dysfunction and mortality (15,16). Numerous studies have revealed that nutritional status and care have significant effects on hospitalization outcomes (17-19).
Therefore, nutritional assessment and screening are integral parts of the treatment of critically ill patients (20-22), and laboratory values are among the nutritional assessment criteria of ICU patients (20). Studies have shown that low circulating levels of magnesium, phosphorus and albumin can lead to energy deficiency and cardiac and neuromuscular disorders (23). Decreased serum potassium levels can result in severe muscle ache and cardiac arrhythmia and arrest (24). A previous observational study indicated that malnutrition is widespread in ICU patients. That study assessed the nutritional statuses of 100 critically ill patients admitted to ICUs in a hospital of Al-Zahra University from February to April, 2012. However, daily calorie and protein balances and the occurrence of gastrointestinal problems were not assessed in that study (25). Furthermore, biomarkers and indices such as albumin, blood urea nitrogen (BUN), creatinine, potassium and magnesium levels were assessed (25). Due to the high prevalence of hospital malnutrition in Iran, the current study was carried out to assess the nutritional status of ICU patients receiving nutritional support in Isfahan, Iran.

Study design
This cross-sectional study was carried out to assess clinical nutrition care in 55 critically ill adults in ICUs in Isfahan from March to May, 2019.

Inclusion criteria
Patients over 18 years old who were hospitalized more than three days in ICUs, received EN and/or PN on the screening day and were hemodynamically stable were recruited to this study. The observation period ran from the screening day (Day 1) for a maximum of seven days.

Exclusion criteria
Patients admitted to the hospital for less than three days were excluded from the study because the effects of nutrition on these patients were not considerable. Data of the included patients were then collected.

Data collection and variables
Nutritional statuses were assessed through measurement of anthropometric indices, clinical characteristics, laboratory values and medical histories by a registered dietitian using standard methods. Weight was estimated indirectly using Devine's method, due to the lack of access to accurate weight-measuring instruments (25). Mid-upper arm circumference was used to estimate body mass index (BMI) (26). Biochemical indices such as blood glucose, albumin, magnesium, potassium, BUN and creatinine levels were assessed during hospitalization. Dietary assessment was carried out to assess nutritional statuses, nutritional risks, the type and volume of nutrition therapy, and daily calorie and protein balances during the observation period. Patients' nutritional statuses were assessed based on the nutrition risk in critically ill (NUTRIC) score (27,28). The NUTRIC score includes six variables: age, acute physiology and chronic health evaluation (APACHE) II score, sequential organ failure assessment (SOFA) score, number of comorbidities, days from hospital to ICU admission and interleukin-6 (IL-6) (29). Gastrointestinal problems were assessed in these patients. Available data from the patients' records were collected for each day of stay in the ICU for seven days. The formula used by the patients during hospitalization provided 1 kcal of energy per milliliter, of which 15% derived from protein. Calorie balances were calculated as differences between daily caloric targets and daily calories provided by enteral and/or parenteral nutrition and other sources of calorie intake.
Daily protein balances were calculated as differences between clinician-derived daily protein targets and daily protein intakes (30). Daily calorie targets were derived by the clinician using standard formulas (25 kcal/kg actual body weight on the screening day for ICU patients without obesity) and daily protein targets using standard formulas (1.2 g/kg) (31). Cumulative calorie and protein balances were reported as the sum of the mean daily balances for all days divided by seven.

Statistical analysis
Quantitative variables were presented as mean ± SD (standard deviation) and qualitative variables were reported as frequencies (%). Primary outcomes were presented as continuous variables using mean differences between daily calorie targets and daily calorie intakes. Also, calorie intake of the patients was summarized as numbers and proportions in each of the following categories: > 90% of daily targets, and calorie deficits (≤ 90% of daily targets). In addition, the nutritional characteristics of patients of each sex were categorized based on the following parameters: nutritional status, type of nutritional therapy and 7-day protein intake. The APACHE II score was also classified based on severity of illness for each patient.

Results
The current study population included 35 men and 20 women with a median age of 30 and 52 years, respectively. Demographic and anthropometric measurements and biochemical values of the ICU inpatients are shown in Table 1. Of the patients, 7.3% had a BMI of less than 18.5. The most common primary reason for admission to the ICU was trauma (45.45%). The median APACHE II and NUTRIC scores were 22 (23 for males and 22 for females) and 3, respectively. The NUTRIC score in our study revealed that the risk of malnutrition was mild. The average albumin concentration was 3 ± 0.7 g/dl in ICU inpatients, demonstrating decreased albumin levels compared to normal ranges (3.5–5 g/dl) (32). Plasma magnesium and potassium levels were 0.65 and 4.06 mmol/L, respectively, which showed hypomagnesemia in patients (33). Of the 55 patients admitted to ICUs, the proportions of enteral, enteral/parenteral and total parenteral nutrition were 76.4, 18.2 and 5.5%, respectively (Table 2). Analysis of the cumulative calorie balances over seven days revealed that the median 7-day energy intake was 1691 kcal, while the median target energy was 1851 kcal. Results showed that calorie intake in men was less than 90% of the target calorie intake, while it was sufficient in women (Fig. 1). The median 7-day protein intake was 69 g, whereas the median target protein was 84 g, showing that the daily intake of protein was less than 1.2 g/kg in patients (Fig. 2). Overall, 5.5% of the patients suffered from diarrhea, 12.7% from obstipation, 20% from gastric residual volumes of greater than 250 ml and 18.2% from nausea and vomiting (Table 3). Routes of feeding in patients with calorie deficits on the screening day are shown in Fig. 3; the most common route was enteral feeding.

Discussion
In this cross-sectional study of nutritional statuses and gastrointestinal complications in ICU patients, critical malnutrition statuses were not observed in ICU inpatients based on the NUTRIC score. However, gastrointestinal problems were common in these cases. Moreover, decreased serum albumin levels and hypomagnesemia were seen. Patients' protein and energy intakes were limited.
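The deficits just summarized follow from the bookkeeping defined under Methods (25 kcal/kg and 1.2 g/kg daily targets; balance = target − intake; cumulative balance = sum of daily balances divided by seven). The sketch below restates that computation; the Devine formula is quoted from general clinical practice rather than from this paper, and the sample intake record is hypothetical:

```python
# Minimal sketch of the nutritional bookkeeping described in the Methods.
def devine_ibw_kg(height_cm, male=True):
    # Devine ideal body weight (standard clinical formula; stated here as
    # an assumption, since the paper does not spell it out).
    inches_over_5ft = max(height_cm / 2.54 - 60, 0)
    return (50.0 if male else 45.5) + 2.3 * inches_over_5ft

def seven_day_balance(weight_kg, kcal_intake, protein_intake_g):
    kcal_target = 25 * weight_kg          # kcal/day
    protein_target = 1.2 * weight_kg      # g/day
    kcal_balance = sum(kcal_target - k for k in kcal_intake) / 7
    protein_balance = sum(protein_target - p for p in protein_intake_g) / 7
    # Adequacy flag used in the analysis: intake above 90% of the target.
    adequate = sum(kcal_intake) >= 0.9 * kcal_target * len(kcal_intake)
    return kcal_balance, protein_balance, adequate

# Hypothetical 7-day record for a 170-cm man (Devine weight ~65.9 kg):
w = devine_ibw_kg(170, male=True)
kcal = [1200, 1500, 1700, 1800, 1900, 2000, 2000]
protein = [45, 55, 60, 65, 70, 75, 75]
print(seven_day_balance(w, kcal, protein))
```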
In the present study, albumin levels were lower than normal ranges (32). Decreased levels of albumin, total protein and phosphorus were associated with malnutrition in the present study, and patients with good nutritional status had higher albumin levels than malnourished patients (25,34). It is noteworthy that each 10 g/L decrease in serum albumin concentration significantly increased the odds of mortality by 137% and morbidity by 89%, prolonged ICU and hospital stays by 28 and 71%, respectively, and increased resource utilization by 66% (35). Evidence indicates that two important potential confounding variables, malnutrition and inflammation, might explain the low albumin level effect (35). Interestingly, increased dietary protein intake and nutritional supplementation can improve low circulating albumin concentrations. In further analysis, magnesium levels were decreased in male and female ICU patients compared to the normal range (33). Studies have shown that decreased circulating magnesium can be associated with higher mortality rates in critically ill patients (36). Low levels of magnesium may result from decreased gastrointestinal (GI) absorption and increased renal loss, diarrhea, malabsorption and inadequate dietary protein and energy intakes (37). In critically ill patients with mildly to moderately decreased plasma magnesium, administration of 1 g (8 mEq) of intravenous Mg can increase serum Mg concentrations by 0.15 mEq/L within 18 to 30 h (38). Treatment of the associated electrolyte abnormalities and general management of the patients, with a focus on their nutritional therapy, are necessary (39). Moreover, the current study has shown that the daily calorie intake of the patients was lower than 90% of the energy targets. In published studies, the most important causes of malnutrition in ICU inpatients include calorie intake deficiency during the first days of admission and mechanical ventilation, which usually result in higher metabolic rates and further complications (40-42). Moreover, recent studies have shown that low calorie intakes are linked to nosocomial bloodstream infections and have negative effects on the clinical outcomes of ICU patients (43,44). In the last stage of analysis, the consumed protein quantity failed to meet the target requirement (1.2 g/kg). It is noteworthy that the formula used by the patients included 15% protein, which is not sufficient for these patients. Earlier studies have shown that lower protein intakes in ICU patients can lead to prolonged hospital stays, increased risks of malnutrition and impaired clinical outcomes (25,30,45). Moreover, studies have shown that this lack of adequate protein intake may be due to insufficient energy intake. Supplemental PN can improve energy and protein delivery and potentially decrease the risk of clinical side effects (30). Despite the low risk of malnutrition in the patients of this study, patients may develop malnutrition over longer periods. Therefore, necessary assessments must be carried out later, since newly admitted patients receive insufficient energy and protein. In this study, more than half of the patients suffered from gastrointestinal problems. Gastrointestinal problems are common complications of critical illnesses; they are characterized by constipation, abdominal distension, pain, and nausea and vomiting, and are associated with significant morbidities such as feeding intolerance, inadequate absorption of nutrients and medications, and prolonged hospitalization (46).
In ICUs, these gastrointestinal problems are linked to gastrointestinal motility, which is altered by drugs, immobility, surgery, enteral feeding, head and spinal injuries, inflammation and sepsis (46). Therefore, understanding the causes of gastrointestinal intolerance and resolving them can significantly affect the clinical status of ICU patients.

Strengths of the study
Strengths of the study include the assessment of the nutritional statuses of ICU patients, which can inform future interventions, and the use of standard instruments to assess biochemical values. However, it is noteworthy that the current study had several limitations: 1) its cross-sectional design and the impossibility of follow-up, 2) the lack of direct calorimetry devices and the resulting inability to accurately calculate the necessary calories, 3) the lack of body composition analyzers and the use of arm circumference to estimate current weights, and 4) the lack of biochemical and clinical tests other than hospital routine tests.

Conclusions
In conclusion, these results suggest that clinical and biochemical factors such as energy and protein requirements, gastrointestinal complications and magnesium levels are abnormal in ICU patients, especially in women. However, risk assessments of malnutrition showed no critical results using nutritional screening tools. Therefore, these cases can be used as warning predictors of malnutrition in the long term and must be reassessed in the future. In addition, well-designed clinical trials are necessary to clarify all aspects of nutritional supplementation.

Financial disclosure
2020-10-28T18:20:53.022Z
2020-05-01T00:00:00.000
{ "year": 2020, "sha1": "5febc977e1557885091891e777e1e7cdc82e42f6", "oa_license": "CCBYNC", "oa_url": "http://nfsr.sbmu.ac.ir/files/site1/user_files_e3fcde/drshokri-A-10-617-1-7912bd2.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "1cd91c12f147831357a67603a756f43f98a06095", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
121024315
pes2o/s2orc
v3-fos-license
Investigating graphical representations of slope and derivative without a physics context

By analysis of student use of mathematics in responses to conceptual physics questions, as well as analogous math questions stripped of physical meaning, we have previously found evidence that students often enter upper-level physics courses lacking the assumed prerequisite mathematics knowledge and/or the ability to apply it productively in a physics context. As an extension of this work on students' mathematical competency at the upper level in physics, we report on a preliminary investigation of mathematical understanding of fundamental concepts of slope and derivative among students in a third-semester multivariable calculus course. Among the first published findings of physics education research are investigations of students' understanding of kinematics, with particular attention to graphical representations of position-, velocity-, and acceleration-versus-time graphs. Underlying these physical quantities are relationships that depend on derivatives and slopes. We report on our findings as we attempt to isolate students' understanding of these mathematical concepts.

I. INTRODUCTION

Among the earliest findings in the physics education research (PER) literature are the difficulties reported by Trowbridge and McDermott concerning student understanding of kinematics [1,2]. A significant portion of this work was done through the analysis of student ideas about graphical representations of various kinematic processes and was followed a few years later by the work of McDermott et al. [3], which further analyzed student thinking about these concepts. A decade later, Beichner developed a multiple-choice survey to assess student knowledge of graphical representations in the context of kinematics, called the Test of Understanding Graphs-Kinematics (TUG-K) [4]. Beichner's results corroborated findings previously reported by McDermott et al., including "slope/height/area confusion" in the context of kinematics among students in the introductory physics sequences.

The work we present here grew out of a broader study on the learning and teaching of thermal physics, dealing with identifying and addressing student conceptual difficulties with the physics content. A major subtheme of our research into students' understanding of thermal physics involves investigating the extent to which any mathematical conceptual difficulties may affect students' understanding of associated physics concepts in thermodynamics [5,6]. In this area, as with many physics areas, we expect that specific mathematical concepts are required for a complete understanding and appreciation of the physics. Although several of the mathematical concepts we probed are not taught until the third semester of calculus, a number of those that are essential to junior-level thermal physics (e.g., derivatives, integrals) are also considered essential for calculus-based introductory physics. Our research question in this context is: to what extent can students answer conceptual math questions that are identical to conceptual physics questions stripped of their physical context?
II. METHODOLOGY

We sought to probe students' ideas about a few of the mathematical concepts that we expect them to use in the physics classroom. This led to creating questions that looked like physics questions but were simply stripped of their physical context. Since these questions often involve representations that deviate from those typically used in the mathematics domain, we have labeled them "physics-less physics questions" [7].

We have asked physics-less physics questions about integrals and line integrals, based on findings in the context of P-V diagrams [7,8], and about partial differentiation and the product and chain rules, based on findings related to material properties in thermodynamics [5,9]. Our findings suggested that these questions were challenging to a significant population of physics students at the upper division.

While these questions were originally developed for physics students, we realized that an interesting comparison, and one that could provide insight on issues of epistemic framing [10], transfer [11-13], and/or disciplinary conventions, would be to ask these questions of students in a third-semester calculus course, after all relevant mathematics instruction. Results should capture student thinking about many concepts that had been taught up to that point (e.g., slope, derivative, integration, partial differentiation, etc.). We designed a short written, free-response survey containing only the physics-less questions. This survey has been given to more than 150 students over multiple semesters of the University of Maine's third-semester calculus course, Multivariate Calculus, always in the last week of class.

The data here are not matched across questions. Instructor participation decreased after the first semester, which accounts for the sample size nearly halving from semester 1 to semesters 2 and 3. Many students did not answer every question in the survey, usually due to time constraints, so that for any given semester the numbers are slightly different from one question to the next. We also assume that, on average, the students from the three different semesters are samples drawn from the same population.

III. ASSESSMENT TASKS AND RESULTS

Given our results on other physics-less physics questions in physics classes, we did not want to make any assumptions about multivariable calculus students' understanding of slope and (single-variable) derivatives. Therefore, we included physics-less physics questions about concepts of slope and derivative on our survey in order to shed additional light on student thinking regarding these concepts without the burden of the physics context. The result was two questions that are effectively physics-less versions of questions from the PER literature: the Slope Ranking Task and the Derivative Sign and Ranking Task.

A. Slope Ranking Task

In the Slope Ranking Task (Fig. 1), students are asked to order the slope of the drawn function at four different values of x. In other words, we wanted students to identify the value of the instantaneous slope at each point. The question attempts to dissuade ranking absolute values and contains a great deal of language explaining the desired form of the response. A correct response on the Slope Ranking Task requires students to associate slope with the steepness of the tangent line of the curve at the four given points.
1. Difficulties in mathematics consistent with physics difficulties among introductory students

Over three semesters, roughly 85% of students were able to complete this task successfully (see Table I). Fewer than half of the students with correct rankings provided any reasoning for their ranking. Any mention of the steepness of the curve, or of the slope of the curve at the points, was considered a correct explanation. We do not presume that this is the full extent of students who are able to explain this response, but rather that many students simply did not write anything down.

Although the incidence of incorrect rankings was small (roughly 15%), a few commonalities could be determined, though none of them accounted for more than 5% of the total responses. The most common incorrect response was a ranking consistent with the average slope between points rather than the instantaneous slope at a point. A line segment drawn between the values of the function from points 0 to a would have a larger slope than a line segment drawn between the values from c to d. (See Fig. 2 for a sample response of this nature.) This type of confusion, students interchanging average and instantaneous velocity for objects that are not experiencing constant acceleration, has been documented in the PER literature among first-semester calculus-based physics students in the context of kinematics [4,14].

FIG. 1. The Slope Ranking Task: "f is a function of the variable x, i.e., f = f(x). Consider the graph of f(x) versus x shown on the right. Rank the value (NOT absolute value!) of the slope of f(x) at each of the four values of x (i.e., a, b, c, d) from greatest to least. Keep in mind that positive values are greater than negative values, and that a larger negative value is less than a smaller negative value. If two slopes are equal, state this explicitly. If there is not enough information to decide, state so explicitly. Explain your reasoning."

The second most common incorrect response given was a ranking consistent with the value of the function at each point rather than the slope at each point. This confusion has been previously reported in the PER literature by Beichner in the context of graphical interpretation of kinematics [4]. It is unclear to what extent the errors reported by Beichner may occur due to student confusion with the mathematics rather than the physical context, but it seems highly plausible that some students are struggling simply with the mathematics implicit in the question. While only a few students (~5%) in the population of third-semester calculus enrollees are making this kind of error, it seems reasonable that students exiting a first-semester course might display similar or increased difficulty. We are currently exploring the extent of these difficulties among introductory students to further illustrate these findings.

B. Derivative Sign and Ranking Task

The Derivative Sign and Ranking Task (Fig. 3) asks students to determine the signs of, and compare the magnitudes of, the derivatives of three different functions of the independent variable x at the same value of x, based on a set of graphs of the three functions. This question requires students to make a connection between a derivative and either the slope or the change in the function, which must then be interpreted from the graph. This task was specifically written to overlay the assessed concept in the previous question, while also identifying students who could rank the slopes of a line but might not be able to connect "derivative" with the slope of the line.
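The three quantities that students conflate across these two tasks can be stated compactly. The following summary is ours rather than the paper's, but it uses only standard definitions:

```latex
\begin{align*}
  \text{average slope over } [a,b] &: \ \frac{f(b)-f(a)}{b-a},\\[4pt]
  \text{instantaneous slope at } a &: \ f'(a) = \lim_{h \to 0}\frac{f(a+h)-f(a)}{h},\\[4pt]
  \text{rate of change of the slope at } a &: \ f''(a).
\end{align*}
```

A correct Slope Ranking Task response orders f′(a), f′(b), f′(c), f′(d); the documented errors correspond, respectively, to ranking average slopes between the marked points, ranking the function values f(a), …, f(d), or (on the derivative tasks) reporting the signs of f″.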
The curves of the three functions were drawn to allow common incorrect reasoning to be more clearly determinable. We found that more than half of the students were able to state that the derivatives for all three functions were positive (see Table II). Because of the potentially misleading flatness of curve h(x), some students stated that the derivative at x = a was zero or not determinable, which we allowed as correct in our analysis. A curve with a more clearly positive slope was added in place of h(x) after the first administration, which seemed to effectively eliminate this "alternative" correct explanation.

1. Results of the derivative tasks

The responses to the sign and ranking tasks were fairly consistent, and most student reasoning was easily inferred from the paired set of responses. The most common incorrect response (7% of all responses) on the original version was a set of signs and a ranking consistent with those for the values of the 2nd derivatives of these curves. A student categorized as such would give the signs as (positive, zero, and negative) for the derivatives of f(x), g(x), and h(x) at x = a, respectively. This is consistent with the rate of change of the slope of the function at x = a rather than the rate of change of the function itself. Most responses in this category had very little, if any, accompanying reasoning. One student did seem to correct himself in the middle of his response, using "curvature" to justify his signs (see Fig. 4), and then switching to reasoning about the function(s) "increasing" to decide about the derivative. We were concerned that the question wording may have been unclear. The first version stated, "For each of the derivatives listed above, state whether the derivative is positive, negative, zero or there is not enough information to decide." In response, we altered the language to indicate more clearly that we sought the signs of the derivatives of the functions and gave the expressions for the derivatives in the response area for emphasis (Fig. 5). However, the semester 2 students gave 2nd derivative responses at the much higher rate of 18% (the rate in semester 3 was 13%) with the modified version (see Table II), implying that the question wording and presentation were not the issue, and that this is still a significant difficulty for students even after multivariable calculus.

2. 2nd derivative responses are not likely to be an issue of reading the slope of the curve

We can cast additional light on the thinking of those students who gave responses consistent with 2nd derivative reasoning by examining their responses to the Slope Ranking Task. Nearly all (95%) of those students who gave a 2nd derivative response on the derivative tasks gave a correct ranking on the Slope Ranking Task. This suggests that these students are able to make sense of the slope of a curved surface, but do not match the idea of derivative with the instantaneous slope of the curve. One possible explanation (among many) would be that a student might carry two notions of derivative: that of the rate of change of the function and that of the slope of the function. Students may use one notion or the other as they see fit in a given context, but may, at times, use them simultaneously. Thus, a question about derivative may cause them to think of the "change in the slope" and give a 2nd derivative response. Additional research would be necessary to fully reveal this phenomenon. Analysis of the responses to the derivative ranking portion of the task [see part (b) in Fig. 5] revealed only one noteworthy result.
5] revealed only one noteworthy result.Most students who gave a correct sign for the derivatives in part (a) correctly ranked the values of the derivatives in part (b).Among those responses that had all positive in part (a), there were some instances (< 5%) of student rankings that were consistent with a ranking of the areas under the curves of the functions, with at least two instances of students explicitly justifying their answers as ''areas under the curve.''If this question were given in a kinematics context, researchers would likely identify student confusion with the associated kinematics quantities; instead, this physics-less physics question points to at least a few students who use notions about area to answer questions about derivatives independent of physical context. IV. CONCLUSIONS Preliminary results from questions about slopes and derivatives administered in a multivariable calculus course suggest that students have difficulties conceptualizing mathematics tasks that are common to the ways in which we ask questions in physics courses.There is a growing body of work on transfer [11][12][13], with findings that students have difficulties transferring mathematical ideas across disciplines.The type of mathematical tasks we want our students to do in a physics class may simply be foreign to their mathematical ways of thinking.Some of their demonstrated difficulties seem to have origins in the understanding of the math concepts themselves.These results are consistent with results from questions about integrals in thermodynamics contexts [8].This aspect of our results will be explored further in future research.We are continuing to collect and analyze data from written questions with an eye toward expanding the scope of our investigation to additional populations at the introductory level, both in physics and in mathematics. FIG. 3. Original Derivative Sign and Ranking Task.[Later versions featured a curve with a more clearly positive slope for hðxÞ at x ¼ a; see Fig. 5.] FIG. 4 . FIG.4.Example of student response to derivative sign task that started to use 2nd derivative reasoning-note the ''positive curvature'' in the response-but switched to the correct reasoning. TABLE II . Response distribution for Derivative Sign and Ranking Task.Percentage in the ''All correct responses'' row are the sum of the ''Preferred'' and ''Alternative correct responses''; 2nd derivative responses are those students whose sign choice is consistent with that of the 2nd derivative, but not necessarily identified by the student as a 2nd derivative response.
2019-01-21T15:18:53.769Z
2012-07-26T00:00:00.000
{ "year": 2012, "sha1": "0ea25d89b541160ec8f45816c733f0dbfeae8638", "oa_license": "CCBY", "oa_url": "http://link.aps.org/pdf/10.1103/PhysRevSTPER.8.023101", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "0ea25d89b541160ec8f45816c733f0dbfeae8638", "s2fieldsofstudy": [ "Education", "Physics" ], "extfieldsofstudy": [ "Mathematics" ] }
38615303
pes2o/s2orc
v3-fos-license
Description of socioeconomic and demographic profile of young women vulnerable to infection by human papillomavirus and risk behavior in a school in Rio de Janeiro

Background: HPV causes one of the main sexually transmitted diseases, especially in the female population. It is an important etiologic agent in the development of cervical intraepithelial lesions and cervical cancer, and it is considered a public health problem, since young women are the group most vulnerable to this virus. Therefore, it is important that the socioeconomic and demographic profile of these women and their risk behaviors be known, so that it is possible to contribute to reducing the occurrence of infection in the studied population. Objectives: To describe the socioeconomic and demographic characteristics, and to investigate the risk-related sexual-affective behavioral aspects, of adolescents and young students from Rio de Janeiro, Brazil, with regard to HPV infection. Methods: The study group comprised 128 individuals susceptible to HPV, classified as adolescent women and young women, all students at a high school unit in the municipality of Rio de Janeiro. The study period was from May to November. A quantitative descriptive approach was used, in which the data were organized into variables divided into economic, demographic and behavioral characteristics. Data were entered into an Excel spreadsheet and summarized by descriptive statistics performed by simple frequency (%). Results: The age range of the young women at risk of HPV infection was 15 to 25. The most frequent family income among these young women was up to 2 minimum wages. The risk behavior detected in 37.5% of adolescent women and in 43.8% of young women was that they never used condoms in sexual intercourse. Conclusion: The research showed that the studied women are vulnerable due to risk behavior practices that may lead to acquisition of the virus. More emphasis should be placed on educational actions promoting preventive measures against HPV infection, favoring a lower incidence of human papillomavirus infection and cervical cancer.
INTRODUCTION

Human papillomavirus (HPV) is a virus that infects the epithelial cells of the skin and mucosa, causing one of the most common sexually transmitted diseases in the female population. This infection occurs preferentially in the genital organs, such as the vulva, vagina, uterine cervix, penis and perianal areas, and in the oropharynx. Thanks to molecular biology techniques, the association between HPV and cancer has been increasingly investigated. There are over 100 viral types and subtypes, and some of these viruses may cause lesions which, if not treated, may evolve to cancer [1]. HPV is grouped into viral subtypes of low risk (6, 11, 42, 43 and 44) and high risk (16, 18, 31, 33, 34, 35, 39, 45, 46, 51, 52, 56, 58, 59, 66, 68, 70), thereby establishing the relationship between persistent infection by some HPV viral types and cervical cancer. Thus, in 95% of cases, cervical cancer is associated with HPV [2]. The highest prevalence of this infection is among adolescent and young women, from 15 to 25, which makes this group the one most vulnerable to HPV [3]. Early initiation of sexual activity and multiple partners are some of the behavioral factors that lead this population segment to a greater susceptibility [3]. In addition, there are environmental and individual factors which, together with the HPV virus, modulate the risk of transition from infection to malignancy, including genetic susceptibility, immune and nutritional condition, tobacco use, multiparity, and infection by other sexually transmitted agents such as HIV, Chlamydia trachomatis and herpes type 2 [4].

However, studies have reported that there are still barriers that prevent young people from adopting effective preventive measures against this infection, since big challenges lie in the difficulty adolescents have in understanding themselves as vulnerable, and in making decisions and acting in order to face this infection and other sexually transmitted diseases [5]. In this scenario, it is necessary to know the characteristics of this population group, indicating the behaviors and attitudes that make them vulnerable to the infection [6]. Studies on this topic become relevant for building strategies of assistance that may reduce the chain of transmission of HPV and, consequently, the morbimortality from precursor lesions and cervical cancer. From these arguments, this study aims to describe the socioeconomic and demographic characteristics, and to investigate the risk-related sexual-affective behavioral aspects, of adolescents and young students from Rio de Janeiro, Brazil, with regard to HPV infection.
METHODS

This is a descriptive study with quantitative analysis, carried out in a public high school of the municipality of Rio de Janeiro. The participants were 128 young women enrolled in the referred school, aged between 15 and 25 years. The sample was divided into two groups: adolescent women (AW) and young women (YW). The first group was composed of adolescent women in the age group from 15 to 19, and the second group of young women from 20 to 25. This division was partially grounded in the chronological limits of adolescence as defined by the World Health Organization (WHO), which are from 10 to 19; for young people, the limits are from 20 to 24, a division used mainly for statistical and political purposes. The inclusion criteria were adolescents and young people who were not infected by HPV, and the exclusion criteria were ages below 15 and above 25. To collect data, we used a form with the purpose of tracing the socioeconomic and demographic profile of the sample and identifying the sexual-affective behaviors that may lead these young women to contract HPV. The data were divided into two blocks of variables: identification data, in which variables such as age, ethnicity, education, marital status and family income were highlighted; and a second block dealing with variables related to risk behavior: beginning of sexual activity, number of partners in the last year, condom use and type of sexual activity. The sampling was determined by a non-probability convenience sampling technique, after confirmation that the studied individuals did not have HPV. At school, the students were subjected to individual interviews between the months of May and November of 2012, responding to the form in a private location to ensure the privacy of the individuals. Data were entered into an Excel spreadsheet and summarized by descriptive statistics performed by simple frequency (%), and were subsequently inserted into tables for easy visualization of the results. The analysis was performed by percentage calculation and organization by variables for each group. The study was submitted to the Comitê de Ética da Escola de Enfermagem Anna Nery (Ethics Committee of Anna Nery Nursing School)/Hospital-Escola São Francisco de Assis (Teaching Hospital São Francisco de Assis)/Universidade Federal do Rio de Janeiro (Federal University of Rio de Janeiro), being approved under record number 030/2011. The recommendations of Resolution number 196/96 of the Conselho Nacional de Saúde (National Health Council), the Brazilian document that comprises standards and rules for the development of human research, were followed. Also, as recommended, the data were collected only after the signing of the Free and Clarified Consent Form and of the Assent Form, signed by those responsible for students under 18.
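As a minimal sketch of the analysis step described above (simple-frequency descriptive statistics computed per variable), the fragment below reproduces the kind of percentage table reported in the Results; the variable name and answer categories are hypothetical stand-ins, since the original spreadsheet is not reproduced here:

from collections import Counter

# Hypothetical answers for one behavioral variable (condom use); the study's
# real data were entered into an Excel spreadsheet and are not available here.
condom_use = ["never", "sometimes", "always", "never", "sometimes", "never"]

def simple_frequency(responses):
    # Simple frequency (%) per answer category, as done for each variable.
    counts = Counter(responses)
    total = len(responses)
    return {category: 100.0 * n / total for category, n in counts.items()}

for category, pct in sorted(simple_frequency(condom_use).items()):
    print(f"{category:10s}{pct:6.1f}%")

Applied per group to the study's identification and behavioral variables, this kind of tabulation yields the percentage distributions presented in Tables 1 and 2.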
RESULTS

The study included 128 subjects: 64 adolescent women and 64 young women. The age range between 20 and 22 was the most representative among both groups. The white group predominated among women vulnerable to HPV, represented by 53% of adolescent women. As for education level, only 25.1% of adolescent women and 18.6% of young women remained in high school, with a reduction in the number of women enrolled in the last year. In terms of marital status, there was a significant number of unmarried adolescent women with a partner, represented by 56.25%, the partner being a boyfriend or a casual partner. However, 46.8% of young women were married, and often this was not an officially legal marriage, but one guided by the choice of living in a consensual union with the partner. In contrast, 53.1% of respondents said they did not have a partner at the time of the interview. In this study, the variable family income predominated among adolescent women at between 3 and 4 minimum wages. This was different among young women, 43.7% of whom had a family income of up to 2 minimum wages. As for characteristics related to sexual-affective behavior, Table 1 shows that 31.25% of adolescent women and 37.5% of young women had between 4 and 5 partners in the last year. On the issue of condom use, only 15.6% of adolescent women and 9.3% of young women asserted that they always use a condom during sexual intercourse. The type of sexual intercourse preferred by adolescent women was vaginal sex, while oral/vaginal sex was preferred by 31.25% of young women. For the variable beginning of sexual activity, it was observed that the age group between 14 and 17 was predominant in both groups: 43.7% among the adolescent women and 37.5% among the young women.

DISCUSSION

Given the description of the young women attending a school unit who have the possibility of being infected with HPV, Table 2 illustrates a predominance of ages between 17 and 22, with a decline from 23 years old and above. This age group is more prone to be infected by the virus, because studies show that one of the major risk factors is age, and there is a prevalence among young women up to 24 years old [7,8]. It is noteworthy that this prevalence peak among these women is due to a higher level of switching of partners and to early sexual initiation [9]. HPV infection affects young women at the start of sexual activity and is a transient phenomenon; for that reason, it declines spontaneously in most cases [10]. As for ethnicity, scarce literature mentions the association of ethnicity/color with a predisposition to HPV infection. The present study indicates that white women predominated among the adolescents; among the young women, brown (mixed-ethnicity) women had greater prominence, with a total of 46%. This agrees with the study by Pereyra et al., which reports that more than half of the cases of cervical cancer due to infection by HPV occur in nonwhite women [11]. It has been observed that black ethnicity was associated with the prevalence and incidence of carcinogenic HPV infection, but it was not a significant risk factor for infection [12]. However, another study indicates that the increased risk of HPV infection in black women is attributed not to genetics, but to socioeconomic characteristics [13].
In terms of education level, the dropping out of young women before completing high school was notable; many of them stop attending high school. Many of these young women are confronted with events in their lives that prevent them from continuing their studies, for example, entry into the labor market or early pregnancy. It is noticeable that the studied individuals are vulnerable when there is a hindrance to improving the knowledge and cognitive mechanisms needed to face situations that may favor the acquisition of sexually transmitted diseases. Under-information is also considered one of the main barriers to be overcome in the control of sexually transmitted diseases, especially when it comes to the human papillomavirus, because lack of education influences risk perception; the failure to adopt attitudes to prevent HPV suggests an association of lower education with higher prevalence of HPV infection [14,15].

With regard to the marital status of these young women, more than half of the adolescents were unmarried with a partner, and only 6.25% were married or living in a consensual union. This illustrates that these women may be vulnerable to the HPV virus independently of appropriate sexual behavior or sexual risk behavior. Some studies show that having a steady partner establishes a condition of defenselessness against the virus, since condom use is discarded after the relationship becomes stable [16]. This may occur by reason of trust, or even by submission to the partner when it comes to discussing the continued use of condoms in sexual intercourse [17].

Regarding the family income of the analyzed women vulnerable to HPV, it is clear that incomes are still low among the subjects of the study and that few women live with incomes above two minimum wages. The results show that some adolescents have a higher income than the other group by reason of living with their parents and/or family, by whom they are still supported. However, many of the non-adolescent women no longer live with their families and are now inserted in the labor market, being responsible for their own sustenance or having to support their families, which interferes with the learning process. It is assumed that this is a vulnerable population when low incomes lead to social inequalities, hindering access to the information needed to prevent sexually transmitted diseases [18].

Several studies show that the number of partners among women is a relevant factor, because the multiplicity of partners is a major risk factor for acquiring HPV infection [13,19]. In the present study, there was a variety of partners, especially among the young women. Thus, the studied women with more than three partners are subject to a vulnerable situation, since frequent exchange of partners allows unsafe sexual behavior and, consequently, a greater chance of contracting sexually transmitted diseases, especially when condoms are not used. The increased number of sexual partners during the life of the studied women also favors different sexual practices that may lead to a greater possibility of HPV infection. An age difference between the partners may also increase the risk [20]. However, contrary to what the literature indicates, the survey also showed that 37.8% of adolescents had only one partner in the last year. This may lead to the absence of protective measures, as it involves a long-term relationship, and the adolescent stops using condoms.
Data suggest that the number of women who do not use condoms is still high, favoring a greater exposure to the human papillomavirus, because 37.5% of adolescents and 43.8% of young women said they never used condoms. This information is relevant given that these women are in a situation of vulnerability to HPV, since they are exposed to beliefs and gender inequalities and are passive toward the partner, becoming dependent on him for condom usage. The use of condoms among women is a subject related to the sexual-affective relationship, because the type of relationship that young people establish, whether of great affection or only casual sex, is a contributing factor in the decision to use a condom or not [17].

There is a difficulty among the studied women in using condoms when they are faced with negotiating condom usage with their partners, either for fear of losing the partner or from insecurity in the relationship. Thus, it is clear that condom usage is linked to intimacy between partners. Some reasons are associated with this behavior, such as: only having sex with a partner whom the woman trusts, disliking condom usage because it would diminish pleasure in intercourse, or the studied woman thinking she will not get any disease [21].

The age group that prevailed in this research for sexual initiation was between 14 and 17, a result that agrees with the study by Martins et al., wherein, from a total of 8649 women, 74% reported that the first intercourse occurred between the ages of 14 and 20 [22]. This is an important finding, since the early onset of sexual activity is one of the factors for contracting HPV. This is due to the fact that young cells are more receptive to infection by human papillomavirus, and to the fragility of the uterine cervix at the initiation of sexual life [23,24]. In addition, there is a relation between the early onset of sexual activity and an increased risk of infection with HPV due to the longer time of exposure, leading to a larger risk of being infected with the virus [25].

In terms of sexual preferences among the young women, even though the choice of vaginal sex predominated, there was also a prominence of anal sex. The incidence of anal cancer associated with HPV among women has shown an increase of 40% in recent years [26]. This may be linked to socio-cultural factors that may be decisive in the behavior of these women, since family, education level and access to media and social networks are important sources of information and influence the behavior of these women, especially when it comes to sexual behavior. In addition, the studied women are vulnerable because, during intercourse, there is an exchange of sexual fluids, which directly relates to the transmission of various microorganisms, such as HPV, HIV and other STDs, depending on the performed sexual practice [27]. As for oral sex, also mentioned by the young women, there is a significant association with squamous cell carcinoma in the oropharynx area, base of the tongue and palatine tonsils, linked to HPV [28]. This condition can be worsened by modifiable risk factors for cancer such as smoking and alcohol consumption [29]. This situation determines the vulnerable condition of these women, who find themselves in a situation of defenselessness against the virus.
CONCLUSION

From the results, it was possible to identify that these young women students who have the possibility of being infected with HPV fit a profile of low socioeconomic conditions, especially when these women interrupt the cycle of learning, characterized by dropping out of school, which interferes with the acquisition of information and with the understanding of the importance of preventive measures. The results showed that the high school students are in a situation of vulnerability due to their risk behaviors, primarily low adherence to condom usage, in addition to the multiplicity of partners. It can be seen that, according to the socioeconomic and demographic profile and also to some risk behaviors, there are important factors that may lead to the acquisition of HPV. The development of goals for educational actions to improve the awareness of the young female population, so that they will adopt prevention attitudes and safe sexual behaviors in sexual and affective relationships, must be encouraged. Health professionals, especially nurses, have an important role in reducing HPV infection, with procedures that identify risks and emphasize appropriate prevention conduct.

Table 1. Risk behavioral characteristics in young women vulnerable to infection by human papillomavirus.

Table 2. Socioeconomic and demographic characteristics of young women vulnerable to infection by human papillomavirus.
2017-07-06T13:25:16.374Z
2013-11-11T00:00:00.000
{ "year": 2013, "sha1": "ed30dea0f7909b11047c0ebd8aad688992ef886a", "oa_license": "CCBY", "oa_url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=39468", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "ed30dea0f7909b11047c0ebd8aad688992ef886a", "s2fieldsofstudy": [ "Medicine", "Sociology" ], "extfieldsofstudy": [ "Medicine" ] }
92623535
pes2o/s2orc
v3-fos-license
Gap Junctions in the Dorsal Root Ganglia

Dorsal root ganglia (DRG) or spinal ganglia are present in relation to the dorsal root of the spinal nerves. The neurons in the dorsal root ganglion are pseudounipolar in type. The single process from the soma or body divides into a central and a peripheral process. Dorsal root ganglion neurons constitute the first-order neurons of the pain pathways and can be categorized into small, medium and large varieties. The peripheral process collects the impulses from the peripheral receptors, and the central process reaches out to the central nervous system. The neurons in the DRG are surrounded by satellite glial cells (SGCs). These cells ensheath the neurons from all sides. Besides covering the neurons, they share features very similar to those of astrocytes, such as the expression of glutamine synthetase. Many quantitative studies have identified the different proportions of satellite glial cells for individual neurons. These cells have been identified to become activated when confronted by noxious stimuli, injury or inflammation. Clinically, these cells have been implicated in many neurological disorders.

Introduction

The human nervous system is an extremely efficient, compact, fast and reliable computing system, yet it weighs substantially less than most computers and performs at an incredibly greater capacity. The nervous system is subdivided morphologically into two components: the central nervous system (CNS), consisting of the brain and spinal cord, and the peripheral nervous system (PNS), comprising the cranial and spinal nerves and ganglia. Discrete collections of nerve cell bodies in the CNS are known as nuclei, while in the PNS these are called ganglia. The nerve cell bodies are of varying sizes and shapes. Ganglia are present in the dorsal root of spinal nerves, the sensory roots of the trigeminal (Vth), facial (VIIth), glossopharyngeal (IXth) and vagus (Xth) nerves, and in the autonomic nervous system [1]. Some of them have independent nomenclature, like the "Gasserian ganglion" for the Vth nerve. Thus ganglia can be divided into two types, somatic and autonomic (Figure 1). The nerve cell bodies in each of these differ in their size and shape. Somatic ganglia contain small to large pseudounipolar neurons, while the autonomic ganglia contain small multipolar neurons. Depending on the number of processes, a neuron can be classified into various categories. Unipolar neurons (no dendrites, only an axon) are rare in vertebrates; bipolar neurons (possessing an axon and a dendrite) are present in the olfactory mucosa and the retina; and multipolar neurons (a single axon and two or more dendrites) are present in the central nervous system, except in the mesencephalic nucleus of the Vth cranial nerve. An additional type of neuron, the pseudounipolar neuron, is present in the sensory ganglia and the ganglia of the Vth, VIIth, IXth and Xth cranial nerves. It divides into a central and a peripheral process (Figure 2). The neurons in sensory ganglia are at first bipolar, but the two neurites soon unite to form a single process during development. Structurally and electrophysiologically, both these processes show characteristic features of the axon [2]. Small satellite glial cells tightly wrap the cell bodies of the pseudounipolar neurons in the ganglion. The satellite cells that surround the pseudounipolar neuron are continuous with the Schwann cell sheath that surrounds the axon [3].
A distinctive feature of satellite glial cells, by which they are distinguished from astrocytes, is that they completely surround the individual sensory neuron. The neuron and its surrounding satellite glial cells form a distinct morphological, and probably functional, unit [4]. The somatic ganglia of all mammalian and avian species demonstrate this arrangement [5]. Satellite glial cells have been implicated in neuronal nutrition, homeostasis, and the process of apoptosis. It is known that astrocytes in the central nervous system perform 'spatial buffering' (regulation of K+), and it is presumed that SGCs also perform the same function [5]. Removing K+ from the perineuronal environment would reduce neuronal excitation and therefore contribute to the lowering of pain.

Morphology of dorsal root ganglia (DRG)

Dorsal root ganglia (sensory ganglia) contain the cell bodies of primary afferent neurons that transmit sensory information from the periphery into the central nervous system (CNS) [6]. Sensory ganglia are located near the entrance of the dorsal root into the spinal cord and are not a part of the CNS. Sensory (somatic) ganglia lie outside the blood-brain barrier and are densely vascularized by fenestrated capillaries, making the neurons and SGCs easily accessible to compounds in the circulation, including chemotherapeutic drugs [7]. Chemotherapeutic drugs show greater accumulation in sensory ganglia than in peripheral nerves [8]. Dorsal root ganglia are more sensitive to heat than other nervous tissues [9]. It is known that pulsed radiofrequency can selectively block sensory nerves while minimizing the destruction of motor nerves. Sluijter et al. reported that the placement of a cannula 1-2 cm peripheral to the dorsal root ganglia could result in maximum effect when pulsed radiofrequency was applied to dorsal root ganglia of the spinal cord [10]. Kikuchi et al. [9] classified the anatomical positions and variations of dorsal root ganglia into intraspinal (IS), intraforaminal (IF), and extraforaminal (EF) (Figure 3).

Morphology and histology of sensory (somatic) ganglia

The segmental nature of the spinal cord is demonstrated by the presence of 31 pairs of spinal nerves, but there is little indication of segmentation in its internal structure. Each dorsal root is broken up into a series of rootlets that are attached to the spinal cord along the corresponding segment. The ventral root arises similarly as a series of rootlets. These rootlets join to form the ventral and dorsal roots. The dorsal and ventral roots traverse the subarachnoid space and pierce the arachnoid and dura mater. At this point, the dura mater becomes continuous with the epineurium. After passing through the epidural space, the roots reach the intervertebral foramina, where the dorsal root ganglia are located on the dorsal root. Certain authors have put forward classifications of the neurons in the dorsal root ganglia, based upon their staining properties under the light microscope, into two histological types called "large light" and "small dark". This has been confirmed by recent electron microscopic analysis, which indicates [11] the existence of two basic types of DRG neurons, usually termed type A and type B rather than large light and small dark [12].
The neurons in the dorsal root ganglion can also be divided into three types (small, medium and large neurons) based upon the size of their cell bodies. This classification seems more appropriate because the size of the neuronal cell bodies determines their function. The large neurons are mainly concerned with the transmission of proprioception and discriminative touch, while the medium-sized neurons transmit nerve impulses associated with sensations like light touch, pressure, pain and temperature. The small-sized neurons, however, exclusively transmit action potentials related to pain and temperature. Glial cells are involved in various pathological processes affecting the central nervous system [13]. There is strong evidence that CNS glial cells (microglia and astrocytes) are involved in the induction and maintenance of neuropathic pain [14]. Following injury of a peripheral nerve, satellite glial cells (SGCs) in the dorsal root ganglia undergo changes in cell number, structure and function similar to those in the CNS [15]. Peripheral nerve transection increases gap junctions and intercellular coupling of SGCs. SGCs also upregulated the production of proinflammatory cytokines such as tumor necrosis factor-α after lumbar facet joint injury [16]. Thus it is well established that glial cells play a critical role in the genesis and persistence of pain [17]. This is particularly true for the sensory ganglia. Though there are far fewer satellite glial cells than astrocytes or Schwann cells, because of their unique location in sensory ganglia, SGCs can strongly influence afferent sensation. They also respond to nerve injury by upregulating glial fibrillary acidic protein (GFAP) [18]. One of the ways glial cells in the sensory ganglia transmit signals is through intercellular calcium waves (ICWs) via gap junctions and adenosine-5′-triphosphate (ATP) acting on purinergic type 2 (P2) receptors [19]. This signaling has been shown to be bidirectional between SGCs and neurons (Figure 4).

Classification of pseudounipolar neurons of dorsal root ganglia into small, medium and large

Older literature suggests that neurons in dorsal root ganglia can be divided into two histological types, called "large light (LL)" and "small dark (SD)", on the basis of staining properties under the light microscope [20]. These populations overlap, but they still show several physiological, biochemical and functional differences. Small dark neurons transmit the sensations carried particularly by C fibers (nonmyelinated, slow conducting) [21], whereas large light neurons transmit the sensations carried via A fibers (myelinated and fast conducting). Many of the small dark neurons contain substance P or calcitonin gene-related peptide; they are concerned with thermo- and mechanoreception, and many of them are nociceptive. The terminals of large light neurons are low-threshold mechanoreceptors [22]. Neurons in the sensory ganglia have no dendrites and do not receive synapses, but are still endowed with receptors for numerous neurotransmitters. More recently, depending upon their electron microscopic appearance, neurons in the dorsal root ganglia have been divided into type A and type B, corresponding to large light and small dark neurons respectively. Various other electrophysiological classifications, depending upon conduction velocity, modality and adaptation rate, serve to distinguish a large number of functional types of sensory neurons, but it is not clear how these are related to the two basic histological types.
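For readers who want the size-based scheme in operational form, the small sketch below (illustrative only) encodes the diameter bands quoted in the next paragraph from the chronic constriction injury study [23], together with the fiber types they roughly correspond to; the band edges and the fallback label are taken directly from, or added around, those quoted figures:

def classify_drg_neuron(diameter_um):
    # Size bands (average soma diameter) from the study cited as [23] below.
    if 23 <= diameter_um <= 30:
        return "small"    # roughly neurons giving rise to C fibers
    if 31 <= diameter_um <= 40:
        return "medium"   # roughly A-delta
    if 41 <= diameter_um <= 53:
        return "large"    # roughly A-beta
    return "outside the reported bands"

print(classify_drg_neuron(27))   # small
print(classify_drg_neuron(45))   # large

Note that other studies quote slightly different bands (e.g., the Nav1.8-based classification mentioned below), so any such function is specific to one study's measurement convention.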
There are contradictions among researchers regarding the classification of dorsal root ganglia neurons into small, medium and large categories. One of the studies, involving the chronic constriction injury model of Bennett and Xie [23], which retains the connection with the original receptive field so that hyperalgesia and allodynia can be demonstrated, classifies the neurons in the DRG into small (23-30 μm), medium (31-40 μm) and large (41-53 μm), based on optical measurement of the average diameter [23]. These groupings roughly correspond to those giving rise to C, Aδ and Aβ fibers, respectively [21]. More recently, sensory neurons in dorsal root ganglia have been classified depending upon immunohistochemical staining, such as Nav1.8 expression in sensory neurons isolated from dorsal root ganglia, into small (27-31 μm), medium (31-40 μm) and large (40-50 μm) [24]. There are two factors, namely DNA content and transcriptional activity, that are determinants of cell size [25]. Differences in neuronal body size seem to be primarily determined by transcriptional activity. A positive correlation between cell body size and total RNA synthesis has been demonstrated in frog neurons, indicating that large neurons need higher transcriptional activities to maintain their large size [26]. The neuronal transcription rate is, in turn, positively related to the magnitude of interactions between neurons and their targets, which contributes to the regulation of soma size and metabolic activity [27]. Sensory neurons of the dorsal root ganglia express multiple voltage-gated sodium channels that differ substantially in gating kinetics and pharmacology. Small-diameter (less than 25 μm) neurons isolated from the rat DRG express a combination of fast tetrodotoxin-sensitive (TTX-S) and slow TTX-resistant (TTX-R) sodium channels, while large-diameter neurons (more than 30 μm) predominantly express TTX-S Na current [28]. In viral studies, adeno-associated viral vectors (AAV) are increasingly used to deliver therapeutic genes to the central nervous system, where they promote transgene expression in postmitotic neurons for long periods with little or no toxicity. In adult rat dorsal root ganglia, authors investigated the cellular tropism of AAV8 containing the green fluorescent protein gene (GFP) after intra-lumbar DRG injection; after injection, 2% of small DRG neurons (less than 30 μm) were GFP(+), as compared to 32% of large (more than 60 μm) DRG neurons [29]. Electron microscopic study of a dorsal root ganglion divides the neurons depending upon their size and the distribution of their organelles (Figure 5). They were further subdivided into six subtypes according to the arrangement and three-dimensional organization of the Nissl bodies and Golgi apparatus in the perikarya. Type A1 cells were large, clear neurons in which Nissl bodies, separated from each other by pale narrow strands of cytoplasm containing small stacks of Golgi saccules and rod-like mitochondria, were evenly distributed throughout the perikaryon. In type A2, the Nissl bodies assumed a similar distribution but were separated by much wider strands of cytoplasm. Type A3, the smallest of the type A category, displayed densely packed Nissl bodies and long stacks of Golgi saccules, which formed a perinuclear ring in the midportion of the perikaryon. Type B cells were smaller and showed a concentric zonation of their organelles.
In type B1, large Nissl bodies located in an outer cytoplasmic zone were made of long piles of parallel cisternae interrupted by curved Golgi stacks. Type B2 was characterized by a ring-like Golgi apparatus separating the perikaryon into a cortical zone, composed mainly of Nissl substance, and a juxtanuclear zone containing mitochondria and smooth endoplasmic reticulum. Type C cells were the smallest of the ganglion cells and contained small, poorly demarcated Nissl bodies and a juxtanuclear Golgi apparatus [30]. Neurotransmitter studies show that tachykinins like substance P (SP) and neurokinin A, which are released by the C-type primary afferent terminals of the small DRG neurons, play important roles in spinal nociception. By means of non-radioactive in situ hybridization and whole-cell recording, authors showed that the small rat DRG neurons also express the NK-1 tachykinin receptor. In situ hybridization demonstrated that the positive neurons in rat DRG sections were mainly small, with a diameter of less than 25 μm; the remaining positive neurons were cells with a medium diameter between 26 and 40 μm. No positive large neurons (more than 40 μm) were observed [31]. Depending upon the molecular weight of neurofilaments and their expression in various categories of neurons in dorsal root ganglia, three different neurofilament subunits have been identified, i.e., light (NF-L), middle (NF-M) and high (NF-H). Previous data showed that all dorsal root ganglia neurons express NF-M and NF-H, while NF-L defines a distinct group of neurons, significantly the large-light neurons [32].

Peripherin: marker to differentiate the neurons in the DRG

Peripherin, a protein formerly called Y, was first identified by two-dimensional gel electrophoresis in the insoluble fraction of cellular extracts from mouse neuroblastoma cell lines [34]. Its presence had previously been established in the rodent peripheral nervous system, mostly by biochemical studies; moreover, biochemical characterization following nerve transection also supports its localization in neurons within the peripheral nervous system [35]. This observation led to the coining of the term "peripherin" to designate this particular protein entity. Peripherin is a 57-kDa type III neuronal intermediate filament protein, which is capable of either self-assembling or co-assembling with all of the individual neurofilament subunits [36]. In particular, the small cells of the dorsal root ganglia selectively contain peripherin [35], making it a useful marker to define the small ganglion cell subpopulation. The exact function of peripherin is still unknown, though it has been suggested to be a determinant of the shape and architecture of peripheral nerve axons and also to provide structural integrity to the cells [37]. Peripherin immunolabeling has proven to be an important marker, especially for the study of peripheral nerve development and regeneration, since this intermediate filament protein is highly over-expressed during axon elongation [38]. Previously these neurofilaments were thought to be inert, but in fact they are highly dynamic structures with many diverse functions, such as relaying signals from the plasma membrane to the nucleus [39], maintaining the position and function of cellular organelles, and also regulating protein synthesis [40]. This neurofilament is clinically relevant because of its association with the pathogenesis of some major neuronal disorders.
Mainly, accumulation of neurofilament protein and peripherin in proximal axons is associated with amyotrophic lateral sclerosis [41] and is also seen in other diseases such as Alzheimer's disease [42]. Peripherin was used to identify the small to medium-sized neurons in the rat dorsal root ganglia in the present study, because these are associated with the transmission of pain from the periphery to the central nervous system. This gives an idea of the actual number of neurons within the dorsal root ganglia involved in the transmission of pain (Figure 6).

Satellite glial cells

Sensory neurons in the dorsal root ganglia are ensheathed by specialized glial cells termed 'satellite glial cells' (SGCs). Recently, there has been considerable interest in these cells, as they are profoundly altered by the peripheral injuries used to study pain behavior and appear to contribute to chronic pain [43]. Satellite glial cells are peripheral glial cells, but share many properties with astrocytes in the central nervous system (CNS), including the expression of glutamine synthetase and of transporters of amino acid neurotransmitters. However, satellite glial cells differ in some respects from astrocytes, particularly in the tight sheath they make around the neuronal cell bodies [44]. In the dorsal root ganglion, Schwann cells and satellite cells are activated in response to ischemia, traumatic injury and inflammation [45]. Application of various cytokines to exposed dorsal root ganglia resulted in an increase in the discharge rate as well as increased mechanosensitivity of DRG and peripheral receptive fields [46]. Satellite glial cells are a consistent component of the DRG in all species, yet their contribution to basic neuronal function remains unknown, although these satellite cells have been implicated in neuronal nutrition, homeostasis and the process of apoptosis [5]. Recent studies have demonstrated that a specific glial cell population, the satellite glial cells, has the ability to regulate ion concentration [47] and possesses mechanisms for the release of cytokines [48], ATP [19] and other chemical messengers like calcium. Satellite glial cells influence neuronal excitability via gap junctions [49]. Satellite glial cells undergo major changes as a result of injury to peripheral nerves and appear to contribute to chronic pain [4]. Quantitative studies on several species showed that the number of satellite glial cells per neuron increases in proportion to the neuron's volume, consistent with the idea that these satellite glial cells support the neurons metabolically [50]. During pathological conditions, such as nerve injury or inflammation, SGCs demonstrate an altered phenotype similar to that seen in activated astrocytes, which includes increased expression of glial fibrillary acidic protein (GFAP) and synthesis of cytokines [51]. SGCs are therefore said to undergo activation due to injury. Increased coupling by gap junctions between SGCs has been observed in several inflammatory pain and axotomy models [52].

Satellite glial cells as a structural unit

Satellite glial cells (SGCs) in sensory ganglia wrap completely around the neuron. Several investigators have claimed that SGCs bear processes and are therefore structurally similar to astrocytes, but recent research indicates that SGCs are laminar and have no true processes.
In general, each sensory neuron has its own SGC sheath, which usually consists of several SGCs, and thus the neuron and its surrounding satellite glial cells form a distinct morphological and probably functional unit. A region containing connective tissue separates these units. In some cases (5.6% in rat DRG), neurons form a small group containing two to three cells that are enclosed in a common connective tissue space [44]. The neurons in the clusters are in most cases separated from each other by the SGC sheath. The SGC envelope usually consists of flat processes that lie close to the neuronal plasma membrane. The distance between the glial cell and the neuronal plasma membrane is about 20 nm [44]. The neurons send out numerous fine processes (microvilli), some of which fit into invaginations of the SGCs, thus increasing the neuronal surface area, which may allow an extensive exchange of chemicals between the two cell types. A study on cultured SGCs of embryonic and neonatal rats showed that SGCs can transform into astrocytes, Schwann cells and oligodendrocytes [53]. Quantitative studies on several species showed that the number of SGCs per neuron increases in proportion to the neuron volume [50], consistent with the idea that SGCs support the neurons metabolically. It was also found that the mean volume of the nerve cell body corresponding to an SGC was lower for small neurons than for large neurons, which implies that the metabolic needs of small neurons are better satisfied than those of large ones. Therefore, smaller neurons have a higher resistance to insults, which seems to be the case for mercury poisoning. However, there is experimental evidence that smaller neurons are more likely to die following axonal damage [54]. As sensory ganglia are not protected from substances circulating in the blood, SGCs may be important in the context of exposure to toxic substances. In several studies, SGCs were examined after poisoning with heavy metals, and it was found that these cells take up organic mercury compounds [55] and lead [56]. Mercury poisoning also caused SGC proliferation [57]. Nineteen days after the administration of organic mercury to rats, SGCs in DRG were heavily labeled for mercury, and their ability to take up GABA was greatly diminished. Interestingly, small neurons were considerably less labeled for mercury than large neurons, which could be attributed to more effective protection by SGCs. Prolonged (3-18 months) administration of lead acetate to rats resulted in prominent changes in SGCs in DRG, which included proliferation and hypertrophy of these cells. Although a certain degree of neuronal damage was observed, it can be proposed that the changes in SGCs provide better protection to the neurons during lead poisoning.

Satellite glial cells maintain ionic concentration

The satellite glial cells neighboring the pseudounipolar neurons have a highly negative resting membrane potential and noticeable potassium permeability. The primary means of limiting extracellular levels of potassium in the sensory ganglia is through the process commonly called spatial buffering or siphoning, which is mediated by satellite glial cells. The maintenance of a low extracellular potassium concentration is crucial for controlling the neuronal resting membrane potential and neuronal excitability. In sensory ganglia, increased neuronal excitability has been associated with the occurrence of altered sensation, including the development of neuropathic pain [58].
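The dependence of the resting potential on extracellular potassium can be made explicit with the standard Nernst relation (a textbook result, not derived in this chapter); at body temperature,

E_K = \frac{RT}{zF}\,\ln\frac{[\mathrm{K}^+]_{\mathrm{out}}}{[\mathrm{K}^+]_{\mathrm{in}}} \approx 61.5\ \mathrm{mV}\times\log_{10}\frac{[\mathrm{K}^+]_{\mathrm{out}}}{[\mathrm{K}^+]_{\mathrm{in}}} \qquad (T \approx 310\ \mathrm{K}).

With illustrative concentrations of [K+]out = 5 mM and [K+]in = 140 mM, E_K is about -89 mV; if spatial buffering fails and [K+]out rises to 10 mM, E_K shifts to about -71 mV. A membrane whose resting potential follows E_K therefore depolarizes toward firing threshold, consistent with the link between impaired K+ clearance and the increased excitability described above.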
In the CNS, buffering of extracellular potassium ions is carried out by astrocytes and consists of uptake through inwardly rectifying potassium (Kir) channels and dissipation through other channels and gap junctions [59]. It is established that Kir currents and Kir4.1 expression occur in satellite glial cells [60]. Voltage-gated potassium channels are among the important physiological regulators of membrane potentials in excitable cells, including sensory ganglion neurons.

Neuron-glial interactions

Central nervous system glial cells are increasingly recognized as important regulators of synaptic activity and as key functional units of the nervous system [61]. Even though many of the same voltage-sensitive ion channels and neurotransmitter receptors of neurons are found in glia, glial cells lack the membrane properties required to fire action potentials. Nevertheless, these ion channels and electrogenic membrane transporters permit glia to sense indirectly the level of neuronal activity by monitoring activity-dependent changes in the chemical surroundings shared by these two cell types. Sophisticated imaging methods, which allow observation of changes in intracellular and extracellular signaling molecules in real time, show that glia communicate with one another and with neurons primarily through chemical signals rather than electrical signals. Many of these signaling systems overlap with the neurotransmitter signaling systems of neurons, but some are specialized for glial-glial and neuron-glial communication. Neuron-glia interaction through gap junctions and extracellular paracrine/autocrine processes is believed to be important in the development of peripheral sensitization within the trigeminal ganglia [62]. Peripheral sensitization, which is characterized by increased neuronal excitability and a lowered threshold for activation, may possibly trigger a migraine attack. Moreover, activation and sensitization of the trigeminovascular afferent fibers appear crucial for the initiation of migraine pain and for subsequent central sensitization, in which increased excitability of second-order neurons leads to pain and allodynia. Increased gap junction communication between neurons and satellite glial cells was observed in the trigeminal ganglion in response to chemical activation of sensory trigeminal nerves [62]. Increased neuronal-glial signaling by way of gap junctions is common in neuroinflammatory CNS disorders, such as cerebral ischemia and Alzheimer's disease, and may have underlying pathological significance [63]. Tonabersat (SB-220453) is a compound that binds selectively and with high affinity to a unique stereoselective site, the gap junction, and inhibits it in rat and human brains [64]. After an injury, the number of gap junctions that connect satellite glial cells increases [43], probably as an adjustment to the greater release of potassium ions with intense neuronal activity. Injury to a peripheral nerve does not directly impact satellite glial cell integrity. However, changes in injured neurons can influence the ability of the surrounding SGCs to regulate K+ via neuromodulators such as adenosine triphosphate (ATP) and nitric oxide (NO) [65]. Satellite glial cells have unique proteins that include the inwardly rectifying K+ channel Kir4.1 [43], the connexin-43 (Cx43) subunit of gap junctions, the purinergic receptor P2Y4 [66] and soluble guanylate cyclase.
There is also evidence of the presence of the small-conductance Ca2+-activated K+ channel SK3, which is present only in satellite glial cells. All the above proteins are involved, either directly or indirectly, in potassium ion (K+) buffering and thus can influence the level of neuronal excitability, which, in turn, has been associated with neuropathic pain conditions (Figure 7). The authors also used in vivo RNA interference to reduce the expression of Cx43 (present only in SGCs) in the rat trigeminal ganglion and showed that this resulted in the development of spontaneous pain behavior. The pain behavior is present only when Cx43 is reduced and returns to normal when Cx43 concentrations are restored [66,67].

Glial fibrillary acidic protein (GFAP): locator molecule for the satellite glial cells

Glial fibrillary acidic protein is the principal intermediate filament in mature astrocytes of the central nervous system and in satellite glial cells of sensory ganglia [4]. GFAP is strongly upregulated in response to CNS damage [68]. It is thought to be important in astrocyte-neuronal interactions, in astrocyte mobility and shape, and for the maintenance of homeostasis and vascular permeability at the blood-tissue interface [69]. GFAP is essential for normal white matter architecture and blood-brain barrier integrity, and its absence leads to late-onset CNS dysmyelination [70]. Increased GFAP expression occurs in activated glial cells. Activated astrocytes are characterized by hypertrophy, release of pro-inflammatory cytokines (IL-1, IL-6 and TNF-α), release of nitric oxide and prostaglandins, and up-regulation of the intermediate filaments GFAP and vimentin [17]. Likewise, satellite glial cells (SGCs) display increased expression of GFAP after neuronal injury or inflammation and undergo a number of changes similar to those seen in astrocytes, such as the synthesis of cytokines [71]. GFAP expression increases in the satellite glial cells of trigeminal ganglia after tooth pulp injury [72]. The present study also investigated the expression of GFAP in the satellite glial cells following acute pain (Figure 8). GFAP is a marker of activated satellite glial cells and astrocytes [48]. These rope-like filaments are called intermediate filaments because their diameter of 8-10 nm is between those of actin filaments and microtubules.

[Figure caption: Satellite glial cells involved in the maintenance of potassium homeostasis [66].]

Nearly all intermediate filaments consist of subunits with a molecular weight of about 50 kDa. Some evidence suggests that many of the stable structural proteins in intermediate filaments evolved from highly conserved enzymes, with only minor genetic modification. Intermediate filaments are formed from nonpolar and highly variable intermediate filament subunits. Unlike those of microfilaments and microtubules, the protein subunits of intermediate filaments show considerable diversity and tissue specificity. In addition, they do not possess enzymatic activity and form nonpolar filaments. Intermediate filaments also do not typically disappear and reform in the continuous manner characteristic of most microtubules and actin filaments. For these reasons, intermediate filaments are believed to play a primarily structural role within the cell and to compose the cytoplasmic link of a tissue-wide continuum of cytoplasmic, nuclear, and extracellular filaments.
A highly variable central rod-shaped domain with strictly conserved globular domains at either end characterizes intermediate filament proteins. Although the various classes of intermediate filaments differ in the amino acid sequence of the rod-shaped domain and show some variation in molecular weight, they all share a homologous region that is important in filament self-assembly. Intermediate filaments are assembled from a pair of helical monomers that twist around each other to form coiled-coil dimers. Then, two coiled-coil dimers twist around each other in antiparallel fashion (parallel but pointing in opposite directions) to generate a staggered tetramer of two coiled-coil dimers, thus forming the nonpolarized unit of the intermediate filament. Each tetramer, acting as an individual unit, is aligned along the axis of the filament. The ends of the tetramers are bound together to form the free ends of the filament. This assembly process provides a stable, staggered, helical array in which filaments are packed together and additionally stabilized by lateral binding interactions between adjacent tetramers [2]. In total, six classes of intermediate filaments are present in the body; for example, classes I and II include the keratins and cytokeratins, and class III includes vimentin, glial fibrillary acidic protein (GFAP) and peripherin. GFAP is the principal intermediate filament in mature astrocytes. GFAP is a soluble protein first isolated from multiple sclerosis plaques and presumably arising from the glial filaments [73]. The GFAP gene is located on the long (q) arm of chromosome 17 at position 21. Mutation in GFAP results in Alexander disease, a rare leukoencephalopathy affecting predominantly the brainstem and cervical cord, with insidious onset of clinical features, and unified by the presence in astrocytes of Rosenthal fibers (protein aggregates mainly containing glial fibrillary acidic protein (GFAP) and small stress proteins), especially in subpial and subependymal locations. GFAP is strongly upregulated in response to CNS damage [68]. It is thought to be important in astrocyte-neuronal communication and is believed to modulate astrocyte motility and shape. Satellite glial cells (SGCs) are responsible for the maintenance of homeostasis and vascular permeability at the blood-tissue interface [69]. In the peripheral nervous system, neurons located in sensory ganglia are tightly surrounded by SGCs; following injury, these cells undergo modifications in structure and function [15]. According to Feng et al., after ligation of the L5 spinal nerve, mechanical allodynia developed in the ipsilateral hind paw, and expression of GFAP in the ipsilateral DRG increased significantly as early as 4 hours after surgery, gradually rising to a peak level at day 7 and then staying at a high level until day 56 [74]. A significant difference was seen among the sizes of neurons: small to medium-sized neurons showed maximum GFAP immunoreactivity at 12 hours, and on day 7 a number of larger neurons were surrounded by GFAP-stained satellite cells.

Gap junctions in the nervous system

Gap junctions, tight junctions, adherens junctions, desmosomes, hemidesmosomes, focal adhesions, chemical synapses, and immunological synapses are complex multiunit plasma membrane structures that assemble in a localized spatial and temporal organization to maintain structural tissue organization and to provide cell signaling functions.
At least nine connexins (Cx26, Cx32, Cx33, Cx36, Cx37, Cx40, Cx43, Cx45, Cx46) are expressed to various degrees in the nervous system. Functional studies in diverse cell types and in various exogenous expression systems have revealed that gap junction channels formed by different connexins are regulated differently, both at the single-channel level (gating controls such as voltage sensitivity and variations in unitary conductance) and at the level of synthesis (expression, altered for example by hormones or the extracellular matrix). Some gap junction channels are more sensitive to various gating stimuli than others, some display some degree of ionic selectivity, and some will pair promiscuously with other connexins (heterologous channels), while others are quite selective in their interactions (homologous channels). Such differences are important from the standpoint of the physiological roles of gap junctions in different cell types, as well as in the establishment of communication compartments within the nervous system [75]. Connexins are differentially expressed in the brain during ontogeny. More recently, tissue culture preparations from embryonic neural tissue have allowed manipulation of individual cells and evaluation of changes in junctional distribution and expression during maturation. Such studies have clarified the relationships between sequential changes in the phenotypes of neural cells, the extent of coupling mediated by Cx43 (which is abundant in neural precursor populations), and the appearance of other gap junction proteins. The expression pattern of Cx32, Cx43, and Cx30 during development in the rat brain indicates that Cx43 appears first, at embryonic days 12-18 [76], and that Cx32 protein and mRNA appear during the first or second postnatal week and increase during development. Immunohistochemical analysis of postnatal rat brain has shown that Cx43 first appears along radial glial cells and is most intense along cerebellar Bergmann glial cells [77]. Glia represent the major cell population in the CNS coupled by gap junctions. Indeed, compared to neurons, the level of connexin expression is high in these cells and persists until the adult stage [75]. For the two main types of macroglial cells, the astrocytes and the oligodendrocytes, several connexins have been detected [78]. Gap junctional communication is not limited to astrocyte-to-astrocyte or oligodendrocyte-to-oligodendrocyte contacts; it also occurs between both cell types. In the adult brain, the predominant connexin is Cx43, which is abundant in astrocytes and is also expressed in the leptomeninges, endothelial cells, and ependyma. The second main type of macroglia, the oligodendrocytes (and their peripheral counterparts, the Schwann cells), appear to express a different gap junction protein, Cx32, although to a lower extent in situ than the level of Cx43 expression exhibited by astrocytes. Astrocytes express Cx43 and are well coupled in vivo and under culture conditions. However, the strength of coupling and the degree of Cx43 expression between astrocytes vary depending on the brain region, being higher in the hypothalamus than in the striatum. Although glial gap junctions do not generate action potentials under normal conditions and are devoid of synaptic contacts, connexin channels provide a route that allows changes in membrane potential to be transmitted from one cell to its neighbors.
Recently, the participation of astrocytic gap junctions in neuroprotection has been investigated by comparing neuronal vulnerability in the presence of either communicating or non-communicating astrocytes [75].

Gap junctions and connexins

Gap junctions and their constituent connexin proteins have represented a new challenge in all tissues where they occur, but no structure is more complex or more interconnected than the mammalian central and peripheral nervous systems (CNS and PNS). The term "gap junction" arose from the work of Revel and Karnovsky, who described the fine structure of the interconnections between mouse cardiomyocytes and between hepatocytes. The later development of specific antibodies to gap junction proteins, and eventually the cloning of the connexin molecules, have now led to the availability of a variety of techniques by which the distribution and expression patterns of specific types of gap junctions have been defined in a wide range of tissues, including the brain. Gap junctions are clusters of intercellular channels that are composed of 12 subunits, 6 of which form a connexon, or hemichannel, contributed by each of the coupled cells [79]. Gap junctions are permeant to molecules up to 1 kDa and are found in virtually all cell types in mammals; the few exceptions include circulating erythrocytes, spermatozoids, and adult innervated skeletal muscle cells [80]. Gap junctional communication is essential for many physiological events, including cell synchronization, differentiation, cell growth, and metabolic coordination of avascular organs, including the epidermis and lens [81]. Connexin family members share a similar structural topology. Each connexin has four transmembrane domains that constitute the wall/pore of the channels. These domains are linked by two extracellular loops that play roles in cell-cell recognition and docking processes. There are three conserved cysteine residues in each loop, which form exclusively intraconnexin disulfide bonds [82]. The transmembrane domains and extracellular loops are highly conserved among the family members. Furthermore, connexin proteins have cytoplasmic N- and C-termini and a cytoplasmic loop linking the second and third transmembrane domains. Although the N-terminus is conserved, the cytoplasmic loop and C-terminus show great variation in sequence and length. The cytoplasmic tail and loop are susceptible to various post-translational modifications (e.g., phosphorylation), which are believed to have regulatory roles [83]. Connexons (hemichannels) are carried to the cell surface via vesicles transported along microtubules, which fuse with the plasma membrane. These hemichannels can either form nonjunctional channels in unopposed areas of the cell membrane or diffuse freely to regions of cell-to-cell contact to find a partner connexon from a neighboring cell and complete the formation of intercellular channels. Intercellular channels then cluster into gap junction plaques, a highly dynamic event involving removal of old channels from the center of the plaque while new gap junction subunits are added to the periphery [84]. The intercellular channels from the middle of the plaque are internalized into vesicular structures called "annular junctions" [85], which either fuse with the lysosome for degradation by lysosomal enzymes or are targeted to the proteasomal pathway [86].
The continuous synthesis and degradation of connexins through these mechanisms may provide for the quick adaptation of tissues to changing environmental conditions. Unopposed hemichannels can also be functional under certain conditions, including mechanical and ischemic stress. Under these circumstances, open hemichannels are thought to facilitate the release of a variety of factors, such as ATP, glutamate, and NAD+, into the extracellular space, generating different physiological responses [87]. To date, 20 members of the connexin family of proteins that form gap junctional intercellular communication channels in mammalian tissues have been proposed, and over half are reported to be present in the nervous system. Identification of the several connexin proteins at gap junctions between each neuronal and glial cell type is necessary for the sensible design of investigations into the functions of gap junctions between glial cells and into the functional contributions of electrical and "mixed" (chemical plus electrical) synapses to communication between neurons in the mammalian nervous system (Figure 9).

Figure 9. Immunohistochemical staining using connexin-43 antibody. Black arrows represent the location of gap junctions between the satellite glial cells and the neuronal bodies [33].

Pathophysiology of connexins

The role of gap junctions in cell-to-cell interaction has been well evaluated. Two effects derived from gap junction function may determine the life and death of the connected cells [89]. The bystander effect promotes the death of normal cells adjacent to an apoptotic cell by diffusing toxic metabolites through gap junctions. Conversely, the Good Samaritan effect allows a condemned cell to live by draining the toxic metabolites to adjacent cells, maintaining cell integrity and thus tissue homeostasis. In this way, gap junctions perform a dual function, either saving or killing interconnected cells [88]. Some pathological conditions are directly related to gap junctions or to their altered function, and some human diseases are caused by mutated connexins [89]. Mutations in Cx32 induce a peripheral neuropathy, the X-linked form of Charcot-Marie-Tooth disease. The many conductivity changes observed in this disease may be caused by altered protein traffic to the junctions, altered channel permeability and, sometimes, altered conformation of heterotypic channels [78]. Mutations of Cx26 may lead to the most common form of hereditary non-syndromic deafness. Cx43 structure may be altered in some forms of human epilepsy, where Cx43 mRNA expression may or may not be altered. High Cx43 levels have been detected in β/A4-positive amyloid plaques of Alzheimer's disease [77], indicating either astrocyte invasion of the plaques or increased Cx43 expression by astrocytes, as observed in PC12 cells (cells from a rat pheochromocytoma) with increased expression of carboxy-terminal portions of the amyloid precursor protein [90].
However, higher Cx43 expression in that area may instead reflect the presence of many activated macrophages/microglia. The decrease of Cx43 within an inflammatory focus suggests that factors such as IL-1β are involved in the decrease in astrocytic connectivity, as observed in experimental autoimmune encephalitis.
LC3A-mediated autophagy elicits PERK-eIF2α-ATF4 axis activation and mitochondrial dysfunction: Exposing vulnerability in aggresome-positive cancer cells

The unfolded protein response pathways (UPR), autophagy, and compartmentalization of misfolded proteins into inclusion bodies are critical components of the protein quality control network. Among inclusion bodies, aggresomes are particularly intriguing due to their association with cellular survival, drug resistance, and aggressive cancer behavior. Aggresomes are molecular condensates formed when collapsed vimentin cages encircle misfolded proteins before final removal by autophagy. Yet significant gaps persist in the mechanisms governing aggresome formation and elimination in cancer cells. Understanding these mechanisms is crucial, especially considering the involvement of LC3A, a member of the MAP1LC3 family, which plays a unique role in autophagy regulation and has been reported to be epigenetically silenced in many cancers. Herein, we utilized the tetracycline-inducible expression of LC3A to investigate its role in choroid plexus carcinoma cells, which inherently exhibit the presence of aggresomes. Live cell imaging was employed to demonstrate the effect of LC3A expression on aggresome-positive cells, while SILAC-based proteomics identified LC3A-induced protein and pathway alterations. Our findings demonstrated that extended expression of LC3A is associated with cellular senescence. However, the obstruction of lysosomal degradation in this context has a deleterious effect on cellular viability. In response to LC3A-induced autophagy, we observed significant alterations in mitochondrial morphology, reflected by mitochondrial dysfunction and increased ROS production. Furthermore, LC3A expression elicited the activation of the PERK-eIF2α-ATF4 axis of the UPR, underscoring a significant change in the protein quality control network. In conclusion, our results elucidate that LC3A-mediated autophagy alters the protein quality control network, exposing a vulnerability in aggresome-positive cancer cells.
The protein quality control (PQC) network is a complex system that regulates protein production, folding, and degradation (1, 2). Endoplasmic reticulum (ER)-resident chaperones are the initial checkpoint in the PQC network, whereby they assist newly synthesized proteins in folding into their correct three-dimensional structure (3, 4). Proteins that fail to fold correctly are either eliminated by the ubiquitin-proteasome system (UPS) or undergo autophagy-lysosome hydrolysis (2, 3). The unfolded protein response (UPR) is another crucial mechanism that cells use to maintain ER protein homeostasis, with three stress sensors: Protein Kinase R-like ER Kinase (PERK), Activating Transcription Factor 6 (ATF6), and Inositol Requiring Element 1 (IRE1), regulating protein trafficking, folding, and degradation (4). The UPR is activated in response to protein misfolding and helps cells cope with this stress by increasing the production of chaperones, halting protein synthesis, and promoting the degradation of misfolded proteins (5). In recent years, protein compartmentalization into specific cellular sites, forming membrane-less biomolecular condensates, was found to be a critical component of the PQC network (6, 7). Among different types of condensates, the aggresome has emerged as a specialized structure formed by the collapse of the intermediate filament vimentin at the microtubule organizing center (MTOC) (7-9). Ensuing experiments further supported the role of aggresome formation as a protective mechanism against the accumulation of misfolded proteins before their final removal by autophagy (10-12).
Autophagy is a catabolic process that occurs at a basal level or is induced by stress, such as nutrient deprivation (13). Autophagy is classified into three subtypes, macroautophagy, microautophagy, and chaperone-mediated autophagy, based on the cargo delivery route to the lysosome (2). When macroautophagy is initiated (hereafter referred to as autophagy), an isolation membrane expands around the cargo and closes to form an autophagosome, which then fuses with the lysosome for final degradation (13). Selective forms of autophagy have also been identified, in which specific cellular compartments or cargoes are eliminated (14-16). The evolutionarily conserved autophagy-related (ATG) proteins are an integral autophagy component (17, 18). In mammalian cells, the yeast Atg8 orthologs are classified into two families: microtubule-associated protein 1A/1B light chain (MAP1LC3, referred to as LC3), consisting of LC3A, LC3B, and LC3C, and GABA type A receptor-associated protein (GABARAP), consisting of GABARAP, GABARAPL1, and GABARAPL2 (19). The LC3 paralogs are ubiquitin-like proteins that play an essential role in cargo recognition, engulfment, and vesicle closure (18). LC3B is the most extensively studied mammalian LC3 paralog and is widely used for assessing autophagy flux (20). Recently, differences between the LC3 paralogs pertaining to their localization, regulation, molecular function, and interactome have been recognized (20-22). This is further supported by the observation that LC3 members interact with specific adapters for cargo recruitment, thereby providing functional specialization in cargo selection (23). Studies have shown that LC3A exhibits distinct expression patterns and functions from LC3B and LC3C. Furthermore, gene mutation and epigenetic silencing by promoter methylation have been identified in various human cancers, including multiple myeloma, breast, colon, and lung (24-26). Silencing of LC3A has been linked to impaired autophagy flux, increased cellular invasion, and a poor disease prognosis. This suggests that LC3A may have a differential and specific role in regulating protein homeostasis and may serve as a tumor suppressor (27-29). These findings have prompted the investigation of the role of LC3A-mediated autophagy in maintaining cellular proteostasis.

In the current study, we investigate the role of LC3A, independent of LC3B, in cancer cells that inherently exhibit the presence of aggresomes. Our findings indicate that the activation of LC3A is associated with a cellular stress response governed by the PERK-eIF2α-ATF4 axis of the UPR pathway, resulting in altered mitochondrial dynamics and stress-induced senescence. Additionally, inhibiting LC3A-mediated autophagy in aggresome-positive cancer cells has detrimental effects on cell viability. These results support the significance of specialized autophagy in cellular homeostasis and provide valuable biological insight into how cancer cells exploit PQC for survival. This warrants further investigation, as it could empower the development of targeted therapeutic strategies in diseases with known proteopathy associations.
LC3A activation and lysosomal blockage disrupt cellular homeostasis, while sustained LC3A activation induces cellular senescence

In this study, we aim to characterize the role of LC3A-mediated autophagy in choroid plexus carcinoma (CPC) cells that inherently harbor aggresomes. Aggresomes are well-known sites where misfolded proteins are sequestered, ultimately leading to their degradation through autophagy. In a previous study, we showed LC3A silencing in CCHE-45 cells, which was attributed to intergenic CpG island methylation (29). The inactivation of LC3A expression has been reported in various tumors, including lung, breast, and colon cancers (24, 30). Notably, this inactivation is associated with aggresome formation, specifically in multiple myeloma (24, 31). Hence, our primary objective is to explore the consequences of activating LC3A on the dynamics of aggresomes and its subsequent impact on overall cellular homeostasis. We generated tetracycline-inducible myc-LC3A and GFP-LC3A fusion proteins by cloning LC3Av1 cDNA downstream of myc or GFP tags (Fig. S1A). Plasmids were stably transfected into CCHE-45 (LC3A-negative, aggresome-positive) and HEK293 (LC3A-negative, aggresome-negative) cells (32, 33). We verified the mRNA and protein expression levels of LC3A in both systems (Figs. S1B and 1A). Consistent with our previous report, tetracycline-induced myc-LC3A expression showed a puncta distribution throughout the cells without serum starvation, as observed by immunofluorescence using different anti-LC3A antibodies (Figs. 1B and S1C). Similarly, GFP-LC3A puncta were detected as early as 24 h post-induction (Fig. S1D and Movie 1), with approximately 97.2% of puncta colocalizing with lysosomes at 48 h (Manders' coefficient = 0.9723) (Fig. S1D and Table S1). In GFP-LC3A CCHE-45 cells (N = 10), the number of puncta ranged from 32 to 156, and their average area varied between 0.04 and 4.89 μm² (Fig. 1C and Table S1). The variation in average puncta numbers and area reflects the dynamic nature of autophagy activation, suggesting that different cells within the population exhibit varying degrees of autophagosome formation. Additionally, a GFP-LC3A signal was detected in the nucleus after more than 50 h of induction (Movies 1 and 2). In contrast, the expression pattern of GFP-LC3A in HEK293 cells exhibited a diffuse distribution with no discernible puncta (Fig. S1E). Concurrently, vimentin in HEK293 cells exhibited well-defined filaments (Fig. S1F). However, upon treatment with MG132, a proteasome inhibitor, both control and GFP-LC3A-expressing HEK293 cells displayed a perinuclear, ring-shaped, collapsed vimentin. Additionally, GFP-LC3A transitioned from a diffuse pattern to small puncta scattered around the newly formed aggresomes (Fig. S1F). To further elucidate the relationship between aggresomes and LC3A, independent of LC3B, and to investigate whether the simultaneous activation of LC3A in CCHE-45 cells is cell-line-specific or a general stress response triggered by aggresomes, we subjected SH-SY5Y neuroblastoma cells to serum starvation. After 5 h of serum starvation in Hank's Buffered Salt Solution, LC3B-positive autophagic puncta were detected and further intensified with the addition of chloroquine (CLQ), supporting the induction of LC3B-mediated autophagy, with no change observed in the LC3A pattern (Fig. S1G).
However, the induced formation of aggresomes in SH-SY5Y cells after MG132 treatment was accompanied by a transition in LC3A expression from a diffuse to a distributed cytoplasmic punctate pattern, supporting the induction of LC3A-mediated autophagy (Fig. S1H). These observations, involving both exogenous LC3A in HEK293 and endogenous LC3A in SH-SY5Y cells after proteasome inhibition, provide valuable insights into the role of LC3A in the PQC network of cells. Moreover, they establish a correlation between LC3A and aggresomes, revealing distinct cellular triggers for LC3A and LC3B. To confirm whether puncta clustering at the aggresomes indicated a physical interaction between LC3A and aggresomes in CCHE-45 cells, we performed immunoprecipitation using an anti-GFP antibody on GFP-LC3A-induced cells after 48 h. The precipitated samples revealed the presence of the vimentin and cytokeratin-8 intermediate filaments, components of the aggresome cage in CCHE-45 cells, indicating a physical interaction between LC3A and the aggresome cage (Fig. 1D). To monitor LC3A-mediated autophagy flux, LC3A expression was induced, and cells were treated with the lysosomal inhibitor CLQ. Interestingly, a decline in cell viability was observed shortly after treatment, which was not observed when cells were treated with CLQ alone, serum-starved, or serum-starved plus CLQ treatment (Fig. 1E and Movie 3). Given the cell death resulting from LC3A induction combined with CLQ treatment, our hypothesis was that LC3A activation alone is not detrimental to cell survival, and that CCHE-45 cells might display an elevated reliance on the autophagy-lysosome pathway, making them more susceptible to inhibition of lysosomal degradation; sustained expression of LC3A, however, may result in cell death. To explore this further, we extended our cell monitoring for 100 h following the induction of LC3A expression. We detected no significant alterations in the cell index (Fig. S1E). This indicated that LC3A expression alone is insufficient to trigger cell death. Nevertheless, after 96 h, we observed changes in the cells' physical characteristics and nuclei. Consequently, we tracked cell proliferation and monitored cell division by assessing CellTrace Violet (CTV) dilution using flow cytometry. The proliferation of CCHE-45 cells remained unchanged in both control cells and those expressing GFP, whether in the presence or absence of tetracycline (Fig. 1F). We monitored cell proliferation for up to 1 week in culture. However, we observed a substantial 80% decline in cell proliferation during the prolonged expression of GFP-LC3A for 1 week, which persisted over the subsequent week (Fig. 1F and Table S1). Importantly, we avoided relying on monitoring cell proliferation beyond 7 days to ensure that our data would not be affected by the potential deterioration effects of the CTV dye. These findings collectively indicate that prolonged expression of LC3A can suppress the proliferation of aggresome-positive cells. Based on our collective previous observations, we investigated whether the cells were entering a state of senescence. Our findings revealed that sustained expression of LC3A for at least 1 week induced senescence in CCHE-45 cells (Fig. 1G), whereas it did not affect aggresome-negative HEK293 cells (Fig. 1H).

Expression of LC3A elicits global cellular stress altering the proteostasis network

The presence of LC3A-positive puncta in the absence of serum starvation and the subsequent dissolution of the vimentin cage surrounding aggresomes strongly imply that LC3A-mediated autophagy plays a crucial role in identifying and removing materials contributing to aggresome formation. Additionally, the senescence phenotype indicates that prolonged LC3A-mediated autophagy is linked to the accumulation of cellular stress signals over time. Hence, we employed quantitative proteomics to identify the protein network changes associated with LC3A expression. We performed quantitative proteomics using SILAC on the myc-LC3A stable clone at 48 and 96 h post-induction with tetracycline to capture early and late events. CCHE-45 cells expressing the myc-tag were used as controls for the effect of the myc-tag and tetracycline. We identified 808 and 872 proteins, in three independent biological replicates, at 48 and 96 h, respectively, after excluding proteins shared with the myc control cells (Table S2). Correlation scores were calculated between the different biological replicates based on the abundance of identified proteins in each sample (Fig. S2, A and B). A total of 88 and 145 differentially expressed proteins (DEPs) were identified at 48 h and 96 h, respectively (Fig. 2A). Among the DEPs, 41 proteins were shared between the two time points (Fig. S2C). Shared DEPs displayed similar expression patterns, while the fold change was generally higher at 48 h, with 24 proteins downregulated and the other 17 upregulated (Figs. 2B and S2D).
Subcellular localization prediction was then used to gain further insight into the spatial distribution of the DEPs. Most DEPs at 48 h were localized to the nucleus, cytoplasm, and ribosomes. However, at 96 h, proteins were mainly localized to the ER, mitochondria, Golgi, and intracellular vesicles (Fig. 2C and Table S3). Enrichment analysis of the biological processes for the exclusive early events (48 h) identified regulation of gene silencing, chromatin condensation, and conformational change in DNA (Fig. S2E and Table S4). After 48 h of LC3A induction, the protein interactome was associated with chromatin condensation, histone H3 and H4 trimethylation, and epigenetic maintenance. Recently, a link between chromatin remodeling and autophagy following rapamycin treatment was found to be mediated by the non-canonical eukaryotic initiation factor 3 (eIF3) (34, 35). A similar mechanism may occur here, since two of the down-regulated DEPs, eIF3C and eIF3G, are members of the eIF3 complex. On the other hand, gene ontology (GO) analysis of the biological processes for the exclusive late events (96 h) showed significant enrichment in the cellular response to unfolded protein, protein targeting to the ER, cellular response to osmotic stress, negative regulation of mitochondria, ER-to-Golgi vesicle transport, and antigen processing and presentation (Fig. S2F and Table S4). Common DEPs identified enrichment of a broad range of proteostasis pathways, including protein assembly, protein translation initiation, protein folding, targeting to the ER, positive regulation of proteolysis, and the RNA catabolic process (Fig. 2D and Table S4).

Interestingly, the top upregulated protein at both time points was the outer mitochondrial membrane protein voltage-dependent anion channel 2 (VDAC2) (36). The VDAC family of proteins is essential in several mitochondrial functions, including metabolite exchange, calcium transport, and apoptosis (36, 37). Recent evidence, however, supports a role for VDAC2 specifically in mitochondrial Ca²⁺ influx through contact sites between mitochondria and the ER (38). Furthermore, the increase in the protein FKBP1A, a modulator of calcium channels, observed at 96 h, along with the enrichment in the pathway associated with ER ryanodine-sensitive calcium release channels (RyR2), provides strong support for potential changes in intracellular calcium levels. These changes could potentially be caused by ER calcium leak and activation of the UPR, possibly triggering a compensatory mechanism involving mitochondria. Consequently, we investigated the role of mitochondria as a mediator of ER-induced stress and cellular allostasis. To determine alterations in the mitochondrial environment following induction of LC3A expression, we used changes in mitochondrial morphology as a readout. Notably, CCHE-45 cells expressing GFP-LC3A exhibited a significant reduction in mitochondrial branching and increased interconnected networks, indicative of a more compact and circular mitochondrial morphology. Conversely, we observed a general increase in the mitochondrial footprint, although these changes were not associated with large-scale disruption of mitochondrial content, since there was no significant change (Fig. 2E).
These results suggest that, in response to LC3A-induced autophagy, cells navigate ER stress by a mitochondria-mediated mechanism, possibly by buffering cytoplasmic Ca²⁺ levels. To assess the impact of the morphological changes in mitochondria on their function, we employed the MitoSOX dye to evaluate mitochondrial function via flow cytometry under different conditions. CCHE-45 cells stably expressing GFP-LC3A displayed an increase in the intensity of the MitoSOX dye. This increase was comparable to the positive control H₂O₂. However, no similar pattern was observed in GFP-expressing CCHE-45 cells (Fig. 2F). The heightened intensity of the MitoSOX fluorescence signal indicates elevated levels of mitochondrial superoxide, suggesting an association with oxidative stress. This finding underscores the potential link between LC3A-induced autophagy, alterations in mitochondrial morphology, and increased mitochondrial superoxide levels.

LC3A-mediated autophagy is associated with the activation of the PERK-eIF2α-ATF4 axis of the UPR

Given the profile of DEPs associated with ER- and mitochondria-resident proteins and trafficking between the ER and Golgi, together with the alterations in mitochondrial morphology, we hypothesized that LC3A-mediated autophagy in our cellular context correlates with activation of the ER stress response. Moreover, since phosphorylation events regulate the UPR pathways, we examined its activation despite the absence of its members from the proteomics data. Accordingly, we investigated the three UPR stress sensors, PERK, ATF6, and IRE1α, in both CCHE-45 and HEK293 cells after inducing the expression of LC3A for 48 h. We used thapsigargin (Tg) as a positive control to assess ER stress activation and treated the cells for 6 h. Furthermore, serum starvation for 2 and 5 h was used to assess whether LC3B-mediated autophagy would elicit similar effects. We observed a significant increase in the expression levels of the ER lumenal chaperone BiP in both Tg-treated and myc-LC3A-induced CCHE-45 cells (Fig. 3A). Similarly, an increase in the expression of ATF4, a downstream transcription factor of the PERK arm, was detected in Tg-treated and myc-LC3A-induced cells (Figs. 3A and S3). On the other hand, only Tg-treated HEK293 cells showed an increase in BiP and ATF4 expression levels (Figs. 3B and S4). The increased expression of ATF4 was associated with the phosphorylation of PERK and eIF2α (Figs. 3C and S3). Furthermore, the activation of the PERK arm was associated only with LC3A-mediated autophagy, since serum starvation did not induce the same response (Fig. 3, A and C), further confirming that this effect is specific to LC3A- and not LC3B-mediated autophagy. In contrast, we did not detect activation of the PERK arm in HEK293 cells following LC3A activation; only Tg treatment activated the PERK pathway in these cells (Fig. 3D). This confirms that activation of the PERK arm is LC3A-mediated only in aggresome-positive cells. Next, we examined splicing of XBP1, a downstream marker of IRE1α activation, and ATF6. Our findings show that we could detect XBP1 splicing only in Tg-treated CCHE-45 and HEK293 cells, whereas ATF6 exhibited increased expression rather than splicing (Fig. 3, E and F).
Discussion

Based on these findings, we propose that CCHE-45 cells employ a strategy of sequestering misfolded or aggregated proteins and dealing with protein overload by forming aggresomes. Upon the introduction of LC3A, these cells trigger LC3A-mediated autophagy, directing the cargo from aggresomes toward the autophagosomal-lysosomal system for degradation. Once cellular commitment to LC3A-mediated autophagy is established, the process becomes reliant on LC3A for maintaining proteostasis, and any interference with lysosomal degradation proves deleterious to the cellular milieu. This commitment also involves the active engagement of the ER and mitochondria in managing cellular stress. While these mechanisms can effectively mitigate stress stemming from protein misfolding, their prolonged activation may exert detrimental effects on cancer cells.

In response to LC3A-induced autophagy, we observed significant alterations in the mitochondrial morphology of CCHE-45 cells, characterized by a reduction in branching and an increase in interconnected networks, indicative of a more compact and circular mitochondrial structure (39). Although these changes did not disrupt overall mitochondrial content, as evidenced by a consistent footprint, the morphological shifts suggested a mitochondria-mediated response to ER stress. A substantial increase in mitochondrial superoxide levels in CCHE-45 cells expressing GFP-LC3A underscores a connection between LC3A-induced autophagy, mitochondrial morphology alterations, and oxidative stress. These findings align with the proteomics analysis, which unveiled a significant enrichment in the 'response to hypoxia' biological process in LC3A-expressing cells. This suggests that mitochondrial dysfunction contributes to the cellular response to LC3A expression, emphasizing the intricate interplay between autophagy and mitochondrial dynamics.

The UPR constitutes an adaptive cellular mechanism for sensing and responding to stress. When initiated, it can serve a dual role: it can alleviate stress by reducing protein synthesis and expanding ER capacity to restore cellular homeostasis, or it can activate processes leading to cell death when unresolved stress conditions remain (40). Activation of the UPR involves three pathways: PERK, ATF6, and IRE1. Their activation is triggered by the release of the ER chaperone BiP from the luminal domains of these sensor proteins (41). In our study, the expression of LC3A was exclusively associated with the activation of the PERK pathway. Recent evidence points to a paradoxical regulation of IRE1 under sustained ER stress conditions, where PERK can inhibit the adaptive responses mediated by IRE1 (40). These findings offer a plausible explanation for the absence of IRE1 activation in our experimental model despite the increased expression of BiP. The lack of ATF6 pathway activation in CCHE-45 cells may be attributed to the upregulation of VDAC2, which is known to suppress the functioning of the ATF6 branch within the UPR (42). Consequently, we propose that LC3A-mediated autophagy in cells with aggresome accumulation activates the PERK pathway to resolve ER stress. However, the absence of IRE1 and ATF6 activation implies a potential predisposition towards cell death, which explains the detrimental effects of CLQ treatment following LC3A expression.
In conclusion, our study demonstrates that LC3A orchestrates basal autophagy and effectively resolves aggresome formation. Notably, inhibiting lysosomal degradation in the presence of LC3A elicits deleterious effects on cellular homeostasis, warranting exploration as a prospective therapeutic avenue. Additionally, the sustained activation of LC3A is linked to ER stress, initially mitigated through mitochondrial mechanisms but culminating in cellular senescence. While our investigation did not encompass measurements of Ca²⁺ dynamics, our findings strongly suggest a potential involvement of Ca²⁺ in mediating the intricate interplay between the ER, mitochondria, and the induction of senescence, thereby emphasizing the need for dedicated future investigations.

Plasmids and cloning

The pcDNA4/TO/myc-His A vector (Invitrogen, K1030-02) was used to generate the myc-LC3Av1, GFP, and GFP-LC3Av1 expression vectors using the XhoI and AgeI restriction enzymes. Gene synthesis and cloning were performed by Eurofins Scientific.
Cell culture and treatments

CCHE-45 and SH-SY5Y cells were cultured in Roswell Park Memorial Institute-1640 (RPMI 1640) medium (Gibco, 52400025), while HEK293 cells were cultured in Dulbecco's modified Eagle's medium (DMEM) (Gibco, 41966029). Both media were supplemented with 10% fetal bovine serum (FBS) (Gibco, 10270106) and 1% penicillin-streptomycin (Gibco, 15140122). The cells were maintained under standard cell culture conditions at 37 °C and a CO₂ concentration of 5%. Both cell lines were verified to be free from Mycoplasma contamination. For the generation of stable clones, cells were first cultured in their respective medium supplemented with 10% tetracycline-reduced FBS (Thermo Scientific, A4736301) for 24 h before transfection. The transfection process involved cotransfecting the cells with a mixture of the gene-of-interest construct and the pcDNA6/TR regulatory vector (Invitrogen, K1030-02) at a ratio of 1:6 using Lipofectamine 3000 transfection reagent (Invitrogen, L3000015) according to the manufacturer's instructions, except that the cells were incubated with the DNA-lipid complex for 10 to 15 min before adding the complete culture medium. Twenty-four hours after transfection, cells were washed, and fresh medium supplemented with 10% tetracycline-reduced FBS was added. After an additional 48 h, transfected cells were maintained in a selective medium containing 10% tetracycline-reduced FBS, with 5 μg/ml blasticidin and 250 μg/ml zeocin for CCHE-45, and 3 μg/ml blasticidin and 125 μg/ml zeocin for HEK293. Cells were maintained in the selective medium for 4 weeks until distinct focal points (foci) developed. Twenty different foci were selected and expanded to screen for the expression of LC3A. To induce LC3A expression, tetracycline was added to the cells to a final concentration of 1 μg/ml, and the cells were incubated for at least 24 h at 37 °C. For the induction of LC3B-mediated autophagy, cells were serum-starved in Hank's balanced salt solution (Lonza) for 2 and 5 h. To evaluate autophagy flux, CLQ (Enzo Life Sciences) was added to the cell culture medium at a final concentration of 50 μM to block the fusion between autophagosomes and lysosomes. Tg, which inhibits the ER Ca²⁺-ATPase, was used as a positive control to induce ER stress (1, 43); it was added to the culture medium at a final concentration of 1 μM for 6 h. The cells were harvested for further analysis at the indicated time points. Hydrogen peroxide (H₂O₂, 30% w/v) (Adwic, H0038111) was utilized as a positive control to induce reactive oxygen species (ROS) stress in CCHE-45 and HEK293 cells; H₂O₂ was added to the culture medium at a final concentration of 20 μM for 4 h, after which cells were harvested for subsequent analyses. The proteasome inhibitor MG132 (Cell Signaling, 2194S) was added to cells at 5 μM for 6 h to induce the formation of aggresomes in HEK293 and SH-SY5Y cells.
RT-PCR and real-time PCR

Total RNA was extracted using TRIzol reagent (Invitrogen, 15596-026), and the resulting RNA was reverse transcribed using the RevertAid First Strand cDNA Synthesis Kit (Thermo Scientific, K1622). Real-time PCR was conducted on the CFX96 Touch Real-Time PCR Detection System (Bio-Rad) using Maxima SYBR Green qPCR Master Mix (2×) (Thermo Scientific, K0251). Quantification analysis was performed using the comparative threshold cycle (Ct) method, with the Ct values of each gene normalized to the Ct value of β-actin. All experiments were performed in triplicate. The fold change in gene expression was determined using the equation 2^(-ΔΔCt) (44); a worked arithmetic sketch of this calculation is provided below. To detect XBP1 gene splicing, PCR was conducted with DreamTaq Green PCR Master Mix (2×) (Thermo Scientific, K1081). All protocols were performed according to the manufacturer's instructions, with the primers described in Table S5.

SDS-PAGE and Western blot analysis

For protein analysis, cells were lysed using a lysis buffer containing 8 M urea, 500 mM Tris-HCl (pH 8.5), and a protease inhibitor cocktail (Thermo Scientific, A32955). The total protein concentration was determined using a bicinchoninic acid (BCA) protein assay kit (Thermo Scientific, 23225). Equal amounts of protein were resolved by 12% SDS-PAGE at 100 V for 2 h and transferred to PVDF membranes (Thermo Scientific, 88518 and 88520). The membranes were then blocked with 5% w/v nonfat dry milk for 1 h at room temperature. Subsequently, membranes were probed with specific primary antibodies (listed in Table S6) overnight at 4 °C. After three washes, the membranes were incubated with secondary antibodies for 1 h at room temperature. Detection was performed with Pierce ECL Plus Western blotting substrate (Thermo Scientific, 32132), and blots were scanned using the ChemiDoc MP Imaging System (Bio-Rad) with all bands in the linear range of detection. ImageJ was utilized for quantitative analysis of protein levels and statistical comparisons among different treatments.
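Returning to the real-time PCR quantification above, the 2^(-ΔΔCt) method reduces to a few lines of arithmetic. The Python sketch below is a minimal illustration only, not the analysis pipeline used in the study; the gene names and Ct values are hypothetical placeholders.

```python
# Minimal sketch of the comparative Ct (2^-DDCt) method described above.
# All Ct values here are hypothetical placeholders, not data from the study.

def fold_change(ct_target_treated: float, ct_ref_treated: float,
                ct_target_control: float, ct_ref_control: float) -> float:
    """Relative expression of a target gene via the 2^-DDCt method.

    Each Ct is first normalized to the reference gene (here beta-actin),
    then the treated condition is compared to the untreated control.
    """
    d_ct_treated = ct_target_treated - ct_ref_treated   # DCt, treated
    d_ct_control = ct_target_control - ct_ref_control   # DCt, control
    dd_ct = d_ct_treated - d_ct_control                 # DDCt
    return 2 ** (-dd_ct)

# Example: hypothetical triplicate-averaged Ct values for ATF4 vs beta-actin.
fc = fold_change(ct_target_treated=24.1, ct_ref_treated=17.9,
                 ct_target_control=26.3, ct_ref_control=18.0)
print(f"ATF4 fold change vs control: {fc:.2f}")  # ~4.3-fold up in this example
```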
Immunofluorescence and confocal microscopy

For immunofluorescence staining, cells were seeded in 6-well plates on microscope cover glasses (22 mm × 22 mm) coated with Poly-D-Lysine (Gibco, A3890401). After treatment, cells were fixed with 4% paraformaldehyde (Sigma-Aldrich, P6148) in PBS for 15 min and permeabilized using 0.3% Triton X-100 (Sigma-Aldrich, 93420) in PBS for 15 min. Blocking buffer (0.3% Triton X-100 and 5% FBS in PBS) was added to cells for 1 h at room temperature. Fixed cells were incubated overnight at 4 °C with primary antibodies in antibody-dilution buffer (0.3% Triton X-100 and 1% BSA in PBS) in a wet chamber. Detailed information about the antibodies used is listed in Table S6. After three washes with PBS, the cells were incubated with Alexa Fluor-conjugated secondary antibodies for 1 h in the dark at room temperature, followed by incubation with DAPI (Invitrogen, D1306) for nuclear staining. Positively charged slides were then mounted using ProLong Gold (Invitrogen, P36930). LysoTracker Deep Red dye (Invitrogen, L12491) was used to label lysosomes at a 1:10,000 dilution. For visualization of the mitochondria, CCHE-45 stable clones were grown on live-imaging cell culture plates, and GFP-LC3A expression was induced with tetracycline for 24 h before adding MitoTracker Red CMXRos (Invitrogen) (1:1000 dilution) for 30 min at 37 °C. The cells were washed with PBS, and fresh medium was added. Live imaging was performed using a ZEISS LSM 980 confocal microscope equipped with an Airyscan 2 detector, at 60× magnification with a Plan-Apochromat 60×/1.4 Oil DIC (UV) VIS-IR M27 objective lens, starting 24 h after induction. The cells were maintained at 37 °C with 5% CO₂ during imaging for 24 h.

Mitochondrial reactive oxygen species (ROS) measurement

CCHE-45 cells were treated with 5 μM MitoSOX Red mitochondrial superoxide indicator (Invitrogen, M36007) for 30 min at 37 °C in the dark, following the manufacturer's instructions. Unstained cells served as the control for each sample. Approximately 40,000 gated events were acquired for each sample on a CytoFLEX (Beckman Coulter) and analyzed using CytExpert software. Dead cells and debris were excluded based on forward scatter and side scatter measurements. All analyses were gated on the unstained CCHE-45 control cells, determined by morphologic identification (forward scatter versus side scatter). The mean value ± standard deviation (SD) of the percentage of MitoSOX-positive cells was calculated.
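To make the gating arithmetic concrete, the sketch below illustrates one common way to derive a positivity threshold from an unstained control and compute the percentage of MitoSOX-positive events. The intensities are simulated and the 99th-percentile gate is an assumption for illustration; the actual gating was performed in CytExpert.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated log-scale fluorescence intensities standing in for real events;
# the study acquired ~40,000 gated events per sample on a CytoFLEX.
unstained = rng.normal(loc=2.0, scale=0.3, size=40_000)   # autofluorescence only
stained = np.concatenate([
    rng.normal(2.1, 0.3, 24_000),                          # MitoSOX-dim cells
    rng.normal(3.2, 0.4, 16_000),                          # MitoSOX-bright cells
])

# Set the positivity gate so that ~99% of unstained control events fall below it.
threshold = np.percentile(unstained, 99)

percent_positive = 100.0 * np.mean(stained > threshold)
print(f"MitoSOX-positive: {percent_positive:.1f}%")
```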
Image analysis

Fiji (ImageJ) software was used for image analysis. The MitoTracker channel threshold was set using ImageJ's Auto Threshold plugin. Mitochondrial network parameters, length, and branching were then quantified using the Mitochondrial Network Analysis (MiNA) plugin in ImageJ (45). Statistical analysis was performed using GraphPad Prism software version 8. The segmentation analysis module in ZEN 3.3 software (blue edition) was employed to identify and count the puncta representing positive autophagosomes. Parameters such as thresholds and size criteria were adjusted to ensure accurate detection and quantification of the puncta. The software automatically detected and counted the puncta based on the predefined segmentation criteria. To ensure accuracy, the results of the automated puncta counting were manually validated and refined where necessary. The counted puncta were recorded and subjected to further statistical analysis and comparisons. Linear unmixing analysis was performed using ZEN 3.3 software. Channels corresponding to different labels were selected, and the linear unmixing algorithm was applied to separate the contribution of each label from the mixed signal in the acquired images. This process facilitated the separation of specific signals corresponding to individual fluorophores, reducing spectral overlap and enabling enhanced visualization and analysis of the target structures. Subsequently, quantitative measurements, such as intensity profiles, colocalization analysis, and morphological characterization, were performed on the unmixed images. Colocalization analysis was conducted using ZEN 3.3 software. Specific channels corresponding to different labels were selected for analysis using the software's colocalization module. Thresholds were set to differentiate signal from background noise, ensuring precise colocalization measurements. The degree of colocalization between the labeled structures was quantified using Manders' colocalization coefficient (a simplified numerical sketch of this coefficient is provided below). Colocalization channels and scatterplots were then generated to visualize and analyze the colocalization patterns.

Immunoprecipitation

Stable cell lines expressing GFP and the GFP-LC3A protein were seeded in 15-cm cell culture plates. After 48 h of induction of LC3A expression with tetracycline, cells were washed with ice-cold PBS and lysed in IP lysis buffer (90409, Thermo Scientific). One milligram of total protein was combined with 10 μg of anti-GFP antibody (Abcam, ab6556) in Protein LoBind tubes (Eppendorf, 022431081) and incubated overnight at 4 °C while rotating. Next, the sample-antibody mixture was added to 0.25 mg of Pierce Protein A/G Magnetic Beads, and manual immunoprecipitation was performed according to the manufacturer's instructions. Immunoprecipitated samples were eluted, dried in a speed vacuum concentrator, and reconstituted in urea sample buffer. The isolated proteins were resolved by 12% SDS-PAGE.
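Returning to the colocalization analysis above: Manders' overlap coefficients M1 and M2 report, for each channel, the fraction of its above-threshold intensity found in pixels where the other channel is also above threshold. The NumPy sketch below is a simplified illustration of that definition, assuming two pre-registered intensity images and user-chosen thresholds; the ZEN module additionally handles background correction and region selection.

```python
import numpy as np

def manders_coefficients(ch1: np.ndarray, ch2: np.ndarray,
                         t1: float, t2: float) -> tuple[float, float]:
    """Thresholded Manders' M1/M2 coefficients for two intensity images.

    M1: fraction of channel-1 intensity (above its threshold t1) located in
    pixels where channel 2 also exceeds its threshold t2; M2 is symmetric.
    """
    mask1, mask2 = ch1 > t1, ch2 > t2
    coloc = mask1 & mask2
    m1 = ch1[coloc].sum() / ch1[mask1].sum()  # e.g. GFP-LC3A over lysosome mask
    m2 = ch2[coloc].sum() / ch2[mask2].sum()  # e.g. LysoTracker over LC3A mask
    return float(m1), float(m2)

# Tiny synthetic example: two mostly overlapping bright spots on noise.
rng = np.random.default_rng(1)
img1 = rng.random((64, 64)); img1[20:30, 20:30] += 5.0
img2 = rng.random((64, 64)); img2[22:32, 22:32] += 5.0
print(manders_coefficients(img1, img2, t1=1.0, t2=1.0))  # both roughly 0.6-0.7
```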
Real-time analysis of cytotoxicity

Twenty-four hours before tetracycline induction, 50 μl of complete medium was added to each well of an electronic microtiter plate (E-Plate 96) on the xCELLigence system for impedance background measurement. Following harvesting and counting, stable clone cells were diluted to 10,000 cells/well and added in 50 μl of medium. The E-Plate was incubated at 37 °C with 5% CO₂ and monitored with the RTCA software (xCELLigence Real-Time Cell Analysis) at 5-min intervals. The following day, the cell index was assessed to ensure an equivalent number of cells across all conditions. Subsequently, cells were treated with 1 μg/ml tetracycline, and the cell index was monitored for up to 100 h post-induction. CLQ was added to the cells at a final concentration of 50 μM, concurrently with tetracycline, to evaluate autophagy flux.

SILAC and mass spectrometry (LC-MS/MS)

Stable isotope labeling of amino acids in cell culture (SILAC) was performed to quantitatively analyze the effect of LC3A expression on CCHE-45 cells. Control CCHE-45 cells were labeled with the "heavy" amino acids: 0.248 mg/ml L-¹³C₆-arginine-HCl (Cambridge Isotope Laboratories Inc, CLM-2265) and 0.04 mg/ml L-lysine-2HCl (Thermo Scientific, 88429). The myc-LC3A and myc stable clones were labeled with the "light" amino acids, using 0.2 mg/ml L-arginine free base (Millipore Sigma, 1820-100GM) and 0.04 mg/ml L-lysine-2HCl, respectively. SILAC heavy- and light-labeled cells were lysed and combined in a 1:1 ratio. Protein samples were reduced with 10 mM dithiothreitol in 50 mM ammonium bicarbonate for 30 min at 60 °C, alkylated in the dark with 55 mM iodoacetamide in 50 mM ammonium bicarbonate for 30 min at room temperature, and digested overnight at 37 °C with trypsin. The protein digestion reaction was stopped by acidification. Cells were collected from three biological replicates at two time points (48 h and 96 h following induction of expression with tetracycline). For LC-MS/MS, digested samples were analyzed using an EASY-nanoLC 1200 system and an Orbitrap Fusion Lumos Tribrid mass spectrometer (Thermo Scientific) coupled with a NanoFlex ion source. 100 ng of peptides were trapped on an Acclaim PepMap 100 C18 HPLC trap column (Thermo Scientific, 164750) with 75 μm I.D., 2 cm length, and 3 μm particles. Peptides were then eluted on an Acclaim PepMap 100 analytical column (Thermo Scientific, 164569) with 75 μm I.D., 25 cm length, 2 μm particles, and C18 packing. Sample elution was performed using a one-hour gradient of solvent B (0.1% formic acid, 80% acetonitrile): 5% to 30% (0-31 min), 30% to 40% (31-41 min), and 40% to 80% (41-51 min), held for 4 min at a flow rate of 250 nl/min, followed by a 5-min ramp to 100%. Solvent A contained 0.1% formic acid in water. The mass spectrometer was operated in data-dependent acquisition (DDA) mode with 3-s cycles for the survey and MS/MS scans. Survey scans of peptide precursors were performed from 400 to 1800 m/z at 120K resolution with standard automatic gain control (AGC) and maximum ion injection time (IT) set to auto mode.
Monoisotopic precursor selection (MIPS) was applied at the peptide level with an intensity threshold of 5 × 10³, and only peptides with charge states of 2 to 7 were selected for tandem MS. Dynamic exclusion was set to 30 s with a 10 ppm mass tolerance, and isotopes were excluded. Isolation for MS2 scans was performed in the quadrupole with an isolation window of 1.5 m/z. Higher-energy collisional dissociation (HCD) activation was applied with 30% collision energy, using dynamic injection time mode and a standard AGC target. The resulting fragments were detected using the rapid scan rate in the linear ion trap. The MS1 and MS2 spectra were recorded in profile and centroid modes, respectively. The mass spectrometry proteomics data have been deposited to the ProteomeXchange Consortium via the PRIDE partner repository with the dataset identifier PXD043703.

Proteomics data analysis, gene enrichment, and subcellular analysis

MaxQuant (version 1.6.17) was used to identify and quantify peptides, matching against the reference human proteome (uniprot_cano_varsplic_HUMAN) supplemented with the default contaminants database. Relevant settings for the analysis included the use of the heavy-label feature (set to Arg6), variable modifications of methionine oxidation and acetylation of the protein N terminus, trypsin specificity with a maximum of two missed cleavages, FDR limited to 1% at both the peptide and protein levels, and unique + razor peptides selected for quantification, with all other settings left at their defaults. For differential abundance analysis, the resulting evidence.txt and proteinGroups.txt files were analyzed using the aggregation, normalization, and differential expression analysis tools in ProteoSign (version 2.0) with default parameters. After removing the contaminants, DEPs were identified using log2 fold-change cutoffs of 1.5 and -1.5, and the change in expression between biological replicates was tested (at adjusted p ≤ 0.05). Pearson's correlation scores were calculated between the different biological replicates in the proteomics experiments, based on the abundance values of the identified proteins in each sample. A high correlation score indicates strong agreement between replicates, whereas a low score indicates significant variation between them. These scores are useful in assessing the reproducibility and reliability of proteomics experiments and can guide the selection of appropriate replicates for downstream analysis. The circular clustered heatmap was plotted using the SRplot tool (https://www.bioinformatics.com.cn/en). The enrichGO function in the clusterProfiler R package was used to perform functional enrichment analysis of the DEPs. Gene ontology (GO) terms in three categories, biological process (BP), molecular function (MF), and cellular component (CC), were identified based on a Benjamini-Hochberg FDR < 0.05, with removal of redundant terms. The ClueGO plugin in Cytoscape was employed to visualize the significant GO terms. Subcellular analysis was performed using SubcellulaRVis, a bioinformatics tool that describes subcellular localization for a gene list (46). SubcellulaRVis can be accessed via the web (http://phenome.manchester.ac.uk/subcellular/).
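To make the differential-abundance criteria above concrete, the pandas sketch below reproduces the kind of filtering just described: pairwise replicate correlation as a QC check, then selection of proteins by log2 fold-change cutoff and BH-adjusted p-value. The input table, its column names, and all values are hypothetical placeholders; the actual analysis was performed in ProteoSign on MaxQuant output.

```python
import numpy as np
import pandas as pd

# Hypothetical table of SILAC results: one row per protein, light/heavy
# ratios for three biological replicates plus a BH-adjusted p-value
# (the real input came from MaxQuant via ProteoSign).
df = pd.DataFrame({
    "protein":  ["VDAC2", "FKBP1A", "EIF3C", "EIF3G"],
    "ratio_r1": [3.1, 2.2, 0.31, 0.40],
    "ratio_r2": [2.8, 2.5, 0.35, 0.33],
    "ratio_r3": [3.4, 2.1, 0.28, 0.38],
    "adj_p":    [0.001, 0.01, 0.004, 0.02],
})

reps = df[["ratio_r1", "ratio_r2", "ratio_r3"]]

# Replicate agreement: pairwise Pearson correlation of protein abundances.
print(reps.corr(method="pearson"))

# Average light/heavy ratio -> log2 fold change, then apply the cutoffs
# described in the text (log2 FC thresholds of 1.5/-1.5, adjusted p <= 0.05).
df["log2_fc"] = np.log2(reps.mean(axis=1))
deps = df[(df["adj_p"] <= 0.05) & (df["log2_fc"].abs() >= 1.5)]
print(deps[["protein", "log2_fc", "adj_p"]])  # keeps VDAC2 (up) and EIF3C (down)
```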
Statistical analysis

GraphPad Prism version 8 software was used for statistical analysis. Image analysis was performed using Fiji. Error bars were plotted as the standard deviation (±SD) for three independent biological experiments. A two-tailed Student's t test was performed for real-time PCR, immunoblot quantification, and comparisons. One-way ANOVA was used to compare the means between control and GFP-LC3A-induced cells.

Figure 1. Activation of LC3A-mediated autophagy in aggresome-positive cells. A, Western blot analysis of CCHE-45 and HEK-293 cell lysates for LC3A expression post-induction with tetracycline. LC3A expression was detected using an anti-LC3A or anti-GFP antibody in myc-LC3A and GFP-LC3A transfected cells, respectively. GAPDH served as a loading control. B, double immunofluorescence staining for LC3A (red) and vimentin (green) to assess antibody specificity in myc-LC3A-induced cells. DAPI was used for nuclear staining. Representative images are shown from three independent experiments. C, scatter plot of the number and sizes of GFP-LC3A-positive puncta in 10 cells using ZEN blue segmentation analysis. D, immunoprecipitation analysis using anti-GFP antibody following induction of GFP-LC3A expression for 48 h. Samples were then analyzed by immunoblotting using antibodies against GFP, LC3A, vimentin, and CK8. Whole-cell lysates were used as a positive control, while the residual lysate was used as a negative control. E, impedance monitoring was performed for the following conditions: CCHE-45 untransfected cells, CCHE-45 cells stably transfected with GFP-LC3A, GFP-LC3A with CLQ, GFP, and GFP with CLQ. The X-axis represents time in hours. The Y-axis represents the cell index obtained from the RTCA software. Three biological replicates were performed and the data are presented as mean ± SD. F, representative flow cytometric analysis of CellTrace Violet staining showing CCHE-45 cell proliferation at day 0 and after 1 week, with and without expression of GFP-LC3A. G and H, senescence-associated β-galactosidase (SA-β-Gal) was assessed in CCHE-45 and HEK293 cells following induction of myc-tag or myc-LC3A expression using tetracycline. Cells were monitored for 2 weeks, and the number of SA-β-Gal-positive cells was counted and averaged from three independent experiments. The reported values represent the mean ± SD. The data represent three independent biological replicates, and statistical significance was determined using two-way ANOVA to compare different conditions, with **p < 0.01 and ****p < 0.0001 denoting significant differences.

Figure 2.
The proteome landscape associated with the expression of LC3A protein. A, volcano plot of differentially expressed proteins after induction of LC3A expression for 48 and 96 h, with log2 FC on the X-axis and −log10 adjusted p-value on the Y-axis. The horizontal line represents the cutoff on the adjusted p-value (< 0.05), and the vertical lines represent the cutoffs on the log2 FC (1 and −1). B, bar plot of the average log2 fold change of the Light/Heavy ratio identified from SILAC for the 41 common proteins. The blue and green bars represent protein FC at 48 and 96 h, respectively. C, UpSet plot of DEP subcellular localization using SubcellulaRVis. The total number of proteins in each set is represented as a bar chart. Each row corresponds to a cellular compartment, where filled-in cells show the compartment and its intersection with the other compartments. D, network visualization of GO terms for biological process, molecular function, and cellular component for differentially expressed proteins shared at 48 and 96 h following induction of LC3A expression. The nodes in the network represent the GO terms, while the edges connecting them represent the relationships between the terms. E, confocal microscopy images using a 60× objective lens of mitochondria labeled with MitoTracker in CCHE-45 cells after 48 h of induction of GFP-LC3A expression. Bar plots represent counts obtained from the MiNA analysis across three different experiments, with the reported values representing the mean ± SD of independent biological replicates; statistical significance was determined using one-way ANOVA, with *p < 0.05 and ****p < 0.001 denoting significant differences. F, flow cytometry analysis for MitoSOX in CCHE-45 control cells, cells expressing GFP, and cells expressing GFP-LC3A. H2O2 was utilized as a positive control for ROS stress. The bar plot represents the percentages of MitoSOX-positive cells across three different replicates, with the reported values representing the mean ± SD.

Figure 3. UPR analysis following induction of LC3A expression. A and B, real-time PCR analysis depicting the mRNA expression levels of Bip and ATF4 under various experimental conditions, normalized to β-actin as a housekeeping gene, for CCHE-45 and HEK293 cells, respectively. A serum-starved
2024-05-22T15:09:32.144Z
2024-05-01T00:00:00.000
{ "year": 2024, "sha1": "bea9196359ce3e431b894ca3134e639255d73077", "oa_license": "CCBY", "oa_url": null, "oa_status": "CLOSED", "pdf_src": "PubMedCentral", "pdf_hash": "3eb1f852ea99e310ef4522f7bc3dd34bf4e5ec30", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
14434284
pes2o/s2orc
v3-fos-license
Knowledge, Attitudes and Beliefs about Dementia in an Urban Xhosa-Speaking Community in South Africa Background: Dementia, a debilitating condition, requires particular attention in Southern Africa, where there is a dearth of prevalence data. Population ageing and other risk factors are driving an increasing incidence of dementia. However, limited knowledge and understanding may impact the attitudes and practices towards persons with dementia. Aim: To investigate the relationship between knowledge of dementia and its effect on the attitudes and practices toward people with dementia in an urban community setting; and to determine the perceived availability of services for those with dementia, the awareness of elder abuse, and care-giver burden. Methods: A descriptive, cross-sectional study was performed in Khayelitsha. An interviewer-administered questionnaire was used with assistance from isiXhosa-speaking translators. A sample of 100 individuals was surveyed door-to-door from both the informal and formal housing settlements, using cluster random sampling methods. Results: There was deficient knowledge about dementia, with an average accuracy of 53.44% on the knowledge test. Only 10% reported knowing what dementia was. Participants had generally tolerant views about people with dementia. No significant relationship was found between knowledge and attitudes about dementia. There was a significant difference between people who would share their house with a family member with dementia and those who would send them to nursing homes (p = 0.03). 64% of participants knew what elder abuse was. 19% knew of an elder who had been abused; amongst the most common forms reported were being locked alone in their house and being deprived of food. Conclusions: This study showed that knowledge about dementia was limited, with no relationship to attitudes of high tolerance towards people with dementia. Elder abuse was well recognized, but poorly reported. Appropriate health promotion strategies and education should be conducted, and further research should be done into dementia in South Africa.

Dementia, an incapacitating disease, mostly affects older persons globally. The most common types of dementia are Alzheimer's disease (AD) and vascular dementia [1]. The diagnosis of dementia involves obtaining significant medical history from the patient and the family [2]. Thus, knowledge and awareness amongst family members is crucial to detecting the symptoms and seeking medical advice.

Knowledge about Dementia

Studies in other countries found knowledge and understanding of dementia to be poor amongst the general population [3]-[6], as well as being limited amongst family caregivers of individuals with dementia [4]. Knowledge about dementia is generally even lower in minority ethnic groups and low to middle income countries (LAMIC) [3] [4] [6] [7]. One study reported that even where general knowledge about dementia was present, focusing on definitions and symptoms, the "biomedical knowledge" about the causes, treatment and prevention was still very low [8]. Lack of knowledge about dementia negatively impacts on health seeking behavior [4] and on the ability of caregivers to provide adequate care [8].
Attitudes towards People with Dementia

People with dementia are often stigmatized, discriminated against and socially excluded. This is a major public health concern. This stigma occurs in all social classes [9]. Patients often fail to tell family members and friends about their health condition, or to seek medical care, because of the negative reactions that might ensue [10].

Lay concepts about dementia are influenced by cultural perspectives. For example, in some African cultures dementia symptoms are perceived as a sign of possession by evil spirits, or a punishment for sins. A study amongst British people of Indian origin [7] found that people (aged 17-61 years old) described old age as a time of social withdrawal and isolation. According to Sahin et al. [11] and others, the majority of elderly individuals consider the occurrence of dementia in old age to be a normal phenomenon [12]. There was a significant dissociation between the concept of dementia and the typical symptoms of dementia: the term dementia is apparently treated as an independent and vague concept, without awareness of its clinical symptoms. A recent study found that black Caribbean people with dementia feared being viewed as "crazy" or "mad" [13]. In contrast, a US study exploring attitudes in the African American community found little or no stigma associated with dementia, which was conceptualised as a combination of worry and stress [14].

Elder Abuse toward Individuals with Dementia

Elder abuse is difficult to define and poorly understood, because it is often a hidden offence and the nature of the abuse alters in different regions of the world [15] [16]; it can be described as "any knowing, intentional or negligent act by a caregiver or any other person that causes harm or a serious risk of harm to a vulnerable adult" [16]. Western countries classify elder abuse as physical, emotional, financial, sexual or neglectful. Developing countries, especially African countries, expand this definition to include allegations of witchcraft [16].

One of the identified target groups of elder abuse is those with dementia [16]. In some African countries, there has been witchcraft-related violence (ostracism, torture and murder), especially towards elderly widowed women [16]. In addition to gradual gross memory loss, people with dementia display behavioral disturbances, hallucinations, garbled speech and wandering [17]. These types of behavior, together with the lack of community comprehension of dementia, influence the mistreatment of older people with dementia [18].
The elderly often experience abuse from family. A caregiver's perceived burden of looking after a cognitively or physically impaired older person can be seen as a stressor that can lead to abuse. A study conducted in Hong Kong showed that mistreatment was often in the form of verbal and physical abuse, ranging from shouting to manhandling and beating older people [19]. Extreme physical forms of violence, such as stabbing older people with knives, burning or murder, were not reported in that study. In Southern Africa such incidents are more common, but poorly reported [15], due to beliefs that the odd behavior of people with dementia (PWD) is synonymous with witchcraft practices [20]. Another common type of elder abuse, reported by the UK National Elder Abuse Study, is of a financial nature. Reported acts of abuse included denying access to sufficient health services, despite poor family care, as this would reduce the family inheritance; identity theft with the aim of falsely obtaining loans on behalf of PWD; and valuables missing without explanation [21].

Effects on Family Caregivers

PWD require high levels of care. The majority of PWD live in the community, and approximately 75% receive care provided informally by family and friends [22]. The typical profile of a dementia caregiver is a middle-aged or older female, child or spouse of the individual with dementia. One of the main differences between caregiving in the developed and developing world is in the living arrangements, whereby in the developing world PWD live in larger households with extended families [23].

Family caregivers are motivated to provide care for reasons which include a sense of love or reciprocity, spiritual fulfilment, a sense of duty, guilt, social pressures or even greed. Those caregivers who are able to identify more beneficial components of their role experience less burden, better health and relationships, and greater social support [24]. Caregivers in developing countries spend a median of 3 to 6 hours a day with the PWD and 3 to 9 hours assisting with activities of daily living, including bathing, feeding, and toilet assistance [23].

Carers of PWD face the difficulty of balancing caregiving with other demands, such as raising children, careers and relationships, and this puts them at increased risk of stress, depression and a variety of health complications [25]. The costs of looking after PWD are high and involve paying for medical consultations and residential care in later stages.

Aims and Objectives

We aimed to investigate the knowledge, attitudes and practices toward people with dementia in an isiXhosa-speaking community in a township (Khayelitsha) in the Cape Town Metropole of the Western Cape, so as to develop an appropriate health promotion intervention to increase dementia awareness in the community. The objectives of the study were to design an appropriate new dementia questionnaire in order to determine: 1) the knowledge of dementia in Khayelitsha; 2) the attitudes toward PWD; and 3) awareness of abusive practices towards PWD; to evaluate the relationship between level of knowledge and attitudes and practices; and, furthermore, to assess the perceived availability of services for PWD and the problems associated with caregiving for PWD.
Motivation

It is crucial that research be done to evaluate the beliefs and practices nationwide towards PWD. This study was done to promote better awareness and education about dementia through public health promotion, and to make known the need for more research. This is to be established with the support of Dementia SA, a non-government organisation started in 2006 to minimise the impact that dementia has on individuals, families and communities.

Study Design

This was a cross-sectional, observational and descriptive study which started in April and ended in June, 2013. The study made use of a researcher-administered questionnaire as an instrument for data collection. The questionnaire was produced in English; therefore, isiXhosa translators accompanied the researchers (Appendix A) [26]. The questionnaire was adapted from a series of existing questionnaires, including an AD quiz produced by Ayalon & Areán [27] and The Alzheimer's Disease Knowledge Scale [28]. The questionnaire consisted of closed and open-ended questions and was divided into 5 sections: participants' demographic status, knowledge and understanding of dementia (16 items), attitudes towards PWD (7 quantitative, 1 qualitative item), and practices towards PWD and challenges that carers experience (6 quantitative, 8 open-ended). The purpose of the open-ended questions was to gain better insight into the opinions of the community members, considering the lack of research around dementia in South Africa. Responses were used to select specific ideas, allowing the data to be converted from qualitative into quantitative data. The total number of questions was 50, 13 being open-ended and 37 being closed-ended questions.

The attitudes section of the questionnaire was based on tolerance toward those with dementia. The score was out of 8, with 8 being most tolerant and 1 being intolerant. For the perceived causes of dementia and people's attitudes towards PWD, questions were based on a questionnaire by Crabb et al., 2012 [29]. The practices and elder abuse section of the questionnaire was drafted from three studies reporting on elder abuse. The words "mental illness" were substituted with the word "dementia".

Population and Sampling

The research was conducted in the township of Khayelitsha, established in 1985 and housing a population of approximately 400,000 people in 86,000 formal and informal households in 2012 (Wikipedia). Cluster sampling groups representative of the population in terms of the socio-economic and cultural circumstances of the area were selected. The sample size included 100 individuals; the minimum sample size was estimated to be within 7% of the true value, with an anticipated population precision of 15% [CI: 95%] on a 50% proportion having accurate knowledge about dementia. The study included males and females aged 18 to 80 plus. People under the age of 18 and those who lacked the mental capacity to give informed consent and respond to the questions were excluded. Participants were interviewed in their homes during the day. In cases where there was more than 1 occupant in the home, a maximum of 2 people were interviewed.
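For reference, the textbook calculation behind this kind of precision statement is n = z^2 p(1 − p)/e^2, where e is the desired margin of error on the estimated proportion p. The sketch below only illustrates that formula with the 7% and 15% figures quoted above; it is not a reconstruction of the authors' exact calculation:

```python
from math import ceil
from scipy.stats import norm

def sample_size_for_proportion(p, margin, confidence=0.95):
    """Normal-approximation sample size for estimating a proportion p
    to within +/- margin at the given confidence level."""
    z = norm.ppf(1 - (1 - confidence) / 2)  # z = 1.96 for 95% confidence
    return ceil(z**2 * p * (1 - p) / margin**2)

# Worst-case proportion p = 0.5, as assumed in the text.
print(sample_size_for_proportion(0.5, 0.07))   # 7% margin  -> 196
print(sample_size_for_proportion(0.5, 0.15))   # 15% margin -> 43
```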
The cluster sample was selected by using a map of the area from Google Maps. With the help of a site facilitator, 3 sections in the area of Khayelitsha were chosen. It is possible that the cluster group selected may be more similar with regards to beliefs than another cluster group, thus increasing the risk of bias [30]. Therefore, to decrease bias, clusters were increased to 200 homes per sample area (formal and informal settlements), with the sample size limited to 100 participants.

Approval for the study was obtained from the University of Cape Town Human Research Ethics Committee (HREC). Verbal informed consent was obtained from the participants. Vulnerable groups were protected by exclusion from the study (those under 18 and those unable to give informed consent). Participants were interviewed separately to ensure privacy.

Pilot Study

A sample of 10 people were interviewed at the Nonceba family and counselling center in Khayelitsha for the pilot study. Each interview took approximately 30 minutes, with translators. An isiXhosa equivalent questionnaire was not prepared, so the translators used a copy of the English questionnaire to guide them during the interview. Questions were altered from the published versions to improve their cultural appropriateness and tested in the pilot study. The pilot study identified certain questions that needed to be removed or rephrased to improve the questionnaire's validity, as there was no pre-existing questionnaire to validate it against in this population.

Data Analysis

Data analysis was performed with Stata software, version 12. Descriptive statistics such as frequencies and measures of central tendency (means and standard deviations, or proportions with 95% CI) were produced to summarize the data, including the socio-demographic characteristics. In addition, independent t-tests were performed to compare knowledge scores by sex, employment and type of accommodation. Histograms were used to demonstrate the spread of the data. Furthermore, hypothesis testing was performed with chi-squared tests to evaluate the relationship between specific categorical variables, such as beliefs and attitudes toward individuals with dementia, and elder abuse. Pearson's correlations were performed to establish associations between variables (e.g. living in the community vs in nursing homes).

Results

There was a 100% response to the questionnaire from the study sample, consisting of a 100% Black African population (Table 1). The majority of the participants were female (68%), Xhosa speaking (98%) and Christian. The question "Do you think a brain disease can be the cause of dementia?" had an 87% correct response rate. There was a minimal negative correlation between age and knowledge (r = −0.12); however, this was non-significant (p = 0.25).

There was no correlation between years of education and knowledge (r = 0.04, p = 0.68). T-tests were used to compare knowledge scores by sex, type of accommodation, employment (yes/no) and knowing someone with dementia (yes/no). Sex and employment showed a significant relationship to knowledge, with males knowing more than females and unemployed people knowing more than employed people (Table 2).
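The comparisons reported here are standard two-sample t-tests and Pearson correlations. A minimal sketch of how they could be run is shown below, assuming a hypothetical per-participant table with columns "sex", "age" and "knowledge" (the 0-16 knowledge score); the column names and file are illustrative, not from the study:

```python
import pandas as pd
from scipy.stats import ttest_ind, pearsonr

# Hypothetical per-participant survey table (illustrative columns).
df = pd.read_csv("survey.csv")

# Independent two-sample t-test: knowledge score by sex.
male = df.loc[df["sex"] == "M", "knowledge"]
female = df.loc[df["sex"] == "F", "knowledge"]
t, p = ttest_ind(male, female)
print(f"knowledge by sex: t = {t:.2f}, p = {p:.3f}")

# Pearson correlation between age and knowledge score, the analysis
# behind the reported r = -0.12 (p = 0.25).
r, p = pearsonr(df["age"], df["knowledge"])
print(f"age vs knowledge: r = {r:.2f}, p = {p:.2f}")
```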
The distribution of attitude and knowledge scores is shown in histograms (Figure 1). The mean attitude (tolerance) score was 5.78/8 ± 1.7. The attitudes questions revealed that the proportion of those who would share their house with a family member that has dementia was high: 88% [95%CI: 80%; 94%]; the proportion who believed that PWD were responsible for their illness was low: 7% [95%CI: 3%; 14%]. The proportion who thought PWD were dangerous and violent and to be avoided was 43% [95%CI: 33%; 53%]. The proportion who would feel ashamed having a family member with dementia and the community knowing this was 19% [95%CI: 12%; 28%]. The proportion of the sample that would be afraid of having a conversation with someone who has dementia was 11% [95%CI: 5%; 19%], and afraid to have someone with dementia as their neighbor was 15% [95%CI: 9%; 24%]. Finally, the population proportion that agreed that PWD should live in the community was 67% [95%CI: 57%; 76%], while the population proportion that agreed that people with dementia should live in nursing homes was 74% [95%CI: 64%; 82%]; thus a proportion (40%) answered yes to both questions.

Since there was a large overlap between the proportions of participants who believed people with dementia should live in the community or in nursing homes, Pearson's correlations were used to identify the associations between these responses and the other variables (Table 3), to evaluate underlying perceptions. The association was significant between the participants believing that "PWD should live in the community" and that "PWD are dangerous or violent" (r = 0.35), while the association was significant between those who thought PWD should live in nursing homes and "being ashamed to have a PWD in the family" (r = 0.27). There was a negative trend between those who thought PWD should live in nursing homes and "being willing to share a house with PWD".

With regards to spiritual beliefs about dementia, four main concepts were analyzed. The population proportion that agreed that dementia was a punishment from God was 14% [95%CI: 7%; 22%]; or from the ancestors, 18% [95%CI: 11%; 27%]; there was combined agreement of 26% that dementia was a punishment. The proportion of those who believed traditional healers can cure dementia was 15% [95%CI: 9%; 24%]. Finally, the proportion of those who believed that dementia was a curse or due to witchcraft was 28% [95%CI: 19%; 38%]. The correlations between these beliefs were established through the use of Pearson's correlation (shown in Table 4), to identify the overlap in beliefs. There was a significant overlap between those who believed dementia was a punishment from God or from the ancestors (r = 0.54), and between those who thought dementia was a curse and that traditional healers can heal dementia (r = 0.30). Two different methods were used to assess the relationship between knowledge and attitudes. Firstly, the knowledge score was correlated against the attitude score for tolerance. The results showed no significant relationship between knowledge and attitudes (p = 0.59).
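The bracketed intervals quoted throughout this section are consistent with exact (Clopper-Pearson) binomial confidence intervals on n = 100 responses, though the authors do not state which interval method their software used. A minimal check, with statsmodels used purely for illustration:

```python
from statsmodels.stats.proportion import proportion_confint

# E.g. 88/100 participants willing to share their house with a family
# member with dementia, quoted in the text as 88% [95%CI: 80%; 94%].
# method="beta" is the exact Clopper-Pearson interval.
low, high = proportion_confint(count=88, nobs=100, alpha=0.05, method="beta")
print(f"88% [95%CI: {100*low:.0f}%; {100*high:.0f}%]")  # -> [80%; 94%]
```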
Qualitative Data

Participants were asked what possible challenges or problems would occur when caring for someone with dementia. Eighty-four participants gave examples of problems that may occur, while the other 16 did not know, did not respond, or replied that they did not think it would be a problem. Common responses were the constant watching of the person, the actual difficulty in caring for those with dementia, not knowing how to care for them, and the possibility that PWD would not be cooperative when being advised or instructed. Other responses were that PWD could cause damage in the house or be a danger to themselves and others. Nevertheless, many participants reported that they would be willing to care for an individual with dementia.

Knowledge and opinions on elder abuse were assessed in order to get a sense of the severity of the matter in this community. Almost two-thirds (64%) [95%CI: 53.79 - 73.36] of the participants understood the term "elder abuse" and were able to describe it. However, the results revealed that a much higher proportion of participants were able to identify signs of elder abuse (Table 5). Just over half (53%) [95%CI: 42.76 - 63.06] of the participants were aware of support services or programs available for elders who have reported abuse, including police services, social workers or counseling centres such as Nonceba and the Family And Marriage Society of South Africa (FAMSA), with others reporting that there were either no services (26%) [95%CI: 17.74 - 35.73] or that they knew of none (21%) [95%CI: 13.49 - 30.29]. In order to assess whether there were associations between the knowledge of support programs/services available for elders who had been abused and sex, level of education and type of accommodation, crosstabs analysis with a chi-squared test was done. There were no significant results for any factor. When asked about raising awareness with regards to the abuse of elders with dementia, participants agreed that education about dementia, its causes and care for PWD was of key importance. Further suggestions were that seminars could be run in halls or clinics, or that education visits be done door-to-door.

Discussion

In keeping with the international literature, the results showed that knowledge concerning dementia was very low in the isiXhosa-speaking sample surveyed. The scores on the knowledge questionnaire were higher than expected, but this may not have been an accurate reflection of people's actual knowledge levels. There was the possibility of obtaining correct answers without a full understanding of dementia. Because the majority of the participants did not know what dementia was, perceived understanding of the term "dementia" was relied upon in order for them to answer the rest of the questionnaire. The variables in the assessment of attitude which scored the highest in agreement were being prepared to share a home with PWD, and having PWD living in the community or living in nursing homes, the latter two having equal acceptability. However, living in nursing homes was not regarded by participants as being excluded from the community. This suggests that the concept may not have been well enough defined to convey the idea that being sent to a nursing home meant being isolated from the community and being cared for by unknown caregivers rather than by family and friends, versus the idea that living in the community means not being isolated, living amongst family and friends.
Furthermore, participants who agreed with PWD living in the community indicated that they were not ashamed if people knew, but at the same time they considered PWD dangerous/violent. However, the participants who agreed with PWD living in nursing homes were more negative about sharing their home with a family member with dementia and about being ashamed if people knew they had a family member with dementia (Table 3). Nevertheless, according to the attitude score, participants had overall tolerance toward PWD. This is contrary to the global beliefs toward those with dementia and mental illness, where the common attitude is one of neglect from the community and isolation from society [29]. These contrasting attitudes suggest that participants were influenced by the interviewers to answer as expected rather than truthfully; or it may be assumed that the people of Khayelitsha actually have less stigma toward those with dementia. Moreover, they may have more traditional respect for elders and willingness to accept responsibility for the care of family members. This reflects other reports in the literature, as the general understanding is that PWD are more vulnerable than other elders, thus needing help, with a greater expectation of care from the family than from outside carers, especially in LAMIC [31] [32]. However, the attitudes of our study participants were largely not based on experience with PWD.

The spiritual beliefs of the participants in relation to their understanding of dementia were evaluated. It was evident that most believed that dementia was not a punishment from God, nor from the ancestors. However, there was a substantial proportion of participants with the belief in dementia as a punishment. There was a similar proportion of participants who believed that dementia was due to a curse or witchcraft, and about half of these people also believed that traditional healers could heal dementia. This finding is supported by other reports of African cultural beliefs that overlap with the ideas that dementia is a sign of possession or a punishment for sins [18].

With regards to the care-giver burden: one of the most frequently perceived challenges of caring for PWD was the constant watching or caring for PWD, as most did not know how to care for PWD or had other duties, such as caring for children or jobs. This challenge is supported by reports that on average caregivers can spend up to 9 hours a day caring for an individual with dementia [32] [33].
Examples of elder abuse were easily recognized on direct questioning, although only 19% knew of an elder who had been abused due to their dementia. This could be due to the fact that much of this abuse is under-reported, here as elsewhere; however, only 27% of the participants knew someone with dementia [22]. Locking elders in the house, stealing from them and starving them were the most common types of abuse reported. These results are in keeping with existing studies [23] [30]. Reports of abuse related to witchcraft allegations, which occur in developing countries and are more common in African countries [23], were unreported by the participants. Yan & Kwok described how elder abuse was commonly committed by family members, as was the case in this study [19]. All the participants who reported knowing an abused elder mentioned that the abuse was by a family member, be it their children, grandchildren or siblings. Knowledge of support services/programs for abused elders was found to be quite varied, with just over half the participants reporting awareness of such services. Reasons for this variation were unfounded, with there being no significant difference between the sexes, level of education (primary versus higher education) or type of accommodation (formal versus informal).

Limitations

The first limitation noted in this study was interviewer bias. 98% of the population was Xhosa-speaking, necessitating the use of isiXhosa translators. The challenge was that there is no Xhosa name/term for dementia. Providing a definition that was both medically accurate and understandable in lay terms was difficult; therefore the term dementia may not have been understood. The standardization of the questionnaire may be in question, as each translator could have interpreted the questions differently and conveyed the questions differently to the participants. These factors could have contributed to compromising the validity of the questionnaire.

Secondly, the questionnaire included some ambiguous questions; while some of these questions were eliminated after the pilot study, a few were missed and only detected when the data were being analyzed. For example, questions could have been structured as "either-or" choice questions in order to avoid overlapping responses, e.g. "PWD living at home" versus "PWD living in nursing homes". Recall bias may have occurred in the section of the questionnaire on elder abuse; here the incidents of abuse relied solely on the participants' reports. Lastly, there is the limitation of quantitative analysis in this type of study, where a qualitative approach may have been more appropriate for exploring correct understanding of concepts about dementia.

Recommendations

Two categories for recommendations were identified.

Methodology

The questionnaire should be translated into isiXhosa to standardize it and eliminate interviewer bias. Ambiguous questions should be revised for more comprehensive analysis of the data.
Education

Knowledge concerning the causes, the symptoms and the diagnosis of dementia needs to be promulgated in both schools and urban and rural Xhosa communities. This could be done through community workshops in venues with mass capacity, such as halls. Secondly, community awareness about dementia could be raised via community health care workers and existing support services. It would be beneficial to the community to establish dementia-specific support services for those who have specific concerns, including carer-burden related problems. Lastly, we encourage further research into dementia prevalence, causes and risk factors, as well as carer burden and beliefs in South Africa (qualitative and quantitative), as the condition needs to be better understood in this LAMIC context to enable appropriate support and interventions to be introduced.

Conclusion

The key findings in this study show that there is no difference in the knowledge, attitudes and practices towards people with dementia in terms of demographic characteristics. In general, there is very limited knowledge and understanding of dementia in the urban Western Cape Xhosa-speaking community sampled. Some spiritual beliefs revealed the lack of knowledge about the causes of dementia. People's tolerance towards PWD was not influenced by their knowledge about dementia. Also, very importantly, people were able to identify indicators of abuse of elders with dementia. The majority of the participants denied knowing a person with dementia who had been abused; however, they knew of the services available in the community for abused people. Further research needs to be conducted on this topic to enable implementation of appropriate interventions, health promotion strategies and workshops.

Questionnaire about the Knowledge, Attitudes and Practices towards People with Dementia

Preface

We are fourth year medical students from the University of Cape Town. We would like to ask you to participate in a questionnaire.

The purpose of the questionnaire is to gain a better understanding of what you believe dementia to be, and of your attitudes and your behaviour towards those suffering from dementia.

Your answers will assist us in determining the knowledge, attitudes and practices within this community and will form a springboard for further research and improve the health services in terms of health promotion: education and awareness of dementia, and in the future to implement health structures to deal with the needs of the community with regards to dementia.

All the data recorded on this sheet is confidential; therefore only the persons conducting this research will have access to these papers and no other persons. Your name will not appear in any of the information when it is reported. It is within your rights to refuse to take part in this research.

You can choose not to answer a question or to stop answering questions at any time.

Figure 1. Histograms to depict the frequency of scores on the (a) knowledge and (b) attitudes aspects of the dementia questionnaire, showing how many participants achieved a specific score.

Table 1. Demographic characteristics of participants, n = 100. The number of people interviewed within each age group was roughly equivalent. 60% of the households had 2-4 people living in the house; 22% had people over the age of 60 living in the house, with 10% having more than 1 person over the age of 60. Sixteen participants reported having a family member with dementia and 11 reported having a friend with dementia.

Table 2.
T-test results between knowledge score and demographic variables.

Table 3. The correlations between variables, specifically comparing participants who believe people with dementia should live in the community or in nursing homes. ** Correlation is significant at the 0.01 level (two-tailed).

Table 4. The correlations between the spiritual beliefs of the participants.

Table 5. Frequency of types of elder abuse reported.

Do you have any questions? Dementia is a term used to describe various different brain disorders that are caused by the dying of brain cells (degeneration). This causes a loss of thinking function and affects memory, thinking, behaviour and emotion. Dementia gets worse over time. Questions in bold were used for the knowledge score, out of 16 points. Do you think people with dementia should live in a nursing home? ⃝ Yes ⃝ No ⃝ Don't Know ⃝ Not Answered 34. What problems can occur when caring for someone with dementia?
2017-04-14T02:01:01.203Z
2015-05-20T00:00:00.000
{ "year": 2015, "sha1": "4e4fddcadca35ff8e0809cce52fa019e6d0f8618", "oa_license": "CCBY", "oa_url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=56906", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "4e4fddcadca35ff8e0809cce52fa019e6d0f8618", "s2fieldsofstudy": [ "Medicine", "Education" ], "extfieldsofstudy": [ "Medicine" ] }
119138088
pes2o/s2orc
v3-fos-license
Functional Renormalisation Group analysis of a Tensorial Group Field Theory on $\mathbb{R}^3$ We study a model of Tensorial Group Field Theory (TGFT) on $\mathbb{R}^3$ from the point of view of the Functional Renormalisation Group. This is the first attempt to apply a renormalisation procedure to a TGFT model defined over a non-compact group manifold. IR divergences (with respect to the metric on $\mathbb{R}$) coming from the non-compactness of the group are regularised via compactification, and a thermodynamic limit is then taken. We then identify IR and UV fixed points of the RG flow and find strong hints of a phase transition of the TGFT system from a symmetric to a broken or condensate phase in the IR.

Introduction. - Group Field Theories [1][2][3][4][5] (GFTs) are a particular class of quantum field theories with fields defined over a group manifold and characterised by combinatorially non-local interaction terms. This combinatorial non-locality makes the Feynman diagrams of the theory stranded diagrams dual to cellular complexes (simplicial complexes in the simplest constructions) [1,3]. GFTs historically were born from an attempt to generalise matrix models [6] of 2d gravity to higher dimensions, in the form of tensor models [7][8][9]. These models were soon enriched with group theoretic data in such a way that the Feynman amplitudes of specific GFT models coincide with state sum models of topological field theory [9,10]. A connection between GFTs and Loop Quantum Gravity (LQG) [11,12] was immediately pointed out [13] at the level of quantum states, and later enforced at the level of the quantum dynamics. In fact, it was later shown [14] that GFTs provide a formal (complete) definition of spin foam models, a covariant formalism for LQG. For GFTs (and spin foam models) endowed with a discrete geometric interpretation (which requires appropriate group theoretic data and suitable choices of dynamics) there is also a direct link with simplicial quantum gravity path integrals, first manifested in the semiclassical analysis of GFT Feynman (spin foam) amplitudes, where one recovers the Regge action [15], and then shown to be generically manifest in the flux representation of the same amplitudes [16]. One key open issue of all of the above quantum gravity approaches, and GFTs in particular, is the emergence of a continuous geometry out of the discrete and quantum pre-geometric structures defining the formalisms, and of General Relativity as an effective description of their (collective) dynamics in the same approximation. The study of the GFT collective dynamics and of the associated continuum limit is therefore crucial. Moreover, one suggested scenario for the emergence of spacetime and geometry out of such quantum gravity models involves a phase transition (dubbed "geometrogenesis" [17]) from a pre-geometric phase to a geometric phase, which may be further identified with a condensate phase of the underlying quantum gravity system [18,19] (a similar idea was proposed also in a loop quantum gravity context in [20]). Indeed, GFT condensate states seem to possess an effective dynamics with a cosmological interpretation [21]. Within this perspective, the advantage of the GFT formalism is that it offers the possibility to address this issue with tools coming from standard quantum field theory.
And indeed, an important line of recent developments has concerned the renormalisation of GFT models, since the renormalisation group is indeed the key tool to address both the quantum consistency of field theory dynamics and the definition of the continuum limit, aimed at a precise mapping of the phase diagram of the theory. Furthermore, GFT renormalization is also one of the two main strategies to define and study the renormalization of spin foam models, the other being through a generalised lattice gauge theory approach [22]. Most work in this direction has concerned a particular class of GFTs, called Tensorial Group Field Theories (TGFTs) [23][24][25][26][27][28][29][30][31][32][33], which incorporate recent advances in the statistical analysis of colored tensor models [34][35][36][37]. In particular, in the TGFT framework, fields are endowed with tensorial transformation properties under the action of the group itself. The perturbative analysis of these field theories has been undertaken, and a large set of models prove to be perturbatively renormalisable and asymptotically free (see references above). However, understanding the continuum limit, including the phase diagram and phase transitions of the same models, requires the study of their non-perturbative properties.

The FRG approach has been applied first to matrix models [42][43][44] (with the double scaling critical point reinterpreted as a fixed point in the RG flow). More recently, the FRG framework has been adapted and applied for the first time to TGFTs in [45]. The authors of [45] studied a rank-3 TGFT defined over a compact U(1) group manifold. The β-functions define a non-autonomous system in the cut-off N. The authors then studied two regimes of the cut-off, large and small N (and also an intermediate regime at fixed N), where a proper notion of dimension of the couplings can be defined and an autonomous system of RG equations is obtained. The notion of UV or IR "fixed points" is then only loosely (i.e. asymptotically) defined, as the existence of a trajectory from a UV to an IR fixed point becomes more difficult to ascertain. This is not surprising nor problematic per se, and it simply signals the presence of an additional scale in the formalism, here the size of the group manifold on which the fields are defined. In fact, the same feature is found in different contexts, like quantum field theory at finite temperature, on non-commutative manifolds and on curved spacetimes (see [46] and references therein). Still, hints of a phase transition from a symmetric to a broken phase, in the approximation of large size of the group manifold, were found. Note that progress towards a better characterisation of the phase diagram and of phase transitions in tensor models has also been recently achieved [47,48]. In fact, using a similar mode integration alongside double scaling limit techniques, the nonperturbative analysis of quartic tensor models has been performed, with evidence of a spontaneous symmetry breaking mechanism similar to that suggested in [45]. The models considered (quartic tensor models with trivial kinetic term), as well as the techniques employed (double scaling and intermediate field representation, allowing one to solve quartic tensor models with matrix model methods), are very different from the ones employed in our present study and in [45], which are based on generic FRG conventions and concern TGFT models with non-trivial kinetic kernels.
In this work, we study a class of TGFT models which possess no such additional scale and are thus expected to show proper fixed points, so, in a sense, improving on the previous analysis. The model we consider is a rank-3 TGFT with fields defined on the non-compact manifold $\mathbb{R}^3$, and endowed with a Laplacian kinetic term. Being a TGFT, this model is of interest as a toy model for quantum gravity, due to the combinatorics of its Feynman diagrams, but even more because it can be seen as a (much) simplified version of Lorentzian (T)GFTs for 4d quantum gravity, also based on a non-compact group manifold. Thus it can be seen as a useful exercise on the way to a renormalisation analysis of more realistic models, hopefully providing useful hints of what to expect for them. One certainly generic feature is that the non-compact manifold introduces IR divergences, which we properly address through a careful definition of a thermodynamic limit for TGFTs in this FRG context. This is an important technical lesson for later developments. In this limit, we recover an autonomous system of β-functions of the coupling constants, and we can then identify the UV and IR fixed points of the RG flow. We also find evidence for a phase transition (in the continuum limit) from a symmetric to a broken (or condensed) phase (which would be consistent with the "geometrogenesis" scenario, if we had a full geometric interpretation for the simple TGFT model we are considering).

The model. - We consider a rank-3 TGFT defined over $\mathbb{R}^3$, endowed with a specific $\phi^4$ interaction called "melonic" [34], shown in Fig. 1. In general, rank-3 melonic interactions correspond to peculiar triangulations of the 3-sphere and are the most dominant objects in the large cut-off N limit [23,36,37] (in both simple tensor models and topological GFTs, but this result is expected to extend to a wider class of models). Written in momentum space (we adopt the standard QFT terminology for field modes, even though no spacetime interpretation is associated to the domain of the fields, and thus no standard physical interpretation should be associated to their modes; the same remark applies to our use of the terms 'UV' and 'IR' throughout the article), the classical action of the model reads:

$$S[\phi,\bar\phi] = \int_{\mathbb{R}^3} dp\; \bar\phi_{123}\Big(\sum_s p_s^2 + \mu\Big)\phi_{123} + \frac{\lambda}{2}\int_{\mathbb{R}^6} dp\, dp'\; \bar\phi_{123}\,\phi_{1'23}\,\bar\phi_{1'2'3'}\,\phi_{12'3'} + \mathrm{sym}\{1,2,3\}, \qquad (1)$$

where we used the notation $\phi_{123} = \phi(p_1, p_2, p_3)$ for the field modes and "sym" indicates that we include all the interactions obtained by symmetrisation over the color labels (see Fig. 1). The kinetic term is defined by a sum of Laplacians acting on the field indices and a mass term with coupling µ. It is immediate to see that the action is built using generalised traces over field indices convoluted with the kinetic and interaction kernels and, once exponentiated, defines a quantum theory through a Gaussian field measure of covariance $(\sum_s p_s^2 + \mu)^{-1}$.

FRG equations for tensorial models. - The Functional Renormalisation Group approach [38][39][40][41] rephrases the problem of integrating out the high modes of a theory as one of solving a differential equation, the FRG equation. Being non-perturbative in nature (with respect to any expansion in the interaction coupling constants), the FRG allows in principle to deal with the full set of quantum fluctuations of the model and to study its critical behavior. The implementation of the FRG method in TGFT [45] follows closely the usual one [38,39,41], with special attention paid to the fact that we are dealing with a convolution of tensors and thus with peculiar non-local interactions. We start by decoupling the field modes, as typical in the Wilsonian approach to renormalisation, by adding to the action a mass-like regulator term $\Delta S_k = \mathrm{Tr}(\bar\phi \cdot R_k \cdot \phi)$ depending on the IR cut-off k, which splits the modes into high modes (|p| > k) and low modes (|p| < k).
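The index pattern of the melonic vertex in (1) can be made concrete with a small numerical sketch: in the color-1 invariant, colors 2 and 3 are traced within each $(\bar\phi, \phi)$ pair, while color 1 is exchanged between the pairs; the other colors follow by the symmetrisation. The grid size and random field below are purely illustrative stand-ins for the continuum field:

```python
import numpy as np

N = 8  # illustrative number of modes per index
rng = np.random.default_rng(0)
phi = rng.normal(size=(N, N, N)) + 1j * rng.normal(size=(N, N, N))

# Color-1 melon: conj(phi)_{123} phi_{1'23} conj(phi)_{1'2'3'} phi_{12'3'},
# summed over all six indices.
melon_1 = np.einsum("abc,dbc,def,aef->", phi.conj(), phi, phi.conj(), phi)

# Color-2 melon, obtained by letting index 2 play the preferred role,
# as generated by the symmetrisation sym{1,2,3}.
melon_2 = np.einsum("abc,adc,fdh,fbh->", phi.conj(), phi, phi.conj(), phi)

# Each invariant equals Tr(M^2) for a Hermitian PSD matrix M, so it is
# real and non-negative (up to floating-point error).
print(melon_1.real, melon_2.real)
```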
The scale dependent quantum theory will then be defined through a partition function where high modes are integrated out:

$$Z_k[J,\bar J] = \int D\phi\, D\bar\phi\; e^{-S[\phi,\bar\phi] - \Delta S_k[\phi,\bar\phi] + \mathrm{Tr}(\bar J\phi) + \mathrm{Tr}(J\bar\phi)}, \qquad (2)$$

where J is a complex tensor playing the role of a source and $\mathrm{Tr}(\bar J\phi) := \int_{\mathbb{R}^3} dp\; \bar J_{123}\,\phi_{123}$. After a Legendre transform, we identify a scale dependent effective action which encodes the full information about the quantum theory:

$$\Gamma_k[\varphi,\bar\varphi] = \sup_{J,\bar J}\Big[\mathrm{Tr}(\bar J\varphi) + \mathrm{Tr}(J\bar\varphi) - W_k[J,\bar J]\Big] - \Delta S_k[\varphi,\bar\varphi], \qquad (3)$$

where $\varphi = \langle\phi\rangle$ and $W_k[J,\bar J] = \log Z_k[J,\bar J]$. The term $\Delta S_k$ is also chosen to be compatible with the choice of initial conditions for the FRG differential equation, encoding the scaling of the effective action to the bare one in the UV: $\Gamma_k[\varphi,\bar\varphi] \to S[\varphi,\bar\varphi]$ as $k \to \Lambda$, where Λ plays the role of a UV cut-off. Introducing the logarithmic scale t = log k and $\Gamma^{(2)}_k := \delta\Gamma_k/\delta\varphi\,\delta\bar\varphi$, the Wetterich equation for tensorial GFT models has the form [45]:

$$\partial_t \Gamma_k = \mathrm{Tr}\Big[\partial_t R_k \cdot \big(\Gamma^{(2)}_k + R_k\big)^{-1}\Big]. \qquad (4)$$

Equation (4) is fully non-perturbative and exact, and encodes (formally) all the information about quantum fluctuations in a typical one-loop form. Before moving on to the solution of this equation and to the study of the critical points, let us anticipate an important technical fact, which we will have to deal with in the following. Despite the presence of momentum cut-offs, evaluating (4) requires, as in ordinary scalar field theory, an infinite volume regularisation. In the local field theory case, infinite volume divergences are cured by passing to constant field modes or taking a thermodynamic limit [38,39]. In the case of compact groups, as worked out in [45], the issue does not arise, as the group volume is finite. The presence of a finite radius results in a non-autonomous system of equations, where the scale k appears explicitly. The existence of a phase transition can be inferred only in the limit of infinite radius [46]. The present situation differs from both cases, namely the local field theory and compact TGFTs in the limit of infinite volume. Because of the crucial non-local properties of the interactions, the use of constant field modes is misleading (indeed, $\phi^4$ terms in the TGFT case do not have the same combinatorics, and this cannot be neglected) and, in the compact group case, one must understand how to best perform, both from a conceptual and practical point of view, an infinite radius limit in the equations. In our case, we then find it wiser to perform a thermodynamic limit.

Truncation scheme. - In order to be able to perform practical computations, we need to adopt a truncation scheme for the effective action. Of course, performing a truncation means losing the exact nature of the Wetterich equation. Generally, this also generates a singularity of the flow that splits the space of couplings into disconnected regions. In a neighbourhood of the singularity, we cannot trust the computations. Since one usually is interested in the free theory around which the perturbative expansion makes sense, we will discuss only the region connected to the origin in the space of couplings. We choose to truncate $\Gamma_k$ to the quadratic term in the derivative of the fields and to order four in the fields, thus obtaining a form similar to the action itself:

$$\Gamma_k[\varphi,\bar\varphi] = \int_{\mathbb{R}^3} dp\; \bar\varphi_{123}\Big(\sum_s p_s^2 + \mu_k\Big)\varphi_{123} + \frac{\lambda_k}{2}\int_{\mathbb{R}^6} dp\, dp'\; \bar\varphi_{123}\,\varphi_{1'23}\,\bar\varphi_{1'2'3'}\,\varphi_{12'3'} + \mathrm{sym}\{1,2,3\}. \qquad (5)$$

We can already see that, in this way, the UV initial condition on the flow is satisfied.
From (5), the 2-point 1PI Green function can be expressed as

$$\Gamma^{(2)}_k(p, p') = \Big(\sum_s p_s^2 + \mu_k\Big)\,\delta(p - p') + F_k(p, p'), \qquad (6)$$

where $F_k$ collects the $\lambda_k$-dependent contributions obtained from the quartic terms at non-vanishing mean field. The regulator function is chosen as [49]:

$$R_k(p) = \Big(k^2 - \sum_s p_s^2\Big)\,\theta\Big(k^2 - \sum_s p_s^2\Big), \qquad (7)$$

where θ stands for the Heaviside step function. This is a standard choice, and it satisfies all the basic requirements, namely: it approximately freezes the propagation of modes with norm smaller than k, and $R_k(|p| > k) = 0$, so that high modes are unaffected by the regulator. In addition, this choice is particularly interesting in our framework because its functional properties allow the analytic evaluation of spectral sums. If we act on the regulator with the derivative with respect to the logarithmic scale, we find $\partial_t R_k(p) = 2k^2\,\theta(k^2 - \sum_s p_s^2)$: the $(k^2 - \sum_s p_s^2)\,\delta(k^2 - \sum_s p_s^2)$-term so generated simply cancels out. Expanding the Wetterich equation, it seems natural to choose an expansion in powers of $(\bar\varphi\varphi)$, which we perform up to third order, discarding the vacuum terms, to obtain our final truncated functional equation, from which we read out the differential equations for the beta functions of the theory.

Thermodynamic limit. - In order to regularise volume divergences, we perform a lattice regularisation in p-space, which follows from a compactification in the direct space, according to the conventions of [50]. We define the model (1) over a lattice $D^* = [\frac{2\pi}{L}\mathbb{Z}]^3 = [\frac{1}{r}\mathbb{Z}]^3 := [l\,\mathbb{Z}]^3$ of spacing l, with $l^3$ inversely proportional to the volume of the direct space; the Fourier transform then becomes a Fourier series and, for any function f(p), we have $\int_{D^*} dp\, f(p) = l^3 \sum_{\{p_i\}\in D^*} f(p)$. We define the delta distribution in $D^*$ as $\delta_{D^*}(p, q) = \delta_{p,q}/l^3$, with $\delta_{p,q}$ the Kronecker delta. As a result, we have $\delta_{D^*}(p, p) = \delta_{p,p}/l^3 = 1/l^3$. Using this regularisation prescription, the effective action of the model reads:

$$\Gamma_k[\varphi,\bar\varphi] = l^3 \sum_{p\in D^*} \bar\varphi_{123}\Big(\sum_s p_s^2 + \mu_k\Big)\varphi_{123} + \frac{\lambda_k}{2}\, l^6 \sum_{p,p'\in D^*} \bar\varphi_{123}\,\varphi_{1'23}\,\bar\varphi_{1'2'3'}\,\varphi_{12'3'} + \mathrm{sym}\{1,2,3\}. \qquad (8)$$

In the end, the continuous description will be recovered in the thermodynamic limit l → 0. The dependence of the system on the volume of the direct space is now explicit, and we can tune this dependence in order to consistently remove all the divergences and be left with the physical β-functions.

β-functions. - The IR regularisation of the system of β-functions is direct: we need to extract from the coupling constants an explicit dependence on the volume of the direct space, in addition to their scaling with the momentum cut-off. After a lengthy but straightforward calculation, the set of β-functions can be computed with the prescription introduced in the previous section. To make sense of it in the infinite volume limit, we use an ansatz that rescales the couplings by explicit powers of k and l. The system (10) of β-functions is non-autonomous in the IR cut-off k as long as the parameter l is kept finite. This feature is due to the peculiar combinatorics of the tensorial vertices, which span the 1PI 2-point functions with different volume contributions. One way to realise this is by noting the unusual delta distributions in F in (6). From (10), we see two different systems arising in the UV and IR cut-off limits, coming from different leading terms. The fact that the set of β-functions of a TGFT over a compact group manifold is non-autonomous is consistent with the analysis of standard field theories on compact (and curved) manifolds [46]. In order to make sense of the non-compact limit, we solve the system in the variables ξ and χ by requiring that the highest volume contribution is regularised and all the sub-leading infinities are sent to zero. We have:

ξ − 2χ − 2 = 0,
ξ − 3χ − 2 = 0,

which yields χ = 0, ξ = 2, σ = 2. The resulting system of differential equations for the theory is then the starting point of our computation of the RG flow. As expected, absent any remaining fixed external scale, the system is now autonomous.
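To illustrate the regularisation convention above in one dimension: the lattice sum $l \sum_{p \in l\mathbb{Z}} f(p)$ converges to the integral $\int dp\, f(p)$ in the thermodynamic limit l → 0. A minimal numerical check with a Gaussian test function (purely illustrative):

```python
import numpy as np

def lattice_sum(f, l, n_max=10**6):
    """Approximate the integral of f over R by the lattice sum
    l * sum_{p in l*Z} f(p), truncated at |p| <= n_max * l."""
    p = l * np.arange(-n_max, n_max + 1)
    return l * np.sum(f(p))

f = lambda p: np.exp(-p**2)
for l in (1.0, 0.5, 0.1, 0.01):
    print(f"l = {l:5.2f} -> {lattice_sum(f, l):.6f}")
print("exact integral:", np.sqrt(np.pi))  # ~1.772454
```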
The RG flow. - Proceeding with the standard analysis, we first determine the fixed points and then study the linearised system around them to determine the critical exponents of the model. From the non-linear nature of the β-functions, we have a singularity at $\mu_k = -1$ and $\lambda_k = (1 + \mu_k)^2/\pi$. In a neighbourhood of those singularities, we do not trust the linear approximation and, being interested mainly in the sector of the theory connected with the Gaussian fixed point (i.e. the perturbative regime of the theory), we will not study the flow around points beyond the singularities. By numerical evaluation, we find a Gaussian fixed point (GFP) and three non-Gaussian fixed points (NGFPs) in the plane $(\mu_k, \lambda_k)$. We discard one of them because it lies beyond the singularity. The others correspond to $P_1 = (8.619, -47.049)$ and $P_2 = 10^{-1} \cdot (-6.518, 0.096)$. The stability matrix at the GFP has an eigenvalue with algebraic multiplicity 2, corresponding to the canonical scaling dimensions of the couplings, $\theta^G_{1,2} = -2$, but one single eigenvector v = (1, 0); thus, considering that all the trajectories flow into the origin, the GFP must have a marginal direction in the UV. In a neighborhood of the non-Gaussian fixed points, we have $\theta^-_2 \approx -1.988$ for $v^-_2 \approx 10^{-1} \cdot (9.987, 0.506)$. The flow of the couplings between the two NGFPs and the Gaussian one is plotted in Fig. 2. The origin is a UV sink for the flow; hence, the model is asymptotically free. As mentioned before, the absence of a second eigenvector for the stability matrix around the GFP requires an approximation beyond the linear order and is a signal of the presence of a marginal perturbation. By close inspection of the plots, confirmed by direct integration at second order of the system of β-functions, which can be performed for generic numerical constants/initial conditions, we infer that the behaviour of this direction is still UV attractive, i.e. that it corresponds to a marginally relevant direction. Both the non-Gaussian fixed points have one relevant and one irrelevant direction. They are also characterised by the so-called "large river effect". This effect shows a splitting of the space of couplings into two regions not connected by any RG trajectory. Thus, the irrelevant direction for the NGFP matches the properties of a critical surface and suggests the presence of phase transitions in the model. In the λ > 0 plane, the flow is similar to the one of standard local scalar field theory on $\mathbb{R}^3$ in a neighbourhood of the Wilson-Fisher fixed point. That is: above the critical surface, the IR limit of the RG trajectories brings the theory into a region where both $\mu_k$ and $\lambda_k$ are positive, while below the irrelevant eigendirection for $P_2$, the mass parameter is driven to be negative in the IR, indicating a spontaneous symmetry breaking mechanism (in the different but related context of tensor models, such a mechanism has also been found in [48]). In the sector λ < 0, the situation is rather peculiar. We might infer that $P_1$ has the same properties just discussed, but reversed with respect to the critical surface. The symmetric phase, where $\mu_k$ and $\lambda_k$ have the same sign in the IR, is below the irrelevant direction of the fixed point, while the broken phase lies above it. In this sense, we have a phase transition also crossing the surface λ = 0, but this is not an irrelevant direction for any NGFP. This feature suggests that, in this case, we may have a first order phase transition.
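As an illustration of the procedure just described (numerically locating zeros of the β-functions, then reading the critical exponents off the stability matrix), the sketch below uses placeholder β-functions with a qualitatively similar structure, including $(1 + \mu)$ denominators that produce a singularity at µ = −1. The functions beta_mu and beta_lambda are illustrative stand-ins, not the model's actual flow equations:

```python
import numpy as np
from scipy.optimize import fsolve

# Placeholder beta-functions with a structure reminiscent of the text's
# system (note the singularity at mu = -1); NOT the actual model.
def beta(x):
    mu, lam = x
    return np.array([-2 * mu - 6 * lam / (1 + mu) ** 2,
                     -2 * lam + 4 * lam**2 / (1 + mu) ** 3])

# Locate a non-Gaussian fixed point from a seed in the connected region.
fp = fsolve(beta, x0=np.array([-0.5, 0.1]))

# Finite-difference stability matrix d(beta_i)/d(g_j).
eps = 1e-6
stability = np.column_stack([
    (beta(fp + eps * np.eye(2)[j]) - beta(fp - eps * np.eye(2)[j])) / (2 * eps)
    for j in range(2)])

# Critical exponents: minus the eigenvalues of the stability matrix
# (relevant directions have positive exponents).
theta = -np.linalg.eigvals(stability)
print("fixed point:", fp, "critical exponents:", theta)
```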
This feature suggests that, in this case, we may have a first-order phase transition. Nevertheless, we must remember that the sector λ < 0 generates theories with an unstable coupling, which is generally not considered in a field theory context. This sector must therefore be analysed under a different parametrisation if we want to shed more light on it. In a GFT model with additional geometric data, and a proper simplicial gravity interpretation, a broken or condensate phase could be interpreted as a continuum geometric phase [19,21], and would support a geometrogenesis scenario for the emergence of continuum spacetime and geometry from these GFT models. The model under consideration would therefore need to be enriched with such additional data to be more than an indirect support for such a scenario. In any case, also in our model a proper study of the broken phase, involving a change of parametrisation for the effective potential and a detailed study of the theory around the new ground state (obtained by solving the classical equations of motion of the model in a saddle-point approximation), would be needed to conclusively confirm the existence of a phase transition as envisaged. * * * The authors are thankful to the Albert Einstein Institute and the University of Bologna for having made this collaboration possible. They are also very grateful to Dario Benedetti and Astrid Eichhorn for helpful discussions and several useful comments. R.M. warmly thanks the AEI for its hospitality.
Statistical analysis in cellular systems for channel capacity improvement with dynamic pilots across different angles users

Accurate channel state information (CSI) is crucial for optimizing wireless communication systems. In scenarios with varying user-to-base station angles, the angle-dependent coherence time impacts conventional pilot strategies. At small angles, the coherence time of the user decreases dramatically because of the Doppler shift, which causes an increase in the number of pilots. We introduce an innovative sub-block design approach for systems with different user angles. This method harmonizes the coherence times of high- and low-angle users while maintaining a constant pilot count. This not only improves spectral efficiency but also ensures accurate channel estimation. Through simulations, we demonstrate the effectiveness of our approach in enhancing both spectral efficiency (up to 10%) and CSI precision. This breakthrough contributes to the advancement of channel estimation techniques in scenarios with angle-dependent coherence time, offering practical benefits to wireless communication systems.

Related works

In the quest to enhance the efficacy of wireless communication in the presence of fading channels, the practice of pilot symbol-assisted modulation (PSAM) stands as a widely employed technique. PSAM entails the periodic insertion of known pilot symbols within the stream of unknown data. By adopting this strategy, the acquisition of accurate channel state information (CSI) becomes feasible, enabling coherent signal detection with minimal computational complexity, as discussed in 7,8.

The placement and power allocation of pilot symbols assume a critical role in the context of channel estimation, as indicated by the references 9,10. Within the literature, noteworthy research contributions include the work of 11,12, where pilot design optimization was undertaken to maximize achievable data rates for block transmissions in the presence of time-frequency selective fading channels. Additionally, 13,24 proposed a recurrent channel estimation approach, predicated on the optimal design of segmented data rates.

Furthermore, the study presented in 14,25 delved into the analysis of pilot symbol design specifically within orthogonal frequency division multiplexing (OFDM) systems operating over doubly-selective channels. In 15,26, the optimization of pilot design for OFDM systems was explored, particularly in scenarios involving imperfect channel state prediction. The work in 27 redefines multi-user hybrid massive MIMO, emphasizing angle-based orthogonality and efficient channel estimation for improved multi-user communication. Lastly, 16 introduced a pilot contamination elimination scheme tailored for multi-antenna assisted OFDM systems, with the overarching goal of reducing the training duration.
In the context of high-mobility environments, a methodical approach to channel estimation was developed in 17,28, employing equispaced pilot symbols within a multiple-input multiple-output (MIMO) configuration of an OFDM system. Subsequently, in 18,29, a channel estimator with reduced computational complexity, utilizing maximum a posteriori probability principles, was introduced for mobile MIMO-OFDM systems. For vehicle-to-everything (V2X) communications in support of Internet of Vehicles (IoV) applications, 19 proposed an optimization technique for pilot design grounded in the Markov decision process. Furthermore, in 20,30, an investigation into interference-free pilot design was conducted within MIMO-OFDM-based V2X networks, employing zero-correlation-zone sequences.

A principal challenge encountered in V2X communications lies in the manifestation of time-frequency selective fading across physical channels, primarily stemming from the varying locations of vehicles. In the realm of massive MIMO-OFDM systems, 31,32 deliberated upon the integration of both time- and frequency-division multiplexed pilots in a large-scale MIMO system. In 33, an analysis of doubly selective channel estimation was presented, wherein a pilot pattern was devised through the insertion of guard pilots to reduce inter-carrier interference. Lastly, 34 introduced a data-aided scheme for doubly selective channel estimation, capitalizing on an affine-precoded superimposed pilot design tailored for millimeter-wave MIMO-OFDM systems.

Motivation and novelty

The Internet of Vehicles (IoV), established to facilitate connectivity between vehicles and roadside infrastructures, confronts numerous challenges due to the different angles of vehicles. One significant challenge is the need for doubly selective channel estimation, which necessitates a substantial allocation of pilot resources to estimate a large number of channel coefficients, particularly in the context of large-scale Multiple-Input Multiple-Output (MIMO) systems. Conversely, the escalating mobile data traffic, projected to reach 288 exabytes per month by 2027 according to Ericsson's forecast 35, not only demands reduced energy consumption by service providers to mitigate carbon emissions but also compels them to enhance spectral efficiency to achieve higher effective throughput.

As an efficient approach to enhance resource utilization efficiency, multicast communication has been widely employed within the IoV ecosystem. It aims to reduce energy consumption and improve spectral efficiency while ensuring the quality of service (QoS) 36,37. Nevertheless, the challenge arises from the fact that multicast groups typically comprise multiple vehicles with different angles, resulting in substantial pilot overhead resource consumption.

In light of these challenges, we propose an innovative pilot design for IoV multicast services involving multiple vehicles with diverse angles. This approach employs common pilot symbols shared within the multicast group and dynamically tailors the design based on the varying angles of the vehicles. The primary objective is to significantly reduce the pilot overhead required for doubly selective channel estimation. Specifically, the novelty of our work is summarized below in comparison with related studies on channel estimation and pilot design.
The salient advantages offered by our dynamic pilot design are outlined below:

Reduction of pilot overhead. In contrast to the traditional pilot pattern design, which is typically tailored for point-to-point links, our approach centers on dynamic pilot design within multicast services. By utilizing shared pilot symbols among multiple vehicles, we achieve a substantial reduction in pilot overhead.

Multicast to vehicles at different angles. The pilot design previously proposed for point-to-point vehicular communications can find application in multicast services involving vehicles with uniform angles. However, in our current work, we optimize our dynamic pilot design specifically to accommodate multicast scenarios with vehicles of varying angles.

Improvement of resource utilisation efficiency. Our dynamic pilot design leads to a reduction in pilot overhead, consequently enhancing spectral efficiency by facilitating higher data rates. Additionally, it contributes to improved energy efficiency as a result of achieving higher effective throughput.

Throughout this paper, the following mathematical notations are used: boldface uppercase and lowercase letters denote matrices and vectors, respectively. In particular, 0_{1×M} denotes the 1 × M zero vector and I_M denotes the M × M identity matrix. The transpose and the modulus operators are denoted by (·)^T and |·|, respectively. The complex normal distribution with mean μ and variance σ² is denoted by CN(μ, σ²). The greatest integer function is denoted by ⌊·⌋.

Preliminaries

In wireless communications, PSAM is used for the estimation of CSI and, thus, to understand the propagation properties of a link. For an (M, N) flat-fading MIMO channel with M transmit and N receive antennas, the mth column h_m = [h_{1m}, …, h_{Nm}]^T of the channel matrix H ∈ C^{N×M} contains the flat-fading channel coefficients from the mth transmit antenna to all the N receive antennas. Herein, all the components in H are assumed independent and identically distributed (i.i.d.) circularly symmetric complex Gaussian random variables with zero mean and unit variance, i.e., h_{nm} ∼ CN(0, 1).

In practice, the block-fading MIMO channel model is adopted for channel estimation, where the i.i.d. channel coefficients in a channel realisation H are sampled from the complex Gaussian ensemble CN(0, 1) at the start of each block and remain constant for C symbols. This process is repeated for every block in an i.i.d. manner, and the block length is C in units of symbols. The structure of a transmitted block from an arbitrary transmit antenna, m ∈ {1, 2, …, M}, is shown in Fig. 1. Within each block, P pilot symbols are evenly interspersed with the data at each transmit antenna for the purpose of channel estimation and, thus, the pilot overhead factor is defined as P/C. Note that a total of MP pilot symbols are allocated in a MIMO system with M transmit antennas.

At the receiver, the CSI h_m, spanning from the mth transmit antenna to all the N receive antennas, is estimated from the pilot observations, m ∈ {1, 2, …, M}, and the CSI estimate is then used for the coherent detection of the data s_p, p = 1, 2, …, P, where the data in a sub-block is denoted by a 1 × (C/P − 1) vector s_p. A minimal numerical sketch of this pilot-based estimation step is given below.
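The sketch assumes illustrative dimensions, pilot values and noise power, not the paper's simulation settings; it draws a block-fading channel, observes it through unit-power pilots and forms the MMSE estimate.

```python
import numpy as np

# Minimal sketch of pilot-based MMSE channel estimation for one transmit
# antenna of an (M, N) block-fading MIMO link.  Dimensions, pilot values
# and noise power are illustrative assumptions.
rng = np.random.default_rng(0)
N, P = 4, 8                      # receive antennas, pilots per block
sigma2_w = 0.1                   # AWGN power

h_m = (rng.standard_normal(N) + 1j*rng.standard_normal(N))/np.sqrt(2)   # CN(0,1)
b_m = np.ones(P, dtype=complex)                  # unit-power pilot symbols (1 x P)
W = np.sqrt(sigma2_w/2)*(rng.standard_normal((N, P))
                         + 1j*rng.standard_normal((N, P)))
Y = np.outer(h_m, b_m) + W                       # pilot observations, N x P

# MMSE estimate for h_m ~ CN(0, I): correlate with the pilots and shrink.
h_hat = Y @ b_m.conj() / (np.vdot(b_m, b_m).real + sigma2_w)

mmse_theory = sigma2_w / (np.vdot(b_m, b_m).real + sigma2_w)
print("per-coefficient error:", np.mean(np.abs(h_m - h_hat)**2))
print("theoretical MMSE     :", mmse_theory)
```

Increasing P (i.e., ‖b_m‖²) shrinks both numbers, illustrating the statement that the MMSE is reduced as the number of pilot symbols grows.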
The observed pilot symbols originating from the mth transmit antenna are obtained by

Y_{b,m} = h_m b_m + W_{b,m},   (1)

where the N × P matrices Y_{b,m} and W_{b,m} are the receiver's observations of pilot symbols and additive white Gaussian noise (AWGN), respectively, pertaining to the mth transmit antenna, and the 1 × P vector b_m = [b_{m,1}, …, b_{m,P}] contains the P pilot symbols originating from the mth transmit antenna, which is known at the receiver. Using minimum mean square error (MMSE) estimation, the CSI h_m is estimated by 38,39

ĥ_m = Y_{b,m} b_m^H (b_m b_m^H + σ²_W)⁻¹,

where m = 1, 2, …, M, and σ²_W is the AWGN power. The real CSI h_m can then be expressed as

h_m = ĥ_m + h̃_m,

where h̃_m is the channel estimation error and its variance is the MMSE, given by σ²_{h̃} = σ²_W/(b_m b_m^H + σ²_W). Apparently, the MMSE is reduced with the increase in the number of pilot symbols, P.

Pilot design for time-frequency selective channels

The primary challenge encountered within the context of the IoV pertains to the time-frequency selective fading phenomenon, which is predominantly instigated by the different angles of vehicular entities. In a MIMO system characterized by (M, N) dimensions and operating across doubly selective channels, the impulse response characterizing the time-varying channel from the mth transmitting antenna to the nth receiving antenna is formally denoted as h_{nm}(t; τ). In this representation, the parameter τ assumes values within [0, τ_max], where τ_max signifies the upper bound for the delay spread arising from multipath propagation effects. Furthermore, the indices m and n are employed to distinguish between the transmit antennas, taking values from the set m ∈ {1, 2, …, M}, and receive antennas, ranging over n ∈ {1, 2, …, N}, respectively.

With a given sampling period denoted as T_s, an OFDM system comprises Q subcarriers, each exhibiting uniform frequency spacing, defined as Δf = 1/(QT_s). For effective doubly selective channel estimation, it becomes imperative to capture the variations across the Q frequency bases, all while accommodating distinct paths in the time domain.

Consequently, the impulse response h_{nm}(t; τ) can be effectively represented in discrete time as h_{nm}(k; l), where the continuous-time parameters are discretized as t = kT_s and τ = lT_s, with k = 1, 2, 3, … ranging from 1 onwards, and l taking values within the range 0 ≤ l ≤ L, where L denotes the number of distinct paths in the time domain.

The structural configuration of a transmitted block originating from an arbitrary transmit antenna, denoted as m ∈ {1, 2, …, M}, designed for the purpose of doubly selective channel estimation within a point-to-point Multiple-Input Multiple-Output (MIMO) system, is schematically illustrated in Fig. 1. Furthermore, the VEs communicating with the BS at different angles are shown in Fig. 2. In scenarios where the transmitter remains stationary while the receiver is a moving vehicle with a velocity of v, each block comprises a total of P = ⌊f_D QT_s⌋ + 1 sub-blocks. Here, f_D is the Doppler spread, and it can be expressed as

f_D = (v f_c/c) cos θ,   (5)

where f_c signifies the central frequency of the carrier, the constant c is the speed of light, equal to 3 × 10⁸ meters per second, v is the constant speed of the user, and θ is the angle of the user with respect to the base station.

Furthermore, the length of an individual sub-block is equal to the coherence time associated with the channel. This coherence time is determined by

τ_c = 1/f_D = c/(v f_c cos θ).

Since the Doppler spread f_D is a monotonically decreasing function of the parameter θ, the coherence time increases with the angle. The sketch below illustrates this angle dependence numerically.
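The numerical values below (speed, carrier frequency, sampling period, subcarrier count) are illustrative assumptions chosen to make the angle dependence visible.

```python
import numpy as np

# Sketch of the angle-dependent Doppler spread, coherence time and
# sub-block count: f_D = v*f_c*cos(theta)/c, tau_c = 1/f_D and
# P = floor(f_D*Q*T_s) + 1.  All numerical values are illustrative.
c = 3e8                  # speed of light [m/s]
f_c = 2.4e9              # carrier frequency [Hz]
v = 50.0                 # vehicle speed [m/s]
T_s = 36.8e-6            # sampling period [s]
Q = 1024                 # number of OFDM subcarriers

for theta_deg in (5, 30, 60, 85):
    f_D = v*f_c*np.cos(np.deg2rad(theta_deg))/c       # Doppler spread [Hz]
    tau_c = 1.0/f_D                                   # coherence time [s]
    P = int(np.floor(f_D*Q*T_s)) + 1                  # sub-blocks per block
    print(f"theta={theta_deg:2d} deg: f_D={f_D:7.1f} Hz, "
          f"tau_c={tau_c*1e3:6.2f} ms, P={P}")
```

The output shows the trend used throughout the paper: small angles give a large Doppler spread, a short coherence time and hence many pilot-bearing sub-blocks, while large angles require far fewer pilots.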
For the sake of clarity and without loss of generality, let us consider the qth subcarrier, with q ∈ {0, 1, …, Q − 1}. A sub-block associated with a specific transmit antenna comprises ⌊τ_c/T_s⌋ symbols: a solitary pilot symbol, 2L zeros, and ⌊τ_c/T_s⌋ − (2L + 1) unknown information symbols.

More specifically, within the pth sub-block, denoted by p ∈ {1, 2, …, P}, the information symbols are encapsulated within a 1 × (⌊τ_c/T_s⌋ − (2L + 1)) vector denoted as s_p^{(q)}. The pilot symbol, represented as b, is flanked by two zero vectors, each of size 1 × L. The purpose of the L zeros preceding the pilot symbol is to mitigate inter-symbol interference (ISI) that may affect it, while the zeros following the pilot symbol are intended to prevent ISI emanating from it.

When considering an arbitrary block for the purpose of estimating the channel state information (CSI) from the mth transmit antenna to the nth receive antenna within the pth sub-block, specifically over the qth subcarrier, the received pilot symbol can be formally represented as

y_b(q; p) = h_{nm}(q; p) b + w_b(q; p),

where y_b(q; p) and w_b(q; p) are the observed pilot symbol and the received AWGN, respectively, and h_{nm}(q; p) denotes the channel response on the qth subcarrier during the pth sub-block. This expression serves as a pivotal component in the process of CSI estimation. Using MMSE estimation, the estimated CSI is obtained by

ĥ_{nm}(q; p) = y_b(q; p) b* (|b|² + σ²_w)⁻¹.

In an (M, N) MIMO system, each sub-block on a specific subcarrier across all transmit antennas contains a total of M⌊τ_c/T_s⌋ symbols. These symbols encompass M pilot symbols, 2ML zeros, and M(⌊τ_c/T_s⌋ − 1 − 2L) unknown information symbols, where L is the number of distinct paths within the time domain. Consequently, the overhead factor, which encompasses both pilot symbols and zero padding, is quantified as (2L + 1)/⌊τ_c/T_s⌋ in the context of pilot design tailored for point-to-point doubly selective channels.

New dynamic pilot design

Let us examine the multicast service as depicted in Fig. 3, where the OFDM symbol is shared among different VEs (the figure shows the structure of a transmitted block from an arbitrary transmit antenna with a shared sub-block). In this scenario, there are a total of U vehicles within the IoV, each characterized by a different angle. These vehicles are organized in a specific order based on their angles, such that VE1, VE2, and so forth, up to VEU, correspond to angles ordered as θ_1 ≤ θ_2 ≤ ⋯ ≤ θ_U. Here, θ_u represents the angle of VE u, where u spans the range from 1 to U. At the base station, there are a total of M transmit antennas. For each individual VE, the doubly selective channel estimation necessitates the ability to capture variations across Q frequency bases within the OFDM framework.

As indicated in Eq. (5), the dimension of a sub-block within the multicast service is established by the Doppler spread induced by the different angles of the VEs. The Doppler spread pertaining to the channel of the uth VE, denoted as f_u, can be computed using the formula

f_u = (v f_c/c) cos θ_u.

In this equation, f_c represents the central frequency of the carrier employed in the multicast service, and the parameter u takes on values ranging from 1 to U. Evidently, the coherence times associated with the channels of all U VEs within the multicast service follow the ordered sequence τ_1 ≤ τ_2 ≤ ⋯ ≤ τ_U. Here, τ_u = 1/f_u signifies the coherence time of the channel pertaining to the uth VE. It is noteworthy that as the angle θ_u of VE u increases, the coherence time τ_u increases. This relationship holds true for all u ∈ {1, 2, 3, …, U}.
One straightforward approach for designing pilot symbols for these VEs is to allocate pilot symbols individually to each of them, taking into account the coherence time of each point-to-point channel. In this manner, the overhead factor, encompassing both pilots and zero padding, within the context of doubly selective channel estimation, is determined as

η_con = Σ_{u=1}^{U} (2L + 1)/⌊τ_u/T_s⌋,   (9)

where η_con represents the overhead factor in the conventional pilot design for IoV multicast.

In order to mitigate this overhead, we introduce a dynamic pilot design within the IoV multicast framework. In this approach, common pilot symbols are shared among all U VEs, and the length of a multicast sub-block is standardized to τ_U, which corresponds to the longest coherence time observed among all VEs within the multicast group. Consequently, this dynamic pilot design results in an overhead factor, incorporating both pilot symbols and zero padding, for the doubly selective channel estimation given by

η_dyn = (2L + 1)/⌊τ_U/T_s⌋,   (10)

where η_dyn denotes the overhead factor in our dynamic pilot design for the IoV multicast.

Performance evaluation

In order to assess the performance and resource utilization of our dynamic pilot design, we employ spectral efficiency as the key metric. It serves to investigate the achievable data rate and effective energy consumption of our proposed approach.

To validate our analysis and design, we conducted simulations involving multicast MIMO-OFDM transmissions within a sub-block for an IoV system. The simulation parameters include: a total number of symbols N = 62; L = 3 paths for each channel; a carrier frequency of f_c = 2.4 GHz; a sampling period of T_s = 36.8 μs; the lowest angle, θ_1, set to 0; the highest angle, θ_U, set to 45°; and a total of U = 10 VEs. These numerical values serve as the basis for simulating and evaluating pilot overhead, spectral efficiency, and energy efficiency in our analysis and design.

To analyze the performance of the multicast dynamic pilot design, we define the spectral efficiency of the IoV system. Based on [26], the channel throughput for the multicast pilot design in a MIMO-OFDM IoV system is

R = (1 − η) (1/N_c) Σ_{q=1}^{N_c} log₂ det( I_N + (1/(M σ²_ω)) H_q H_q^H ),   (11)

where η is the pilot overhead and N_c is the total number of OFDM subcarriers in the system. The channel matrix H_q is complex, H_q ∈ C^{N×M}, and σ²_ω is the i.i.d. AWGN power. Considering the equal-power equalization method for the power calculation of the transmitted data in the dynamic pilot design, the channel spectral efficiency R is given by

R_dyn = (1 − η_dyn) (1/N_c) Σ_{q=1}^{N_c} log₂ det( I_N + (1/(M σ²_ω)) H_q H_q^H ),   (12)

where η_dyn is the pilot overhead of the multicast dynamic pilot design for the doubly selective channel in the IoV and M is the number of transmit antennas. Similarly, for the conventional system the throughput is given by

R_con = (1 − η_con) (1/N_c) Σ_{q=1}^{N_c} log₂ det( I_N + (1/(M σ²_ω)) H_q H_q^H ),   (13)

where η_con is the pilot overhead of the conventional system. Let the total number of subcarriers be N_c = 6 and the largest angle θ_U = 90°, and calculate the coherence times of the corresponding VEs based on θ_u = θ_U − 10(U − u), u ∈ {1, 2, …, U}. For this setting, with L = 4 and T_s = 12.8 μs, we substituted the pilot overheads from Eqs. (9) and (10) into Eqs. (12) and (13) to obtain the simulation results for spectral efficiency. In Fig. 4, the spectral efficiency of the system is simulated at an SNR of 20 dB for three different scenarios, U = 2, 5, 10 VEs; the spectral efficiencies of Eqs. (12) and (13) are plotted. A numerical sketch of this comparison is given below.
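The sketch follows the reading of Eqs. (9), (10), (12) and (13) given above; the antenna counts, vehicle speed and channel realization are illustrative assumptions rather than the paper's exact setup.

```python
import numpy as np

# Sketch of the overhead/spectral-efficiency comparison between the
# conventional (per-VE) and the dynamic (shared) pilot designs.
rng = np.random.default_rng(1)
c, f_c, v = 3e8, 2.4e9, 50.0          # speed of light, carrier, vehicle speed
T_s, L, M, N = 12.8e-6, 4, 4, 4       # sampling period, paths, antennas
snr = 10**(20/10)                     # 20 dB

def overheads(U, theta_max=90.0):
    thetas = np.array([theta_max - 10*(U - u) for u in range(1, U + 1)])
    f_u = v*f_c*np.cos(np.deg2rad(np.clip(thetas, 0.0, 89.9)))/c
    tau = 1.0/f_u                     # coherence times, tau_1 <= ... <= tau_U
    eta_con = np.sum((2*L + 1)/np.floor(tau/T_s))    # per-VE pilots, Eq. (9)
    eta_dyn = (2*L + 1)/np.floor(tau[-1]/T_s)        # shared pilots, Eq. (10)
    return min(eta_con, 1.0), min(eta_dyn, 1.0)

# One illustrative flat channel draw (a single subcarrier, N_c = 1).
H = (rng.standard_normal((N, M)) + 1j*rng.standard_normal((N, M)))/np.sqrt(2)
cap = np.log2(np.linalg.det(np.eye(N) + (snr/M)*H @ H.conj().T).real)

for U in (2, 5, 10):
    eta_con, eta_dyn = overheads(U)
    print(f"U={U:2d}: R_con={(1 - eta_con)*cap:6.2f}, "
          f"R_dyn={(1 - eta_dyn)*cap:6.2f} bit/s/Hz")
```

Because η_con grows with the number of VEs while η_dyn stays fixed, the gap between R_dyn and R_con widens as U increases, matching the trend reported in Figs. 4 and 5.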
It can be seen that, as the SNR increases, the spectral efficiency of the proposed method outperforms that of the conventional method, because in the proposed method no extra pilot symbols are inserted. In Fig. 5, the spectral efficiency is plotted versus the number of VEs. The spectral efficiency of the multicast dynamic pilot design is plotted for SNR = 0 dB, 10 dB and 20 dB and compared with the spectral efficiency of the conventional pilot design. It can be observed that as the number of VEs increases, the spectral efficiency of the dynamic pilot design significantly surpasses that of the conventional system. This superior performance is attributed to the proposed method's efficient sharing of a common pilot among the VEs. Unlike conventional systems, which require the transmission of additional pilot symbols as the number of VEs increases, the dynamic pilot design avoids this necessity. By maintaining the same set of pilot symbols for multiple VEs, the proposed method reduces the overhead associated with pilot transmission, thereby enhancing overall spectral efficiency. This efficient utilization of pilots not only conserves bandwidth but also improves the system's capacity to handle a larger number of VEs without compromising performance.

Conclusion

In conclusion, this study emphasizes the paramount importance of precise Channel State Information (CSI) for the optimization of wireless communication systems. It addresses a challenge posed by varying user-to-base-station angles, where the angle-dependent coherence time affects conventional pilot strategies, leading to an increased pilot overhead. To address this challenge, the research introduces an innovative sub-block design approach tailored for systems with diverse user angles. This approach harmonizes the coherence times of users with both high and low angles while maintaining a consistent pilot count. As a result, this method enhances both spectral efficiency and the accuracy of channel estimation, as evidenced by simulations. This breakthrough contributes to the advancement of channel estimation techniques in scenarios with angle-dependent coherence time, offering practical benefits to wireless communication systems.

Figure 1. The structure of a transmitted block from an arbitrary transmit antenna.
Figure 4. The efficiency of the system for different numbers of users.
Figure 5. The efficiency of the system for different SNR.
An operational approach to quantum stochastic thermodynamics

We set up a framework for quantum stochastic thermodynamics based solely on experimentally controllable, but otherwise arbitrary, interventions at discrete times. Using standard assumptions about the system-bath dynamics and insights from the repeated interaction framework, we define internal energy, heat, work and entropy at the trajectory level. The validity of the first law (at the trajectory level) and the second law (on average) is established. The theory naturally allows the treatment of incomplete information and it is able to smoothly interpolate between a trajectory-based and an ensemble-level description. To demonstrate the strength of our theory, we compute the thermodynamic efficiency of recent experiments reporting on the stabilization of photon number states using real-time quantum feedback control. Special attention is also paid to limiting cases of our general theory, where we recover or contrast it with the previous literature. We point out various interesting problems which the theory is able to address rigorously, such as the detection of quantum effects in quantum thermodynamics.

A. Brief historical perspective

Many small-scale systems of current interest can be modeled by master, Fokker-Planck or Langevin equations, whose microscopic origin can be classical or quantum in nature. Fundamental as well as practical insights can be obtained by studying their thermodynamic behaviour out of equilibrium, which has been well established for decades if we are interested only in the averaged quantities of internal energy, heat, work or entropy [1-6]. For classical systems it became clear during the past 25 years that fluctuations in thermodynamic quantities also bear important information and that those fluctuations are constrained by fundamental symmetry relations valid arbitrarily far from equilibrium. These symmetry relations are known as fluctuation theorems [7,8]. For a given realization of a stochastic process, an understanding of the fluctuation theorem required extending the ensemble-averaged energetic [9,10] and entropic [11] description to the level of single stochastic trajectories. The resulting theoretical framework is called stochastic thermodynamics [12,13].

Quantum stochastic thermodynamics tries to generalize classical stochastic thermodynamics to systems whose quantum nature cannot be neglected. Obviously, the very definition of a trajectory-dependent quantity is nontrivial, as any measurement disturbs the system and the meaning of a 'trajectory' is a priori not clear. We note that incomplete and disturbing measurements are also prevalent in classical systems [14], but exploring their consequences for classical stochastic thermodynamics has raised relatively little attention so far [15-20].

Soon after the discovery of classical fluctuation theorems, much effort was devoted to deriving fluctuation theorems for quantum systems. A theoretically successful strategy is the two-point measurement approach [21,22]. It requires measuring the energy and particle number of the system and the bath at the beginning and at the end of the thermodynamic process. Obviously, for a bath with its 10²³ degrees of freedom, such a scheme is not practically feasible even for a classical system. In addition, the resulting statistics for internal energy and work cannot fulfill an averaged first law if the initial state is not diagonal in the energy eigenbasis [23].
Nevertheless, within this approach quantum fluctuation theorems can be derived which are formally identical to their classical counterparts. Thus, by measuring the whole universe (system plus bath), the two-point measurement approach circumvents the need to define thermodynamic quantities along a specific system trajectory. Alternative approaches based on a single projective measurement [24,25] or no measurement at all [26,27] have also been put forward. To conclude, even though those approaches are theoretically powerful, they are experimentally hard to confirm and do not constitute a complete quantum counterpart of classical stochastic thermodynamics: trajectory-dependent internal energies or entropies are not defined and the influence of measurements performed on the system is not taken into account.

Exceptions to the above case are quantum systems whose dynamics can be described by a classical rate master equation in the energy eigenbasis. Provided that the system is observed in the energy eigenbasis without disturbing it, the framework of classical stochastic thermodynamics can be carried over one by one to the quantum situation. This is, for instance, possible in electronic nanostructures made out of quantum dots in the so-called sequential tunneling regime [28-31]. Trying to adapt the standard definitions to more general quantum dynamics formulated by a rate master equation in a time-dependent basis results in definitions for thermodynamic quantities which are different from the conventional ones [32], and differences persist even in the semiclassical limit [33-35]. This further demonstrates the need for a radically different approach to quantum stochastic thermodynamics.

One such approach makes use of the framework of repeated interactions [36-38]. Therein, the role of a static bath is replaced by an external stream of ancilla systems, which are put into contact with the system one by one and are designed to simulate a thermal bath. If the external systems are projectively measured before and after the interaction, a trajectory-based formulation of quantum thermodynamics becomes possible, similar to classical stochastic thermodynamics. Although such a description yields theoretical insights, in experimental reality a system is usually also in permanent contact with a bath.

An experimentally closer approach uses a theoretical technique which was discovered in parallel to the first fluctuation theorems in a different field of research, quantum optics, in order to describe the stochastic evolution of a quantum system based on a particular measurement record [39-41]. Given a particular measurement scheme, the dynamics of the system can be 'unraveled' by describing it in terms of a stochastic Schrödinger or master equation. Combined with this dynamical description, researchers recently applied the ideas of stochastic thermodynamics to such quantum systems [42-47]; a completely general picture is, however, still missing. For instance, entropy and entropy production along a single stochastic trajectory have not yet been defined (specific fluctuation theorems were studied in Refs. [44,46,47], which also give rise to a notion of entropy production; we will come back to this later on).
Furthermore, we are presently still far away from understanding the most general quantum measurement schemes, as the above publications focused only on efficient measurements in which the state of the system along a particular trajectory is always pure (an exception is Ref. [43], which, however, studies only weak measurements). Finally, only simple protocols excluding feedback control have been studied so far (Refs. [43,44] consider also very simple feedback schemes for specific systems).

Here, we propose an approach to quantum stochastic thermodynamics which we call operational quantum stochastic thermodynamics. It places the experimenter in the foreground by defining a 'stochastic trajectory', and the corresponding thermodynamic quantities internal energy, heat, work and entropy along such a trajectory, solely based on experimentally meaningful interventions or control operations of the system dynamics. A specific unravelling scheme as required in previous approaches [42-47] is not necessary, but can be treated as well. In addition, following the credo 'information is physical' [48], we depart from standard stochastic thermodynamics by explicitly taking into account the memory of the experimenter. The benefit of this approach is that we can treat arbitrary control operations on the system: this includes not only generalized measurements, but also unitary kicks, state preparations, noise addition, and all kinds of feedback control (even if it is time-delayed). Mathematically, the only requirement is that the interventions are modeled by a completely positive (CP) map acting instantaneously on the system. This is physically necessary: a CP map describes the most general quantum operation and the requirement of an instantaneous action ensures that the experimenter has complete control over the control operation.

From a dynamical point of view, our system is described by a recently developed tool known as the process tensor [49-53] (see also Refs. [54-57] for earlier work in that direction). From a thermodynamic point of view, we will see that the framework of repeated interactions [58] helps us in finding an unambiguous interpretation of the work and heat injected during the control operations. In the rest of this section we will fix the notation and give an outline of the paper together with a summary of its most important results.

B. Notation

We here summarize the notation used most frequently in the main text. Furthermore, because we believe that the present framework will also be useful to treat classical systems, we provide a 'dictionary' for classical physicists at the end of this section. The state of some physical system X is described by a density operator ρ_X, or ρ_X(t) if we want to make the time t explicit. The corresponding Hilbert space of the system is denoted by H_X and the Hamiltonian by H_X, which, in case it depends on an externally controlled time-dependent parameter λ_t, is also denoted by H_X(λ_t). Furthermore, a few information-theoretic concepts will be very helpful. The von Neumann entropy of an arbitrary state ρ_X is defined as S_vN(ρ_X) ≡ −tr_X{ρ_X ln ρ_X}. To characterize the correlations of a bipartite system XY in state ρ_XY, we use the always positive mutual information I_{X:Y} ≡ S_vN(ρ_X) + S_vN(ρ_Y) − S_vN(ρ_XY). It is closely related to the always positive relative entropy D[ρ‖σ] ≡ tr{ρ(ln ρ − ln σ)} by noting that I_{X:Y} = D[ρ_XY‖ρ_X ⊗ ρ_Y], where ρ_{X/Y} ≡ tr_{Y/X}{ρ_XY} denotes the corresponding marginal state. These quantities are illustrated numerically below.
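The sketch below evaluates S_vN, the mutual information and the relative entropy for an illustrative two-qubit state and checks the identity I_{X:Y} = D[ρ_XY‖ρ_X ⊗ ρ_Y]; the chosen state is an assumption for demonstration only.

```python
import numpy as np
from scipy.linalg import logm

def S_vN(rho):
    """von Neumann entropy S(rho) = -tr(rho ln rho), in nats."""
    p = np.linalg.eigvalsh(rho)
    p = p[p > 1e-12]
    return float(-np.sum(p*np.log(p)))

def rel_entropy(rho, sigma):
    """Relative entropy D[rho||sigma] = tr{rho(ln rho - ln sigma)}."""
    return float(np.real(np.trace(rho @ (logm(rho) - logm(sigma)))))

def partial_trace(rho, keep):          # keep=0 -> rho_X, keep=1 -> rho_Y
    r = rho.reshape(2, 2, 2, 2)        # indices (x, y, x', y')
    return np.trace(r, axis1=1, axis2=3) if keep == 0 else np.trace(r, axis1=0, axis2=2)

# Illustrative correlated state: a noisy Bell state.
bell = np.zeros((4, 4)); bell[0, 0] = bell[0, 3] = bell[3, 0] = bell[3, 3] = 0.5
rho_XY = 0.8*bell + 0.2*np.eye(4)/4

rho_X, rho_Y = partial_trace(rho_XY, 0), partial_trace(rho_XY, 1)
I_XY = S_vN(rho_X) + S_vN(rho_Y) - S_vN(rho_XY)
print("I_{X:Y}                      =", I_XY)
print("D[rho_XY || rho_X (x) rho_Y] =", rel_entropy(rho_XY, np.kron(rho_X, rho_Y)))
# Both lines print the same number, confirming the identity.
```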
Furthermore, we denote superoperators, which map operators onto operators, by calligraphic letters, e.g., U, V, P, etc. Below, we will see that a stochastic trajectory is specified by a sequence of measurement results or outcomes r_n, …, r_1, which were obtained at times t_n ≥ ⋯ ≥ t_1. The sequence of outcomes will be denoted by r_n ≡ (r_n, …, r_1). The state of a system X at time t ≥ t_n conditioned on such a sequence will be denoted by ρ_X(t, r_n). The ensemble-averaged state is given by ρ_X(t) = Σ_{r_n} p(r_n) ρ_X(t, r_n), where p(r_n) denotes the probability of obtaining the sequence of outcomes r_n. We will also keep this notation for thermodynamic quantities such as internal energy E, heat Q, work W and entropy S (which possibly carry additional sub- and superscripts). This means, for instance, that the stochastic internal energy depending on the outcomes r_n is denoted by E(t, r_n), whereas the ensemble-averaged internal energy is E(t) = Σ_{r_n} p(r_n) E(t, r_n).

Classically, the state of the system X is not described by a density operator but by a probability vector p_X(t) (we only consider finite-dimensional systems in this paper). Superoperators U, V, P become simple matrices U, V, P acting on p_X(t). Especially, a CP map A becomes a subnormalized stochastic matrix A, i.e., a matrix which fulfills A_{xy} ≥ 0 and Σ_x A_{xy} ≤ 1. A completely positive and trace-preserving (CPTP) map A becomes an ordinary stochastic matrix A, which fulfills Σ_x A_{xy} = 1. A special subset are permutation matrices, which are the classical analogue of unitary operations in quantum mechanics. Furthermore, the concept of the von Neumann entropy naturally translates into the Shannon entropy, denoted by S_Sh[p(x)] ≡ −Σ_x p(x) ln p(x). Related concepts such as mutual information and relative entropy also have a natural translation in the classical domain.

C. Summary and Outline

We start in Sec. II by briefly reviewing the basics of the process tensor, which sets the stage for understanding the dynamics of a general quantum dynamical process interrupted by experimentally controlled operations. Sec. III then explains how the process tensor fits into the picture of repeated interactions, which helps us to understand the thermodynamics of the process. Sec. IV is the central section of this paper. After stating our assumptions in Sec. IV A, we consider the stochastic energetics of an open quantum system based on arbitrary control operations in Sec. IV B. Based on the definition of the stochastic internal energy [Eq. (23)] we will derive a first law valid at the trajectory level. To understand the energetic impact of the control operation, we will introduce a work-like and a heat-like contribution to it [Eqs. (28) and (29)]. In Sec. IV C we then proceed with the entropic considerations. Based on our definition of entropy along a single trajectory [Eq. (36)], we are able to show the validity of the second law of thermodynamics on average. To conclude the first part of this paper, we have extended classical stochastic thermodynamics to weakly coupled open quantum systems by allowing for arbitrary interventions and feedback control protocols happening at discrete times. Even classically, the thermodynamic impact of, e.g., imprecise or disturbing measurements or arbitrary time-delayed feedback control protocols has not been understood so far. The present paper therefore provides a framework to understand a vast variety of systems from a stochastic thermodynamics point of view. Whereas Secs.
II, III and IV should be read together, the remaining part of the paper can be accessed on demand. Sec. V is devoted to understanding our operational framework in specific limiting cases and Sec. VI discusses the (im)possibility of alternative approaches and interesting future work. More specifically: in Sec. V we show how the standard framework of repeated interactions [58] arises in our context (Sec. V A), how the averaged description of quantum thermodynamics [3-6] is contained in our approach (Sec. V B), under which conditions a simplified thermodynamic framework arises (Sec. V C), how far our framework differs from classical stochastic thermodynamics (Sec. V D), how the two-point measurement approach fits in our language (Sec. V E), and how our framework compares with the one put forward in Ref. [44], where we especially debate the meaning of the 'quantum heat' introduced therein (Sec. V F). Sec. VI is devoted to a discussion of why it is necessary to use repeated interactions to obtain an unambiguous thermodynamic framework (Sec. VI A), how far it is possible to formulate our operational approach without the need for any theory input (Sec. VI B), why we have not used a time-reversed process to deduce a fluctuation theorem and define an entropy production (Sec. VI C), whether it is possible to consider the limit of continuous weak measurements (Sec. VI D), why our framework is useful to understand time-delayed feedback control (Sec. VI E), how far it is possible to include multiple heat reservoirs in our description (Sec. VI F), and whether it is possible to extend our framework to the strong-coupling and non-Markovian situation (Sec. VI G).

II. THE PROCESS TENSOR

The process tensor is a tool to describe arbitrary dynamics of an open quantum system which can be accessed by an experimental physicist due to arbitrary control operations performed on the system [49-53]. It is the extension of 'quantum superchannels' [54,57] to multiple control operations and it is closely related to the very general 'quantum comb' framework studied in Refs. [55,56]. Here, the terminology 'control operation' is used in a wide sense and could describe any action of an external agent, such as measurements, unitary kicks, state preparations, noise addition, feedback control operations, etc. Mathematically, we only require that each control operation is described by a completely positive (CP) map. The basic insight behind the process tensor is to treat those operations as inputs to the quantum stochastic process and not the state of the system itself, because the latter can in general not be fully controlled. Notice that also classically one needs to modify the theory of stochastic processes as soon as active interventions are allowed (one then usually talks about 'causal models' [59]).

We now briefly review the basics of the process tensor, making use of the framework of quantum operations and quantum measurement theory; see Refs. [60-63] for introductory texts. As usual, we consider a system S coupled to a bath B described by an arbitrary initial system-bath state ρ_SB(t_0). The composite system-bath state evolves unitarily up to time t_1 ≥ t_0 according to the Liouville-von Neumann equation

d/dt ρ_SB(t) = −i[H_S(λ_t) + H_SB + H_B, ρ_SB(t)]   (ħ ≡ 1).   (1)

Here, the system Hamiltonian H_S might depend on some arbitrary time-dependent control protocol λ_t, but not the interaction Hamiltonian H_SB and the bath Hamiltonian H_B. The resulting unitary evolution is described by the superoperator

U_{1,0} ρ_SB ≡ U(t_1, t_0) ρ_SB U†(t_1, t_0),   U(t_1, t_0) = T_+ exp[ −i ∫_{t_0}^{t_1} ds (H_S(λ_s) + H_SB + H_B) ],   (2)

with the time-ordering operator T_+. A numerical sketch of such a time-ordered propagator is given below.
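The sketch approximates the time-ordered exponential by a product of short-time exponentials for a driven qubit; the specific drive λ_t chosen below is an illustrative assumption.

```python
import numpy as np
from scipy.linalg import expm

# Time-ordered propagator U(t1, t0) = T+ exp[-i \int H(s) ds] for a driven
# qubit, approximated by a Trotter product of short-time exponentials
# (hbar = 1; the drive lambda_t is an illustrative assumption).
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def H(t):
    lam = np.sin(2*np.pi*t)            # external control protocol lambda_t
    return 0.5*sz + lam*sx

def propagator(t0, t1, steps=2000):
    dt = (t1 - t0)/steps
    U = np.eye(2, dtype=complex)
    for k in range(steps):
        t = t0 + (k + 0.5)*dt          # midpoint rule on each time slice
        U = expm(-1j*H(t)*dt) @ U      # later times act from the left (T+)
    return U

U10 = propagator(0.0, 1.0)
print("unitarity check:", np.allclose(U10.conj().T @ U10, np.eye(2)))
```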
Then, at time t_1 we interrupt the evolution by a CP operation A(r_1), which only acts on the system and yields 'outcome' r_1 (for instance, the result of a projective measurement). Mathematically, we write the operation as

ρ̃_SB(t_1^+, r_1) = [A(r_1) ⊗ I_B] ρ_SB(t_1^−).   (3)

Here, t_1^± = lim_{ε→0}(t_1 ± ε) denotes a time shortly after or before t_1 and I_B denotes the identity superoperator acting on B. Note that we assume the control operation to happen instantaneously. This does not only simplify the subsequent treatment, but it also ensures that the experimenter has complete control over the operation: if the control operation took longer, it would also affect the bath and a clear separation of the dynamics into a dynamics induced by the bath or by the external agent would become impossible. The final state of knowledge after the operation, ρ̃_SB(t_1^+, r_1), can explicitly depend on the outcome r_1. Since A(r_1) is CP, it admits an operator-sum (Kraus) representation of the form

A(r_1) ρ_S = Σ_α A_α(r_1) ρ_S A_α†(r_1),   (4)

but we do not require it to be trace-preserving (TP). For this reason we have used a tilde in Eq. (3) to emphasize that the state is not normalized. The probability to observe outcome r_1 at time t_1 is

p(r_1) = tr_SB{[A(r_1) ⊗ I_B] ρ_SB(t_1^−)}.   (5)

Then, the normalized state after the control operation at time t_1 becomes

ρ_SB(t_1^+, r_1) = [A(r_1) ⊗ I_B] ρ_SB(t_1^−)/p(r_1).   (6)

Notice that the map A(r_1)/p(r_1) is CPTP, but non-linear in the state ρ_S(t_1^−). It is the quantum analogue of Bayes' rule. The average state is accordingly

ρ_SB(t_1^+) = Σ_{r_1} p(r_1) ρ_SB(t_1^+, r_1) = Σ_{r_1} [A(r_1) ⊗ I_B] ρ_SB(t_1^−).   (7)

This would also correspond to our state of knowledge if we ignore the outcome r_1. Notice that the average control operation Σ_{r_1} A(r_1) is now a CPTP map and can be written as

Σ_{r_1} A(r_1) ρ_S = Σ_{r_1, α} A_α(r_1) ρ_S A_α†(r_1),   (8)

with Σ_{r_1, α} A_α†(r_1) A_α(r_1) = 1_S. A minimal numerical illustration of these steps is given below.

We then iterate the above procedure by letting the joint system-bath state evolve unitarily up to time t_2 ≥ t_1, but this time the unitary operation is allowed to depend on r_1 by changing the control protocol of the system Hamiltonian H_S(λ_t, r_1). This actually corresponds to the simplest form of measurement-based quantum feedback control [62]. Then, at time t_2 we subject the system to another CP control operation A(r_2|r_1), which is also allowed to depend on r_1 and which gives outcome r_2. We can re-iterate the above procedure by letting the external agent interrupt the unitary system-bath evolution at times t_n ≥ t_{n−1} ≥ ⋯ ≥ t_1. Let us denote by t an arbitrary time after the n'th but before the (n + 1)'th control operation, i.e., t_{n+1} > t > t_n. The unnormalized state of the system conditioned on the sequence of outcomes r_n at such a time t is then given by

ρ̃_S(t, r_n) = T[A(r_n|r_{n−1}), …, A(r_1)]   (10)
            ≡ tr_B{U_{t,n}(r_n) A(r_n|r_{n−1}) ⋯ U_{2,1}(r_1) A(r_1) U_{1,0} ρ_SB(t_0)}.

Here, we have introduced the process tensor T. Its variable inputs are the set of control operations {A(r_i|r_{i−1})}_{i=1}^{n}, but not the initial state of the system, the bath or the composite. The trace of the process tensor gives the probability to observe the sequence of outcomes r_n,

p(r_n) = tr_S{T[A(r_n|r_{n−1}), …, A(r_1)]},   (11)

such that the normalized state of the system can be written as

ρ_S(t, r_n) = ρ̃_S(t, r_n)/p(r_n).   (12)

The process tensor describes the complete quantum stochastic process on the level of experimentally meaningful but arbitrary interventions and it has a number of desirable properties [49-53]. First of all, it depends multi-linearly on the set of control operations and thus deserves the name 'tensor'. As each control operation is a superoperator (a CP map), the process tensor can be thought of as a super-superoperator.
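The single-intervention formulas can be made concrete for a qubit. The Kraus operators below realize an assumed unsharp σ_z measurement; they are illustrative, not tied to any specific experiment.

```python
import numpy as np

# Sketch of a single control operation A(r) on a qubit, following the
# structure of Eqs. (3)-(7): unnormalized post-intervention state, outcome
# probability p(r) and the normalized state (quantum Bayes rule).
theta = 0.3
K = {0: np.diag([np.cos(theta), np.sin(theta)]).astype(complex),
     1: np.diag([np.sin(theta), np.cos(theta)]).astype(complex)}
# Completeness: sum_r K_r^dagger K_r = 1.
assert np.allclose(sum(k.conj().T @ k for k in K.values()), np.eye(2))

# Pre-intervention system state rho_S(t1^-): an equal superposition.
psi = np.array([1.0, 1.0])/np.sqrt(2)
rho = np.outer(psi, psi.conj())

for r, A in K.items():
    rho_tilde = A @ rho @ A.conj().T          # unnormalized state, Eq. (3)
    p_r = np.trace(rho_tilde).real            # outcome probability, Eq. (5)
    rho_post = rho_tilde/p_r                  # normalized state, Eq. (6)
    print(f"r={r}: p(r)={p_r:.3f}, purity={np.trace(rho_post @ rho_post).real:.3f}")

# Ignoring the outcome gives the average (CPTP) state, Eq. (7):
rho_avg = sum(A @ rho @ A.conj().T for A in K.values())
print("trace of average state:", np.trace(rho_avg).real)
```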
Obviously, the tomographic reconstruction of the process tensor scales quite unfavourably: if the quantum system is d-dimensional, its density matrix has d² components and a superoperator has d⁴ entries. Thus, for n time steps, knowledge of the full process tensor requires sampling the effect of d^{4n} linearly independent control operations. Nevertheless, it was conjectured that in most cases the process tensor has an efficient representation in terms of a matrix product state of a many-body quantum system [50], and it can also be defined for an experimentally limited set of control operations [51]. We also remark that knowledge of the process tensor implies knowledge of the system state ρ_S(t_{n+1}^−, r_n) before each control operation, because an informationally complete measurement of the system state at time t_n is just one of the possible control operations. Therefore, after tomographic reconstruction of the process tensor, the states ρ_S(t_{n+1}^±, r_n) are known without uncertainty and do not require any additional theory input. Furthermore, the process tensor is also CP, meaning that for any ancilla system A

T[A_SA] ≥ 0,   (13)

where A_SA = [A(r_n|r_{n−1}) ⊗ I_A, …, A(r_1) ⊗ I_A]. It therefore preserves the positivity of the system state even in the presence of arbitrary initial system-bath or system-ancilla correlations, and it removes the dilemma of how to assign a CP map to this situation [50]. As the process tensor is CP, it also admits a generalized operator-sum (Kraus) decomposition [50]. We would like to point out that the process tensor cannot only describe arbitrary non-Markovian processes, but it also yields a general criterion to define quantum Markovianity [49]. This is related to the fact that it is uniquely connected to a generalization of Kolmogorov's extension theorem, which underpins the theory of classical stochastic processes without interventions [52]. Finally, the process tensor can also be 'unraveled' in terms of quantum trajectories [53].

III. PROCESS TENSOR FROM REPEATED INTERACTIONS

In practice, the control operations A(r_n|r_{n−1}) are implemented by letting the system interact for a short time with an externally prepared apparatus (e.g., a memory or detector). It is the interaction time and the initial state of the apparatus which can usually be well controlled experimentally. As we will see here, this insight naturally leads us to the framework of repeated interactions, in which we will model at least parts of the external apparatus explicitly. The motivation behind this framework is not merely a generalized thermodynamic description, which allows covering a larger class of applications [58]. It will also help us to formulate an unambiguous, generalized framework of stochastic thermodynamics.

FIG. 1. Sketch of the setup: the system dynamics is interrupted by control operations, which are triggered by the interaction with an external ancilla system called the unit U(n) (blue circles). Each control operation has an outcome r_n, which is recorded in a memory (e.g., a tape of bits), and future control operations are allowed to depend on previous outcomes. The memory for future outcomes is set to a standard state '0'. Also, the system Hamiltonian is allowed to depend on r_n (not depicted here).

Secs. V C and VI A discuss how far it is possible to get rid of the explicit description of the external apparatus. The main insight of this section rests on Stinespring's theorem [64], which states that any CPTP map A can be seen as the reduced dynamics of some unitary evolution in an extended space.
More precisely, we can always write

A ρ_S = tr_U{V (ρ_S ⊗ ρ_U) V†},   (14)

where we labeled the additional subsystem by U for 'unit', in view of the thermodynamic framework considered later on and in unison with Ref. [58]. The unit is in an initial state ρ_U and V denotes the unitary operator which acts jointly on SU. Furthermore, any non-trace-preserving CP map A(r) with outcome r can be modeled as

A(r) ρ_S = tr_U{P_U(r) V (ρ_S ⊗ ρ_U) V† P_U(r)},   (15)

where P_U(r) is an arbitrary (not necessarily rank-1) projector in H_U. The collection of projectors is supposed to fulfill the completeness relation Σ_r P_U(r) = 1_U. Notice that Eq. (14) can be recovered from Eq. (15) either by choosing P_U(r) = 1_U or by summing over r. In accordance with our previous superoperator notation, we introduce

V ρ_SU ≡ V ρ_SU V†,   P(r) ρ_SU ≡ P_U(r) ρ_SU P_U(r),   (16)

such that we can write Eq. (15) in the shorter form

A(r) ρ_S = tr_U{P(r) V (ρ_S ⊗ ρ_U)}.   (17)

A numerical check of this dilation is sketched below. The whole process tensor T[A(r_n|r_{n−1}), …, A(r_1)] can then be seen as describing the reduced dynamics of a system coupled to a stream of units, which interact sequentially at times t_n ≥ ⋯ ≥ t_1 with the system, see Fig. 1. This constitutes the framework of repeated interactions. Then, the unnormalized joint state of the system and all units which have interacted with the system up to time t (t_{n+1} > t > t_n), given the outcomes r_n, can be written as

ρ̃_{SU(n)}(t, r_n) = tr_B{ U_{t,n}(r_n) P_{U(n)}(r_n|r_{n−1}) V_{SU(n)}(r_{n−1}) ⋯ P_{U(1)}(r_1) V_{SU(1)} U_{1,0} [ρ_SB(t_0) ⊗ ρ_{U(1)} ⊗ ⋯ ⊗ ρ_{U(n)}(r_{n−1})] }.

Except for the unitary system-bath evolution superoperator U (where the subscripts denote time intervals), subscripts are used to denote the Hilbert space on which the respective (super-)operator acts. In this respect, the joint space of all n units is denoted by U(n). Notice that V_{SU(n)}(r_{n−1}) depends on all previous outcomes r_{n−1}, but due to causality it cannot depend on the n'th outcome r_n. The same holds true for the initial state ρ_{U(n)}(r_{n−1}) of the n'th unit, and also the chosen projection operator P_{U(n)}(r_n|r_{n−1}) can depend on r_{n−1}. Therefore, the external agent has all the freedom she needs to engineer a desired control operation A(r_n|r_{n−1}). By construction, after tracing out the units, we obtain the process tensor for the system, T[A(r_n|r_{n−1}), …, A(r_1)] = tr_{U(n)}{ρ̃_{SU(n)}(t, r_n)}. As it is in most situations obvious from the context which superoperator acts on which object living in which space, we will usually drop the subscripts S, U(n), … on superoperators.
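In the sketch below, the system-unit interaction (a partial SWAP) and the input states are illustrative assumptions; the code verifies that summing the measured branches of Eq. (15) recovers the trace-preserving map of Eq. (14).

```python
import numpy as np

# Numerical check of the repeated-interaction dilation, Eqs. (14)-(15):
# a system-unit unitary V followed by a projective measurement {P_U(r)}
# on the unit realizes the CP maps A(r).
theta = 0.4
swap = np.array([[1, 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1]], dtype=complex)
V = np.cos(theta)*np.eye(4) - 1j*np.sin(theta)*swap    # partial SWAP on S (x) U

rho_S = np.array([[0.7, 0.2], [0.2, 0.3]], dtype=complex)   # system state
rho_U = np.diag([1.0, 0.0]).astype(complex)                 # unit prepared in |0>
P = {0: np.diag([1.0, 0.0]), 1: np.diag([0.0, 1.0])}        # projectors on U

joint = V @ np.kron(rho_S, rho_U) @ V.conj().T
total = np.zeros((2, 2), dtype=complex)
for r, P_U in P.items():
    PU = np.kron(np.eye(2), P_U)
    post = PU @ joint @ PU                    # P(r) V (rho_S (x) rho_U) V† P(r)
    # Partial trace over the unit gives the unnormalized A(r) rho_S, Eq. (15):
    A_rho = post.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)
    print(f"r={r}: p(r) = {np.trace(A_rho).real:.3f}")
    total += A_rho

# Summing over r recovers the trace-preserving map of Eq. (14):
print("trace preserved:", np.isclose(np.trace(total).real, 1.0))
```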
The state functions internal energy and system entropy for an arbitrary system state ρ S (t) are defined as According to the first law, the change in system energy ∆E can be split into heat and work, ∆E Furthermore, the validity of the second law can be also derived and states that the entropy production is always positive: 4 where ∆S Our goal in the rest of this section is to find definitions of internal energy, work, heat and system entropy along a single trajectory, where a trajectory is defined by the observed sequence of outcomes r n . The sought-after definitions are required to be intuitively meaningful, to fulfill the first law at the trajectory level and the second law on average. Further appeal to our definitions will be added in Sec. V where we will consider various limiting cases. Note that, after tomoragphic reconstruction of the process tensor (see Sec. II), we know the conditional system states ρ S (t ± n ; r n ) only right before or right after the n'th control operation, but not in between for t n−1 < t < t n . To compute the work (20) or heat (21) in between two control operations, additional theoretical input is in general required, e.g., by solving the quantum master equation for the system or by other forms of inference. This is the only way to ensure that we recover the standard weak coupling framework of quantum thermodynamics in absence of any control operations (see Sec. V B). Nevertheless, as it increases the computational effort, we present in Sec. VI B possible ways to avoid any additional theory input. For definiteness, we aim at a stochastic thermodynamic description in the time interval (t n−1 , t n ] starting shortly after the (n − 1)'th control operation and ending shortly after the n'th control operation. The change in any state function X over the complete interval is denoted by ∆X (n] , whereas ∆X (n) denotes the change in (t n−1 , t n ) (excluding the n'th control operation). Changes in the respective time intervals of any quantity which is not a state function are denoted without a delta, i.e., X (n] or X (n) . B. Stochastic energy and first law To formulate the first law at the trajectory level correctly, we need to take into account the internal energy of the system and all units. Thus, we define the trajectory dependent internal energy where H U (i) is the Hamiltonian of unit i. Since the Hamiltonian is additive, it splits into its marginal contributions in the obvious way, Notice that it is always simple to get rid of the units in the energetic description by assuming that H U (i) ∼ 1 U (i) . However, already the energetic changes of the units can bear some interesting non-trivial features. For instance, it is not sufficient to consider only the actual n'th unit in the energetic balance: in our general theory the energy of previous units can change even though they are physically decoupled from the system. This phenomenon does not necessarily require quantum entanglement and simply occurs because our state of knowlegde about past units U (i < n) can change depending on the outcome r n (see below). In absence of any control operations, the first law simply follows from the preceeding subsection and reads because the marginal state of the units does not change and hence, ∆E U (i) = 0 for all i. Note that the work S (r n−1 ) depend on previous outcomes r n−1 for two reasons: first, the initial system state ρ S (t + n−1 ; r n−1 ) depends on it, and second, the Hamiltonian H(λ t , r n−1 ) can be a function of it in case we apply feedback control. 
The first law during the control operation at time t_n is more interesting, as the internal energy of both the system and the units can change. In total, the energetic cost δE^ctrl of the control operation is defined by

δE^ctrl(t_n, r_n) ≡ E(t_n^+, r_n) − E(t_n^−, r_{n−1}).   (26)

It is not a state function and can be split into a work-like and a heat-like contribution,

δE^ctrl(t_n, r_n) = W^ctrl(t_n, r_{n−1}) + Q^ctrl(t_n, r_n).   (27)

This splitting stems from the convention we used to implement the control operation A(r_n|r_{n−1}) in the repeated interaction framework: we first applied the unitary operation V(r_{n−1}) to the joint system-unit state and afterwards projectively measured the unit using the operation P(r_n). In general, we therefore use the definitions

W^ctrl(t_n, r_{n−1}) ≡ tr{ H(λ_n, r_{n−1}) [V(r_{n−1}) ρ_{SU(n)}(t_n^−, r_{n−1}) − ρ_{SU(n)}(t_n^−, r_{n−1})] },   (28)
Q^ctrl(t_n, r_n) ≡ tr{ H(λ_n, r_{n−1}) [ρ_{SU(n)}(t_n^+, r_n) − V(r_{n−1}) ρ_{SU(n)}(t_n^−, r_{n−1})] },   (29)

with λ_n ≡ λ_{t_n} and H(λ_n, r_{n−1}) ≡ H_S(λ_n, r_{n−1}) + Σ_i H_{U(i)} the total system-unit Hamiltonian. Notice that the work-like contribution does not depend on the actual measurement outcome r_n and corresponds to the energetic changes caused by a reversible (unitary) operation. The meaning of the heat injected during the control operation, Q^ctrl(t_n, r_n), will be discussed further below. We also remark that we take care to always use the normalized system-unit state and not the unnormalized one (which we have previously denoted by a tilde). In the following, we will usually suppress the time dependence in the notation for simplicity.

Both quantities have some additional important properties. First of all, both can be split additively into changes affecting the system or the units,

W^ctrl(r_{n−1}) = W_S^ctrl(r_{n−1}) + W_U^ctrl(r_{n−1}),   (30)
Q^ctrl(r_n) = Q_S^ctrl(r_n) + Σ_i Q_{U(i)}^ctrl(r_n).   (31)

Furthermore, if we use that the marginal state of the previous n − 1 units does not change during the unitary operation V(r_{n−1}), we can deduce that the work actually depends only on the energetic changes of the system and the n'th unit,

W^ctrl(r_{n−1}) = W_S^ctrl(r_{n−1}) + W_{U(n)}^ctrl(r_{n−1}).   (32)

Finally, we can deduce that the average heat injected into the system is always zero. Specifically,

Σ_{r_n} p(r_n|r_{n−1}) Q_S^ctrl(t_n, r_n) = 0.   (33)

Note that the last equation implies Q_S^ctrl(t_n) = Σ_{r_n} p(r_n) Q_S^ctrl(r_n) = 0. All other contributions Q^ctrl_{U(i)} are on average in general non-zero, even for i < n. A simple example for this behaviour is worked out in Appendix A; a numerical sketch follows below. It also appears to some extent reasonable to call Q^ctrl 'heat', because the emergence of a projector P(r_n) requires, in a microscopic picture, coupling the unit to some macroscopic and classical device, which allows the unit to lose information irreversibly due to dissipation and decoherence [68]. This last phenomenological step in quantum measurement theory is sometimes referred to as the 'Heisenberg cut' [62]. It necessarily entails a certain level of arbitrariness, because we do not explicitly model the microscopic interaction between the unit and the final classical environment. It therefore remains unclear how far any notion of temperature is associated with the heat Q^ctrl, and we will investigate this further in the next section. We also remark that a conceptually similar contribution was called 'quantum heat' in Ref. [44], and we will come back to this point in Sec. V F.

To conclude, adding the first laws with and without control operation together, we obtain for the changes over a complete interval

∆E^{(n]}(r_n) = W^{(n]}(r_{n−1}) + Q^{(n]}(r_n),   (34)

where we can split the work and heat into W^{(n]}(r_{n−1}) = W^ctrl(r_{n−1}) + W_S^{(n)}(r_{n−1}) and Q^{(n]}(r_n) = Q^ctrl(r_n) + Q_S^{(n)}(r_{n−1}). If we assume trivial Hamiltonians for the units (H_{U(i)} ∼ 1_U), we get a first law exclusively in terms of system quantities,

∆E_S^{(n]}(r_n) = W^{(n]}(r_{n−1}) + Q^{(n]}(r_n),   (35)

with W^{(n]}(r_{n−1}) = W_S^ctrl(r_{n−1}) + W_S^{(n)}(r_{n−1}) and Q^{(n]}(r_n) = Q_S^ctrl(r_n) + Q_S^{(n)}(r_{n−1}). For the entropic balance, it will in general not be that simple.
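The control-operation energetics can be made concrete for a qubit system coupled to a qubit unit. The interaction V and all states below are illustrative assumptions; the sketch evaluates W^ctrl and Q^ctrl for both outcomes and confirms that the average system heat vanishes, Eq. (33).

```python
import numpy as np

# Energetics of a single control operation, Eqs. (26)-(29) and (33):
# unitary V on system+unit, then a projective measurement P(r) on the unit.
H_S = 0.5*np.diag([1.0, -1.0]); H_U = 0.2*np.diag([1.0, -1.0])
H_tot = np.kron(H_S, np.eye(2)) + np.kron(np.eye(2), H_U)
H_S_full = np.kron(H_S, np.eye(2))

theta = 0.6
swap = np.array([[1, 0, 0, 0], [0, 0, 1, 0],
                 [0, 1, 0, 0], [0, 0, 0, 1]], dtype=complex)
V = np.cos(theta)*np.eye(4) - 1j*np.sin(theta)*swap    # partial SWAP

psi = np.array([1.0, 1.0])/np.sqrt(2)
rho_S = np.outer(psi, psi); rho_U = np.diag([1.0, 0.0])
rho_minus = np.kron(rho_S, rho_U)                      # state at t_n^-
rho_V = V @ rho_minus @ V.conj().T                     # after the unitary

E = lambda r: np.trace(H_tot @ r).real
W_ctrl = E(rho_V) - E(rho_minus)                       # Eq. (28)

avg_Q_S = 0.0
for P_U in (np.diag([1.0, 0.0]), np.diag([0.0, 1.0])):
    PU = np.kron(np.eye(2), P_U)
    unnorm = PU @ rho_V @ PU
    p_r = np.trace(unnorm).real
    rho_plus = unnorm/p_r                              # state at t_n^+
    Q_ctrl = E(rho_plus) - E(rho_V)                    # Eq. (29)
    Q_S = np.trace(H_S_full @ (rho_plus - rho_V)).real # system part of Q_ctrl
    avg_Q_S += p_r*Q_S
    print(f"p(r)={p_r:.3f}, Q_ctrl={Q_ctrl:+.4f}")

print(f"W_ctrl={W_ctrl:+.4f}, average Q_S^ctrl = {avg_Q_S:+.2e}")  # Eq. (33): ~0
```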
C. Stochastic entropy and second law

To account for all entropic changes, we do not only need to consider the system and all units, but also the entropy of the outcomes r^n stored in a classical memory (see Fig. 1). This is a crucial point, which distinguishes our theory from standard stochastic thermodynamics, where the entropic contribution of the measurement results is neglected (this will play an important role in Sec. V D). In general, however, the process tensor depends explicitly on the knowledge of r^n, which cannot be neglected. Furthermore, it is important to also keep the past information of all previous units U(i < n) and outcomes r^{n−1} because we explicitly allow the current unit and Hamiltonian to depend on all earlier outcomes (this is, for instance, essential if we apply time-delayed feedback control). Thus, we define the stochastic thermodynamic entropy of the process as

S_{SU(n)}(t, r^n) = S_vN[ρ_{SU(n)}(t, r^n)] − ln p(r^n).   (36)

Note that, even when ρ_{SU(n)}(t, r^n) is a pure state, we can only evaluate the entropy if we know the probability distribution p(r^n). This requires many sampled trajectories first before actually being able to evaluate S_{SU(n)}(t, r^n) along a single stochastic trajectory. While this might appear awkward at first sight, the same problem appears in the definition of the trajectory dependent entropy in classical stochastic thermodynamics too [11][12][13].

Next, we define the entropy production along a single trajectory over a time interval (t_{n−1}, t_n] by adding to the change in stochastic entropy the heat flow into the system,

Σ^(n](r^n) = ∆S_{SU(n)}^(n](r^n) − βQ_S^(n](r^n).   (37)

As in classical stochastic thermodynamics, this expression can have either sign, but on average it is always positive, as we will show below. Crucially, we have only taken into account the heat associated with system changes, whereas we did not include Q_{U(n)}^ctrl in the entropic balance. This will give us the correct result in all limiting cases and, if we use the commonly made assumption that H_{U(i)} ∼ 1_{U(i)}, we anyway have Q_{U(n)}^ctrl = 0 always. Furthermore, as we do not microscopically model the final projective measurement step of the units, it is also unclear which temperature we should associate with heat changes in the units, and hence including Q^ctrl in the second law would necessarily imply some ambiguity. While these are all good a posteriori arguments, the question whether there exist good a priori arguments remains.

To show the positivity of the average entropy production, it is useful to split it into two contributions similar to the first law,

Σ^(n](r^n) = Σ^ctrl(r^n) + Σ^(n)(r^{n−1}).   (38)

We will now show that the second contribution Σ^(n) is positive even along a single trajectory, whereas the first contribution Σ^ctrl is positive only on average.

To show Σ^(n)(r^{n−1}) ≥ 0 we will use Eq. (22), which holds for an arbitrary initial state ρ_S(t_{n−1}^+; r^{n−1}), together with the fact that the system evolution in between two control operations can be described by a CPTP map independent of the initial state. This is true within the weak coupling paradigm of quantum thermodynamics [3][4][5][6], where the time evolution is governed by a (possibly time dependent) master equation in Lindblad-Gorini-Kossakowski-Sudarshan form, but it might also hold in more general cases (cf. Secs. VI B and VI G too).
Let us denote the CPTP map by M_n = M_n(r^{n−1}) such that

ρ_S(t_n^−; r^{n−1}) = M_n ρ_S(t_{n−1}^+; r^{n−1}).   (41)

The inequality Σ^(n)(r^{n−1}) ≥ 0 can then be derived along the following lines. First, by using the mutual information I_{S:U(n)} between the system and the stream of units, we can split the change in joint entropies as ∆S_{SU(n)} = ∆S_S + ∆S_{U(n)} − ∆I_{S:U(n)}. Since the marginal state of the units does not change under the action of the CPTP map M_n, their entropic contribution cancels out and we can write in short ∆S_{SU(n)}(r^{n−1}) = ∆S_S(r^{n−1}) − ∆I_{S:U(n)}(r^{n−1}). Let us now add the entropy flow −βQ^(n)(r^{n−1}) from the bath to the entropy balance. From the second law (22) we can then infer that

Σ^(n)(r^{n−1}) = [∆S_S(r^{n−1}) − βQ^(n)(r^{n−1})] − ∆I_{S:U(n)}(r^{n−1}) ≥ −∆I_{S:U(n)}(r^{n−1}).

The positivity of the right hand side is then guaranteed by contractivity of relative entropy under CPTP maps [69,70]. More specifically, writing the mutual information as a relative entropy between the joint state and the product of its marginals, the following chain of (in)equalities applies [we exceptionally drop the argument of ρ_{SU(n)} = ρ_{SU(n)}(t_{n−1}^+, r^{n−1}) here]:

I_{S:U(n)}(t_n^−) = D[M_n ρ_{SU(n)} || M_n ρ_S ⊗ ρ_{U(n)}] ≤ D[ρ_{SU(n)} || ρ_S ⊗ ρ_{U(n)}] = I_{S:U(n)}(t_{n−1}^+),

where it was essential that M_n acts only on S and not on U(n). This concludes the proof of positivity of Σ^(n)(r^{n−1}).

Next, we will show that Σ^ctrl(r^n) is positive on average. More specifically, we will show that

Σ^ctrl(t_n, r^{n−1}) ≡ Σ_{r_n} p(r_n|r^{n−1}) Σ^ctrl(r^n) ≥ 0.   (45)

If this holds, then it also follows that Σ^ctrl(t_n) = Σ_{r^n} p(r^n) Σ^ctrl(r^n) ≥ 0. After taking the average and using Eq. (33), we are left with three terms,

Σ^ctrl(t_n, r^{n−1}) = Σ_{r_n} p(r_n|r^{n−1}) S_vN[ρ_{SU(n)}(t_n^+, r^n)] − S_vN[ρ_{SU(n)}(t_n^−, r^{n−1})] + S_Sh[p(r_n|r^{n−1})],

where S_Sh[p(r_n|r^{n−1})] = −Σ_{r_n} p(r_n|r^{n−1}) ln p(r_n|r^{n−1}) is the Shannon entropy of the conditional probability p(r_n|r^{n−1}). The positivity of Σ^ctrl(t_n, r^{n−1}) then follows from a theorem in quantum measurement theory [63,71,72]. To explicitly deduce it, we will prove the following (see Theorem 7 in Ref. [63] for a more general version):

Lemma IV.1. Let ρ be an arbitrary state, {P_n}_n an arbitrary complete set of orthogonal projectors (not necessarily rank 1), V an arbitrary unitary operator, p_n = tr{P_n V ρ V† P_n} the probability to obtain outcome n after a unitary operation applied to ρ, and ρ^(n) = P_n V ρ V† P_n / p_n the post-measurement state conditioned on outcome n. Then,

S_vN(ρ) ≤ Σ_n p_n S_vN(ρ^(n)) + S_Sh[{p_n}].   (49)

Proof. The von Neumann entropy is invariant under unitary operations, hence S_vN(ρ) = S_vN(VρV†). From Theorem 11.9 in Ref. [61] we also know that

S_vN(VρV†) ≤ S_vN(Σ_n P_n VρV† P_n) = S_vN(Σ_n p_n ρ^(n)).   (50)

All ρ^(n) have support on orthogonal subspaces, hence (cf. Theorem 11.10 in Ref. [61])

S_vN(Σ_n p_n ρ^(n)) = S_Sh[{p_n}] + Σ_n p_n S_vN(ρ^(n)).

This proves the lemma.

If we rewrite Eq. (49) as 0 ≤ Σ_n p_n S_vN(ρ^(n)) − S_vN(ρ) + S_Sh[{p_n}] and identify the projectors P_n with P(r_n), the probability p_n with p(r_n|r^{n−1}), the initial state ρ with ρ_{SU(n)}(t_n^−, r^{n−1}) and the post-measurement state ρ^(n) with ρ_{SU(n)}(t_n^+, r^n), we can deduce our desired result Σ^ctrl(t_n, r^{n−1}) ≥ 0. We remark that the use of inequality (49) in quantum thermodynamics is not novel and was probably first exploited in Ref. [73] to show the positivity of the second law for a Maxwell demon employing quantum measurements.
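As a quick sanity check of Lemma IV.1, the following illustrative script (random state, unitary and rank-2 projectors; not from the paper) evaluates both sides of Eq. (49):

```python
# Numerical spot check of Lemma IV.1 on a random instance.
import numpy as np
rng = np.random.default_rng(0)

def vn_entropy(rho):
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]
    return float(-(ev * np.log(ev)).sum())

d = 4
A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
rho = A @ A.conj().T
rho /= np.trace(rho).real                       # random density matrix
V, _ = np.linalg.qr(rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d)))
projs = [np.diag([1., 1., 0., 0.]), np.diag([0., 0., 1., 1.])]  # rank-2, complete

lhs, rhs = vn_entropy(rho), 0.0
for P in projs:
    p = np.trace(P @ V @ rho @ V.conj().T @ P).real
    rho_n = P @ V @ rho @ V.conj().T @ P / p
    rhs += p * vn_entropy(rho_n) - p * np.log(p)   # sum_n p_n S(rho_n) + S_Sh
print(f"S(rho) = {lhs:.4f} <= {rhs:.4f}")          # inequality (49) holds
```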
D. Short summary and outlook

We have introduced microscopic definitions of a fluctuating internal energy [Eq. (23)] and entropy [Eq. (36)] along a single trajectory r^n. The effect of the control operation forced us to introduce a (in general non-zero) energetic change δE^ctrl, which can be split into a work-like [Eq. (28)] and heat-like [Eq. (29)] contribution. Together with the standard definitions of heat and work for a weakly coupled open system [Eqs. (20) and (21)] we could establish a first law for the entire interval (t_{n−1}, t_n] [Eq. (34)], or alone for the control operation [Eq. (26)], or for the evolution without control [Eq. (25)]. Similarly, we could split the stochastic entropy production into two parts [Eq. (38)]. Whereas the part belonging to the evolution without control [Eq. (40)] is always positive, the entropy production during the control step [Eq. (39)] can also be negative along a single trajectory, but on average it is always positive [Eq. (45)].

To get further confidence in our approach, we will consider various limiting cases in the next section. This will also help us to understand the quite technical and abstract constructions used in this section. In particular, we will consider the following limiting cases:

A. The framework of repeated interactions was previously used to establish a generalized thermodynamic framework without considering trajectory dependent quantities [58]. We will see that we can naturally recover this framework within our setting in Sec. V A.

B. For completeness, we will also show in Sec. V B how to recover from our definitions the standard framework of quantum thermodynamics [3][4][5][6] without any control operations.

C. While the repeated interaction framework has guided us in finding the correct thermodynamic definitions, it is interesting to ask under which circumstances we can get rid of the units in the energetic and entropic balances. This can be achieved within our framework under some additional assumptions, which we will work out in Sec. V C.

D. The most established framework is the standard framework of classical stochastic thermodynamics. In Sec. V D we will compare our definitions with the definitions used there and discuss what has to be changed to obtain an identical framework.

E. A successful framework to derive quantum fluctuation theorems is the projective measurement approach mentioned in the introduction. We discuss how far we can reproduce it in Sec. V E.

F. One recent approach to quantum stochastic thermodynamics was put forward in Ref. [44]. We will compare our framework with their findings and discuss the nature of 'quantum heat' [44] in Sec. V F.

The paper then closes with additional remarks and by mentioning possible extensions and applications in Sec. VI.

A. The conventional repeated interaction framework

The framework of repeated interactions gives rise to a generalized thermodynamic theory by realizing that the stream of external units can act in the most general scenario as a resource of nonequilibrium free energy, which encompasses many previously considered theories [58] (see also Ref. [74] for important earlier work). However, the repeated interaction framework considered previously differs from our framework by avoiding any projective measurement on the units. In order to recover this thermodynamic framework, it is important to realize that a simple ensemble average of the process tensor over the outcomes r^n will not do the job. Depending on the state of the system and units and depending on the projectors P(r_n) used, the ensemble averaged state can still differ from the repeated interaction framework where no measurement was applied. The correct way to recover previous results from our framework is to choose the projector P(r_n) = 1_U throughout. In this case, the process tensor can be written as T[A_n, ..., A_1], where A_i is a CPTP map acting at time t_i. The control operations, and hence also the process tensor, do not depend on any outcome r_n anymore (alternatively, one could say that each control operation at time t_i has only one possible outcome). Furthermore, since no outcomes are recorded anymore, every incoming unit is decorrelated from the previous units. Our thermodynamic framework of the process tensor is therefore much more general and flexible than the previous framework, apart from one important difference. In Ref.
[58] the units were allowed to interact with the system for a finite duration, whereas we here only consider instantaneous interactions (or, more precisely, interaction times where the effect of the bath can be neglected to leading order). From a thermodynamic point of view, this is not necessary. However, to be able to clearly distinguish between control operations on the system and system-bath dynamics, this assumption is necessary (compare with the discussion in Sec. II).

Let us now investigate how the first law changes under the above assumptions. Clearly, no quantity will depend on r^n anymore. This implies, for instance, for the internal energy

E_{SU(n)}(t) = tr_{SU(n)}{[H_S(λ_t) + Σ_i H_{U(i)}] ρ_{SU(n)}(t)}.

This expression still differs from the framework of Ref. [58], where the internal energy of all units U(i < n), which are not interacting with the system, was neglected. However, it is easy to see that these internal energies never enter the first law, and thus can indeed be neglected. First of all, in absence of any control operations we have from Eq. (25) the energy balance ∆E^(n) = W_S^(n) + Q_S^(n), as expected. During the control operations, because there is no final projective measurement, Q^ctrl(t_n) = 0 and only W^ctrl(t_n) can differ from zero. But the work-like contribution only depends on the state of the n'th unit and not on previous units [cf. Eq. (32)]. Hence, the first law during the control operation becomes W^ctrl(t_n) = ∆E_S(t_n) + ∆E_{U(n)}(t_n) because the marginal state of all other units does not change. Finally, note that W^ctrl(t_n) would be identified in the context of Ref. [58] with the switching work W_switch required to turn on and off the system-unit interaction. We therefore obtain the same first law over one interaction period (t_{n−1}, t_n]:

∆E_S^(n] + ∆E_{U(n)}^(n] = W^(n] + Q_S^(n).

We now turn to the second law. Without any outcomes r^n we obtain from Eq. (36) the entropy S_{SU(n)}(t) = S_vN[ρ_{SU(n)}(t)]. Again, this differs from Ref. [58] by explicitly taking into account the joint entropy of all units and the system. To recover Ref. [58], we start again with the situation without control operation. From Eq. (22) we know that ∆S_S^(n) − βQ_S^(n) ≥ 0. During the control operation the joint entropy is conserved,

S_{SU(n)}(t_n^+) = S_{SU(n)}(t_n^−),   (56)

because the von Neumann entropy is invariant under unitary transformations. To recover the framework from Ref. [58] it is useful to split the entropies into the contribution of the system together with the n'th unit, the contribution of the previous units, and their mutual information. We then notice that I_{SU(n):U(n−1)}(t_n^+) = I_{SU(n):U(n−1)}(t_n^−) because mutual information is invariant under local unitary operations and at time t_n the unitary V acts only on S and U(n). Furthermore, we can split S_{SU(n)}(t_n^−) = S_S(t_n^−) + S_{U(n)}(t_n^−) because the system and the n'th unit are initially decorrelated. From the conservation of entropy, Eq. (56), we can then deduce

∆S_S(t_n) + ∆S_{U(n)}(t_n) = I_{S:U(n)}(t_n^+) ≥ 0.

The latter local changes in entropy of the system and unit n were previously identified as part of the entropy production [58]. Thus, we can deduce over a full interaction interval that

Σ^(n] = ∆S_S^(n] + ∆S_{U(n)}^(n] − βQ_S^(n) ≥ 0,

which reproduces the generalized second law from Ref. [58]. The reason why the final mutual information between the system and the previous units is discarded in this framework becomes clear by recalling that every unit which has already interacted with the system does not have the chance to interact with the system again. All final mutual information will therefore be lost. This is in contrast to the general framework developed here, where it was explicitly allowed that the entire sequence of outcomes r^n can influence the system at later times, either through changing the system Hamiltonian, the state of the subsequent units or the interaction between the system and the subsequent units.
Under these more general circumstances, the remaining mutual information after the interaction represents a valuable thermodynamic resource, which cannot be neglected.

B. The standard framework of quantum thermodynamics

If we perform no control operations at all, our framework obviously reproduces the standard framework of quantum thermodynamics mentioned at the beginning of Sec. IV A. This fact might seem so obvious that it is hardly worth stressing. However, similar to Sec. V A, it is important to remark that the standard framework of quantum thermodynamics is not recovered by performing an ensemble average over p(r^n), but by simply deciding not to apply any control operation at all (apart from maybe preparing a certain initial state and reading out the final state). We also want to point out that previous definitions used in quantum stochastic thermodynamics [36][37][38][42][43][44][45][46][47] as well as the standard framework of stochastic thermodynamics (see Sec. V D) fail to reproduce the picture without control operations, as the definitions used there are intimately linked to a certain measurement procedure.

C. Getting rid of the units in the thermodynamics

We used the external stream of units to guide our thermodynamic analysis along the framework of repeated interactions. Furthermore, in many important realistic situations the units really correspond to physical subsystems. For instance, this is the case for the well-studied micromaser and other recent experimental setups in quantum optics [75,76], in scattering theory where the units are projectiles impinging on a target (the system), in biomolecular processes where the units could be the monomers of a more complex molecule, or for certain mesoscopic devices where tunneling electrons and Cooper pairs could be identified as units [77,78]. Therefore, the framework of repeated interactions allows us to treat a larger class of physically relevant scenarios. Nevertheless, there are also many scenarios where the exact microscopic nature of the units is not known or is hard to model. Furthermore, as also the process tensor relies only on specifying CP maps A(r_n|r^{n−1}) acting on the system, it is worth asking whether we can get rid of the sometimes rather artificial units in the thermodynamic description.

Energetically, we have already seen that simply setting H_{U(n)} ∼ 1_U for all n cancels out all unit contributions from the first law. To get rid of the units in the entropic considerations, we will need to restrict ourselves to efficient control operations [62,63]. Efficient control operations are defined by the requirement that they can be written as

A(r_n|r^{n−1})ρ_S = A(r_n|r^{n−1}) ρ_S A†(r_n|r^{n−1}),

with a single Kraus operator A(r_n|r^{n−1}), as opposed to the more general form (4). They have the specific property that any initially pure state ρ_S gets mapped to a pure state again. To explicitly see that efficient control operations are sufficient to exclude the units from the entropic balance, notice that every efficient control operation can be modeled by an initially pure unit state ρ_{U(n)} = ρ²_{U(n)}, followed by an arbitrary unitary operation V(r^{n−1}) acting on system and unit, and finally followed by a rank 1 projective measurement using P(r_n) = |r_n⟩⟨r_n|. This implies

ρ_S(t_n^+, r_n) = A(r_n|r^{n−1})ρ_S(t_n^−, r^{n−1}) = ⟨r_n| V(r^{n−1})[ρ_S(t_n^−, r^{n−1}) ⊗ ρ_{U(n)}] |r_n⟩.

Notice that we have kept the dependence of the initial pure unit state as well as the measurement basis {|r_n⟩} on the previous outcomes r^{n−1} implicit for notational simplicity.
Furthermore, we add that it is in principle possible to construct efficient operations with a mixed initial unit state or a rank n > 1 projective measurement, but the present construction guarantees an efficient operation for any unitary V(r^{n−1}). Because we perform a rank 1 projective measurement on the units after each control operation, the unit state is pure and decorrelated from the system after every operation. In fact, the joint state of the system and all units after the control operation is simply ρ_{SU(n)}(t, r^n) = ρ_S(t, r^n) ⊗ |r^n⟩⟨r^n|_{U(n)} with |r^n⟩⟨r^n|_{U(n)} ≡ |r_n⟩⟨r_n|_{U(n)} ⊗ ... ⊗ |r_1⟩⟨r_1|_{U(1)}. The joint entropy for this state becomes S_vN[ρ_{SU(n)}(t, r^n)] = S_vN[ρ_S(t, r^n)]. Also before the interaction at time t_n we have S_vN[ρ_{SU(n)}(t_n^−, r^{n−1})] = S_vN[ρ_S(t_n^−, r^{n−1})], where we used that the initial unit state was pure. Hence, the contribution of the units to the entropic balance completely vanishes. We note that the ensemble averaged system-unit state Σ_{r^n} p(r^n)ρ_{SU(n)}(t, r^n) is in general classically correlated.

To summarize, in case of energetically neutral units and efficient control operations, the stochastic internal energy and entropy can be reduced to

E_S(t, r^n) = tr_S{H_S(λ_t, r^n)ρ_S(t, r^n)},   S_S(t, r^n) = S_vN[ρ_S(t, r^n)] − ln p(r^n).   (66)

Note, however, that we are still using the external units to model the control operations dynamically. The question as to whether we can get completely rid of the units will be answered in Sec. VI A.

D. Standard classical stochastic thermodynamics

A tacitly made assumption in classical stochastic thermodynamics is the ability to measure perfectly (i.e., without error and without disturbance) the state of the system [12,13]. For definiteness we here focus on a classical discrete system, which makes random jumps between a finite set of states {s} = {1, ..., d}. Its dynamics are described by a rate master equation

d/dt p_s(t) = Σ_{s'} [W_{s,s'}(λ_t) p_{s'}(t) − W_{s',s}(λ_t) p_s(t)].

Here, p_s(t) is the probability to find the system in state s at time t, whose energy we denote by H(s, λ_t) (dropping the subscript S on H). The rate matrix W_{s,s'}(λ_t) can depend on an external control parameter λ_t. It is required to fulfill the local detailed balance condition

ln[W_{s,s'}(λ_t)/W_{s',s}(λ_t)] = −β[H(s, λ_t) − H(s', λ_t)],

which allows one to link energetic changes in the system to entropic changes in the bath. Due to the assumptions of standard stochastic thermodynamics, one knows at each time t the state s of the system without any uncertainty (denoted s_t in the following). (Note that we focus here on the standard scenario where we only measure the system but do not perform any feedback. This implies, e.g., that H(λ_t) does not depend on the outcomes r^n.) The stochastic energy and entropy at time t are then defined by

E_ST(s_t) = H(s_t, λ_t),   S_ST(s_t) = −ln p_{s_t}(t),   (69)

where we use the subscript 'ST' to denote definitions used in standard stochastic thermodynamics. Note that the stochastic entropy S_ST(s_t) is determined by evaluating the solution of the rate master equation along a particular stochastic trajectory [11]. Work and heat for a sufficiently small time-step dt are defined as

W_ST(t) = H(s_t, λ_t) − H(s_t, λ_{t−dt}),   (70)
Q_ST(t) = H(s_t, λ_{t−dt}) − H(s_{t−dt}, λ_{t−dt}).   (71)

Furthermore, using rather complicated algebraic manipulations, one can compute the change of stochastic entropy along a particular trajectory [11][12][13] (we will see below that evaluating the quantities in discrete time steps simplifies the algebra significantly).
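For concreteness, a minimal Gillespie-type simulation of these standard definitions (hypothetical two-state system; the rates are chosen only to satisfy local detailed balance, and the protocol is kept fixed so that all energy exchange is heat):

```python
# Sketch: sample a jump trajectory of a two-state system and accumulate Q_ST.
import numpy as np
rng = np.random.default_rng(2)

beta, E, T = 1.0, 1.0, 50.0
H = np.array([0.0, E])                        # state energies, fixed protocol
w = {(1, 0): np.exp(-beta * E), (0, 1): 1.0}  # w[(s, s')] : rate s' -> s; obeys
                                              # ln(w[1,0]/w[0,1]) = -beta*(H[1]-H[0])
s, t, Q = 0, 0.0, 0.0
while t < T:
    rate = w[(1 - s, s)]                      # only one possible jump for d = 2
    t += rng.exponential(1.0 / rate)          # Gillespie waiting time
    if t >= T:
        break
    Q += H[1 - s] - H[s]                      # heat: energy change at fixed lambda
    s = 1 - s
p_eq = np.exp(-beta * H) / np.exp(-beta * H).sum()
print(f"heat along trajectory: Q = {Q:+.2f}; equilibrium p = {p_eq.round(3)}")
```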
In the resulting expression it is then possible to single out a term related to the entropy production, which, on average, yields the always positive expression

Σ_ST(t) = ∆S_ST(t) − βQ_ST(t) ≥ 0,   (72)

where ∆S_ST(t) denotes the average change in stochastic entropy. Our goal is now to show the following: (1) how a perfect, non-disturbing measurement arises in our context; (2) that we obtain identical expressions for the stochastic heat, work and internal energy in this limit; (3) that we obtain a different expression for stochastic entropy, which yields a different, but meaningful second law; (4) how the entropy production of standard stochastic thermodynamics arises in our context when we change the definition of stochastic entropy.

(1) To obtain a perfect measurement, we take a classical unit with equally many possible states as the system under consideration. The initial state is prepared in a fixed standard state p_U = |1⟩, where here and in the following we will use Dirac notation for simplicity. The 'unitary' operation which correlates the system and the unit at time t_n is taken to be the permutation matrix

V = Σ_s |s⟩⟨s|_S ⊗ S_U(s),   (73)

where S_U(s) is the shift operator acting on the unit U, defined such that S_U(s)|1⟩ = |s⟩. The action of V on the joint initial state Σ_s p_s(t)|s, u = 1⟩, where p_s(t) is arbitrary, is

V Σ_s p_s(t)|s, 1⟩ = Σ_s p_s(t)|s, s⟩.

That is, the state of the system gets copied onto the state of U without changing the system state. After the permutation, we then measure the state of the unit in its classical basis |u⟩ and obtain the outcome r = u = s ∈ {1, ..., d} with probability p_r(t). We then know that the post-measurement state is |r, r⟩, i.e., the system and unit are in an identical pure state without uncertainty.

To complete point (1), we consider the limit where we measure the system continuously, i.e., in small time-steps dt = t_n − t_{n−1} such that the probability for a jump in each interval is very small: W_{s,s'}(λ_t) dt ≪ 1. Furthermore, we assume that all units are identical and uncorrelated initially, i.e., we consider a pure measurement process without any feedback. In this limit, the sequence of measurement outcomes r^n is identical to the state of the units, which is identical to the trajectory taken by the system. This is the essence of a perfect classical and continuous measurement. As a consequence, the state of the system at time t ≥ t_n^+ only depends on the last measurement outcome r_n, but not on any of the previous outcomes r^{n−1}. Furthermore, the state of the system during the interval (t_{n−1}, t_n] changes from p(t_{n−1}; r^{n−1}) = |r_{n−1}⟩ at the beginning to p(t_n^−; r^{n−1}) = |r_{n−1}⟩ + dt Σ_s W_{s,r_{n−1}}(λ_t)|s⟩ shortly before the control operation, and to p(t_n^+; r^n) = |r_n⟩ at the end after the n'th control operation. Below we will identify t_n = t and t_{n−1} = t − dt.

(2) We now turn to the energetic description. As in standard stochastic thermodynamics, we neglect the energetics associated with the memory, that is, we set H_U ∼ 1_U for all units. This implies that we can replace our stochastic energy E_{SU(n)}(t, r^n) by E_S(t, r^n). Then, the stochastic energy at the beginning of the interval is simply H(r_{n−1}, λ_{t−dt}) and at the end it reads H(r_n, λ_t), which is identical to the definition used in classical stochastic thermodynamics. Furthermore, in absence of control, we obtain from Eq. (20)

W_S^(n)(r^{n−1}) = H(r_{n−1}, λ_t) − H(r_{n−1}, λ_{t−dt}),

which is identical to Eq. (70). (We remark that there is a certain degree of freedom involved in the evaluation of the integral in Eq. (20); however, this degree of freedom is also there in the identification of Eqs. (70) and (71), and it is only important to stick consistently to one choice.) Furthermore, the work during the control step, Eq. (28), is zero because the marginal state of the system does not change by application of the permutation matrix (73).
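The copying mechanism of Eq. (73) can be spelled out in a few lines (illustrative, d = 3, zero-based indexing so the unit's standard state |1⟩ corresponds to index 0):

```python
# Sketch: the permutation V copies the classical system state onto the unit.
import numpy as np
d = 3
V = np.zeros((d * d, d * d))
for s in range(d):
    for u in range(d):
        V[s * d + (u + s) % d, s * d + u] = 1.0   # |s, u> -> |s, u + s mod d>
p = np.array([0.5, 0.3, 0.2])                      # arbitrary system distribution
joint = np.kron(p, np.eye(d)[0])                   # system x unit, unit in index 0
out = V @ joint
for s in range(d):
    for u in range(d):
        if out[s * d + u] > 0:
            print(f"p(s={s}, u={u}) = {out[s*d+u]:.2f}")  # unit index now equals s
```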
Thus, we conclude that the definition of the total work W^(n](r^{n−1}) during one full interval is identical to the definition used in classical stochastic thermodynamics.

It remains to look at the change of heat during one full interval, Q_S^(n](r_n, r_{n−1}). First of all, from Eq. (21) the heat exchanged during the interval without control becomes

Q_S^(n)(r^{n−1}) = dt Σ_s W_{s,r_{n−1}}(λ_t)[H(s, λ_t) − H(r_{n−1}, λ_t)],

which is different from the definition (71). However, it is now also important to take into account the heat exchanged during the control step, Eq. (29), in which we update our knowledge about possible system changes. It is simple to see that this quantity reduces to

Q_S^ctrl(r_n, r^{n−1}) = H(r_n, λ_t) − Σ_s p_s(t_n^−; r^{n−1}) H(s, λ_t),

such that Q_S^(n](r_n, r_{n−1}) = H(r_n, λ_t) − H(r_{n−1}, λ_t), which is identical to Eq. (71).

(3) Our stochastic entropy (36) reduces to S(t, r^n) = −ln p(r^n), where we used that the system and units are after each measurement in a pure state and hence their entropy vanishes. Furthermore, we used that the system dynamics are Markovian and hence p(r_n|r^{n−1}) = p(r_n|r_{n−1}). The stochastic entropy production (37) over one interval then becomes

Σ^(n](r_n, r_{n−1}) = −ln p(r_n|r_{n−1}) − βQ_S^(n](r_n, r_{n−1}).   (81)

Notice that this second law is identical to the conventional one of stochastic thermodynamics if we apply Eq. (72) to an initially pure state p_s(t − dt) = δ_{s,r_{n−1}}, which implies ∆S_ST(t) = S_Sh[p(r_n|r_{n−1})] and Q_ST(t) = Q_S^(n](r^{n−1}). Unfortunately, although S_Sh[p(r_n|r_{n−1})] is infinitesimally small, it is of order O(dt^ν) with ν < 1. Therefore, the rate of entropy production diverges in the limit dt → 0. Although seldom stated [67], this is related to the fact that the Shannon entropy S_Sh[p_s(t)] is not differentiable when the kernel of p_s(t) changes.

Furthermore, by averaging Eq. (81) also over p(r_{n−1}), we obtain

Σ^(n](t) = S_Sh(r_n|r_{n−1}) − βQ_S^(n](t) ≥ 0.   (83)

Here, S_Sh(r_n|r_{n−1}) = Σ_{r_{n−1}} p(r_{n−1}) S_Sh[p(r_n|r_{n−1})] denotes the conditional Shannon entropy. This second law is different from the conventional one (72). Instead of containing the change in Shannon entropy of the system state, it contains the conditional Shannon entropy, which is nothing else than the entropy rate of the stochastic process [79]. Of course, if we divide Eq. (83) by dt, it still diverges. Furthermore, the difference of the two entropy productions is precisely given by

Σ^(n](t) − Σ_ST(t) = S_Sh(r_{n−1}|r_n) ≥ 0,

where the 'backward' conditional entropy S_Sh(r_{n−1}|r_n) = Σ_{r_n} p(r_n) S_Sh[p(r_{n−1}|r_n)] is computed via Bayes' rule: p(r_{n−1}|r_n) = p(r_n|r_{n−1})p(r_{n−1})/p(r_n).

We notice that our novel second law (83) has a transparent physical interpretation. It consists of the entropic change in the reservoir, quantified by the Clausius-like term −βQ_S^(n], plus the change in entropy of our memory for the measurement outcomes. As we measure perfectly and continuously, the rate of information generation in the memory is infinite (in reality, every sampling rate is finite and no divergence arises). Therefore, even in equilibrium where Q_S^(n] = 0, we will have a positive entropy production Σ^(n] > 0 due to the fact that we measure the system and continuously generate information. In stochastic thermodynamics, one instead finds Σ_ST = 0 at equilibrium. The discrepancy of the two second laws is rooted in the fact that standard stochastic thermodynamics keeps the observer out of the construction. This works well if one perfectly monitors a classical system, but outside this regime problems appear.
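The relation between the two second laws can be checked directly. The sketch below (random joint distribution, purely illustrative) verifies that the difference of the two entropy productions equals the backward conditional entropy and is non-negative:

```python
# Check: S(r_n|r_{n-1}) - dS_Sh = S(r_{n-1}|r_n) >= 0 for any joint distribution.
import numpy as np
rng = np.random.default_rng(3)

P = rng.random((3, 3))
P /= P.sum()                                 # joint p(r_{n-1}, r_n), hypothetical
p_prev, p_next = P.sum(axis=1), P.sum(axis=0)

def sh(p):                                   # Shannon entropy of a distribution
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

S_fwd = sh(P.ravel()) - sh(p_prev)           # S(r_n | r_{n-1}) via the chain rule
S_bwd = sh(P.ravel()) - sh(p_next)           # S(r_{n-1} | r_n) via Bayes' rule
dS = sh(p_next) - sh(p_prev)                 # change of the marginal Shannon entropy
print(f"S(r_n|r_n-1) - dS = {S_fwd - dS:.6f}")
print(f"S(r_n-1|r_n)      = {S_bwd:.6f}  (equal, and >= 0)")
```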
In fact, trying to replace the quantities in definition (36) by different ones [e.g., p(r^n) by p(r_n)] will likely result in a definition whose associated entropy production is in general not positive on average. Importantly, "information is physical" [48], and much effort was needed to understand Maxwell's demon and other feedback controlled devices within the conventional theory of stochastic thermodynamics. In fact, the theory needed to be modified [80,81]. These problems are absent in our novel formulation, where the information obtained from the measurement is treated on an equal footing with the system under control. Nevertheless, our framework can recover the conventional second law of stochastic thermodynamics if we redefine entropy in this peculiar limit.

(4) For completeness, we demonstrate how the standard entropy production (72) arises in our context if we replace our definition of entropy by the conventional one (69). The stochastic entropy production for the conventional definition becomes in our notation

−ln p(r_n) + ln p(r_{n−1}) − β[H(r_n, λ_t) − H(r_{n−1}, λ_t)],   (85)

and if we average over p(r^n) and use that the measured probabilities are identical to the probabilities of the system, p(r_n = s) = p_s(t) and p(r_{n−1} = s) = p_s(t − dt), we recover precisely Eq. (72).

E. The two-point measurement approach

The two-point measurement approach, which is closely related to the theory of full counting statistics, has become the primary approach to derive quantum fluctuation relations in various open quantum systems [21,22,31]. While theoretically powerful, we already discussed the practical weakness of this approach in the introduction: experimental confirmations have so far only been achieved for work fluctuation relations in isolated systems [25,82,83] or in electronic nanocircuits where the electrons behave according to a classical rate master equation [28][29][30]. For completeness we here want to point out that our strategy can reproduce the two-point measurement approach as long as we consider the case of an isolated system, i.e., there is no heat bath present. Alternatively, one could also adopt the point of view that everything that can be measured projectively in an experiment should be defined to be the system, because the bath by definition should be an object about which we have only limited control. Whatever the point of view, let us repeat for completeness how the well-known two-point measurement protocol can be realized in our framework.

We consider a (finite) quantum system whose Hamiltonian has the spectral decomposition H(λ_t) = Σ_k ε_k(λ_t)|k(λ_t)⟩⟨k(λ_t)| (as our system is assumed to be isolated, we drop all subscripts S in the notation and we neglect any feedback control). We assume that the system was prepared at time t_0 in a Gibbs state ρ(t_0) = e^{−βH(λ_0)}/Z(λ_0) with Z(λ_0) = tr{e^{−βH(λ_0)}}, which could be achieved, e.g., by coupling the system to a larger heat bath for times t < t_0. We then perform the first control operation at time t_1 = t_0 by measuring the system projectively in its energy eigenbasis. This corresponds to an operation

A(ε_k)ρ = |k(λ_0)⟩⟨k(λ_0)| ρ |k(λ_0)⟩⟨k(λ_0)|,

where ε_k(λ_0) denotes the measurement outcome (corresponding to r_1 in our previous notation), which is obtained with probability p[ε_k(λ_0)] = e^{−βε_k(λ_0)}/Z(λ_0). Afterwards, we change the driving protocol λ_t in an arbitrary but prescribed way until some time t_2 > t_0.
The state of the system is then ρ(t_2; ε_k) = U(t_2, t_0)|k(λ_0)⟩⟨k(λ_0)|U†(t_2, t_0), with U(t_2, t_0) the unitary time evolution operator generated by H(λ_t). At time t_2 we perform another, final projective measurement in the energy eigenbasis of H(λ_2) and obtain an outcome ε_ℓ(λ_2) (corresponding to r_2). The probability for the sequence of outcomes is

p[ε_ℓ(λ_2), ε_k(λ_0)] = |⟨ℓ(λ_2)|U(t_2, t_0)|k(λ_0)⟩|² e^{−βε_k(λ_0)}/Z(λ_0).

It is a straightforward exercise to show that this probability distribution implies the quantum version of the classical Jarzynski equality [84][85][86],

⟨e^{−βW}⟩ = e^{−β∆F},   with W = ε_ℓ(λ_2) − ε_k(λ_0) and e^{−β∆F} = Z(λ_2)/Z(λ_0).
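The Jarzynski equality above is straightforward to verify numerically. A minimal sketch (hypothetical qubit Hamiltonians; the propagator is a crude stand-in for an actual driving protocol, which suffices since the identity holds for any unitary):

```python
# Check <e^{-beta W}> = Z(lambda_2)/Z(lambda_0) from two-point measurement statistics.
import numpy as np

def u_of(H, t):                              # exp(-i H t) for Hermitian H
    e, v = np.linalg.eigh(H)
    return v @ np.diag(np.exp(-1j * e * t)) @ v.conj().T

beta = 1.0
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.diag([1.0, -1.0]).astype(complex)
H0, H1 = 0.5 * sz, 0.8 * sz + 0.4 * sx       # initial/final Hamiltonians (assumed)
U = u_of(H1, 1.3) @ u_of(H0, 0.7)            # stand-in for the driven propagator

e0, v0 = np.linalg.eigh(H0)
e1, v1 = np.linalg.eigh(H1)
Z0, Z1 = np.exp(-beta * e0).sum(), np.exp(-beta * e1).sum()

avg = 0.0
for k in range(2):
    p_k = np.exp(-beta * e0[k]) / Z0                      # first outcome, Gibbs weighted
    for l in range(2):
        p_lk = p_k * abs(v1[:, l].conj() @ U @ v0[:, k]) ** 2
        avg += p_lk * np.exp(-beta * (e1[l] - e0[k]))     # e^{-beta W} on this trajectory
print(f"<e^(-beta W)> = {avg:.6f}, e^(-beta dF) = {Z1 / Z0:.6f}")  # identical
```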
F. Comparison with the framework of Elouard et al. [44]

In a recent paper by Elouard et al. [44], a quantum stochastic thermodynamics framework was established for two different scenarios: (1) an isolated, driven system (no heat bath present) starting in a pure state and interrupted by arbitrary projective measurements, and (2) an open quantum system described by a master equation in Lindblad-Gorini-Kossakowski-Sudarshan form where each decoherence channel is monitored with unit detection efficiency. The latter results in a weakly measured quantum system and in both cases, as only efficient measurements are considered, an initially pure state remains pure. The thermodynamics of a simple feedback control protocol was also considered. Thus, the framework introduced here is more general by allowing for arbitrary interventions and feedback control protocols. Nevertheless, it is instructive to compare our approach with the approach of Elouard et al. and to clarify the origin of the 'quantum heat' [44]. We will focus only on scenario (1) here.

To reproduce the setting of Ref. [44], we switch off the bath and consider operations A(r_n)ρ_S = P(r_n)ρ_S P(r_n), which is a special case of the limit considered in Sec. V C. Here, P(r_n) = |r_n⟩⟨r_n| is an arbitrary rank-1 projector in the system Hilbert space and we assume Σ_{r_n} P(r_n) = 1_S as usual. Note that the basis |r_n⟩ is allowed to change in time (kept implicit in the notation) and does not need to coincide with the eigenbasis of the system Hamiltonian H_S(λ_t). We start from a pure state |ψ_0⟩ and at each point in time the quantum system remains in a pure state |ψ(t; r_n)⟩, which depends only on the last measurement outcome r_n as we do not perform any feedback control. Our definition of internal energy is straightforward and coincides with Ref. [44],

E_S(t, r_n) = ⟨ψ(t; r_n)|H_S(λ_t)|ψ(t; r_n)⟩.

Furthermore, in between the projective measurements, our definitions of work (20) and heat (21) also coincide: since there is no bath, Q_S^(n) = 0 and W_S^(n) equals the change in internal energy. What differs in our frameworks are the energetic considerations during the measurement step and the second law.

We start by repeating the framework of Ref. [44]. To account for the energetic changes during the measurement, they introduce the 'quantum heat'

Q_q(r_n, r_{n−1}) = E_S(t_n^+; r_n) − E_S(t_n^−; r_{n−1}),   (94)

such that their first law reads ∆E^(n](r_n, r_{n−1}) = W_S^(n)(r_{n−1}) + Q_q(r_n, r_{n−1}). The terminology quantum heat was justified by the stochastic character of the wavefunction collapse and the fact that Q_q(r_n, r_{n−1}) = 0 if the system is in an energy eigenstate at time t_n^− and measured in the energy eigenbasis [44]. Nevertheless, the quantum heat does not appear in their second law, which at time t reads [44]

Σ(t) = S_vN[ρ_S(t)] ≥ 0.   (95)

Here, ρ_S(t) = Σ_{r_n} p(r_n)|ψ(t; r_n)⟩⟨ψ(t; r_n)| denotes the ensemble averaged state (remember that the system always starts in the same pure state |ψ_0⟩ with zero entropy). In Ref. [44] the second law is derived from a fluctuation theorem making use of a time-reversed process. We will come back to time-reversed protocols and the associated definition of entropy production more generally in Sec. VI C.

We now analyse the same situation with our tools. First of all, to implement a projective measurement in the basis |r_n⟩ on the system (assumed to be d-dimensional), we can basically follow the same steps as in Sec. V D. The initial unit state is always taken to be ρ_U = |1⟩⟨1| and the unitary operator is

V = Σ_{r_n} |r_n⟩⟨r_n|_S ⊗ S_U(r_n),

where the subscripts S and U are made explicit to denote on which Hilbert space the operators act, and S_U(r_n) shifts the unit state as in Sec. V D. We now consider an arbitrary system state ρ_S(t_n^−) = Σ_{r_n, r'_n} ρ_{r_n, r'_n} |r_n⟩⟨r'_n| expanded in the measurement basis. After the unitary operation, we are left with the correlated state

V ρ_S ⊗ ρ_U V† = Σ_{r_n, r'_n} ρ_{r_n, r'_n} |r_n⟩_S⟨r'_n| ⊗ |r_n⟩_U⟨r'_n|.

Notice that the reduced system state tr_U{V ρ_S ⊗ ρ_U V†} = Σ_{r_n} ρ_{r_n, r_n}|r_n⟩⟨r_n| is different from the initial state ρ_S(t_n^−) unless it was diagonal in the measurement basis. Finally, the measurement is completed by projecting the unit onto the measurement basis |r_n⟩_U. This yields the unnormalized state

ρ̃_{SU(n)}(t_n^+; r_n) = |r_n⟩_U⟨r_n| V[ρ_S ⊗ ρ_U] V† |r_n⟩_U⟨r_n| = ρ_{r_n, r_n} |r_n⟩_S⟨r_n| ⊗ |r_n⟩_U⟨r_n|,

which completes the description of the measurement process. To look at the energetic changes we specialize to the case where ρ_S(t_n^−; r^{n−1}) = |ψ(t_n^−; r^{n−1})⟩⟨ψ(t_n^−; r^{n−1})| and expand the wavefunction in terms of the measurement basis at time t_n, |ψ(t_n^−; r^{n−1})⟩ = Σ_n c_n|r_n⟩, with coefficients c_n = c_n(t_n^−; r^{n−1}). It is then straightforward to compute the work (28) and heat (29) during the control step, which become

W_S^ctrl(r^{n−1}) = Σ_{n,n'} (|c_n|² δ_{n,n'} − c_n c*_{n'}) ⟨r_{n'}|H_S(λ_n)|r_n⟩,   (100)
Q_S^ctrl(r_n, r^{n−1}) = Σ_{n'} (δ_{n,n'} − |c_{n'}|²) ⟨r_{n'}|H_S(λ_n)|r_{n'}⟩.   (101)

Both are in general non-zero at the trajectory level, and both vanish identically when the initial system state is diagonal in the measurement basis (for an initially pure state this means that c_{n'} = δ_{n,n'} apart from a phase factor). Furthermore, it is easy to confirm that the injected heat vanishes on average, cf. Eq. (33), whereas the work associated with the projective measurement does not. Thus, we reach a very different conclusion compared to Ref. [44], namely that a projective measurement can on average be seen as a work and not as a heat source.
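Equations (100) and (101) and the vanishing of the average injected heat can be illustrated with a qubit measured in a rotated basis (all numbers hypothetical):

```python
# Sketch: measurement work and heat for a pure qubit state measured in a basis
# that is not the energy eigenbasis; <Q_ctrl> = 0 while W_ctrl is nonzero.
import numpy as np

HS = np.diag([0.0, 1.0])                     # system Hamiltonian (assumed)
theta = 0.4                                  # measurement basis rotated by theta
b0 = np.array([np.cos(theta), np.sin(theta)])
b1 = np.array([-np.sin(theta), np.cos(theta)])
psi = np.array([0.6, 0.8])                   # pure pre-measurement state
c = np.array([b0 @ psi, b1 @ psi])           # coefficients in the measurement basis

E_pre = psi @ HS @ psi
E_diag = sum(abs(c[n]) ** 2 * (b @ HS @ b) for n, b in enumerate((b0, b1)))
W_ctrl = E_diag - E_pre                      # Eq. (100): dephasing in the basis
avg_Q = sum(abs(c[n]) ** 2 * ((b @ HS @ b) - E_diag) for n, b in enumerate((b0, b1)))
print(f"W_ctrl = {W_ctrl:+.4f}, <Q_ctrl> = {avg_Q:.2e}")   # average heat ~ 0
```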
This difference in the interpretation of our results is quite striking because in our splitting of the work-like and heat-like contributions in Eqs. (28) and (29) we actually applied the same philosophy as Elouard et al., by identifying the energetic changes caused by the projective measurement in our isolated system-unit space as heat (we just refrained from calling it 'quantum' heat, see below). Thus, we have the following paradox: two different mathematical descriptions, which reproduce identical system dynamics, can produce two different thermodynamic interpretations although we apply the same basic definitions in each case. Therefore, it should be stressed that it is only the personal belief of the author that the thermodynamic interpretation we have worked out here is 'superior' to the interpretation put forward in Ref. [44], for two reasons. First of all, although it is more complicated, it is also more general (also see the arguments in Sec. VI A). Second, also within the framework of Ref. [44] it is in principle possible to transform away the quantum heat by means of unitary transformations without changing the statistics of the measurement trajectories. This works as follows: We expand the wavefunction before the n'th measurement in the measurement basis as above, |ψ(t_n^−; r^{n−1})⟩ = Σ_{r_n} c_{r_n}|r_n⟩. Then, in each run of the experiment, we select with probability |c_n|² (which can be computed theoretically or measured experimentally) a unitary operator U(r_n), which rotates the state of the system shortly before the n'th measurement from |ψ(t_n^−; r^{n−1})⟩ to |r_n⟩ = U(r_n)|ψ(t_n^−; r^{n−1})⟩. This will inject an amount of work

⟨r_n|H_S(λ_n)|r_n⟩ − Σ_{n,n'} c_n c*_{n'} ⟨r_{n'}|H_S(λ_n)|r_n⟩

into the system, which precisely equals the quantum heat (94). We then obtain measurement outcome r_n with certainty, but on average it happens with probability |c_n|². Thus, this protocol yields identical measurement statistics for r_n and an always vanishing quantum heat, but the work cost associated with U(r_n) clearly does not vanish. We also note that, although the heat associated with the projective measurement is on average zero in our formalism, there is still an entropic cost associated with the measurement, quantified by S_Sh[p(r_n|r^{n−1})]. Furthermore, we refrained from calling Q_S^ctrl 'quantum heat' as, e.g., it also plays an important part in classical stochastic thermodynamics and cannot be neglected there, cf. Sec. V D.

To end this comparison, we finally note that our second law is also different from the framework of Ref. [44]. Instead of the inequality (95), the accumulated entropy production over all intervals reads in our case Σ(r^n) = −ln p(r^n), whose average is the Shannon entropy of the measurement record. The meaning of this entropy production was already discussed in Sec. V D.

VI. DISCUSSION, EXTENSIONS AND OUTLOOK

In this final section we will present alternative approaches to our framework from Sec. IV and we will explain why we have not used them. Furthermore, we will discuss applications and extensions of our framework, thereby connecting our theory also to other fields of current interest.

A. Getting rid of the units in the dynamics

We have already argued in Sec. V C that, while allowing us to treat a larger class of experimentally relevant systems, for certain applications the explicit modeling of the units can be cumbersome as it simply involves additional computational effort. Furthermore, we have seen in Sec. V F that, even in the case where we got rid of the units in the thermodynamic description, the thermodynamic description can be quite different (even conceptually) from an approach which is based only on control operations acting on the system [44]. Also other approaches to analyze quantum measurements thermodynamically have been put forward, e.g., in Refs. [87][88][89][90], without reaching any consensus though. We here argue that a general quantum stochastic thermodynamic description (including arbitrary measurements) only leads to unambiguous definitions for the work and heat injected during the control operation if we model the external units explicitly. We therefore cannot get rid of the units in the dynamical description for the most general thermodynamic framework.

To support this claim let us consider an arbitrary efficient control operation (see Sec. V C), A(r_n|r^{n−1})ρ_S = A(r_n|r^{n−1})ρ_S A†(r_n|r^{n−1}). The polar decomposition theorem allows us to write A(r_n|r^{n−1}) = U(r_n)P(r_n), where U(r_n) is a unitary matrix and P(r_n) = [A†(r_n|r^{n−1})A(r_n|r^{n−1})]^{1/2} a positive Hermitian matrix. This splitting naturally suggests alternative definitions for heat and work during the control operation:

Q_S^ctrl(r_n) = tr_S{H_S(λ_n, r_n)[P(r_n) − 1]ρ_S(t_n^−; r^{n−1})},
W_S^ctrl(r_n) = tr_S{H_S(λ_n, r_n)[A(r_n|r^{n−1}) − P(r_n)]ρ_S(t_n^−; r^{n−1})},

where we used a superoperator notation for conciseness [e.g., P(r_n)ρ = P(r_n)ρP(r_n)]. Note that these definitions would coincide with the framework of Ref. [44] in the case of projective measurements.
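These operator-level constructions can be made concrete: the sketch below (generic random Kraus operator, illustrative only) builds both polar decompositions via the singular value decomposition and shows that their positive parts differ, which is the ambiguity problem discussed next:

```python
# Sketch: right polar A = U P vs left polar A = P' U give different P, P'.
import numpy as np
rng = np.random.default_rng(4)

A = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))  # generic Kraus operator
W_, S_, Vh = np.linalg.svd(A)
U = W_ @ Vh                                  # unitary part (same for both)
P_right = Vh.conj().T @ np.diag(S_) @ Vh     # A = U @ P_right
P_left = W_ @ np.diag(S_) @ W_.conj().T      # A = P_left @ U
print(np.allclose(A, U @ P_right), np.allclose(A, P_left @ U))  # True True
print(f"||P_right - P_left|| = {np.linalg.norm(P_right - P_left):.3f}")  # nonzero
```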
Despite being seemingly appealing, these definitions involve two problems. First of all, the author was not able to show that the stochastic entropy production during the control step,

Σ^ctrl(r_n) = S_S(t_n^+, r_n) − S_S(t_n^−, r_n) − βQ_S^ctrl(r_n),   (107)

where the thermodynamic entropy S_S(t, r_n) is defined in Eq. (66), is on average positive. Second, and more importantly, if we use the alternative polar decomposition A(r_n) = P'(r_n)U(r_n), with P'(r_n) ≠ P(r_n) in general, we obtain a different set of definitions for the heat and work exchanged, and in general there is no way to judge which polar decomposition is more meaningful. Thus, finding a thermodynamic framework based solely on the control operations seems to involve an unwanted amount of ambiguity even for efficient operations [not to mention the inefficient case where Eq. (4) needs to be decomposed]. In contrast, modeling the control operations with the help of an external stream of units removes the ambiguity. Here, the projective measurement P(r_n|r^{n−1}) must necessarily act on the unit after the unitary operation V(r^{n−1}), which first of all correlates the system and the unit. Of course, one could wonder why there is no additional unitary acting after the projective measurement, but this could indeed easily be taken into account by an additional control operation. Thus, by using the framework of repeated interactions we arrive at a meaningful decomposition of arbitrary measurement processes.

B. Quantum stochastic thermodynamics without theory input

To set up our framework of quantum stochastic thermodynamics, we needed to be able to know the work (20) and heat (21) exchanged with the bath in between two control operations. Those are path dependent quantities [i.e., they are not determined by ρ_S(t_n^±, r^n) alone] and estimating them requires additional theoretical input. Therefore, one might argue that we do not have a 'complete' quantum stochastic thermodynamics framework in the sense that not all thermodynamic quantities are determined by measurements (or, respectively, the process tensor) alone. On the other hand, by keeping the definitions (20) and (21), we were able to present a framework which includes the ensemble averaged description as a limiting case. Nevertheless, we here discuss two possible ways to avoid the use of any theory input.

Without changing any of our general conclusions, one way would be to consider only a specific subset of control protocols λ_t. These control protocols consist of a sudden switch of the Hamiltonian after each control operation, i.e., the protocol changes instantaneously from λ_{n−1} to λ_n at time t_n^+, and after the switch we keep the protocol constant for the rest of the time until the next control operation. Note that the protocol is still allowed to depend on r^n, which we have suppressed for notational convenience. Thus, in short we can write that λ_t(r^{n−1}) = λ_{n−1}(r^{n−1}) if t ∈ (t_{n−1}, t_n]. Those sets of control protocols are characterized by the fact that the work (20) and heat (21) can be computed without any knowledge about the system state in between two control operations: the work in the open interval vanishes, W_S^(n)(r^{n−1}) = 0, and the heat follows from the tomographically known boundary states alone,

Q_S^(n)(r^{n−1}) = tr_S{H_S(λ_{n−1}, r^{n−1})[ρ_S(t_n^−; r^{n−1}) − ρ_S(t_{n−1}^+; r^{n−1})]},

as illustrated in the short sketch below.
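A minimal sketch of this sudden-switch route (hypothetical tomographic data):

```python
# Sketch: heat of one interval from the two boundary states only.
import numpy as np
H_prev = np.diag([0.0, 1.0])                 # H(lambda_{n-1}), constant in the interval
rho_start = np.diag([0.7, 0.3])              # reconstructed rho_S(t_{n-1}^+)
rho_end = np.diag([0.8, 0.2])                # reconstructed rho_S(t_n^-)
Q = np.trace(H_prev @ (rho_end - rho_start)).real
print(f"Q = {Q:+.2f} (work in the interval is zero since lambda is constant)")
```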
Another way to approach this problem is to try to set up an effective thermodynamic description based solely on knowledge of the dynamical map M_n defined in Eq. (41). Note that the dynamical map can be inferred from knowledge of the process tensor. The problem with this approach comes from the fact that different physical situations (with different thermodynamic values for W_S^(n) and Q_S^(n)) can give rise to the same dynamical map M_n. Thus, if we pursue this second way, we will not be able to recover the results from Secs. V A and V B in general. Nevertheless, the author believes that it could be worthwhile to pursue this direction because the thermodynamic description of dynamical maps was already investigated before [91][92][93][94]. Especially for dynamical maps which have additional properties, such as being Gibbs-state-preserving, it should be possible to find a meaningful thermodynamic interpretation.

C. Time reversal symmetries and fluctuation theorems

An essential feature of conventional stochastic thermodynamics is the fact that the entropy production along a single trajectory can be linked to the probability of observing the time reversed trajectory, which allows a particularly elegant proof of the fluctuation theorem and precisely links the second law of thermodynamics to (the breaking of) time reversal symmetry [7,8,12,13,21,22]. In our context, this would mean that every trajectory r^n appearing in our 'forward' process has an associated twin trajectory r^n_† in a suitably chosen 'backward' process. Typically, r^n_† is just r^n observed backwards, starting with r_n as the first outcome and ending with r_1. The probabilities for the forward and backward process are denoted by p(r^n) and p_†(r^n_†). Then, provided that the operation † is an involution and that p_†(r^n_†) = 0 only if p(r^n) = 0, the following fluctuation theorem follows trivially:

⟨e^{−Σ̃(r^n)}⟩ = 1.

Furthermore, by defining the 'entropy production' Σ̃(r^n) = ln[p(r^n)/p_†(r^n_†)], also a 'second law' of the form ⟨Σ̃⟩ ≥ 0 follows straightforwardly. Unfortunately, outside the limit of traditional stochastic thermodynamics and the two-point measurement scheme, where forward and backward processes are linked by time reversal symmetry of the underlying Hamiltonian dynamics, the meaning of Σ̃(r^n) is quite obscure, as there is no unambiguous choice for the backward process. This already causes trouble for perfectly observed classical systems as soon as feedback control is considered, as it is not clear whether feedback should be performed in the backward process or not. Especially for time-delayed feedback control (see also Sec. VI E), certain choices can lead to acausal (non-physical) dynamics [95]. Even without feedback control, the definition of a time reversed general quantum operation bears much ambiguity. Different proposals have been put forward [96,97], and these or different definitions were used in Refs. [36-38, 44-47, 93]. In addition to this ambiguity, they also only apply to efficient quantum operations (see Sec. V C) and therefore cannot be used for our general purposes here. To conclude, the author believes that it must be possible to express any definition of entropy production solely in terms of forward quantities, not least because it is not even guaranteed that the backward process can be experimentally realized. For certain situations it is clearly beneficial to think in terms of a time reversed process. In the most general case, however, a precise definition of a backward process appears to be too ambiguous at the moment and the meaning of the so-deduced 'second law' remains a matter of debate.
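The formal character of this fluctuation theorem is apparent from the fact that it holds for any pair of forward and backward distributions with matching support, as the following illustrative snippet shows:

```python
# Check: <e^{-Sigma~}> = 1 for arbitrary matching-support distributions.
import numpy as np
rng = np.random.default_rng(5)

p_fwd = rng.random(6); p_fwd /= p_fwd.sum()   # p(r^n), hypothetical
p_bwd = rng.random(6); p_bwd /= p_bwd.sum()   # p_dag(r^n_dag), hypothetical
sigma = np.log(p_fwd / p_bwd)                 # 'entropy production' per trajectory
print(f"<e^-Sigma> = {np.dot(p_fwd, np.exp(-sigma)):.6f}")   # = 1 exactly
print(f"<Sigma>    = {np.dot(p_fwd, sigma):.4f} >= 0 (Jensen)")
```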
D. The limit of continuous measurements

In our formalism we assumed the control operations to happen at a finite and discrete set of times {t_k}_{k=1}^n with t_n ≥ ... ≥ t_1. Some open quantum systems are, however, continuously monitored in time using weak measurements [62,63,66,[98][99][100]], which was in part already thermodynamically analysed in Refs. [42][43][44][45][46][47]. We here point out some general features of weak quantum measurements, how they are incorporated in our thermodynamic framework and how far this differs from the framework in Ref. [43]. We refrain from showing detailed calculations though, as many results are likely model specific.

We start by investigating the dynamics of a weakly measured quantum system. In absence of any measurement, we assume that the state of the system changes according to some quantum master equation, denoted in an incremental form by

ρ_S(t + dt) = ρ_S(t) + L_0 ρ_S(t) dt,

where L_0 is a generator in Lindblad-Gorini-Kossakowski-Sudarshan form. Such master equations follow, e.g., from the conventional weak-coupling approach of quantum thermodynamics [3][4][5][6]. Now, we decide to weakly measure an arbitrary system observable X at small time steps dt. As in Sec. V D we set t_n = t and t_{n−1} = t − dt. Furthermore, we follow the notation of Ref. [100] with the only difference that we denote the measurement outcome by r_n, which now has a continuous range. The measurement operators are

A(r_n) = (4k dt/π)^{1/4} exp[−2k dt (r_n − X)²],

where k denotes the measurement strength. After obtaining outcome r_n, the system state changes to (we only consider efficient measurements here)

ρ_S(t^+; r_n) = A(r_n)ρ_S(t^−; r^{n−1})A†(r_n)/p(r_n|r^{n−1}).

The probability to obtain outcome r_n given the previous measurement record r^{n−1} is

p(r_n|r^{n−1}) = tr_S{A†(r_n)A(r_n)ρ_S(t^−; r^{n−1})}.   (116)

This can be well approximated by [100]

p(r_n|r^{n−1}) ≈ (4k dt/π)^{1/2} exp[−4k dt (r_n − ⟨X⟩)²],   (117)

where ⟨X⟩ = tr_S{Xρ_S(t^−; r^{n−1})} is the conditional expectation value of X, which depends on r^{n−1}. Finally, in the limit dt → 0 one can model the dynamics of the system by a stochastic master equation of the form [100]

dρ_S = L_0 ρ_S dt − k[X, [X, ρ_S]] dt + √(2k)(Xρ_S + ρ_S X − 2⟨X⟩ρ_S) dW(t).

Here, dW(t) denotes a Wiener increment with variance dt, and we have kept all dependence on time and the measurement record r^n implicit. The sequence of outcomes r_n obeys the stochastic process r_n = ⟨X⟩ + (8k)^{−1/2} dW(t)/dt. We add that it is known how to obtain the limit of continuous measurements from the repeated interaction framework [66,99].

We now turn to the thermodynamic description, which will depend strongly on the dynamics without measurement, L_0, and the chosen observable X. Therefore, we point out only a few general remarks. First of all, as we are applying only efficient control operations, we can get rid of the units in the thermodynamic description by using the results from Sec. V C. In fact, this is also justified by the repeated interaction models studied in Refs. [66,99]. Then, according to our general framework from Sec. IV, by averaging the entropy production of the time interval (t_{n−1}, t_n] = (t − dt, t] over p(r_n|r^{n−1}), we get the always positive quantity Σ^(n](r^{n−1}) ≥ 0. [In view of our previous notation we need to set ρ_S(t) = ρ_S(t_n^+; r^n) and ρ_S(t − dt) = ρ_S(t_{n−1}^+; r^{n−1}).] A straightforward, but somewhat lengthy, calculation using Eq. (117) then shows that in the dt → 0 limit the entropy production Σ^(n](r^{n−1}) diverges. This makes perfect sense for the microscopic models considered in Refs. [66,99]: in every time step dt a fresh unit in a pure (zero entropy) state is consumed by writing a finite amount of information to the memory. Notice that the amount of information revealed about the system [the last term in Eq. (120)] is indeed infinitesimal.
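For illustration, the stochastic master equation above can be integrated with a simple Euler-Maruyama scheme (hypothetical parameters; measurement of X = σ_z on a qubit, with the bath term L_0 dropped for simplicity):

```python
# Sketch: continuous weak measurement of sigma_z; the state collapses
# towards a measurement eigenstate and the record tracks <X>.
import numpy as np
rng = np.random.default_rng(6)

X = np.diag([1.0, -1.0]).astype(complex)
k, dt, steps = 1.0, 1e-4, 20000
rho = 0.5 * np.ones((2, 2), dtype=complex)   # |+><+|: no definite z value yet
record = []
for _ in range(steps):
    x = np.trace(X @ rho).real
    dW = rng.normal(scale=np.sqrt(dt))
    comm = X @ (X @ rho - rho @ X) - (X @ rho - rho @ X) @ X   # [X,[X,rho]]
    rho = rho - k * comm * dt + np.sqrt(2 * k) * (X @ rho + rho @ X - 2 * x * rho) * dW
    rho = 0.5 * (rho + rho.conj().T)         # keep Hermitian against roundoff
    record.append(x + dW / np.sqrt(8 * k) / dt)   # measurement record r_n
print(f"final <X> = {np.trace(X @ rho).real:+.3f} (collapses towards +1 or -1)")
print(f"noisy record tail mean = {np.mean(record[-2000:]):+.3f}")
```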
Finally, we remark that, as long as we do not measure the energy of the system, the measurement will have an impact on the energetic balance, i.e., W_S^ctrl is in general non-zero on average. This is in contrast to Ref. [43], where Eq. (118) is split into a unitary and a non-unitary part and changes of the energy according to the former are identified as work and according to the latter as heat. Despite the fact that the splitting of a master equation into a unitary and a non-unitary part is not unique, it is important to note that part of the heat identified in Ref. [43] would be identified as work in our framework. Finally, entropy and entropy production were not studied in Ref. [43].

E. Time-delayed feedback control

Every feedback control operation involves some time delay, simply due to the fact that it takes time to process the signal. Often, however, one takes the 'Markovian' limit by assuming that the time delay is negligible [62]. This does not only simplify the computation significantly; for many controlled systems it is also important to react as fast as possible. However, there are also many systems in nature which have some intrinsic (quasi-)periodicity in their dynamics (a simple example is a weakly damped harmonic oscillator), and for them it could be beneficial to wait with the control operation until the system has returned (close) to its original state. Unfortunately, the thermodynamic description of systems subjected to time-delayed feedback control is very difficult and has been achieved only for certain models [95,[101][102][103][104]]. With the present approach, which can take into account arbitrary control operations triggered with or without time delay, we hope to be able to understand the thermodynamics of time-delayed feedback control in general. One important step towards achieving this goal would be to understand the continuous measurement limit better (see Secs. V D and VI D), because the systems investigated in Refs. [95,[101][102][103][104]] are continuously monitored and controlled. A detailed investigation of such scenarios is, however, beyond the scope of the present paper.

F. Multiple heat reservoirs

An open problem, partially even in classical stochastic thermodynamics, is the treatment of multiple reservoirs interacting simultaneously with the system. Already the average description in the weak coupling regime can bear surprising difficulties [105]. Furthermore, also in classical stochastic thermodynamics (cf. Sec. V D) it can happen that multiple reservoirs induce transitions between the same pair of states s and s'. Using the notation of Sec. V D, the transition rate typically splits additively into multiple contributions, W_{s,s'} = Σ_ν W_{s,s'}^(ν), where each W_{s,s'}^(ν) describes a transition from s' to s triggered by the reservoir ν. For a complete thermodynamic description it is necessary to be able to resolve the contributions from the different reservoirs [106], but if we only observe a transition from state s' to s, there is no way to judge whether the jump was caused by reservoir ν or ν'. Classically, a way out of this dilemma is to assume that each transition W_{s,s'} can only be caused by a single reservoir ν, for instance by geometrically separating the system into subsystems, where each subsystem interacts only with one reservoir. This is indeed what happens in transport experiments through quantum dots [28,29], which always use a double quantum dot as a system where each dot couples to only one reservoir.
Quantum mechanically, it seems in general harder to separate the effect of each reservoir dynamically. At least within the standard approach based on a Born-Markov-secular approximation [3][4][5][6], the system jumps between energy eigenstates of the composite system, which are in general entangled. On the other hand, it was recently also argued that a 'local' approach to the dynamics (where each dissipator in a quantum system acts only on a specific subsystem) is feasible from a thermodynamic point of view [74,107]. If that is the case, it should in principle be possible to apply our framework to a situation with multiple reservoirs. Investigations in that direction are left for the future.

G. Strongly coupled and non-Markovian systems

So far we have focused our description on weakly coupled and Markovian systems, which still constitute by far the most often investigated systems in quantum thermodynamics. Nevertheless, many systems are not weakly coupled to a bath and can behave in a strongly non-Markovian way. Even the averaged thermodynamic description of such systems is currently only known in certain limiting cases. Most progress has probably been achieved for a classical system coupled to a single heat reservoir [108][109][110][111][112][113]. Based on redefined thermodynamic quantities for internal energy, heat and entropy [the definition of work indeed remains the same as in Eq. (20)], it was possible to derive fluctuation theorems for the dissipated work [108,110] and the entropy production [109,111,112] as well as a valid first and second law of thermodynamics [109][110][111][112][113].

Provided that our control operations do not disturb the system and provided that we do not perform any feedback control (we still allow for incomplete measurements such that the system state after the control operation might not be pure), the formalism developed here can also be applied to the strong coupling situation. The only difference would be that the entropy production (40) in between two measurements during the n'th interval might not be positive (not even on average). However, what will always be positive is the summed up entropy production Σ_{ℓ=1}^n Σ^(ℓ] ≥ 0. We remark that the question how far a negative entropy production Σ^(ℓ] < 0 is linked to non-Markovian dynamics was answered in Ref. [113].

At the moment, it is unclear whether the same definitions (with the modifications for internal energy, heat and entropy as in Refs. [109,[111][112][113]]) can also be applied to the situation where the control operations disturb the system or where we perform feedback control. The difficulty in answering this question is rooted in the fact that the strong coupling framework assumes a specific initial state of the bath (which is a conditionally equilibrated state, see Refs. [109,[111][112][113]] for more details). While it might be reasonable to assume such an initial state at time t_0, this assumption will in general not be fulfilled at later times t_n. If we do not disturb the average dynamics, this does not matter, but if we start to perform arbitrary control operations, it does. A way to avoid this problem is to enlarge the system space by including only those modes of the reservoir which have the strongest influence on the system. Such a strategy was directly or indirectly proposed in Refs. [112,[114][115][116][117][118][119][120][121]]. A simple example for this procedure could be an atom in a high quality cavity.
Suppose that we are primarily interested in the (thermo)dynamics of the atom, which we assume to be able to manipulate, e.g., by external laser fields. Unfortunately, the atom will in general be correlated with the cavity field due to a non-negligible coupling. Hence, it will not obey the laws of thermodynamics in the form we presupposed in Sec. IV A. On the other hand, the high-quality cavity is only weakly coupled to the outside modes of the electromagnetic field, such that the atom together with the cavity can be treated within the conventional framework of Sec. IV A. In general, whenever such an identification is possible (which can also be the case for quite abstract models [114-116,118-121]), one can try to apply our framework to an enlarged system, which contains the original subsystem of interest. All our definitions and derivations also hold in this situation. One important difference is, however, that it will be impossible to apply the framework without theory input (cf. Sec. VI B), because the additional degrees of freedom of the bath, which we now model explicitly, will in general be hard to probe experimentally. Further research in this direction therefore seems necessary before we eventually obtain a complete framework for quantum stochastic thermodynamics of non-Markovian systems.

unit 2 to be trivial. Thus, one is inclined to think that there should never be any energetic cost associated with the second control operation. However, from a subjective or Bayesian point of view of the external agent, it makes sense to associate an energetic cost to it, because the second measurement does not only reveal the state of the system S, but also the state of unit 1. If we add everything together, we obtain W_ctrl(t_1) + Q_ctrl(t_2, 0) = 0, (A13) which makes perfect sense on the trajectory level. Also note that the first law would be analogous, with Q_ctrl(t_2, r_2) simply replaced by Q_ctrl(t_1, r_1), if we had performed a projective measurement of unit 1 after the first control operation.
Electrical Properties of Self-Assembled Nano-Schottky Diodes

A bottom-up methodology to fabricate a nanostructured material from Au nanoclusters on a 6H-SiC surface is illustrated. Furthermore, a methodology to control its structural properties by thermally induced self-organization of the Au nanoclusters is demonstrated. To this aim, the self-organization kinetic mechanisms of Au nanoclusters on the SiC surface were experimentally studied by scanning electron microscopy, atomic force microscopy, and Rutherford backscattering spectrometry, and theoretically modelled by a ripening process. The fabricated nanostructured materials were used to probe, by local conductive atomic force microscopy analyses, the electrical properties of the nano-Schottky contact Au nanocluster/SiC. Strong efforts were dedicated to correlating the structural and electrical characteristics: the main observation was the dependence of the Schottky barrier height of the nano-Schottky contact on the cluster size. Such behavior was interpreted by considering the physics of few-electron quantum dots merged with the concepts of ballistic transport and thermionic emission, finding a satisfying agreement between the theoretical prediction and the experimental data. The fabricated Au nanocluster/SiC nanocontact is suggested as a prototype of a nano-Schottky diode integrable in complex nanoelectronic circuits.

INTRODUCTION

Understanding the effects of downscaling device dimensions to the nanometer range is one of the most important topics in modern materials science applied to microelectronics. In fact, the confinement of electrons in dimensions typical of atoms and molecules obliges us to consider their quantum behavior. Therefore, a new class of effects characterizes ultrascaled devices. In recent years, these ideas led to the birth of the "nanotechnology and nanoelectronics revolution" [1-4], with the aim of understanding the effects of downscaling matter to the atomic range and of developing innovative nanostructured materials and quantum-effect-based devices [1-4], following a bottom-up procedure with respect to the traditional top-down scaling scheme.

In particular, nanometric-level knowledge of the structural characteristics of such innovative materials, and the nanometric control and manipulation of these characteristics, have acquired fundamental importance in the design and realization of innovative electrical nanodevices. In fact, it is well known that the local electrical characteristics of such devices depend dramatically on the local structural ones. Hence, precise control and manipulation (at the atomic level) of the structural characteristics allow precise control and manipulation of the electrical ones, which are invariably novel with respect to those of traditional devices.

A promising topic of nanotechnology research is, surely, the study of the structural and electrical properties of nanometric metal clusters deposited on, or embedded in, semiconductor/insulating substrates, in view of the realization of nanostructured materials with electrical properties dependent on and tuned by the structural ones (cluster size, density, etc.) [5].
We developed a methodology to control and manipulate the cluster structural properties, based on the self-organization mechanism of the Au nanoclusters (NCs) on the SiC surface induced by thermal processes. The Au clustering is shown to be a ripening process of three-dimensional structures controlled by surface diffusion, and the application of ripening theory enabled us to derive the surface diffusion coefficient and all the other parameters necessary to describe the entire process, so that we achieved control over the size, size distribution, cluster distance distribution, and surface fraction of area covered by the clusters simply by controlling the process parameters.

We suggest applying the self-organization of Au NCs as a nanotechnology step to fabricate innovative nanostructured devices. As the main example, we studied, by the conductive atomic force microscopy (C-AFM) technique, the local electrical properties of the nanometric Au NC/SiC substrate system. As expected, the main result was the strong dependence of the electrical properties on the cluster size, density, and fraction of covered area. In particular, we observed the dependence of the Schottky barrier height of the Au NC/SiC nanocontact on the cluster size. Furthermore, we propose a model to interpret such behavior.

EXPERIMENT

6H-SiC substrates (previously etched in 10% aqueous HF solution to remove the native oxide) were used. A set of substrates was covered by a 2 nm (nominal) thick Au layer sputtered using an Emitech K550x sputter coater (Ar plasma, 10^-6 mbar). The samples so obtained were named "as-deposited samples." Some as-deposited samples were then annealed in Ar at different temperatures (873 K ÷ 1073 K) for several times (5 minutes ÷ 60 minutes) and analyzed by Rutherford backscattering spectrometry (RBS), atomic force microscopy (AFM), and scanning electron microscopy (SEM).

The RBS analyses were performed using 2 MeV 4He+ backscattered ions at 165°. The AFM analyses were performed using a Digital Instruments Dimension 3100 microscope in high-amplitude mode; ultrasharpened Si tips were used and substituted as soon as a loss of resolution was observed during the acquisition. The AFM images were analyzed using the Nanoscope III software. The SEM analyses were performed with a Zeiss FEG-SEM Supra 25 microscope, and the SEM images were analyzed using the Gatan Digital Micrograph software. The local transversal current-voltage (I-V_tip) analyses were carried out at room temperature using the Veeco DI 3100 AFM, in contact mode, equipped with the conductive-AFM (C-AFM) head and ultrasharpened diamond-coated Si tips. The conductive diamond coating is polycrystalline, and the effective tip diameter is ultimately set by the very small diamond grain (a few nanometers) placed at the apex of the tip. For each sample, 400 I-V acquisitions were performed at 400 different positions in a matrix of 20 × 20 points with a step of 500 nm.
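As an illustration of the image-analysis step, the following minimal Python sketch estimates cluster areas and equivalent radii from a height map by sectioning with a plane at half the cluster height, as done for the size distributions below; the synthetic map, the pixel size, and the use of a single global half-height plane are simplifying assumptions, not the actual Nanoscope/Gatan procedures.

```python
import numpy as np
from scipy import ndimage

# Sketch: estimate nanocluster radii from an AFM-like height map by
# sectioning with a plane at half the cluster height. A synthetic map of
# Gaussian bumps stands in for real AFM data; px is an assumed pixel size.
rng = np.random.default_rng(1)
x, y = np.meshgrid(np.arange(256), np.arange(256))
height = np.zeros((256, 256))
for cx, cy in rng.integers(20, 236, size=(30, 2)):
    height += 3.0 * np.exp(-((x - cx) ** 2 + (y - cy) ** 2) / 18.0)

px = 2.0                                    # nm per pixel (assumed)
mask = height > 0.5 * height.max()          # global half-height plane (simplified)
labels, n = ndimage.label(mask)             # one label per cluster
areas = ndimage.sum(mask, labels, index=np.arange(1, n + 1)) * px**2
radii = np.sqrt(areas / np.pi)              # equivalent-disk radii, nm
print(f"{n} clusters, mean radius {radii.mean():.1f} nm")
```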
Self-organization of Au nanoclusters on hexagonal SiC surface

The RBS analyses allowed us to determine, in particular, the Au atomic concentration Q in the samples; it gave the same result (within a statistical error of 5%) for all the samples (as-deposited and annealed), namely Q = 4.5 × 10^15 Au/cm^2. So we can conclude that no Au loss occurs during the thermal treatments for any of our samples (out-diffusion, evaporation, reaction with C and/or Si). The change in morphology was followed by AFM. Although a tip-cluster deconvolution was considered, the surface morphology can present some artifacts deriving from the tip-NC interaction (not allowing an accurate determination of NC shape and dimension). So, for supplementary accuracy, we compared the information acquired by AFM with the NC images obtained by SEM. From the AFM and SEM images, the NC size distributions and the distributions of center-to-center distances between nearest NCs were determined using software that defines each NC area by sectioning the surface image with a plane positioned at half the NC height. The results obtained by the AFM and SEM analyses are in good agreement (the respective results are identical within the statistical error). So, the AFM and SEM analyses were crossed to derive the NC size distributions and the center-to-center NC distance distributions.

As an example, Figures 1(a) and 1(b) show representative AFM and SEM images for the as-deposited sample, and Figures 1(c) and 1(d) for the 1073 K-60 minutes sample.

In particular, the mean NC radius R(t) was derived, for each examined annealing temperature and time, from the obtained NC size distribution. Furthermore, if V(T, t) = (4/3)πR^3 is the mean volume of the NCs for each annealing temperature T and annealing time t, with Ω = 1.69 × 10^-29 m^3 the Au atomic volume [6], then n = V/Ω (supposing a 100% packing density) is the mean number of Au atoms forming a NC. Therefore, N_s(T, t) = Q/n is the mean number of NCs per unit area and F(T, t) = πR^2(T, t)N_s(T, t) the fraction of area covered by the NCs. The obtained experimental F(T, t) is shown (dots) in Figure 2(b).

In the following, we briefly recall the kinetic growth evolution of the NCs according to the coarsening (or ripening) model, to explain the observed self-organization mechanism of the Au NCs on the SiC surface.

At any stage during coarsening there is a so-called critical particle radius R*, which is in equilibrium with the mean matrix composition; particles with R > R* will grow and particles with R < R* will shrink [7]. Many coarsening theories are presented in the literature [7-10]. In particular, the concepts that we report here (which are directly connected with our experimental situations and data) are primarily based on the review work of Baldan [7], the theoretical work of Lifshitz-Slyozov-Wagner [8,9] (LSW theory), and that of Zinke-Allmang, Feldman, and Grabow [10].

The aim of the mathematical modeling of the ripening process of particles dispersed (Au in our case) in/on a matrix is to calculate the growth rate of an individual particle. Despite the particular differences deriving from the boundary conditions of the particular case examined, the general theory of the ripening process based on the LSW ideas gives the same result for the asymptotic temporal evolution of the system, summarized as follows [10]: the mean particle radius R evolves as a function of time (for sufficiently long times,
i.e., in the stationary state) according to

R^n(t) − R_0^n = K* t, (1)

with R_0 the radius of the particle at time t = 0 and K* an appropriate constant depending on the diffusion coefficient. In particular, the fundamental formulations of the ripening theory differ in the value of n: n = 2 (2D/2D case) for the growth of two-dimensional (2D) particles on a surface (2D); n = 3 (3D/3D case) for the growth of three-dimensional (3D) particles embedded in a bulk matrix (3D); n = 4 (3D/2D case) for the growth of three-dimensional (3D) particles on a surface (2D). Au on SiC has a strongly non-wetting nature [11], its adhesion energy on SiC, E_adh(Au/SiC) = 445 mJ/m^2 [11], being much lower than the Au surface energy γ_Au = 1500 mJ/m^2 [12], so the Au NCs grow on SiC as 3D structures.

According to these considerations, if the Au clustering on the SiC surface in the examined temperature range is guided by a ripening process of 3D structures limited by diffusion, the temporal variation of the mean NC radius R should be regulated by (1) with n = 4; this is demonstrated by the data reported in Figure 2(a). In that figure the experimental data R^4 − R_0^4 (dots) are reported for all the investigated temperatures, and the lines (continuous, dashed, and dotted) represent the theoretical fits by (1) (using n = 4), with K* the fit parameter. The good agreement between the experimental data and the fits is evident.

Therefore, in the assessed growth mode, the mean radius of the Au NCs on SiC in the examined temperature and time ranges increases with time as indicated by (1) with n = 4, with K* defined, in terms of the surface diffusion coefficient, by Eq. (2) of Ref. [13]. The fits, performed at the three examined annealing temperatures, allowed us to determine K*(T). Moreover, inversion of (2) allowed us to determine the diffusion coefficient of Au on SiC: D_s(873 K) = 1.67 × 10^-15 cm^2/s, D_s(973 K) = 3.52 × 10^-15 cm^2/s, D_s(1073 K) = 6.58 × 10^-15 cm^2/s. Such values are consistent with an Arrhenius behavior D_s(T) = D_0 exp(−E_a/k_B T), as predicted for a thermally activated diffusion process [14]. The fit of the experimental D_s(T) with this activated form allows us to obtain the pre-exponential factor D_0 = (2.6 × 10^-12 ± 1.6 × 10^-13) cm^2/s and the activation energy E_a = (0.55 ± 0.01) eV/atom. Furthermore, the exposed model allows simulating the F(t) behavior: in Figure 2(b) the continuous lines represent the prediction of the model, and the good agreement with the experimental data is evident.

Electrical properties of the Au nanoclusters/SiC contacts

We studied the electrical properties of the nano-Schottky contacts, observing their dependence on the NC size and on the fraction of area covered by the clusters. Hence, by opportune annealing processes we are able to control the structural properties of the fabricated nanostructured materials and, as a consequence, the electrical properties of nanodevices based on such systems. According to Giannazzo et al.
[15], a biased C-AFM tip in contact with a continuous ultrathin metal film on a semiconductor forms a nano-Schottky diode due to the nanometric localization of the current across the metal-semiconductor (MS) interface. In our case of a discontinuous film, for each tip position on the sample surface, the typical rectifying Schottky-contact I-V characteristics were found, with the threshold voltage (correlated with the Schottky barrier height (SBH)) depending on the tip position. As an example, Figure 3 compares the characteristics recorded on bare SiC (Figure 3(a)) and on SiC covered with Au NCs of different sizes (Figures 3(b), 3(c)). Each I-V_tip curve is typical of thermionic emission [16], and in the reference sample the I-V_tip characteristics belong to a unique family that can be associated with the Schottky contact between the diamond tip and the 6H-SiC substrate. In the Au-covered samples, the I-V_tip curves split into two families: one corresponds to the diamond/6H-SiC Schottky contact (area not covered by Au) and the second corresponds to the Au NC/SiC Schottky contact. The second family shifts towards higher voltage when the mean NC size increases. To determine the SBH, the current-onset region of each I-V curve was fitted with a parabolic function and the SBH value was determined as the parabola vertex derived from the fit, as described in [13]. In this way the SBH spatial distribution for each sample was obtained; the normalized distribution for the reference sample (sample without Au clusters) is reported in Figure 4(a). The SBH distribution is peaked at (1.24 ± 0.02) eV, with a broadening due to statistical fluctuations. This measured value of 1.24 eV is associated with the SBH of the diamond-tip/6H-SiC Schottky contact. For the samples with Au clusters on the surface, with increasing mean cluster size the SBH distributions exhibit a bimodal shape (see Figures 4(b), 4(c), 4(d)), with two broad peaks fitted by two Gaussian curves. For all the samples, the first peak is centred at 1.24 eV, that is, the value of the diamond-tip/6H-SiC SBH. Hence, the presence of this first peak can be associated with the surface regions in which the diamond tip is directly in contact with the 6H-SiC substrate. Interestingly, the position of the second peak changes with the average NC dimension. The SBH values in the histograms around the second peak can be associated with direct contact with a single Au NC, that is, with a single Au NC/6H-SiC nano-Schottky diode. In fact, according to the tip shape used and to the average cluster-distance/cluster-dimension ratio (and the step of 500 nm between points), it is quite unlikely that more than one cluster could be contacted simultaneously by the tip. Hence, when the tip is in contact with a single NC, the nearest NCs can contribute to the total current only by a tunnel component through the air, which is negligible, being, according to realistic estimations, at least two orders of magnitude smaller than the current due to the direct tip-NC-substrate contact. Moreover, this hypothesis is supported by the fact that the fraction of I-V_tip curves belonging to the family in which the tip is in contact with Au corresponds to the fraction of area covered by Au NCs for that sample (Figure 5), as measured by the structural analyses.
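As an illustration of the SBH extraction just described, this minimal Python sketch fits the current-onset region of a single I-V curve with a parabola and takes the vertex voltage as the barrier estimate; the curve and the onset-selection criterion are placeholders, not the authors' data or code.

```python
import numpy as np

# Sketch of the SBH extraction: fit the current-onset region of an I-V
# curve with a parabola and take the vertex voltage as the barrier
# estimate (e * V_vertex in eV). V, I are a synthetic placeholder curve.
V = np.linspace(0.8, 2.2, 200)                       # volts
I = np.where(V > 1.3, 5e-9 * (V - 1.3) ** 2, 0.0)    # synthetic onset at 1.3 V

onset = (I > 0) & (I < 0.2 * I.max())                # restrict to the onset
a, b, c = np.polyfit(V[onset], I[onset], 2)          # parabolic fit
V_vertex = -b / (2 * a)                              # vertex of the parabola
print(f"estimated barrier ~ {V_vertex:.2f} eV")
```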
The data of Figure 4 demonstrate the dependence of the SBH on the NC size. As an example, in the as-deposited sample, the average cluster size is ∼1.46 nm with a Gaussian distribution (full width at half maximum σ = 0.03 nm). Correspondingly, Figure 4(b) shows that most of the NC/6H-SiC contacts present an SBH around 1.35 eV, while only a few contacts present smaller or larger SBH. Similarly, in the other samples the peak corresponding to the tip-Au-substrate contact is centred on the SBH value corresponding to the mean Au cluster size, with a dispersion due to the spread of cluster dimensions. Accordingly, we associated with each sample a unique SBH corresponding to the mean cluster size (the peak at higher SBH in Figure 4), and the error bar on each SBH value was evaluated from the σ of each histogram in Figure 4. In Figure 6, the evaluated SBH (dots) is reported as a function of the mean NC size. It increases with increasing average NC size, tending asymptotically to the ideal SBH value of a continuous Au film/SiC contact (∼1.9 eV) [17]. This latter evidence indicates that the larger Au NCs (>7 nm) on SiC approach the behavior of the Schottky barriers formed by continuous Au films.

We based our interpretation of the SBH dependence on the NC size on the thermionic transport theory through the MS barrier, coupled with the concept of ballistic transport and the constant interaction (CI) model for electron transport in few-electron quantum dots [18]. The process is schematized in the inset of Figure 6. For a forward (positively) biased tip, an additional electron from the substrate overcomes the SBH by thermionic emission and falls onto the lowest unoccupied energy level μ(N + 1) within the 3D box containing N electrons (the NC). As the electron mean free path λ_e for the considered electron kinetic energy in the Au NCs ranges between 10 and 20 nm [19] (i.e., λ_e is larger than the average cluster dimension), the electron moves ballistically within the Au dot and is collected by the tip, which is in ohmic contact with the Au grain. Hence, the Au NC/SiC SBH is given by Φ_B(N) = Φ_B0 − Δμ(N), with Δμ(N) the energetic distance between E_F and μ(N + 1) (inset of Figure 6) and Φ_B0 the Schottky barrier height as defined for the macroscopic contact. According to the CI model of few-electron quantum dots, Δμ(N) = (e^2/C) + ΔE, where E_c = e^2/C is the electrostatic energy ("charging energy") necessary to add or subtract one electron to the dot, taking into account the Coulomb interactions of that electron with all the other electrons, in and outside the dot. We derived the capacitance C as C = C_0 + n_s C_c, where C_0 = 2πε_r ε_0 L is the self-capacitance of the dot, that is, the capacitance of a sphere of diameter L embedded in a dielectric of constant ε_r, and C_c ≈ (π^3 ε_r ε_0 L^2)/[4(s + L)] is an approximate expression for the coupling capacitance between two nearest clusters, described as two spheres with the same diameter L at a center-to-center distance s + L (s is the surface-surface distance between the two clusters). Finally, n_s is the number of nearest-neighbour clusters, whose analytical expression is derived from the results exposed in Section 3.1. According to those results, the cluster surface density is expressed by N_s(L) = 3QΩ/(4πL^3), where Q is the Au atomic surface concentration. In a random distribution of the clusters on the surface, a circular symmetry around each fixed cluster can be assumed, so that n_s = 2π(s + L)^2 N_s. So:

Φ_B(L) = Φ_B0 − e^2/[C_0(L) + n_s C_c(L)] − ΔE(L). (3)

Clearly, Φ_B(L) tends to saturate to Φ_B0 for
sufficiently large L values; that is, the behavior of large NCs approaches that of the bulk material. From (3) it is evident that, with decreasing L, the barrier height Φ_B becomes zero at a certain value L*, which in our case is L* = 0.85 nm. For L < L*, Φ_B would be negative. Since Φ_B(L) = Φ_m − χ − Δμ(L) = Φ_m,eff(L) − χ, a negative Φ_B value is equivalent to an effective metal work function Φ_m,eff(L) lower than the electron affinity χ of the semiconductor. In such a situation, the Au NC/6H-SiC contact would no longer be a Schottky contact but an ohmic one, and the transport mechanism at the interface would change accordingly. Although the developed model is valid for T = 0, it gives a good approximation even at room temperature (our experimental case), for which (e^2/C) ≫ kT. Comparing the theoretical prediction of (3) with the experimental data of Figure 6, using the values Φ_B0 = 1.85 eV, ε_r = 1, Q = 4.5 × 10^15 cm^-2 (from the RBS analyses), and Ω = 1.69 × 10^-29 m^3, the continuous curve reported in Figure 6 is obtained. The only free fitting parameter was m*. The best agreement was obtained for m* = 0.08 m. For NCs with sizes larger than 10 nm, it is evident that Δμ(L) becomes negligible. As a consequence, NCs with sizes larger than 10 nm acquire bulk properties. This is confirmed by other experimental evidence, such as the dependence of the Au NC melting point and structural properties on size [20].

CONCLUSION

The possibility of controlling and modeling the size and size distribution of Au NCs deposited on a SiC surface by process parameters such as thermal treatments has been demonstrated.

The clustering kinetic process and the surface diffusion of Au on SiC substrates were experimentally characterized by Rutherford backscattering spectrometry, scanning electron microscopy, and atomic force microscopy. The evolution kinetics was interpreted by classical models involving surface-diffusion-limited ripening of spherical three-dimensional clusters on a substrate. From the mass-transfer surface diffusion coefficients of gold on hexagonal SiC and SiO2 surfaces, determined in the 873 K ÷ 1073 K temperature range, an activation energy of (0.55 ± 0.01) eV/atom was obtained. Knowledge of the details of the self-organization mechanisms of Au NCs on SiC allowed us to fabricate nano-Schottky diodes whose electrical properties are tunable through the parameters characterizing these mechanisms.

Figure 1: (a) AFM image of the Au clusters as-deposited on the 6H-SiC substrate, (b) SEM image of the same sample, (c) AFM image of the Au clusters deposited on the 6H-SiC substrate and annealed at 1073 K for 60 minutes, (d) SEM image of the same sample.

Figure 2: (a) Experimental R^4 − R_0^4 (black dots) as a function of annealing time for each fixed temperature and the relative linear fits (continuous lines), with K* the fit parameter. (b) Experimental (dots) fraction F (normalized) of surface area covered by clusters as a function of annealing time and relative theoretical simulation (curves).

Figure 3: I-V_tip curves measured (by C-AFM) on SiC covered with Au nanoclusters of different sizes.

Figure 5: Comparison between the data concerning the (normalized) fraction of area covered by the Au clusters derived from the structural analyses (square dots) and from the electrical ones (circular dots).
Figure 6: Experimental values (dots) of the SBH as a function of the mean cluster size and theoretical predictions for Φ_B0 = 1.85 eV with m* = 0.08 m and m* = m. The inset shows the considered band diagram of the (AFM) tip-cluster-SiC substrate system.
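For reference, the following minimal Python sketch evaluates the size dependence of the barrier in Eq. (3), keeping only the charging-energy term e^2/C(L); the level-spacing term ΔE is omitted because its explicit form is not reproduced above, and the inter-cluster spacing s is an assumed value, so the output is purely illustrative.

```python
import numpy as np

# Sketch of Eq. (3) with the charging-energy term only (Delta_E omitted;
# its explicit form is not reproduced in the text). Constants are those
# quoted above; s is an assumed surface-surface cluster distance.
e = 1.602e-19            # C
eps0 = 8.854e-12         # F/m
eps_r = 1.0
Q = 4.5e15 * 1e4         # Au atoms per m^2 (4.5e15 cm^-2)
Omega = 1.69e-29         # m^3
phi_B0 = 1.85            # eV
s = 2e-9                 # assumed spacing, m

L = np.linspace(1e-9, 10e-9, 50)                  # cluster size, m
N_s = 3 * Q * Omega / (4 * np.pi * L**3)          # cluster surface density
n_s = 2 * np.pi * (s + L) ** 2 * N_s              # nearest neighbours
C0 = 2 * np.pi * eps_r * eps0 * L                 # self-capacitance
Cc = np.pi**3 * eps_r * eps0 * L**2 / (4 * (s + L))
phi_B = phi_B0 - e / (C0 + n_s * Cc)              # e^2/C expressed in eV
for Li, p in zip(L[::10], phi_B[::10]):
    print(f"L = {Li * 1e9:4.1f} nm  ->  Phi_B ~ {p:5.2f} eV")
```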
Electrochemical & optical characterisation of passive films on stainless steels

The formation and breakdown of the passive film are mainly controlled by ionic and electronic transport processes; processes that are in turn controlled by the electronic properties of the film. Consequently, a comprehensive understanding of the mechanisms behind passivity and localised corrosion requires a detailed perception of the electronic properties of the passive films, together with compositional and structural information. As a step towards this goal, the passive film on an austenitic stainless steel, AISI 316L, formed in borate solution was characterised by in situ Raman spectroscopy and photocurrent spectroscopy coupled with electrochemical measurements. The composition, structure and semiconductivity of the passive films depended on the potential: an Fe-rich n-type oxide and a Cr-rich p-type oxide dominated at more positive and more negative potentials, respectively, whilst an n-type dual-layered film formed at intermediate potentials. Analyses of the bandgaps determined for these oxides suggested their structures to be Fe2O3 and an Fe-Cr spinel. This hypothesis was supported by the results of in situ Raman spectroscopy.

1. Introduction

The corrosion resistance of stainless steel arises from a "passive" chromium-rich oxide film that forms on the surface. Although extremely thin, less than 5 nm, this protective film is strongly adherent and chemically stable. Nevertheless, in certain environments, especially ones containing chloride, this oxide film will break down and rapid corrosion will ensue. One of the principal factors thought to control the behaviour of a passive film is its electronic properties. The formation and breakdown of the passive film layer are mainly controlled by ionic transport reactions and electronic transport reactions. Both of these are controlled by the energetics of the metal/film and film/electrolyte interfaces and the electronic properties of the passive film. Consequently, it is indispensable to comprehend the electronic properties of the film before the mechanisms behind localised corrosion processes can be fully understood. The advent of several optical spectroscopic techniques has increased the potential for examining the characteristics of the film with greater accuracy. In particular, Di Quarto et al. have developed empirical relationships, based on anion and cation electronegativities, which allow the composition of simple oxides and hydroxides to be related to experimentally determined bandgaps [1]. The approach in this paper is to use in situ photocurrent spectroscopy to determine the bandgaps of the oxides in the passive film on 316L stainless steel, use the theory of Di Quarto et al. to propose compositions, and then support these claims with in situ Raman spectroscopy. Thus an in-depth understanding of the electronic properties of the passive film, together with its structure and composition, will be revealed.

Experimental

The specimens were fabricated from 316L stainless steel, the nominal composition of which is shown in Table 1. The specimens were ground consecutively with 600 grit and 1200 grit SiC paper and were further polished with alumina polishing powder down to 0.1 micron. The samples were then cleaned in deionised water and degreased in ethanol. The electrolyte was 0.1 M Na2B4O7·10H2O (pH 9.2), prepared in deionised water using reagent-grade chemicals.
Before any experiment was carried out, the electrolyte was deoxygenated with high-purity nitrogen gas for one hour. A saturated calomel reference electrode was used, and all potentials quoted in this paper are versus this system. The counter electrode was a platinum grid.

Photoelectrochemical measurements were performed using a 300 W xenon lamp and a monochromator. The photocurrents were generated by focusing the light with a fused silica lens through a quartz window of the electrochemical cell onto the working electrode. The lock-in amplifier technique was used to separate the photocurrent from the passive current by chopping the light at 29 Hz. The measuring procedure was as follows: first, the film was grown potentiodynamically at a rate of 10 mV/min from -900 mV (SCE) up to 800 mV (SCE); then photocurrent spectra were obtained as the potential was brought back in the negative direction, the potential being held constant during the course of each measurement. The wavelength of the light was changed in steps of 10 nm. The photocurrent was corrected for the output of the lamp and the efficiency of the monochromator using a calibrated photodiode. Calibration was performed before and after every set of experiments.

For in situ Raman spectroscopy, a specially designed electrochemical glass cell produced by Ventacon, with a quartz window to accommodate a three-electrode system, was used. The surface-enhanced Raman spectroscopy (SERS) technique was used to overcome the weak peak intensities that restrict detectability [2]. The silver deposition required for SERS was performed after polishing and cleaning, immediately prior to the Raman experimentation. A Dilor-Jobin HR 800 confocal Raman spectrometer with an argon-ion green laser (514.5 nm) operating at 40 mW was used. The spectral resolution of the Raman instrument was 2.5 cm^-1. Raman spectra were recorded at 100 mV intervals, and the observed peaks were compared with literature values [3-10] to identify the phases present.

Bandgap estimations

The simplified relationship between the photocurrent (I_ph) and the bandgap (E_g) of an amorphous passive film can be written in the form (I_ph hν)^(1/2) ∝ (hν − E_g) [11], where hν is the photon energy.

Region (c), from -450 mV to -900 mV: no photocurrent could be observed between -450 mV and -600 mV, indicating the possible location of the flat-band potential. Below -600 mV the photocurrent switched sign to become negative, signifying p-type semiconductivity. The single bandgap value obtained in this potential region was 2.9 ± 0.05 eV, identical to that found for the wide-bandgap n-type material formed in region (b). The types of semiconductivity (p or n) were also confirmed with photocurrent transients and Mott-Schottky plots.

In situ Raman results

Table 2 shows the phases identified by in situ Raman spectroscopy for the same potential ranges used for the photocurrent spectroscopy. Excellent agreement between the two techniques was obtained. The only discrepancy was that Raman also identified a Cr(VI) oxide in the transpassive region, which was not detected by the photocurrent measurements, indicating that it is not a semiconductor and is thus probably an insulator.

Based on all the data obtained from Raman spectroscopy, photocurrent measurements and the cyclic voltammograms, it is believed that the most likely oxide phases on the 316L stainless steel at different potentials are as shown in Table 3.
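To illustrate how a bandgap is read off such spectra, this minimal Python sketch builds a synthetic photocurrent spectrum obeying the relation above and recovers E_g by extrapolating the linear part of (I_ph hν)^(1/2) to zero; the spectrum and the linear-region selection rule are placeholders, not measured data.

```python
import numpy as np

# Sketch of the bandgap estimation: plot (I_ph * h_nu)^(1/2) against photon
# energy and extrapolate the linear region to zero; the intercept gives E_g.
# Synthetic placeholder spectrum with E_g = 2.9 eV.
h_nu = np.linspace(2.5, 4.0, 100)                    # photon energy, eV
I_ph = np.clip(h_nu - 2.9, 0, None) ** 2 / h_nu      # synthetic photocurrent

y = np.sqrt(I_ph * h_nu)                             # (I_ph * h_nu)^(1/2)
lin = y > 0.2 * y.max()                              # crude linear-region pick
slope, intercept = np.polyfit(h_nu[lin], y[lin], 1)  # linear fit
E_g = -intercept / slope                             # zero crossing
print(f"estimated bandgap: {E_g:.2f} eV")
```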
Conclusion

Photocurrent spectroscopy and in situ Raman spectroscopy have been used to reveal the nature and the potential dependence of the structure and the composition of the passive film formed on 316L stainless steel. It was found that in the passive region the film consists of two n-type oxides, most likely Fe2O3 and an Fe-Cr spinel thought to be Fe(II)[Cr(III)0.85Fe(III)0.15]2O4. At extremely negative potentials the Fe-Cr spinel is able to switch to p-type behaviour. In the transpassive region the spinel is oxidised to a Cr(VI) compound, whilst the Fe2O3 remains intact.

References
[10] Nyquist R A and Kagel R O 1997 The Handbook of Infrared and Raman Spectra of Inorganic Compounds and Organic Salts: Volume 4, Infrared Spectra of Inorganic Compounds (Academic Press, San Diego)
[11] Mott N F and Davis E A 1979 Electronic Processes in Non-Crystalline Materials, 2nd edition (Clarendon Press, Oxford)
Research on 3D MFL testing of wire rope based on empirical wavelet transform and SRCNN

Magnetic flux leakage (MFL) testing is one of the most effective methods for nondestructive testing of wire rope. However, traditional MFL testing devices have problems such as a low recognition rate, a single detection dimension, and fuzzy leakage images. Based on a non-saturated magnetic excitation 3D MFL testing device, this paper proposes a wavelet denoising method based on the empirical wavelet transform (EWT) to denoise the collected 3D MFL signal. After noise reduction, the signal-to-noise ratio (SNR) and root mean squared error (RMSE) in all three dimensions are improved. Color imaging technology is used to fuse the defect grayscale images into color images, and a Super-Resolution Convolutional Neural Network (SRCNN) is applied to the MFL images of broken wires. After SRCNN reconstruction, the resolution of the defect color images is improved. The color moment features of the defect color images are extracted as the input of an Elman neural network to quantitatively identify broken wires. Experimental results show that the noise reduction algorithm can effectively suppress the noise in all three dimensions, and that the broken-wire recognition rate after reconstruction is significantly improved, which verifies the effectiveness of SRCNN on wire rope MFL images.

Introduction

Wire rope has the advantages of high strength, good elasticity, and strong carrying capacity, and is widely used in coal mines, cable cars, and other fields. As a load-bearing component, a wire rope in service suffers fatigue, wear, or even fracture for various reasons. Its load-bearing capacity and remaining life are directly related to the safety of production equipment and personnel [1]. Therefore, it is very important to detect damage to the wire rope. Existing testing methods include ultrasonic, infrared, radiographic, and electromagnetic testing technologies. Among them, electromagnetic testing is widely used in nondestructive testing of wire rope due to its low cost and simple principle, and its suitability for wire ropes, which have complex structures but good magnetic permeability. Electromagnetic testing includes magnetic flux leakage (MFL) testing, eddy current testing, and metal magnetic memory (MMM) testing. The principle of MFL testing is to apply a magnetic field to the surface of the wire rope; when there are defects in the wire rope, the permeability at the defects changes and the magnetic field leaks into the air. A magnetic sensor detects the leakage field, and the collected signal is analyzed to determine whether the component has defects and then to quantitatively identify them [2].

In the design of MFL testing devices, common sensors include induction coils, Hall sensors, giant magnetoresistance (GMR) sensors, tunnel magnetoresistance (TMR) sensors, and so on. Improved magnetic field sensors can detect wire rope damage more effectively, so many scholars have worked on such designs. Zhao et al. [3] designed a wire rope nondestructive detection system composed of an array of 30 Hall sensors; the system realizes the accurate axial positioning and circumferential distribution of defects, and effectively overcomes the dependence of traditional induction-coil detection on the wire rope speed. Zhang et al. [4] designed a GMR sensor array evenly distributed around the circumference of the wire rope to collect the radial MFL signal.
Zhang [5] proposed a small excitation system that uses small-volume Hall sensor elements to form a sensor array to obtain MFL signals on the wire rope surface. Compared with traditional devices, this device is small; it weighs only 508 g. Although the existing devices can detect the MFL signal of the wire rope well, most of them only collect the radial component, or the radial and axial components, of the leakage field. In fact, the spatial MFL signal has three components: axial, radial, and tangential, and each component carries a large amount of defect information. Dutta [6] showed through simulation and analysis that the tangential flux-leakage component is indeed a potentially key part of MFL testing. Peng [7] designed a 3D MFL testing device using TMR sensors, which can simultaneously collect component information in three dimensions: axial, radial, and tangential. Compared with a one-dimensional acquisition device, this device can collect information in more dimensions, and the recognition rate with three-dimensional components is higher than that with a single-dimensional component.

After the magnetic sensor obtains the leakage field on the surface of the wire rope, signal processing and image processing techniques are used to denoise the collected signal. Since the MFL signal on the surface of the wire rope contains a lot of noise, the noise reduction algorithm used directly affects the damage identification of the wire rope. Wavelet methods are the most commonly used in MFL signal applications, such as wavelet multi-resolution analysis [8], wavelet thresholding [9], and wavelet denoising based on compressed sensing [4]. Tan [4] used a compressed-sensing wavelet filter algorithm to reduce the noise of the MFL signal, verified the superiority of the proposed algorithm in suppressing high-frequency noise, and achieved good recognition results. Kim and Park [10] used the Hilbert transform to obtain the signal envelope; this algorithm is simple and effective, but the denoising effect is average. Qiao [11] adopted an improved EEMD noise reduction algorithm to denoise the magnetic memory signal of mine wire ropes, but the denoising effect is not obvious and the SNR is only slightly improved.

Image processing technology can visualize the MFL signal of a defect and improve the quality and resolution of the image, and it is of great significance for the quantitative analysis of defects. Tan [12] adopted an SR reconstruction method based on Tikhonov regularization to enhance the defect grayscale image, increasing the resolution of the magnetic field grayscale image by a factor of 3. Li [13] used an interpolation-based super-resolution algorithm, combining the non-subsampled shearlet transform (NSST) with principal component analysis (PCA) and Gaussian fuzzy logic (GFL) to improve the resolution and quality of grayscale images. Although these methods improve the image quality, the processing result is still a grayscale image. The amount of information contained in a grayscale image is too small, and it is not easy to quantitatively identify defects from it. Zheng [14] performed pseudo-color image processing on the MFL image of the wire rope, so that the generated image can show subtle differences that were difficult to detect in the previous grayscale image, but the effect for small broken wires is not obvious.
To address the problems of existing MFL testing devices, namely the single detection dimension, the low SNR after denoising, and the inconspicuous MFL images, the non-saturated magnetic excitation 3D MFL testing device of Ref. [7] was adopted, and a wavelet denoising method based on the EWT is used to denoise the collected MFL signal. Compared with the wavelet filtering algorithm and the EEMD algorithm, it improves the SNR of the 3D leakage signal and reduces the RMSE. The cubic spline interpolation method is used to interpolate the 3D MFL data along the circumference, and the gray levels are normalized. Using a combination of color imaging and SRCNN, the grayscale images are fused into higher-resolution color images. The color moment features of the color images before and after SRCNN reconstruction are extracted as the input of an Elman neural network. By comparing the broken-wire recognition rates, the feasibility and effectiveness of the Elman neural network and SRCNN in broken-wire recognition are verified.

MFL data collection

This article uses the non-saturated magnetic excitation 3D MFL testing device of Reference [7]. The schematic diagram of the acquisition device is shown in Fig. 1. The device includes a magnetization device, a magnetic sensor array, a control board, and a data storage module. The magnetization device is composed of 12 Nd-Fe-B permanent magnets evenly distributed around the circumference of the wire rope as a non-saturated magnetic excitation source. The remanence of each permanent magnet is 1.18 T. The magnetization device can effectively suppress the interference of external magnetic fields and excite the wire rope more uniformly and fully. The data collection process is as follows: the testing device is pushed along the defective wire rope; every time the device moves 0.31 meters, the encoder sends out 1024 sampling pulses at equal intervals, and the magnetic sensor array converts the collected MFL signals into voltage signals and stores them on the SD card of the data storage module. The front of the designed 3D MFL signal acquisition board is shown in Fig. 2(a); radial sensors are distributed on it to collect the radial-component signals. Fig. 2(b) shows the back of the board, on which the tangential and axial sensors are distributed to collect the tangential-component and axial-component signals, respectively. Due to size limitations, the number of sensors in each direction is set to 10; they are distributed at the same circumferential positions, and their sensitive directions are mutually perpendicular.

Data processing

Six different numbers of broken wires were artificially manufactured on a wire rope in advance, for a total of eight defects. The detection length of the wire rope is 3.8 meters, its diameter is 28 mm, and the distance between the permanent magnets and the wire rope is 15 mm, to ensure the same magnetization effect on the wire rope in the different dimensions. The acquisition device was pushed along the wire rope in the detection direction; the collected original data are shown in Fig. 3(a). The original data contain a lot of noise, mainly including oil pollution on the surface of the wire rope, uneven excitation, the influence of the earth's magnetic field, and strand noise caused by the winding of the wire rope. Each channel is extracted from the axial, radial, and tangential components respectively, and the original single-channel signals are obtained as shown in Fig. 3(b).
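As a minimal illustration of how such a 30-channel record can be organized for processing, the following Python sketch groups a raw capture into the three 10-channel components; the channel-to-component assignment and the synthetic data are assumptions for illustration only.

```python
import numpy as np

# Sketch: arrange a raw 30-channel record into the three field components.
# The channel grouping is an assumption; the text only states 10 channels
# per component. Synthetic data stand in for a real SD-card record.
n_samples = 1024                      # e.g., one encoder batch of pulses
raw = np.random.rand(n_samples, 30)   # (samples, channels) placeholder

axial      = raw[:, 0:10]             # assumed channel grouping
radial     = raw[:, 10:20]
tangential = raw[:, 20:30]
print(axial.shape, radial.shape, tangential.shape)
```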
The location, size, and number of defects cannot be judged from the original data. To achieve noise reduction, this paper proposes a wavelet denoising method based on the EWT. The processing flow used is shown in Fig. 4 and includes digital signal processing and image super-resolution reconstruction.

EWT theory

In 2013, Gilles proposed a new signal processing method, the empirical wavelet transform (EWT). The EWT constructs a set of filters by adaptively segmenting the Fourier spectrum of the signal and decomposes the signal into a series of AM-FM components arranged from high to low frequencies, thereby extracting feature quantities. The EWT avoids the problems of the excessive number of decomposed modal components in EEMD and of the large number of iterative operations needed during extraction. At present, the EWT has been applied in seismic signal processing [15], mechanical fault detection [16], power system signal analysis [17], and other fields.

Take real signals as an example; their frequency spectrum is symmetrical about the frequency ω = 0. A normalized Fourier coordinate system with a period of 2π is established, and the research range is ω ∈ [0, π]. The Fourier domain [0, π] is divided into N continuous partitions, with ω_n representing the boundary between adjacent partitions (ω_0 = 0, ω_N = π). Each partition is expressed as Λ_n = [ω_(n-1), ω_n], and a transition section (of width 2τ_n) is defined around each ω_n, with τ_n = γω_n, where γ is a coefficient. The specific division is shown in Fig. 5.

After determining the partitions Λ_n, a wavelet window is applied to each of them. Following the Meyer wavelet construction, Gilles defined the empirical scaling function φ_1(t) and the empirical wavelet functions ψ_n(t) [18]. The detail coefficients are generated by the inner product of the signal with the empirical wavelet functions:

W_f(n, t) = ⟨f, ψ_n⟩ = ∫ f(τ) ψ_n*(τ − t) dτ

The approximation coefficients are generated by the inner product of the signal with the scaling function:

W_f(0, t) = ⟨f, φ_1⟩ = ∫ f(τ) φ_1*(τ − t) dτ

where ψ_n(t) and φ_1(t) are the empirical wavelet function and the scaling function, respectively; ψ̂_n(ω) and φ̂_1(ω) are their Fourier transforms; and the asterisk denotes the complex conjugate. The original signal is reconstructed as follows:

f(t) = W_f(0, t) ⋆ φ_1(t) + Σ_(n=1)^N W_f(n, t) ⋆ ψ_n(t)

The empirical modes f_k(t) are defined as follows:

f_0(t) = W_f(0, t) ⋆ φ_1(t), f_k(t) = W_f(k, t) ⋆ ψ_k(t)

Wavelet analysis algorithm

Wavelet analysis is an effective signal transformation and analysis method developed on the basis of Fourier analysis. Whereas the Fourier transform can be applied only to the decomposition of stationary signals, wavelet analysis provides a local analysis of the signal in both time and frequency. Through multi-scale subdivision of the signal, it achieves fine time resolution at high frequencies and fine frequency resolution at low frequencies. To achieve fast decomposition, Mallat [19] proposed a fast process that expresses non-stationary signals as expansions in the basis wavelet functions, together with an algorithm that restores and reconstructs the decomposed signal; this is called the Mallat algorithm. In the Mallat algorithm, the signal decomposition in coefficient form is as follows:

c_(j+1)(k) = Σ_n h̄_0(n − 2k) c_j(n), d_(j+1)(k) = Σ_n h̄_1(n − 2k) c_j(n) (8)

c_j(n) = Σ_k h_0(n − 2k) c_(j+1)(k) + Σ_k h_1(n − 2k) d_(j+1)(k) (9)

Among them, h̄(k) = h(−k); h_0 is a low-pass filter and h_1 is a high-pass filter. Eq. (8) is the decomposition formula of the Mallat algorithm, which can separate the signal into multiple resolutions; Eq. (9) is the signal reconstruction formula.

Algorithm description

Exploiting the EWT's adaptive division of the signal spectrum and its high time-frequency resolution, wavelet soft thresholding and median filtering are combined with the EWT.
Median filtering replaces the value of a point in a sequence with the median of the points in a neighborhood of that point; it protects the signal edges from blurring while eliminating isolated noise points [20]. The steps of the noise reduction algorithm are as follows:

Step 1: Select the leakage magnetic field data of channel i on the wire rope surface, i = 1:30. (1) The signal is Fourier transformed to obtain its frequency spectrum, and the spectrum is represented in the scale space. (2) Calculate the threshold T, and use T to select the smallest frequencies in the scale space as the boundary frequencies dividing the spectrum. (3) Construct the empirical wavelet filter bank on the resulting partitions. (4) Use the inverse Fourier transform to obtain the AM-FM modal components of the signal.

Step 2: Select the useful AM-FM modal components of the signal and denoise them with wavelet soft thresholding, using a db2 wavelet basis to decompose the modal components into 7 levels.

Step 3: Reconstruct the denoised modal components by wavelet reconstruction.

Step 4: Perform the inverse EWT on the processed modal components to obtain the noise-reduced data.

Step 5: Perform median filtering to make the result smoother.

The original data are processed with this algorithm, and the 3D MFL image after noise reduction is shown in Fig. 6(a). Compared with the image before noise reduction, the denoised data suffer less interference from noise, and more obvious defect signals can be observed. From the x-axis in Fig. 6(a), it is obvious that the defects lie mainly around channel 24. This is mainly because the defects are set along the same straight line, and the sensor numbers in the three dimensions above the defects are 23, 24, and 25. The approximate position of each defect can be judged from the y-axis, thus realizing defect localization. Fig. 6(b) shows the single-channel MFL signals of the axial, radial, and tangential components after noise reduction. Not only can the size of each defect be roughly judged from the amplitude, but the location of each defect can also be obtained from the x-axis, and it can be clearly judged that there are roughly 8 broken-wire defects of different sizes on the wire rope. Compared with the original data, this is a significant improvement.

Fig. 6. a) 3D MFL signals after noise reduction; b) single-channel MFL signals of the axial, radial, and tangential components after noise reduction.

To verify the effectiveness of the algorithm, the results calculated in this paper are compared with those of the wavelet filtering algorithm and the EEMD algorithm, both of which are common and effective noise reduction algorithms. The superiority of the noise reduction algorithm used in this paper is verified by calculating the SNR and the RMSE of the axial, radial, and tangential components. The SNR is defined as follows:

SNR = 10 log10( Σ_(n=1)^N s^2(n) / Σ_(n=1)^N [x(n) − s(n)]^2 )

where N is the number of axial sampling points, x(n) is the MFL signal after noise reduction, and s(n) is the effective MFL signal after noise reduction. The higher the SNR, the better the noise reduction effect. The root mean squared error is defined as follows:

RMSE = sqrt( (1/N) Σ_(n=1)^N [y(n) − x(n)]^2 )

Among them, y(n) represents the noisy signal for which the RMSE value needs to be calculated, and x(n) represents the MFL signal after noise reduction. The smaller the RMSE value, the less noise is included and the more obvious the noise reduction effect.
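A minimal Python sketch of this denoising chain is given below. It assumes the third-party ewtpy package for the EWT step, and the universal-threshold rule used inside the soft thresholding is a common default rather than the authors' stated choice; the test signal is synthetic.

```python
import numpy as np
import pywt
from scipy.signal import medfilt
import ewtpy  # third-party EWT package (assumed available)

def denoise_channel(x, n_modes=5, kernel=5):
    """EWT -> wavelet soft threshold per mode -> median filter (sketch)."""
    ewt, _, _ = ewtpy.EWT1D(x, N=n_modes)          # modes, one per column
    out = np.zeros(len(x))
    for k in range(ewt.shape[1]):
        coeffs = pywt.wavedec(ewt[:, k], "db2", level=7)
        sigma = np.median(np.abs(coeffs[-1])) / 0.6745      # noise estimate
        thr = sigma * np.sqrt(2 * np.log(len(x)))           # universal threshold
        coeffs = [coeffs[0]] + [pywt.threshold(c, thr, "soft")
                                for c in coeffs[1:]]
        out += pywt.waverec(coeffs, "db2")[: len(x)]        # sum denoised modes
    return medfilt(out, kernel_size=kernel)

def snr_db(clean, denoised):
    noise = denoised - clean
    return 10 * np.log10(np.sum(clean**2) / np.sum(noise**2))

# usage with a synthetic channel: a defect-like pulse plus white noise
t = np.linspace(0, 1, 1024)
clean = np.exp(-((t - 0.5) ** 2) / 1e-4)
noisy = clean + 0.1 * np.random.randn(t.size)
denoised = denoise_channel(noisy)
print(f"SNR after denoising: {snr_db(clean, denoised):.1f} dB")
```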
By extracting the MFL data of 7 wire ropes, denoising with the different algorithms, and calculating the SNR and RMSE, the average values shown in Table 1 were obtained. It can be seen that the mean SNR values in the axial, radial, and tangential directions obtained by the algorithm of this paper are significantly higher than those of the other algorithms, and the mean RMSE is smaller. Among them, the SNR of the radial component is the highest of the three directions, indicating that the radial signal is the least affected by noise and has the best denoising effect, which is more conducive to defect location and segmentation.

Image processing

In this section, we use mature techniques from the fields of image processing and deep learning to process the MFL data and obtain clearer defect images. The image processing introduced in this section includes data normalization, cubic spline interpolation, color imaging, and SRCNN reconstruction.

Algorithm description

Data normalization is an indispensable part of image processing. The MFL information is converted into grayscale images, and the axial, radial, and tangential components of the MFL signal are normalized in turn. Taking the radial component, which has the best denoising effect, as an example, the specific algorithm is as follows:

(1) Find and record the maximum and minimum of the component, and calculate the mean of the data.

(2) Normalize the data as in Eq. (10):

data'(i, j) = data(i, j) − mean (10)

Among them, data(i, j) is the MFL data value before normalization, and data'(i, j) is the normalized MFL data value after removing the mean. Removing the mean from the axial, radial, and tangential components of the MFL signal makes the imaging background information basically consistent.

The designed acquisition system uses a 30-channel sensor array with 10 channels in each dimension. Therefore, the circumferential resolution of the MFL image is 10, which is much smaller than the number of axial sampling points. To make the MFL data more intuitive, the circumferential resolution of each of the three components of the MFL signal is interpolated from 10 to 192 using cubic spline interpolation. Compared with Fig. 6(a), Fig. 7 shows the 3D MFL signal after this improvement of the circumferential resolution. It can be seen that the interpolated signal is smoother than before, and the defects are more obvious.

Color imaging

Current MFL images are mostly grayscale; a clearer and more intuitive display of the MFL characteristics can be obtained by image enhancement of the grayscale images. The human eye can only distinguish dozens of different gray levels, but it can distinguish hundreds of different colors [21]. Therefore, applying color imaging to the display of MFL images can highlight subtle differences that are difficult to detect in a grayscale image, thereby improving the ability to distinguish image details. The specific method is as follows:

(1) Map the normalized axial, radial, and tangential components of the MFL signal to the red, green, and blue color channels to obtain the MFL color image. To make it easier to distinguish defects, the weight of the green channel is set to 0.8, and the weights of the red and blue channels to 1.

(2) Using the modulus maximum method, the MFL data in the first color channel of the MFL image are summed circumferentially to obtain the sequence S(i), where i = 1, 2, 3, …
, N, where N is the number of sampling points.

(3) Set a threshold and apply threshold processing to the sequence S(i): keep the larger values, set the points smaller than the threshold to 0, and record the indices of the local maxima in the sequence.

(4) According to the actual defect width on the wire rope, the axial length of a defect image is about 192 pixels, so 192×192×3 color defect images are segmented around the recorded maxima.

Taking five broken wires as an example, the defect color image obtained with the above algorithm is shown in Fig. 8(b).

Fig. 8. a) Physical picture of five broken wires; b) color image of five broken wires; c) color image of five broken wires after reconstruction.

SRCNN reconstruction

Image super-resolution reconstruction technology uses a group of low-quality, low-resolution images to generate a single high-quality, high-resolution image. This technology can improve the recognition ability and accuracy of the image and effectively improve image quality. In recent years, deep learning technology has developed rapidly. Dong et al. [22] of the Chinese University of Hong Kong first applied the convolutional neural network (CNN) to super-resolution reconstruction in 2016 and proposed a new network, SRCNN. Starting from the relationship between deep learning and traditional sparse coding (SC), the network is divided into three stages, patch extraction and representation, non-linear mapping, and reconstruction, which correspond to the three convolutional layers of the deep CNN framework and are unified in one neural network. Thus, super-resolution reconstruction from a low-resolution image to a high-resolution image is realized. The network directly learns an end-to-end mapping between low-resolution and high-resolution images. The SRCNN model is realized as follows [23]. The model consists of three parts. The first part is patch extraction and representation, which is similar to convolving the MFL image with a set of filters. The first layer is represented as the operation F_1:

F_1(Y) = max(0, W_1 ∗ Y + B_1)

The second part is non-linear mapping. Each n_1-dimensional vector obtained in the previous step corresponds to an image block of the original image, and these extracted features are mapped to n_2-dimensional vectors. The operation of the second layer is:

F_2(Y) = max(0, W_2 ∗ F_1(Y) + B_2)

The third part is reconstruction, which defines a convolutional layer to generate the final high-resolution image. The operation of the third layer is:

F(Y) = W_3 ∗ F_2(Y) + B_3

Putting the above three operations together forms a convolutional neural network, as shown in Fig. 9. SRCNN is widely used in medical imaging, deep learning, and other fields. This is the first time that SRCNN combined with color imaging has been applied to wire rope MFL images. In this way, 194 MFL images were extracted from different wire ropes with different broken wires and reconstructed by SRCNN. Still taking the five broken wires as the example, Fig. 8(b) and Fig. 8(c) compare a five-broken-wire defect before and after SRCNN reconstruction. The defect image has been enlarged from 192×192×3 to 576×576×3. It can be clearly seen from Fig. 8 that the defect image after SRCNN reconstruction not only shows more edge information but also has a greatly improved resolution. Compared with images in other fields, the MFL image itself is blurry and has no particularly significant features.
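The three operations above can be sketched in a few lines of PyTorch. The 9-1-5 kernel sizes and the 64/32 filter counts follow the common configuration of Dong et al.; they are assumptions here, since this paper does not list its hyperparameters.

```python
import torch
import torch.nn as nn

# Minimal SRCNN sketch of the three operations above (assumed 9-1-5
# kernels with n1=64, n2=32 filters, as in the original SRCNN paper).
class SRCNN(nn.Module):
    def __init__(self, channels=3):                      # RGB MFL color images
        super().__init__()
        self.f1 = nn.Conv2d(channels, 64, 9, padding=4)  # patch extraction
        self.f2 = nn.Conv2d(64, 32, 1)                   # non-linear mapping
        self.f3 = nn.Conv2d(32, channels, 5, padding=2)  # reconstruction
        self.relu = nn.ReLU()

    def forward(self, y):              # y: bicubic-upscaled low-res image
        y = self.relu(self.f1(y))      # F1(Y) = max(0, W1 * Y + B1)
        y = self.relu(self.f2(y))      # F2(Y) = max(0, W2 * F1(Y) + B2)
        return self.f3(y)              # F(Y)  = W3 * F2(Y) + B3

x = torch.rand(1, 3, 576, 576)         # a 192x192 image upscaled 3x beforehand
print(SRCNN()(x).shape)                # torch.Size([1, 3, 576, 576])
```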
In order to verify the effectiveness of SRCNN, the color moment features of the images before and after reconstruction are selected and used in an Elman neural network for broken wire identification. Feature extraction The color moment feature was proposed by Stricker and Orengo [24]. It is a commonly used color feature and is widely used in the field of image processing. Its advantages are the lowest feature vector dimension and lower computational complexity. The distribution of image color information is mainly concentrated in the low-order moments. Using the first-order moment (mean), second-order moment (variance), and third-order moment (skewness) of the color information can fully express the color distribution of the image [25]. That is, the color characteristics are expressed by color moments. The mathematical model is as follows: μ_i = (1/N) Σ_j p_ij, σ_i = [(1/N) Σ_j (p_ij − μ_i)²]^(1/2), s_i = [(1/N) Σ_j (p_ij − μ_i)³]^(1/3). Among them, p_ij represents the probability that a pixel with gray level j appears in the i-th color channel component of the color image, and N represents the number of pixels in the image. The first three color moments of the R, G, and B components of the image constitute a 9-dimensional feature vector; namely, the color features of the image are expressed as F = [μ_R, σ_R, s_R, μ_G, σ_G, s_G, μ_B, σ_B, s_B]. Table 2 lists the color moment feature values of the six broken-wire defects on the surface of the sample wire rope in this experiment. Quantitative identification Aiming at the slow learning speed of the BP neural network algorithm and its greater possibility of training failure, this paper uses an Elman neural network to identify defects. The Elman neural network is a simple recurrent neural network proposed by Elman in 1990 [26]. Compared with other feedforward neural networks, the Elman neural network has an additional layer that memorizes the output values of the hidden layer neurons at the previous moment, which enhances the global stability of the network. Compared with the BP neural network, it has stronger computing power and a faster approximation speed, which makes it more suitable for solving pattern classification problems. The Elman neural network is generally divided into four layers: input layer, hidden layer, receiving layer, and output layer. In this paper, a 16×n×7×7 Elman neural network is designed, and the 9 extracted color moment features are used as the input of the neural network; n is the number of hidden layer nodes, and both the receiving layer and the output layer have 7 nodes. Result and analysis In this experiment, a wire rope with a diameter of 28 mm and a structure of 6×37S+FC was used. Broken wires with small spacing are more difficult to identify than broken wires with large spacing, so it is more meaningful to identify broken wires with small spacing. We therefore obtained 194 samples, including 1, 2, 3, 4, 5, and 7 broken wires with small gaps (about 0.2 cm). 135 samples were randomly selected from the 194 samples as training samples, and the remaining 59 samples were used as test samples to test the recognition accuracy of the network. In order to verify that the image reconstructed by SRCNN has more edge information and can improve the recognition rate of broken wires, we designed two different experiments. In the first group of experiments, the 194 samples were reconstructed by SRCNN before the color moment features were extracted and used to train the Elman neural network. In the second experiment, the color moment features were extracted directly from the 194 samples and then used for training.
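To make the feature extraction step concrete, below is a minimal Python sketch that computes the 9-dimensional color-moment vector defined above; the function name and the signed cube-root handling of the third moment are our own illustrative choices.

import numpy as np

def color_moments(image):
    # image: H x W x 3 array (R, G, B channels); returns the 9-dimensional
    # feature vector, one (mean, std, skewness-root) triple per channel.
    features = []
    for c in range(3):
        p = image[:, :, c].astype(np.float64).ravel()
        mu = p.mean()                                     # first-order moment
        sigma = np.sqrt(np.mean((p - mu) ** 2))           # second-order moment
        third = np.mean((p - mu) ** 3)
        skew = np.sign(third) * np.abs(third) ** (1 / 3)  # third-order moment
        features.extend([mu, sigma, skew])
    return np.array(features)

# e.g. building the classifier input from a batch of 576x576x3 defect images:
# X = np.stack([color_moments(img) for img in images])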
The Elman neural network was used in both experiments, with the "tansig" function as the activation function of the hidden layer and the "logsig" function as the activation function of the output layer. For n = 5, 10, 15, 20, the network is trained with the training samples, evaluated with the test samples, and the broken-wire recognition rate of each Elman neural network is counted. Fig. 10 shows the recognition rate of the Elman network with different numbers of hidden layer nodes before and after SRCNN reconstruction. It can be seen from Fig. 10 that the percentage errors of one broken wire and two broken wires are 0.45 % and 0.90 %, respectively, and the maximum error recognition rate is 2.7 %. After SRCNN reconstruction, the recognition rate at the one-broken-wire and two-broken-wires error levels is always higher than before reconstruction. When the percentage errors are 0.45 % and 0.90 %, the recognition rates after reconstruction are 85 % and 96 %, respectively. Among them, when the hidden layer contains 29 nodes, the Elman neural network has the highest recognition rate for broken wires; when the percentage error is 0.90 %, the recognition rate is 98.31 %, and the recognition result is shown in Fig. 11. Fig. 10. The recognition effect of different hidden layer nodes before and after SRCNN reconstruction: a) the recognition rate with 5 hidden nodes, b) the recognition rate with 10 hidden nodes, c) the recognition rate with 15 hidden nodes, d) the recognition rate with 20 hidden nodes. Fig. 11. The recognition rate with 29 hidden nodes after SRCNN reconstruction. Conclusions This paper realizes the acquisition and noise reduction of the 3D MFL signal of wire rope, as well as image enhancement and quantitative recognition of broken wires. The selected acquisition device can simultaneously acquire MFL signals in three dimensions of the wire rope. The wavelet denoising method based on EWT is used to denoise the 3D MFL data of the wire rope. By comparison with the SNR and RMSE of other filtering algorithms, the superiority of the filtering algorithm in suppressing MFL noise is verified. The 3D MFL image is mapped to the color space to generate defect color images, and SRCNN technology is used to reconstruct each defect color image. This solves the problems of low resolution and low recognition rate of traditional MFL grayscale images. The 9 color moments of the reconstructed defect color images are used as the input of the Elman neural network, and the broken-wire recognition rates of the images before and after reconstruction are compared. The experiments show that the recognition rate of the reconstructed images is significantly improved under errors of one and two broken wires. When the percentage error is 0.90 %, the recognition rate is 98.31 %. This paper verifies that the application of SRCNN technology to the magnetic flux leakage detection of wire ropes can effectively improve the broken-wire recognition rate and the image quality. In future work, we will study diversified wire rope defect signal collection, the optimization of the noise reduction algorithm, and the selection of defect image features, and will apply more advanced techniques from the fields of deep learning and artificial neural networks to the non-destructive testing of wire ropes. Qihang Chen received the bachelor's degree in Electrical Engineering from Ningxia University of Science and Technology in 2019, and is currently studying for a master's degree in electrical engineering at Henan University of Science and Technology.
His research interests include electromagnetic nondestructive testing, metal magnetic memory testing, artificial intelligence, and image processing. Juwei Zhang received the Ph.D. degree from the School of Electrical and Information Engineering, Tianjin University, Tianjin, China, in 2008. He is currently a professor with the Electrical Engineering College, Henan University of Science and Technology, Luoyang, China. His current research interests include intelligent electrical information processing, artificial intelligence and image processing, electromagnetic nondestructive testing, and fault diagnosis theory. Bing Li received the bachelor's degree in electrical engineering from Henan Normal University in 2019 and is currently studying for a master's degree in electrical engineering at Henan University of Science and Technology. His research interests include metal magnetic memory detection, theoretical modeling, and pattern recognition.
2022-03-18T15:17:19.501Z
2022-03-16T00:00:00.000
{ "year": 2022, "sha1": "179c70f79598d3bd3057d67906b7ff6e1c245acc", "oa_license": "CCBY", "oa_url": "https://www.extrica.com/article/22267/pdf", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "1ea64ca45884794882fed6a91c1e321da98cf405", "s2fieldsofstudy": [ "Materials Science" ], "extfieldsofstudy": [] }
254920775
pes2o/s2orc
v3-fos-license
Comparative analysis of observations of the selected exoplanet transits obtained at the Kyiv Comet station with the database of the orbital telescopes TESS and Kepler We present a comparative analysis of observations of selected exoplanet transits obtained at the Kyiv Comet station with the databases of the TESS (Transiting Exoplanet Survey Satellite) and Kepler space telescopes. The light curves obtained by the TESS and Kepler orbital telescopes were processed using a program based on the Python package Lightkurve v2.3, which is freely available in the MAST archive (Barbara A. Mikulski Archive for Space Telescopes). The ground-based observations were carried out with the 70-cm telescope AZT-8 (Lisnyky). Photometric processing of the ground-based observations was performed using the Muniwin program. The light curves and parameters of the observed transits, as well as the exoplanet orbital parameters obtained from the ground-based observations, were published in the ETD (Exoplanet Transit Database). The determined transit parameters were compared with the results of the TESS team, which are stored in the MAST archive. Here we present a comparison of the parameters of transit phenomena (period, depth, transit duration) and some orbital parameters obtained from two independent sets of observations, terrestrial and orbital, performed in different epochs. Introduction There are a large number of methods for finding exoplanets. The transit method is one of the most effective. The planet covers part of the star when it passes over the disk of the star, and the visible brightness falls (Winn, 2010). The time sequence of the events observed during a transit allows us to study these systems. The magnitude of the fall in brightness depends on the relative size of the star and the planet. Therefore, the light curve provides information about the radius of the planet and some orbital parameters. But this method has several disadvantages. First, the plane of the planet's orbit should be oriented in such a way that we can observe the passage of the planet over the star's disk. Secondly, the planet must be large enough to create a detectable drop in the star's brightness. For this reason, most of the planets are found by the transit method, and all the planets that we present in this investigation are hot Jupiters. Hot Jupiters are planets with a mass of the order of the mass of Jupiter, which orbit close to their host stars and are always turned to them by only one side. The orbital period of such a planet is therefore short, which allows us to observe transits regularly and makes hot Jupiters the most convenient targets for observations. The TESS and Kepler orbital telescopes conducted searches for exoplanets using the transit method. The quality of observations obtained from orbital telescopes is definitely higher than the quality of observations obtained from ground-based telescopes. Orbital observations are not hindered by atmospheric phenomena and do not depend on the weather, time of day, phase of the Moon, etc. However, ground-based observations allow us to gather data over a large time span, which provides information about possible changes in transit parameters. The best result can be achieved by combining these two types of observations. To do this, it is necessary to determine how the results obtained from space and ground-based observations correspond to each other, and whether ground observations can be considered sufficiently accurate.
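As a quick illustration of the transit method described above, the following Python sketch computes the approximate transit depth for a dark planet crossing a uniform stellar disc; limb darkening and grazing geometry are ignored, and the radius ratio in the example is an assumed round value.

# The fractional flux drop during a transit of a dark planet across a
# uniform stellar disc equals the ratio of the projected areas:
# depth = (R_planet / R_star) ** 2
def transit_depth(r_planet, r_star):
    return (r_planet / r_star) ** 2

# A hot Jupiter of one Jupiter radius around a Sun-like star:
R_JUP_IN_SOLAR_RADII = 0.1028  # approximate Jupiter/Sun radius ratio
print(transit_depth(R_JUP_IN_SOLAR_RADII, 1.0))  # ~0.011, i.e. a ~1% dip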
The main goal of our research is to determine to what extent the parameters of exoplanets obtained from observations at the Lisnyky comet station coincide with the parameters obtained from observations with the TESS and Kepler orbital telescopes. 2 The Kepler and TESS databases 2.1 MAST MAST (The Mikulski Archive for Space Telescopes) is an archive of data from numerous space telescopes and contains data obtained in the optical, ultraviolet, and near-infrared ranges. The MAST archive also provides the parameters of the planet transits (periods, transit depths, the phases of transit events, and some planet orbital parameters) calculated by the TESS and Kepler pipelines for all events surpassing some threshold. We used these published data for comparison with the results of our ground-based observations, as well as with the transit parameters obtained from the Kepler and TESS light curves using program codes developed on the basis of the Python package Lightkurve v2.3. 2.2 Kepler and TESS space missions The Kepler orbital telescope was launched by NASA on March 7, 2009 to search for Earth-sized planets. The spacecraft trailed the Earth in its orbit around the Sun. This arrangement allowed the telescope to constantly monitor one part of the sky. The field of view covered 115 square degrees near the plane of the Milky Way. The telescope, of 1,039 kilograms mass, contained a Schmidt camera with a 0.95-meter front corrector plate feeding a 1.4-meter primary mirror. The light reflected by the mirror was collected in the prime focus, where there was a mosaic of 21 pairs of specially created astronomical CCD matrices, capable of recording almost every incident photon. The dimensions of the entire mosaic are approximately 30 x 30 cm, and it consists of 95 megapixels. In May 2013, the telescope's second reaction wheel failed. A year after this failure, the telescope began to transmit data to Earth again. The new mission was named K2, and the telescope began to observe sections of the sky along the ecliptic. Over 9 years of operation, the telescope discovered more than 2,680 exoplanets, 550 of which may be rocky, and 21 potentially habitable (Hall & Barentsen 2020a). The TESS (Transiting Exoplanet Survey Satellite) space telescope was launched on April 19, 2018. The mission was planned for two years, during which the telescope should examine the entire area of the sky. The celestial sphere was divided into 26 observation sectors, each sector being 24° × 96°, to detect transits of previously unknown exoplanets near the closest and brightest stars. TESS would focus on G-, K-, and M-type stars with apparent magnitudes brighter than magnitude 12, and on the 1000 nearest red dwarfs. The orbital period of the telescope is 13.7 days. Each sector is observed for 27.4 days. The sole instrument on TESS is formed of four wide-angle CCD cameras. Each camera has a 16.8-megapixel detector with low energy consumption and low noise, which was developed at Lincoln Laboratory. Each camera has a 24° × 24° field of view, a 100 mm effective pupil diameter, a lens assembly with seven optical elements, and a band-pass range from 600 to 1000 nm (Hall & Barentsen 2020a). 2.3 The data processing The program we used to process data from the TESS and Kepler orbital telescopes was developed on the basis of the Python package Lightkurve v2.3, which is available in the MAST archive. First, the program finds the data for the object of interest in the catalogs (Hall & Barentsen 2020a) and downloads them.
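A minimal sketch of this download step and of the subsequent processing described below (stitching, normalization, detrending, BLS periodogram, and phase folding), written against the public Lightkurve v2 interface, might look as follows. The flatten window length is an arbitrary illustrative value, and for brevity the sketch uses a simple flattening step instead of the explicit Lomb-Scargle subtraction applied by our program.

import lightkurve as lk

# Search for and download all available TESS light curves of the target
search = lk.search_lightcurve("TIC 236887394", mission="TESS")
lc = search.download_all().stitch()  # stitch() also normalizes each sector

# Remove long-period trends (stellar oscillations, instrumental drifts)
flat = lc.flatten(window_length=901)

# Box Least Squares periodogram to detect periodic transit events
bls = flat.to_periodogram(method="bls")
period = bls.period_at_max_power
t0 = bls.transit_time_at_max_power
duration = bls.duration_at_max_power

# Fold the light curve in phase space to inspect the transit shape and
# possible shifts of the transit start and end times
folded = flat.fold(period=period, epoch_time=t0)
folded.scatter()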
If the observed data cover several sectors, the light curves are stitched and normalized. The next step is searching for and subtracting long-period harmonic oscillations, which can be stellar oscillations or artifacts of some kind (Hall & Barentsen 2020d). The Lomb-Scargle periodogram is used for this (Hall & Barentsen 2020c). To detect transit events we build a periodogram using the Box Least Squares (BLS) method (Saunders 2020). This method is much more sensitive in finding periodic transit events (Terebizh 1992). Fig. 1 presents the periodograms of the star TIC 236887394 from the TESS database constructed with the Lomb-Scargle and BLS methods. The program outputs the period, duration, and first epoch of the detected transit based on the periodogram built with the BLS method for the period at maximum power. Using these parameters, we construct the folded light curve in phase space, which helps to notice shifts (if any) in the moments of the beginning and end of the transit. These displacements may indicate the presence of one or more planets in this system. Fig. 3 demonstrates the folded phase curve for the star TIC 236887394 with the considerable decrease in the star's brightness caused by the transit event. In order to find other possible transits, we cut out the parts of the light curve where the first transit occurs and repeat the procedure, again building a periodogram using the BLS method. 3 Observations obtained at the Lisnyky comet station Observations Our observations were carried out from March 24, 2021, to February 14, 2022, at the Lisnyky comet station with the 70-centimeter AZT-8 reflector telescope, using the R filter. The telescope is equipped with a FLI PL4710 back-illuminated CCD and UBVRI Bessel filters. For faint objects we use a mode with 2 x 2 binning, which gives a scale of 1.96 arcsec/pixel. The FoV of the instrument is 16 × 16 arcmin. The limiting magnitude of a 300 s exposure image is 20 mag under good sky conditions. It is possible to reach 21.5-22 mag with 1800 s exposures. For our observations, the exposure time varied from 10 to 30 seconds, depending on the brightness of the observed object. We did not observe one particular object for the whole night. We chose a star and the time span of the expected transit event and conducted observations during the transit only. We began observing half an hour before the intended start of the event and finished half an hour after its intended completion, to identify possible displacements in the moments of the beginning and end of the transit. Table 1 gives the list of the observed objects and some parameters of their host stars. Processing observations with Muniwin The processing of the observations obtained at the Lisnyky comet station was carried out using the C-Munipack package https://c-munipack.sourceforge.net/. This program uses differential photometry: we compare the brightness of two or more stars. One star is the one whose brightness must change due to the transit of the planet across its disk, and the other (or others) are reference stars whose brightness should be unchanged. First, we calibrate the images using dark, bias, and flat-field frames. The next step is to set the parameters by which the program determines which object in the image is a star. The next step is to look for the corresponding stars in each image. After that, we select the star across whose disk the transit should take place and the stars whose brightness should be unchanged. Reference stars cannot be variable stars.
We use the Simbad database to make sure that the selected stars have constant brightness. When the necessary stars are selected, we proceed to the light curve construction. The resulting light curve was uploaded to the Czech Exoplanet Transit Database (ETD) for further processing. Exoplanet Transit Database The ETD has an algorithm for processing light curves, by which it determines the main transit parameters: the moments of the beginning, end, and middle of the transit, the depth, duration, and radius of the planet, and the inclination of the orbit. The period of the planet is determined from all observations published in the database. All our observations were uploaded to this database and received data quality ratings of 2 and 3. The data quality in the ETD is estimated on a scale from 1 to 5, where 1 is the data with the highest quality. Results The parameters of the exoplanet transits obtained from the ground-based observations and determined from the database of the orbital telescopes are listed in Table 1. We could not extract some parameters from our own observations; therefore, they were taken from the ETD based on all observations published in this database. Such parameters are indicated by the symbol "*". The table also comprises the periods and transit durations calculated in this work from the Kepler and TESS light curves. Conclusion The comparison of the transit parameters determined from two different data sets, i.e., ground-based and space-based, indicates that: 1) The transit periods and durations obtained from the ground-based observations with the small telescope agree well with those obtained from the Kepler and TESS orbital telescopes, highlighting the high accuracy of the ground-based observations presented in this work. 2) The average agreement of the transit parameters is at the level of 0.00001 day (transit period) and 0.1 hour (transit duration). 3) Independent processing of the light curves from the Kepler and TESS databases indicates that the preliminary processing of the light curves (detrending, removal of long-periodic harmonic stellar oscillations) can significantly affect the accuracy of the extracted transit parameters.
2022-12-21T16:07:47.618Z
2022-12-14T00:00:00.000
{ "year": 2023, "sha1": "7c2d2c273376e72cd3e76f58f870b3b1a929a2d6", "oa_license": "CCBYNC", "oa_url": "http://oap.onu.edu.ua/article/download/268007/263778", "oa_status": "GOLD", "pdf_src": "Arxiv", "pdf_hash": "f1a899dbd2bf75e997a0d438e80af0831d9b1aed", "s2fieldsofstudy": [ "Physics", "Geology" ], "extfieldsofstudy": [ "Physics" ] }
36060344
pes2o/s2orc
v3-fos-license
Evaluation of Diagnostic Values of Clinical Assessment in Determining the Maturation of Arteriovenous Fistulas for Satisfactory Hemodialysis Background: Fistulas are the preferred permanent hemodialysis vascular access, but a significant obstacle to increasing their prevalence is the fistula's high "failure to mature" (FTM) rate. This study aimed to identify postoperative clinical characteristics that are predictive of fistula FTM. Materials and Methods: This descriptive cross-sectional study was performed on 80 end-stage renal disease patients who were referred to Al Zahra Hospital, Isfahan, for brachiocephalic fistula placement. After 4 weeks, the clinical criteria (thrill, firmness, vein length, and venous engorgement) were examined, the fistula status was classified as favorable or unfavorable by each criterion, and the results were compared with dialysis feasibility. Data were analyzed with SPSS version 21. Diagnostic indices for the clinical examination were calculated. Results: Among the 80 cases, 25 (31.2%) female and 55 (68.8%) male patients were studied, with a mean age of 51.9 (standard deviation = 17) years, ranging between 18 and 86 years. Sixty-two (77.5%) cases had successful hemodialysis. All four clinical assessments were significantly more acceptable in patients with successful dialysis (P < 0.001). According to the results of our study, the accuracy of all physical assessments was above 70%, and except for vein length, the other criteria had a sensitivity and negative predictive value of 100%. In this study, firmness of the vein had the highest specificity and positive predictive value (83.9% and 64.3%, respectively). Conclusion: The results of our study showed high sensitivity and relatively low specificity of the clinical criteria. This means that unfavorable results of each clinical criterion predict unfavorable dialysis. Clinical evaluation of a newly created fistula 4–6 weeks after surgery should be considered mandatory. Introduction The incidence of end-stage renal disease (ESRD) has increased by 43%, based on age, gender, and race, around the world since 1991. [1] The patient's physical state and other factors determine the choice of treatment. Although the creation of vascular access is a necessary maneuver for hemodialysis, the creation and maintenance of a well-functioning vascular access remain the most challenging problems for hemodialysis. [2] The first access method was the Brescia-Cimino fistula, which was introduced in 1966. In the first years, only young and healthy patients were candidates for AVF creation. [3] Nowadays, the creation of an arteriovenous fistula (AVF) is feasible in most cases, including diabetic and old patients. In patients undergoing hemodialysis, autogenous AVFs are considered the most reliable long-term vascular access; compared with prosthetic arteriovenous grafts and tunneled catheters, they require fewer interventions, are less susceptible to failure due to infection and thrombosis, and have been shown to improve patient survival. Although thrombosis and/or lack of maturation are the reasons for primary failure, [4] the risk factors for primary failure are not limited to these; the site and diameter of the vessels are also thought to play an important role.
[5] A recent meta-analysis has demonstrated a 15.3% primary failure rate for native AVFs. [6,7] Fistula maturation depends on several changes involving the vein, such as increased blood flow, increased vein diameter, and increased visibility of the vein. Traditionally, one-quarter to one-third of all autogenous hemodialysis AVFs created never mature. [8,9] Nephrologists and surgeons often wait for up to 6 months and even longer, in the hope that the AVF will eventually grow to support dialysis, before declaring that the AVF has failed. In the interim, if dialysis is needed, then a tunneled catheter is inserted, exposing the patient to the morbidity and mortality associated with the use of this device. In general, a blood flow of 500 ml/min and a diameter of at least 4 mm are needed for an AVF to be adequate to support dialysis therapy. In most successful fistulae, these parameters are met within 4-6 weeks. [10][11][12][13][14] Most importantly, the commonly encountered problems (stenosis and accessory veins) that result in early AVF failure can be diagnosed easily with a skillful physical examination. Recent studies have indicated that a great majority of fistulae that have failed to mature adequately can be salvaged by percutaneous interventions and become available for dialysis. Early intervention regarding the identification and salvage of a nonmaturing AVF is critical for several reasons. First, an AVF is the best available type of access regarding complications, costs, morbidity, and mortality. Second, this approach minimizes catheter use and its associated complications. Finally, access stenosis is a progressive process that eventually culminates in complete occlusion, leading to access thrombosis. [14][15][16] Fistulas are the preferred permanent hemodialysis vascular access, but a significant obstacle to increasing their prevalence is the fistula's high "failure to mature" (FTM) rate. This study aimed to (1) identify postoperative clinical characteristics that are predictive of fistula FTM and (2) use these predictive factors to develop and validate a scoring system to stratify the patient's risk of FTM. Study design This is a descriptive-analytic, single-center, prospective study of patients referred to the vascular surgery clinic of a university hospital who underwent primary AVF creation. Patient selection All patients with ESRD requiring hemodialysis who were candidates for AVF creation and were referred to Al Zahra Hospital (affiliated with Isfahan University of Medical Sciences) between 2011 and 2013 were enrolled in this cross-sectional study. The study was performed on patients with side-to-end brachiocephalic AVFs. Patients with distal or brachiobasilic fistulas, side-to-side anastomoses, very obese patients (body mass index >35), and patients under 14 years old were excluded from our study. This study was approved by the ethics committee of our institution, and each patient who participated provided informed, written consent. Methods Demographic and clinical data, including age, sex, etc., were collected for all patients. In addition, 6 weeks after fistula placement, the clinical criteria of maturation, including thrill, firmness, vein length, and venous engorgement, were examined and recorded. The AVF status was classified as favorable or unfavorable according to each criterion, and the results were compared with dialysis feasibility. All examinations were performed by a single, blinded general surgery resident before hemodialysis.
On the same day, all patients were referred for hemodialysis in the dialysis unit. Hemodialysis was deemed satisfactory in patients who completed a minimum of 4 h of dialysis with a 300 ml/min flow. [17] Patients were divided into two groups based on whether their hemodialysis was satisfactory, and the scores were compared between the two groups. Physical assessment Palpation is the key assessment process to determine access development. The thrill should feel like a vibration or purring that is soft and easy to compress. With a loosely applied tourniquet to the axilla area of the upper arm (a blood pressure cuff inflated to approximately 5 mmHg above the diastolic pressure), document the baseline width of the fistula by taking a photo, marking the fistula margins with an indelible pen, or measuring the width with a tape measure. If the access is arterializing appropriately, there will be a noticeable increase in the size of the vessel. Using your fingertips, palpate the entire length of the fistula. Not only should the vessel increase in size, it needs to thicken in order to withstand repeated needle punctures, the increased pressure created by the arterial blood flow, and eventually the blood pump. Take a minute to feel the vein in your wrist and see how soft and pliable an immature "fistula" is. A clinical sign that a patient's fistula wall is thickening is that, when you compress and release the fistula, the vein wall rebounds under your fingers with a springy, firm feel. In our study, the scoring for the clinical evaluation of AVF maturation was defined as follows. Vein length visible during light tourniquet pressure: up to 6 cm, or more than 6 cm. Vein stiffness and hardness (firmness) with light tourniquet pressure: feels firm or not. Vein expansion (engorgement): dilated and engorged without tourniquet pressure, or not engorged with tourniquet pressure. Thrill palpable on the fistula and the vein: a machinery thrill on the fistula or vein as the desirable criterion, or a systolic thrill as the undesirable criterion. Statistical analysis Data were analyzed with SPSS version 21 (SPSS Inc., Chicago, IL, USA). All data are expressed as mean ± standard deviation (SD). The distribution of nominal variables was compared using the Chi-squared test. The independent t-test was used to compare the mean values of quantitative variables. Furthermore, the diagnostic indices, including sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), and accuracy of the physical assessment in determining the maturity of new AVFs for satisfactory hemodialysis, were calculated. A two-sided P < 0.05 was considered statistically significant. Results Among the 80 cases, 25 (31.2%) female and 55 (68.8%) male patients were studied, with a mean age of 51.9 (SD = 17) years, ranging between 18 and 86 years. Sixty-two (77.5%) cases had successful hemodialysis. Independent t-test analysis demonstrated that the mean age difference between patients with successful and unsuccessful dialysis was not statistically significant (P = 0.852). Chi-square analysis of gender status showed that the gender difference between patients with successful and unsuccessful dialysis was also not statistically significant (P = 0.348). The visible vein length during light tourniquet pressure was more than 6 cm in 43 (53.8%) patients.
A firm vein with light tourniquet pressure, vein engorgement, and a machinery thrill palpable on the fistula were found in 52 (65%), 43 (53.8%), and 51 (63.8%) patients, respectively. The qualitative evaluation of the clinical assessment status in the two study groups (successful and unsuccessful dialysis) is illustrated in Table 1. As shown, all four clinical assessments were significantly more suitable in patients with successful dialysis (P < 0.001). A sensitivity of 88.9%, specificity of 66.1%, accuracy of 71.25%, PPV of 43.2%, and NPV of 95.3% were found for visible vein length during light tourniquet pressure in determining satisfactory dialysis. This means that if the visible vein length during light tourniquet pressure was less than 6 cm, hemodialysis was not successful in 88.9% of cases, and we can rely on a negative result of this test (visible vein length less than 6 cm) in 95.3% of cases. On the other hand, if the visible vein length during light tourniquet pressure was more than 6 cm, hemodialysis was successful in 66.1% of cases, and we can rely on a positive result of this test (visible vein length more than 6 cm) in 43.2% of cases. Overall, the result for visible vein length has 71.25% accuracy in determining dialysis status. The diagnostic values of all clinical examinations are summarized in Table 2. Discussion According to the results of our study, the accuracy of all physical assessments was above 70%, and except for vein length, the other criteria had a sensitivity and NPV of 100%. This means that hemodialysis has a low probability of success when any of the four clinical assessments is undesirable; in other words, the maturation of an AV fistula for successful hemodialysis can be diagnosed by physical examination, and if the clinical assessment is undesirable, the practical success of dialysis and the maturation of the fistula are unlikely. These results show that clinical examination is a useful and noninvasive method for determining the maturation of AVFs for suitable hemodialysis. In this study, firmness of the vein had the highest specificity and PPV (83.9% and 64.3%, respectively). The specificity and PPV indicate that a desirable clinical examination (even one desirable for all the clinical criteria) cannot completely guarantee the success of hemodialysis. The results of this study showed that, among the four criteria, firmness, with an accuracy of 87.5%, had the greatest accuracy, followed by thrill, engorgement, and vein length, respectively. To date, to the best of our knowledge, our study is one of the first to evaluate postoperative clinical assessment in determining the maturation of AVFs for suitable hemodialysis. [18] In another recent study, Wayne et al. [19] showed a significant association of the absence of peripheral vascular disease, aspirin use, and the absence of previous permanent dialysis access with higher primary patency rates. They concluded that higher blood pressure during the maturation period, relative to preoperative blood pressure, was associated with lower patency rates. In a similar study, the clinical predictors associated with FTM were age ≥65 years, peripheral vascular disease, coronary artery disease, and white race. [20] In 2008, Berman et al.
[21] observed, during a 12-month period in 70 autologous AVFs, that intraoperative blood flow measurements at the time of autologous AVF construction can identify fistulas that are unlikely to mature and that therefore require immediate revision or abandonment, which will ultimately expedite the establishment of a useful access in the HD patient. Feldman et al. [22] found that maturation was associated with greater intraoperative doses of heparin, the use of large-diameter veins, and a mean arterial pressure of 85 mm Hg or greater. Using the optimal surgical technique, the probability of successful AVF maturation would have been as high as 84%. In a study conducted by Patel et al., [11] preoperative duplex ultrasonography scanning was performed in 68% of patients and venography in 32% of patients. The autogenous fistula creation rate increased from 61% to 73% in all patients with hemodialysis access. The functional maturation rate decreased from 73% to 57% after the implementation of preoperative imaging and more aggressive vein use. They concluded that the implementation of preoperative duplex US scanning and venography, as a component of a more aggressive protocol to create native fistulas, was pivotal in exceeding the Dialysis Outcome Quality Initiative (DOQI) guidelines for hemodialysis access. Recently, in one study, the pattern of blood flow was evaluated as a predictor of the maturation of AVFs for hemodialysis. Doppler ultrasound was used immediately postoperatively and at follow-up (6 weeks). They concluded that spiral laminar flow was strongly supportive of successful fistula maturation. A "thrill" was characteristic of spiral flow rather than turbulence. [23] In obese subjects, for example, even veins that are well developed can be difficult to visualize or palpate because of their depth; in these cases, duplex ultrasonography (DUS) can reveal whether the fistula is mature, and US mapping of the outflow veins can facilitate the first cannulation and simplify subsequent punctures. [24] In this regard, it is important to recall the proposal of Rayner et al., [25] which was incorporated in the K-DOQI guidelines [26] as "the rule of 6." It identifies the ultrasound characteristics that confirm that a fistula is mature and therefore ready for use: a flow volume of 600 ml/min, an outflow vein diameter of 6 mm, and an outflow vein depth of 6 mm below the skin surface. These findings clearly show that maturation should be sonographically monitored until the fistula is used, especially when maturation seems to be proceeding slowly and in patients whose veins cannot be easily assessed with physical examination alone (e.g., due to obesity). DUS measurement of AVF flow volumes is perhaps the only imaging tool that can be used to monitor the fistula even during its maturation.
[27,28] As mentioned above, various studies of the different factors affecting and predicting AVF maturation have been done, while our approach was to identify postoperative, noninvasive clinical characteristics that are predictive of fistula FTM. In many centers, Doppler ultrasound and an expert operator are not accessible. This is the first study to establish the clinical examination needed to determine the maturation of AVFs into a functional access. The results of our study showed that the high sensitivity and relatively low specificity of the clinical criteria mean that unfavorable results of each clinical criterion predict unfavorable dialysis. Evaluation of a newly created fistula 4-6 weeks after surgery should be considered mandatory. If the fistula is going to become adequate for dialysis, it will be apparent at this time. This evaluation can be accomplished by physical examination. However, it must be performed by someone who is knowledgeable. Using a systematic approach facilitates the evaluation and ensures that a problem is not overlooked. Once it is determined that the fistula is dysfunctional, the case should be immediately referred for management to an interventionalist who is experienced in dealing with early fistula failure. The majority of these cases can be salvaged. Financial support and sponsorship This study was not financially supported by any organization. Table 2: Diagnostic values of clinical examination. PPV: Positive predictive value; NPV: Negative predictive value.
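For reference, the diagnostic indices reported in Table 2 follow directly from a 2x2 contingency table, as the following minimal Python sketch shows. The cell counts in the example are reconstructed from the vein-length percentages reported above (with "positive" meaning an unfavorable finding, vein length < 6 cm, and the condition being unsatisfactory hemodialysis) and are given for illustration only.

# Diagnostic indices of a binary clinical test from a 2x2 contingency table
def diagnostic_indices(tp, fp, fn, tn):
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "PPV": tp / (tp + fp),
        "NPV": tn / (tn + fn),
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
    }

# Counts reconstructed from the reported percentages (18 unsuccessful and
# 62 successful dialyses among 80 patients); shown for illustration and
# not taken directly from the study data.
print(diagnostic_indices(tp=16, fp=21, fn=2, tn=41))
# -> sensitivity 0.889, specificity 0.661, PPV 0.432, NPV 0.953, accuracy 0.712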
2018-04-03T03:03:05.187Z
2017-03-01T00:00:00.000
{ "year": 2017, "sha1": "33049b591e03b6c68faec0b9fa20c98795de19b8", "oa_license": "CCBYNCSA", "oa_url": "https://doi.org/10.4103/2277-9175.201330", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "0322fbc4f1fe41768e6d63f0cba9bb814576a405", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
245479129
pes2o/s2orc
v3-fos-license
Case Report Idiopathic intracranial hypertension leading to bilateral optic atrophy in a patient with recent COVID-19 infection: a case report and radiographic findings of acute optic Neurologic complications are common in patients hospitalised with COVID-19 infection. The most common complications are myalgias, headaches, encephalopathy and dizziness. Uncommon complications are stroke, motor and sensory deficits, seizures, ataxia and movement disorders. Multiple neuro-ophthalmological manifestations have also been reported in association with COVID-19. These complications may be the result of a range of pathophysiological mechanisms, such as hypoxic neuronal injury during active COVID-19 infection, renin-angiotensin system (RAS) dysfunction, immune dysfunction and direct injury by the virus, throughout the course of the disease. Here we report a case of the neuro-ophthalmic complication of idiopathic intracranial hypertension (IIH) followed by bilateral optic atrophy in a middle-aged man with recent COVID-19 infection. He presented to the emergency department with complaints of headache, dizziness and sudden painless bilateral diminution of vision for 3 days. His fundus examination was suggestive of bilateral papilledema, his brain MRI was normal, and the CSF opening pressure was raised on lumbar puncture. His MRV was normal, with no evidence of CSVT. He was started on steroids and acetazolamide. His headache improved but there was no improvement in visual acuity. Repeat fundus examination showed pale discs, and orbital MRI was suggestive of bilateral optic atrophy. INTRODUCTION Since December 2019, Coronavirus disease 2019 (COVID-19) has become a global pandemic caused by the highly transmissible severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). 1 A wide variety of neuro-ophthalmologic manifestations have also been found in association with COVID-19, mostly related to demyelinating disease. While the mechanism of these manifestations is unknown, hypotheses include direct neuronal invasion, endothelial cell dysfunction leading to ischemia and coagulopathy, or a widespread inflammatory 'cytokine storm' induced by the virus. 2 Optic neuritis has developed in several infected patients, presenting with neuromyelitis optica spectrum disorder and anti-myelin oligodendrocyte glycoprotein (anti-MOG) antibodies. 3 CASE REPORT A middle-aged man presented to the emergency department with headache, dizziness and sudden painless bilateral diminution of vision for 3 days. There was no significant drug intake history. The headache was not unilateral; it was continuous, sometimes pulsatile, and associated with nausea but no vomiting. The headache was followed by blurring of vision that worsened gradually over a period of 3 days; the vision loss was painless, acute in onset and progressive. Vitals were normal: pulse of 78/min, blood pressure of 130/80 mmHg, oxygen saturation of 99%, and respiratory rate of 17 breaths per minute. CNS examination showed no neurodeficit and no cranial nerve palsies. Ophthalmological examination showed bilateral mild papilledema. Keeping in view the possibility of idiopathic intracranial hypertension, the patient underwent urgent brain MRI and MR venography. These showed no intracranial abnormality, no evidence of sinus venous thrombosis, and no apparent bilateral venous sinus narrowing/stenosis. Lumbar puncture (LP) revealed an opening pressure of 40 cm H2O, and 30 ml of clear and colourless CSF was drained at this time. CSF analysis was within normal limits. With evidence of increased intracranial pressure from the LP findings, acetazolamide 250 mg TID was initiated. The patient reported improvement in his headache but his visual acuity remained the same.
Over the next three days, there was no improvement in his vision. Keeping in mind the possibility of post-viral/post-infectious optic neuritis, the patient was started on intravenous dexamethasone 6 mg TID. LP was repeated and revealed a CSF opening pressure of 25 cm H2O. Further imaging with MRI of both orbits revealed bilateral optic atrophy. There was no evidence of optic neuritis and no peri-optic enhancement, thereby excluding the suspicion of anti-MOG syndrome or NMOSD in correlation with his recent COVID-19 infection. The patient has been on follow-up for 2 months. There has been no significant improvement in his visual acuity. He was discharged on acetazolamide, topiramate and supportive drugs for headache prophylaxis, with regular follow-up in our OPD. He showed improvement in his headache but no visual improvement or deterioration. Our patient's symptoms of headache, optic disk oedema, and high opening pressure on LP led to the diagnosis of idiopathic intracranial hypertension. His brain MRI and MRV were unremarkable. LP showed a high opening pressure (40 cm H2O) and normal CSF analysis, with headache that improved with therapeutic LP, acetazolamide and steroids. He fulfilled the modified Dandy criteria for IIH. However, as there was no improvement in visual acuity, orbital MRI was done, which was suggestive of bilateral optic atrophy. DISCUSSION COVID-19 infection caused a horrific pandemic worldwide. Some people experienced flu-like symptoms, while many died due to pulmonary complications. During the first phase of the pandemic, pulmonary symptoms were in the limelight, but later other signs and symptoms were also reported. There have not been many studies which analyse the ophthalmic and neuro-ophthalmic complications of COVID-19. COVID-19-associated demyelination is hypothesized to be attributable to the cytokine storm due to IL-1, IL-6, and TNF-α, which activates the glial cells and thereby causes demyelination. 6 Another hypothesis ascribes it to SARS-CoV-2-triggered production of anti-glial-cell antibodies in the para-infectious or postinfectious state, thereby leading to demyelinating pathologies such as acute or subacute disseminated encephalomyelitis (ADEM), acute haemorrhagic leukoencephalitis with MRI features of a concentric demyelination pattern, acute transverse myelitis, and neuromyelitis optica. There have also been reported cases of secondary intracranial hypertension with concurrent COVID-19 infection and MIS-C in the paediatric population. 8 Owing to autoantibody production and thrombophilic disorders in COVID-19, physicians must have a low threshold to investigate secondary IIH and demyelinating disorders in patients with headache and decreased vision following recent COVID-19 infection. 9 Idiopathic intracranial hypertension (IIH) constitutes a constellation of signs and symptoms of raised intracranial pressure with fulfilment of the modified Dandy criteria. Physicians must ask leading questions about double vision, decreased vision, pain with eye movements, gait abnormalities, or other neurological conditions while screening patients with COVID-19 symptoms. In patients presenting with these complaints, COVID-19 testing may be prudent while doing the tests to determine the aetiology. Treating doctors should also do a quick assessment of visual acuity, pupillary response, ocular motility, ptosis, optic disc, and reflexes, since the majority of these conditions occur in the early phase of the disease.
Neuroimaging with angiography, with attention to the cranial nerves for any abnormal enhancement or cerebral infarcts, can be advised based on the assessment. CONCLUSION The pandemic caused by SARS-CoV-2 has had health implications of unparalleled magnitude. The infection can range from asymptomatic to mild to life-threatening respiratory distress. It can affect almost every organ of the body. Direct effects of the virus, immune-mediated tissue damage, activation of the coagulation cascade and the prothrombotic state induced by the viral infection, the associated comorbidities, and the drugs used in management are responsible for the findings in the eye. Ophthalmic manifestations may be the presenting feature of COVID-19 infection, or they may develop several weeks after recovery. Ophthalmologists and physicians should be aware of the possible associations of ocular diseases with SARS-CoV-2 in order to take a relevant history, look for specific signs, advise appropriate tests, and thereby mitigate the spread of infection as well as diagnose and initiate early treatment for life- and vision-threatening complications. In the COVID-19 pandemic, health systems struggled to prioritize care for affected patients; however, physicians also attempted to maintain care for other less-threatening medical conditions that could have led to permanent disabilities if untreated. IIH is a relatively common condition affecting young females that could lead to permanent blindness if not properly treated. Diagnosis and follow-up of papilledema due to IIH during and after the COVID-19 pandemic can be facilitated by nonmydriatic fundus photography and optical coherence tomography. COVID-19 may mimic IIH by presenting as cerebral venous sinus thrombosis, papillophlebitis, or meningoencephalitis, so a high index of suspicion is required in these cases. When surgical treatment is indicated, optic nerve sheath fenestration is the primary procedure of choice. IIH is a serious vision-threatening condition that could lead to permanent blindness and disability at a relatively young age if left untreated. It could be the first presentation of a COVID-19 infection. Certain precautions, if taken during the diagnosis and management of this condition, may allow appropriate care to be delivered to these patients while minimizing the risk of COVID-19 infection.
2021-12-26T16:14:25.072Z
2021-12-23T00:00:00.000
{ "year": 2021, "sha1": "5f151c4d71553c85775530d894a4f7727697dbf6", "oa_license": null, "oa_url": "https://www.ijmedicine.com/index.php/ijam/article/download/3260/2220", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "f69b6fb731a344b5c7ec252f8e1db554c7e651f4", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
251831221
pes2o/s2orc
v3-fos-license
Drinking during social isolation: investigating associations between stress, inhibitory control, boredom, drinking motives, and alcohol use Abstract Background: We aimed to assess whether stress, boredom, drinking motives, and/or inhibitory control were related to alcohol use during a period of social isolation. Method: Analyses were carried out on questionnaire data (N = 337) collected during the first wave of the COVID-19 pandemic (7 April–3 May 2020). We first assessed changes in drinking behavior, stress and boredom. We then regressed drinking behavior on drinking motives, inhibitory control, stress, and boredom. We also investigated interactions between change in stress/boredom and inhibitory control. Results: A minority of respondents reported increased alcohol use (units = 23.52%, drinking days = 20.73%, heavy days = 7.06%), alcohol-related problems (9.67%), and stress (36.63%). Meanwhile, most respondents reported increased boredom (67.42%). Similarly, boredom significantly increased (B = 21.22, p < .001), on average, while alcohol-related problems decreased (B = −1.43, p < .001). Regarding drinking motives, decreased alcohol-related problems were associated with social drinking motives (B = −0.09, p = .005). Surprisingly, risk-taking was associated with decreased alcohol-related problems (B = −0.02, p = .008) and neither stress nor boredom independently predicted changes in alcohol use. Finally, several significant interactions suggested that those who were more impulsive and less bored were more likely to report increased alcohol use and vice versa. Conclusions: These data provide a nuanced overview of changes in drinking-related behavior during the COVID-19-induced period of social isolation. While most people reduced their drinking, there was evidence of complex interactions between impulsivity and boredom that may be explored in future studies. Introduction Increased mortality and morbidity have been linked to social isolation (e.g. loneliness) for decades (e.g. House et al. 1988). A large volume of theoretical and empirical work states that this effect ultimately results from increased activation of the hypothalamic-pituitary-adrenocortical (HPA) axis (Cacioppo et al. 2015). Chronic HPA axis activation results in dysfunctional stress responses and deficits in emotional regulation (Milivojevic and Sinha 2018). In turn, these neuroadaptations contribute to the development and maintenance of addiction and offer an explanation as to why stress is a prominent risk factor for alcohol misuse (e.g. Jose et al. 2000; Ruisoto and Contador 2019). 'Boredom' (i.e. the inability to find satisfaction or interest while participating in an activity) has also been associated with addictive behaviors such as gambling (Eastwood and Mercer 2010) and alcohol misuse (Biolcati et al. 2018). Those with reduced inhibitory control tend to have greater boredom proneness (Struk et al. 2016; Isacescu et al. 2017). Therefore, poor inhibitory control may moderate the relationship between boredom and alcohol use, whereby the impact of boredom on alcohol use is greater among those with poor inhibitory control. Other well-researched moderators of drinking behavior exist: so-called drinking motives (Cooper 1994). Several general patterns emerge when examining the impact of drinking motives on alcohol use: social motives (i.e. drinking to improve social situations) tend to be related to drinking frequency; enhancement motives (i.e.
drinking to increase positive affect) are related to heavy drinking; coping motives (drinking to reduce negative affect) are associated with a greater number of alcohol-related problems; and conformity motives (i.e. drinking to fit in with a group) are typically negatively associated with frequency and quantity of alcohol use (Kuntsche et al. 2005, 2014; Lyvers et al. 2010). Drinking motives have also been shown to impact alcohol use following crisis. For example, after the 9/11 terrorist attack, Beseler et al. (2011) found that both drinking to cope and drinking for enjoyment (i.e. enhancement) were associated with increased alcohol use. Similarly, 'drinking to cope' has been highlighted as a prominent risk factor for increased alcohol use during the COVID-19 pandemic in the USA (Rodriguez et al. 2020) and Canada (Wardell et al. 2020). The COVID-19 pandemic and associated 'lockdowns' (i.e. government-mandated periods of social isolation characterized by orders to remain at home to mitigate the spread of disease; Anderson et al. 2020) have resulted in increased mental distress worldwide through (for example) social isolation, loss of income, increased childcare responsibilities, and monotony (Bhattacharjee and Acharya 2020; Gavin et al. 2020; Ornell et al. 2020; Pfefferbaum and North 2020). Thus, the pandemic presents a naturalistic source of negative affect. Early in the pandemic, several scholars warned that long-term isolation may create an unforeseen public health crisis involving increased alcohol consumption (Clay and Parker 2020; Finlay and Gilmore 2020; Ramalho 2020). As a result, attempts were made to synthesize work conducted in relation to other crises involving trauma (e.g. the 9/11 attack), epidemic outbreaks (e.g. the 2002-03 SARS pandemic), and economic hardship (e.g. the 2008 recession) in relation to alcohol use (Gonçalves et al. 2020). Ultimately, two opposing scenarios were proposed: (1) increased psychological distress may drive an increase in alcohol use and related harms; (2) alcohol policies which reduce the physical and financial availability of alcohol would cause a reduction in alcohol consumption and associated problems. Following these predictions, recent work has tried to characterize those most at risk of increased alcohol consumption, although this literature offers a somewhat mixed picture. Several studies provide evidence that increased distress was associated with increased drinking (Koopmann et al. 2020; Neill et al. 2020; Tran et al. 2020; Garnett et al. 2021; Jacob et al. 2021). Conversely, in a large-scale study comprising data from 21 European countries, Kilian et al. (2021) found evidence that drinking decreased in most countries and that this reduction was primarily driven by the reduced availability of alcohol. Nevertheless, increased distress dampened this relationship. Additionally, recent work has shown that impulsivity acts as a moderator of stress-related pandemic drinking (Clay et al. 2021). However, that paper reports a secondary analysis of birth cohort data, and such surveys prioritize brevity and breadth. Thus, single-item measures of impulse control were utilized, which were not empirically validated and may suffer from reduced content validity. Overall, previous research provides strong evidence for the prediction that those who increased their drinking during the pandemic were drinking to cope, which may be moderated by impulsivity, and limited evidence that a reduction in affordability or availability played a role.
Therefore, our work here was motivated by the need to evaluate risk factors for those who increased their drinking during the pandemic; whether they were drinking to cope and whether this relationship, if present, was moderated by impulsivity (using empirically validated measures). As we move out of the pandemic, this work is of importance as it pertains to drinking in the home (versus in public settings). For instance, prior to the pandemic, a significant proportion of alcohol was consumed at home (perhaps due to convenience, cost, safety, autonomy, and stress relief) (e.g. Foster and Ferguson 2012; Callinan et al. 2016). Moreover, most long-term harms that occur because of alcohol use (e.g. liver disease and cancer) are linked to total alcohol consumption (GBD 2016 Alcohol Collaborators 2018). However, research typically focuses on public drinking (Callinan and MacLean 2020). Thus, if a large amount of alcohol is typically consumed in the home, further research which focuses on drinking in this setting is crucial in reducing the burden of alcohol, and data collected during the COVID-19 pandemic provide the perfect opportunity for this (Callinan and MacLean 2020). We aimed to investigate how some of the theoretical mechanisms that underlie alcohol use (in a non-clinical sample, in the hope that our results are generalizable to as many people as possible) may have operated during a period of social isolation brought on by the COVID-19 pandemic. We hope that this increased theoretical understanding of socially isolated home drinking will have broader implications beyond the pandemic by, for instance, identifying those most at risk of future alcohol-related long-term harm. We preregistered several hypotheses:1 (1) alcohol use would increase during social isolation; (2) both coping and enhancement motives would be associated with increased alcohol use; (3) poor inhibitory control, stress, and boredom would be positively associated with an increase in alcohol use; and (4) the association between poor inhibitory control and alcohol use would be greater among those with higher negative affect (stress and boredom). 1 The original preregistration listed ten hypotheses. Data testing hypotheses one to seven and hypothesis nine are reported in the main body of this paper. These have been briefly summarized in the Introduction. As there was no significant association between a change in stress and perceived stress reactivity (see Appendices), our planned moderation analysis, detailed in hypothesis eight of the preregistration, was not conducted. As this is a two-part study, Hypothesis 10 pertains to additional longitudinal work which is, to date, ongoing. Recruitment A survey designed to assess changes in, and factors related to, drinking behavior during social isolation was created using Qualtrics (Provo, Utah). The survey was developed in English and then translated into French, Spanish, Italian, Portuguese (European and Brazilian), and Hebrew by the native-speaking authors. Some wording had to be changed slightly to retain the original meaning and to ensure consistency across countries. Participants were eligible if they were ≥18 years of age, had a reliable internet connection, and were proficient in at least one of the languages listed above. Participants could complete the survey on either a computer, smartphone, or tablet. All responses were completed between 7 April 2020 and 3 May 2020.
During this time, the survey was advertised by several news media outlets and throughout the co-authors' networks via email, word-of-mouth, and social media. All participants gave their informed consent and were not compensated. The study was approved by the University of Portsmouth Science Faculty Ethics Committee (ref: SFEC 2020-030). Demographic information Demographic data collected were age, gender, ethnicity, country of residence, education level, occupation, whether the respondent was a key worker, gross individual income over the last 12 months, subjective social status, marital status, the number of people in the same household as the respondent, number of offspring, who the respondent was isolated with, and whether the respondent was suffering from any COVID-19 associated symptoms. Country of residence was recoded to reflect sub-regions of the world based on the United Nations M49 Standard (United Nations 2020). This allowed us to find a balance between the number of levels and the number of participants within each level (Hox et al. 2018). The gross individual income question was presented in local currency relative to British Pounds and then recoded to relative income using World Bank adjusted net national income per capita data (The World Bank 2020), where relative income = gross individual income / adjusted net national income per capita. An index of socioeconomic status (SES), combining relative income, education, occupation, and subjective social status (Diemer et al. 2013), was calculated using exploratory factor analysis (EFA); see Appendices. This allowed us to conserve statistical power during hypothesis testing by controlling for the variables entered into the final EFA using a single model parameter. Similar approaches to creating an index of SES have been published elsewhere (e.g. Scharoun-Lee et al. 2009; Yu et al. 2014). Alcohol use and drinking behavior Alcohol Use Disorders Identification Test (AUDIT): The AUDIT was created by the World Health Organization as a brief assessment of alcohol misuse (Babor et al. 1992, 2001). It has been shown to have excellent psychometric properties when used to assess alcohol use disorders in a variety of settings (Fleming et al. 1991; Claussen and Aasland 1993). The AUDIT is scored on a scale from 0 to 40, where scores between 0 and 7 indicate low-risk drinking, scores between 8 and 15 indicate increasing risk of harm, scores between 16 and 19 indicate higher-risk drinking, and a score of 20 or above suggests alcohol dependence. Internal consistency of the AUDIT in the present study was good, Cronbach's α = 0.78. Typical Atypical Drinking Diary (TADD): The TADD was used to retrospectively assess alcohol use (Patterson et al. 2019). When completing the TADD, participants fill in two weekly diaries: one for typical weeks and another for atypical weeks (i.e. either less than or greater than a typical week). Participants specified the type, strength, volume, and quantity of the beverages they consumed for each day of the 7-day week and then estimated how many weeks they drank this typical/atypical amount during the specified period. Participants were asked to estimate what they drank before (i.e. 'before the COVID-19 induced isolation') and during (i.e. 'after the COVID-19 induced isolation') social isolation. This method allows for the calculation of units (1 unit = 8 g pure ethanol), drinking days, and heavy drinking days (>8 units per day for men or >6 units per day for women) per week. Research indicates that the TADD is more accurate and time-efficient than other retrospective assessments of drinking, such as the Timeline Followback (Patterson et al. 2019).
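The internal-consistency coefficients reported throughout this section follow the standard Cronbach's α formula, α = k/(k − 1) × (1 − Σ item variances / variance of summed scores). A minimal Python sketch of how such values can be computed from an item-level response matrix is shown below; the example data are simulated and not from the study.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, k_items) response matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                         # number of items in the scale
    item_vars = items.var(axis=0, ddof=1)      # per-item sample variances
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed score
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

# Example: 100 simulated respondents answering a 10-item Likert scale.
# Uncorrelated random items give alpha near 0; real scales score higher.
rng = np.random.default_rng(0)
responses = rng.integers(1, 6, size=(100, 10))
print(f"alpha = {cronbach_alpha(responses):.2f}")
```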
Alcohol Problems Questionnaire (APQ): Alcohol-related problems were assessed using the Alcohol Problems Questionnaire (Drummond 1990). The APQ is a standalone scale that consists of 44 binary (yes/no) items designed to assess alcohol-related problems across four domains: commonly faced alcohol-related problems, problems related to romantic relationships, problems related to children, and problems related to work. Therefore, the maximum score on the APQ is 44, with a higher score reflecting a greater number of alcohol-related problems faced. Here, we added a 'Not Applicable' option to the latter subscales to allow the questionnaire to be relevant to a larger proportion of the population than the original scale. For instance, an 18-year-old student may not have any children. We also changed the wording for questions about romantic relationships from 'spouse' to 'spouse/partner' for the same reason. The APQ has been shown to have good validity and test-retest reliability (Williams and Drummond 1994). In the present study, the internal consistency was excellent, Cronbach's α = 0.94. Drinking motives Drinking motives were assessed using the Revised Drinking Motives Questionnaire (DMQ-R; Cooper 1994). The DMQ-R is a 20-item scale which proposes four motives for alcohol consumption: conformity (e.g. 'so you won't feel left out'); coping (e.g. 'drinking to forget your problems'); enhancement (e.g. 'to have fun'); and social (e.g. 'because it helps you enjoy a party'). Here, participants responded to each item using a 5-point Likert scale (1 = Almost never/never, 2 = Some of the time, 3 = Half of the time, 4 = Most of the time, 5 = Almost always/always). Each subscale contains five items. Thus, the maximum score per subscale is 25, with higher scores indicating greater endorsement of a motive. The DMQ-R has been shown to have good validity across cultures and in a variety of age groups (Fernandes-Jesus et al. 2016). Here, the internal consistency of the DMQ-R subscales ranged from acceptable to excellent, Cronbach's αs = 0.68-0.89. Negative affect Short stress overload scale (SOS-S) Self-report stress levels were measured before (i.e. 'before the COVID-19 related isolation') and during (i.e. 'since the COVID-19 related isolation') social isolation using the SOS-S (Amirkhan 2018). The SOS-S is a 10-item scale designed to act as a brief diagnostic tool for stress and stress-related disorders and has been shown to have good psychometric properties. Here, participants responded to each item using a five-point Likert scale (1 = Not at all; 5 = A lot). Therefore, the maximum score on the SOS-S is 50, with higher scores reflecting greater levels of stress. In the present study, internal consistency was excellent, Cronbach's αs = 0.90-0.92. Perceived stress reactivity scale (PSRS) Stress reactivity was assessed using the 23-item PSRS (Schlotz et al. 2011). The PSRS is a standalone scale with five subscales: prolonged reactivity, reactivity to work overload, reactivity to social conflict, reactivity to failure, and reactivity to social evaluation. Participants responded to each item using a 3-point Likert-type scale that varied depending on the framing of each item (e.g. 'When tasks and duties build up to the extent that they are hard to manage … ', 0 = ' … I am generally untroubled', 1 = ' … I usually feel a little uneasy', 2 = ' … I normally get quite nervous'). Therefore, the maximum total score on the PSRS is 46, with higher scores indicating greater levels of stress reactivity.
The psychometric properties of the PSRS have been established in several countries, with scores correlating with numerous stress-related disorders (Schlotz et al. 2011). In the present study, the internal consistency was good, Cronbach's α = 0.88. Multidimensional state boredom scale (MSBS) Boredom before and during social isolation was assessed using the MSBS (Fahlman et al. 2013). The MSBS is a 29-item scale with good psychometric properties that can be used to quantify boredom either using the total score or across five subscales: disengagement, high arousal, low arousal, inattention, and time perception. Here, participants responded to each statement using a seven-point Likert scale. Thus, the maximum score was 203, where higher scores reflect greater levels of boredom. The internal consistency here was excellent, with Cronbach's α ranging from 0.96 to 0.97. Inhibitory control The shortened urgency, premeditation, perseverance, sensation seeking, positive urgency, impulsive behaviour scale (S-UPPSP) The S-UPPSP was used to assess negative urgency (i.e. the tendency to act rashly under extreme negative emotions), lack of premeditation (i.e. the tendency to act without thinking), lack of perseverance (i.e. the inability to remain focused on a task), sensation seeking (i.e. the tendency to seek out novel and thrilling experiences), and positive urgency (i.e. the tendency to act rashly under extreme positive emotions) (Cyders et al. 2014). The S-UPPSP is a 20-item scale where participants rate several statements related to their impulsive behavior on a four-point Likert-type scale (1 = Agree strongly, 2 = Agree some, 3 = Disagree some, 4 = Disagree strongly). Each subscale is made up of four items; therefore, the maximum score on each subscale is 16, with higher scores reflecting a greater level of impulsivity. Numerous studies have suggested associations between impulsive traits measured using the original and shortened UPPS-P scales and alcohol use (e.g. Coskunpinar et al. 2013). In the present study, internal consistency of each subscale ranged from acceptable to good, Cronbach's α = 0.67-0.82. Domain-specific risk-taking scale (DOSPERT) The DOSPERT was administered to assess risk-taking (Blais and Weber 2006). The DOSPERT is a 30-item scale designed to assess five sub-domains of risk-taking: ethical, financial, health, recreational, and social. Here, participants rate how likely it is that they would engage with each activity or behavior using a 7-point Likert scale (1 = Extremely unlikely, 2 = Moderately unlikely, 3 = Somewhat unlikely, 4 = Not sure, 5 = Somewhat likely, 6 = Moderately likely, 7 = Extremely likely). Scores can be summed across all items or by subscale. Each subscale contains six items. Therefore, the maximum score overall is 210, with higher scores indicating a greater propensity to take risks. The DOSPERT has been shown to be a reliable and valuable assessment of 'real world' risk-taking via questionnaire (e.g. Highhouse et al. 2017). Here, the internal consistency of the DOSPERT was good, Cronbach's α = 0.82. Procedure After informed consent was confirmed, participants reported their demographic information before completing the remaining scales in counterbalanced order to eliminate order effects. Scales that measured both pre- and intra-isolation data (e.g. the TADD) were presented as one block, whereby the scale which sought pre-isolation responses was presented first.
Sample Due to limited financial and temporal resources, we used opportunity/snowball sampling to collect data from as many participants as possible within the study period (Lakens 2022). Overall, 1148 responses were recorded. Of these, 811 were excluded to ensure data integrity: 39.55% had >40% missing data; 21.43% reported living in sub-regions with an inadequate number of responses; 7.40% were classified as multivariate outliers based upon a Mahalanobis distance significant at p < .001 (Verardi and Dehon 2010; Tabachnick and Fidell 2014) and 0.17% were considered clear univariate outliers (see Appendices); 0.87% reported experiencing no social isolation; 0.52% were test data; 0.44% had gender recorded as transgender or 'prefer not to say' and 0.09% had ethnicity recorded as 'prefer not to say'; and 0.17% were duplicate responses. This left 337 cases for analysis. A simulation-based sensitivity power analysis (Lakens 2022) showed that our design had sufficient statistical power (1 − β = 80%) to detect an effect size of B = 0.0015 for our most complex model. Details of the sensitivity power analysis can be seen in the Appendices. Sociodemographic characteristics of the sample are shown in Table 1. Analysis Data, preregistered hypotheses, and code for analyses are posted on the Open Science Framework at https://osf.io/mnz34/. Data were analyzed using Stata IC (version 16.1) and R (version 4.0.4). Missing data Missing data were dealt with using multiple imputation (MI; Enders 2010). White et al. (2011, p. 388) recommended that 'm should be at least equal to the percentage of incomplete cases'. Here, the overall percentage of cases with incomplete data on analysis variables was 37.69%. Therefore, we used the mi impute chained command in Stata to generate 40 imputed datasets, using predictive mean matching, with d = 5 (Schenker and Taylor 1996). Graphical diagnostics (see Appendices) suggested that the datasets should be separated by at least 125 iterations of the imputation algorithm, thus we conservatively saved each dataset after the 150th iteration. The imputation model included all variables used in subsequent analyses together with the hypothesized interaction terms and three auxiliary variables that were believed to be correlated with missingness (percent progress in survey, date of response, AUDIT score). Interaction terms were imputed and estimated following Enders et al. (2014). Descriptive and inferential statistics Change scores were calculated for units, drinking days, heavy drinking days, alcohol-related problems, stress, and boredom, using the mi passive command. Descriptive statistics were calculated for each of the key study variables. Bivariate relationships were explored using Pearson correlations (see Appendices). Linear mixed-effects models (LMMs) were used to test our hypotheses. We included sub-region as a random effect to improve inference and generalizability (Barr et al. 2013). We first assessed change in alcohol use, stress, and boredom by entering change scores and covariates into models as fixed effects and interpreting the intercept (analogous to a one-sample t-test comparing the change score to zero). Next, we regressed change in alcohol use scores on our predictors of interest and covariates. Finally, we entered our hypothesized interactions into the models. All continuous predictor variables were grand mean centered to aid interpretation and reduce potential collinearity.
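The study ran its imputation and mixed-effects models in Stata; as a rough, hypothetical analogue of the pipeline just described (chained-equations imputation, grand-mean centering, and a random intercept for sub-region), a Python sketch with statsmodels might look as follows. The file and column names are invented for illustration.

```python
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.imputation.mice import MICEData

df = pd.read_csv("survey.csv")  # hypothetical one-row-per-respondent file

# Grand-mean center continuous predictors (aids interpretation, reduces collinearity)
for col in ["coping", "age"]:
    df[col + "_c"] = df[col] - df[col].mean()

# Chained-equations imputation, analogous in spirit to Stata's `mi impute chained`
imp = MICEData(df[["units_change", "coping_c", "age_c"]])
imp.update_all(10)  # run 10 cycles of the imputation algorithm

# Linear mixed-effects model with sub-region as a random intercept
lmm = smf.mixedlm("units_change ~ coping_c + age_c",
                  data=imp.data, groups=df["subregion"])
print(lmm.fit().summary())
```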
Models were separated by construct to conserve statistical power and to avoid erroneously conditioning the model estimates (Mcmullin et al. 2021). We implemented Benjamini and Hochberg's (1995) method of false discovery rate (FDR) control for pre-registered confirmatory analyses to reduce the probability of making a type I error due to multiple testing (Glickman et al. 2014). Significant interactions were probed using the Johnson-Neyman (JN) technique (Johnson and Neyman 1936) as suggested by Hayes (2017). Covariates included in all models were: age (e.g. Leigh and Stacy 2004), gender (e.g. White et al. 2015), ethnicity (e.g. Twigg and Moon 2013), SES (e.g. Probst et al. 2020), the number of COVID-19 symptoms experienced (e.g. Chaaban et al. 2021), and whether the participant was isolated with children (e.g. MacMillan et al. 2021). Models including stress as a predictor also controlled for perceived stress reactivity (e.g. Clay and Parker 2018). As the sample lacked ethnic diversity, a dichotomous White/non-White variable was used. As the margins command is incompatible with imputed data, the first complete dataset was used to probe and visualize significant interactions. For brevity, non-significant LMM results are reported in the Appendices. Results were considered significant when p < .05. Associations between drinking motives and alcohol use behavior Social motives were associated with a decrease in alcohol-related problems (B = −0.09, FDR-adjusted p = .005). No other significant relationships were found. Associations between inhibitory control, stress, boredom, and alcohol use Risk-taking (DOSPERT score) was associated with a decrease in alcohol-related problems (B = −0.02, FDR-adjusted p = .008). No other significant associations were found. Moderation analyses suggested that boredom modified the relationship between lack of premeditation and the number of units consumed per week (B = −0.02, FDR-adjusted p = .034), the number of weekly drinking days (B = −0.004, FDR-adjusted p = .027), and the number of heavy drinking days (B = −0.002, FDR-adjusted p = .048). No other significant interactions were observed. JN plots (see Figure 2) revealed that those who were more impulsive and less bored tended to report increased alcohol use, and vice versa. Specifically, a decrease of ≥16 MSBS points was associated with an increase in the number of units consumed, whereas an increase of ≥28 points was associated with a decrease in the number of units consumed. Similarly, decreased MSBS scores were associated with an increased number of drinking days. Meanwhile, an increase of <19 MSBS points was associated with a decrease in drinking days. Finally, a decrease of ≥16 MSBS points was associated with an increase in the number of heavy drinking days, whereas an increase of ≥18 MSBS points was associated with an increase in the number of heavy drinking days. Discussion The present study aimed to better understand how a period of social isolation, brought about by the recent COVID-19 pandemic, affected alcohol use. By assessing associations between changes in drinking behavior, drinking motives, inhibitory control, stress, and boredom, we provide a nuanced overview of how some of the theoretical mechanisms which underlie alcohol use and misuse may have operated during this time. We found that approximately 1 in 4 respondents reported drinking more and around 1 in 10 reported experiencing an increased number of alcohol-related problems.
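The Benjamini-Hochberg adjustment applied to the confirmatory p values above is a one-liner in most statistics libraries; a small Python sketch follows (the raw p values here are made up for illustration).

```python
from statsmodels.stats.multitest import multipletests

pvals = [0.004, 0.009, 0.03, 0.04, 0.20, 0.55]  # illustrative raw p values

reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
for p, pa, r in zip(pvals, p_adj, reject):
    print(f"raw p = {p:.3f} -> FDR-adjusted p = {pa:.3f} (significant: {r})")
```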
[Figure 1. Changes in alcohol use, alcohol-related problems, stress, and boredom during social isolation (N = 337). Note: both prevalence estimates (top) and effect sizes (bottom) were calculated using imputed data (m = 40); adjusted models controlled for age, gender, ethnicity, socioeconomic status, the number of symptoms experienced, and whether the respondent was isolating with children. 1 unit = 8 g pure ethanol; 1 heavy day = consuming >8 units per day for men or >6 units per day for women; APQ: Alcohol Problems Questionnaire; SOS-S: Short Stress Overload Scale; MSBS: Multidimensional State Boredom Scale. * FDR-adjusted p < .05, ** FDR-adjusted p < .01, *** FDR-adjusted p < .001.] These findings correspond to similar work conducted during the COVID-19 pandemic (Koopmann et al. 2020; Neill et al. 2020; Tran et al. 2020; Clay et al. 2021; Garnett et al. 2021; Jacob et al. 2021; Schmits and Glowacz 2021; Kilian et al. 2022). Most respondents reported feeling more bored during lockdown, as in previous work (Martarelli and Wolff 2020; Jackson et al. 2021; Latif and Karaman 2021). Stress levels, however, either stayed the same or decreased for most and, despite our prediction, stress was not significant in any model. Our findings are at odds with previous literature that has found the pandemic has been associated with increased mental distress (Bhattacharjee and Acharya 2020; Gavin et al. 2020; Ornell et al. 2020; Pfefferbaum and North 2020), and that pandemic-related distress was associated with increased drinking (Koopmann et al. 2020; Neill et al. 2020; Tran et al. 2020; Garnett et al. 2021; Jacob et al. 2021). One explanation for this discrepancy may be that the physiological and psychological effects of acute vs. chronic stress differ (Stephens and Wand 2012; Crosswell and Lockwood 2020). Thus, it is plausible that the effect of stress on drinking differs as a function of the timescale and severity. Alternatively, it may be due to differences in the measures used; several studies cited above utilized measures that are typically used to diagnose manifestations of poor mental health (e.g. depression, anxiety) in clinical settings, while we used a measure of perceived stress. Similar to us, other nonclinical studies carried out during the pandemic, using momentary assessments of positive and negative affect, suggested that preconsumption affect was not associated with increased drinking during the pandemic (Tovmasyan et al. 2022). Finally, the discrepancy may relate to the nature of our sample, which was predominantly highly educated Westerners. Those who were high in risk-taking (DOSPERT total score) tended to face fewer alcohol-related problems during social isolation, despite impulsivity (i.e. the tendency to take risks) being an established risk factor for addictive behaviors (see Dalley and Ersche 2019; Lee et al. 2019 for reviews). However, boredom was found to be a critical moderator here: those who were less impulsive (in terms of lack of premeditation), who also reported feeling more bored, were more likely to increase alcohol use during the isolation, and vice versa. Previous research has identified boredom as a risk factor for health risk behaviors, such as substance misuse (e.g. Wegner and Flisher 2009). However, we found that although most participants reported increased boredom, the majority also reported a decrease in alcohol use. A reason for the decreased alcohol use in those showing higher rates of boredom may relate to a lack of interest in alcohol outside of the typical situations. For example, drinking is typically a social activity (e.g. Niland et al. 2013), and we found that social motives were the most endorsed drinking motive among our sample; indeed, those with higher social drinking motives reported fewer alcohol-related problems. Thus, this suggested that, on average, our sample were motivated to drink when in social situations; something clearly impacted significantly by the social isolation. Reward expectancy (i.e. the anticipated reward associated with alcohol consumption) is determined by drinking motives, with those who tend to 'drink to cope' showing the highest anticipated reward expectancy (Birch et al. 2004; Grant and Stewart 2007). In our sample, coping was one of the least endorsed motives, suggesting that our sample were low in this trait. In this sense, the expected positive reinforcement associated with drinking (i.e. alleviation of the boredom) would not be a strong motivator to drink in our sample. Further research is needed to disentangle the relationship between drinking motives, reward expectancy, boredom, and alcohol consumption. Boredom is associated with a negative affective state, which can be high- or low-arousal (Fahlman et al. 2013). In either case, boredom is associated with anhedonia, thus theoretically decreasing the pleasure associated with usually rewarding activities (Watson et al. 2020). Although typically boredom-induced anhedonia is not associated with substance misuse (Nikčević et al. 2017), boredom is a complex and multifaceted phenomenon (Raffaelli et al. 2018). Therefore, as people were exposed to an unprecedented period of social isolation, and subsequently high levels of boredom were reported here and in other studies (e.g. Droit-Volet et al. 2020), it may be that the phenomenon experienced during the pandemic is dissimilar (in terms of intensity and duration) to that examined in previous work (e.g. laboratory-based studies) or during previous times. Taken together, these factors may offer a potential explanation for our findings. Limitations We acknowledge several study limitations. First, there were relatively high levels of attrition. This may have been driven by the length of the survey, as several relatively long and detailed psychometric instruments were employed. However, a limitation of previous work in this area is that brief single-item measures, which may be limited by reduced content validity, were used (Clay et al. 2021). Thus, the present work overcomes this limitation, providing nuance at the expense of sample size. Nevertheless, the bias introduced by missing data was minimized by employing multiple imputation. Second, respondents tended to be White, highly educated, and relatively wealthy. Ultimately, this may limit the generalizability of our findings to those with similar sociodemographic characteristics. Similarly, the COVID-19 pandemic has been an unprecedented time, thus pandemic-related findings may only hold true inside this timeframe. Third, self-report measures are prone to measurement error. For instance, there is no way to independently verify self-reported drinking, and people typically underestimate their alcohol consumption on questionnaires (Northcote and Livingston 2011). Fourth, 'true' baselines for drinking behavior, stress, and boredom were unavailable and retrospective measures were employed as a proxy. Therefore, causal inference is precluded.
Fifth, accurately estimating determinants of change is notoriously difficult, and these considerations informed our analysis. Therefore, we purposefully tried to avoid spurious findings by not including baseline measures in our models (i.e. by using change scores instead) (Glymour et al. 2005). Finally, there are other potential confounding factors that were not accounted for here, such as mood disorders (Charles et al. 2021), as these data were not available. [Figure 2 note: Models were fitted using imputed data (m = 40). Models were adjusted for age, gender, ethnicity, socioeconomic status, the number of symptoms experienced, and whether the respondent was isolating with children. The first imputed dataset was used to visualize statistically significant interactions. 1 unit = 8 g pure ethanol; 1 heavy drinking day = consuming >8 units per day for men or >6 units per day for women. Dashed lines represent the 95% CI.] Conclusions We aimed to understand how a period of long-term social isolation affected alcohol use, particularly focusing on drinking motives, negative affect (i.e. stress and boredom), and inhibitory control. Our rationale was not just to characterize patterns observed during COVID-19, but to use the government-enforced lockdowns to model theoretical mechanisms by which alcohol consumption in the home could be affected by periods of enforced social isolation. We found that approximately one-quarter of respondents reported drinking more and around one-tenth reported facing an increased number of alcohol-related problems. Coupled with recent national statistics, which suggest that alcohol-related deaths in the UK reached an all-time high in 2020 (14 deaths per 100,000 people) (Office for National Statistics 2021), it is clear that an 'at-risk' group of individuals, who deserve immediate attention, may also require the allocation of future resources to mitigate harm. Surprisingly, however, increased risk-taking was associated with a decrease in the number of alcohol-related problems faced during social isolation, and there was no evidence of an association between either stress or boredom and a change in alcohol use behavior. Moreover, several significant interactions suggested that those who were more impulsive and less bored were more likely to report increased alcohol use, and vice versa. Therefore, during a period of social isolation, some theoretical mechanisms which underlie alcohol use and misuse may not be observed. This has important implications when considering mechanisms of alcohol misuse; researchers should potentially consider evaluating people's social interactions and isolation status during future work and interventions.
2022-08-26T15:19:16.084Z
2022-08-23T00:00:00.000
{ "year": 2023, "sha1": "f25533f5f1104ccb68567ffb5503c29bfa97b2dc", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1080/16066359.2022.2099543", "oa_status": "HYBRID", "pdf_src": "TaylorAndFrancis", "pdf_hash": "c2954d6313449d128f9951fe0fa5157496979a90", "s2fieldsofstudy": [], "extfieldsofstudy": [] }
211522562
pes2o/s2orc
v3-fos-license
Cholesterol Reduces Partitioning of Antifungal Drug Itraconazole into Lipid Bilayers Cholesterol plays a crucial role in modulating the physicochemical properties of biomembranes, both increasing mechanical strength and decreasing permeability. Cholesterol is also a common component of vesicle-based delivery systems, including liposome-based drug delivery systems (LDSs). However, its effect on the partitioning of drug molecules to lipid membranes is very poorly recognized. Herein, we performed a combined experimental/computational study of the potential for the use of the LDS formulation for the delivery of the antifungal drug itraconazole (ITZ). We consider the addition of cholesterol to the lipid membrane. Since ITZ is only weakly soluble in water, its bioavailability is limited. Use of an LDS has thus been proposed. We studied lipid membranes composed of cholesterol, 1-palmitoyl-2-oleoyl-sn-glycero-3-phosphocholine (POPC), and ITZ using a combination of computational molecular dynamics (MD) simulations of lipid bilayers and Brewster angle microscopy (BAM) experiments of monolayers. Both experimental and computational results show separation of cholesterol and ITZ. Cholesterol has a strong preference to orient parallel to the bilayer normal. However, ITZ, a long and relatively rigid molecule with weakly hydrophilic groups along the backbone, predominantly locates below the interface between the hydrocarbon chain region and the polar region of the membrane, with its backbone oriented parallel to the membrane surface; the orthogonal orientation in the membrane could be the cause of the observed separation. In addition, fluorescence measurements demonstrated that the affinity of ITZ for the lipid membrane is decreased by the presence of cholesterol, which is thus probably not a suitable formulation component of an LDS designed for ITZ delivery. ■ INTRODUCTION Pharmaceutical nanotechnology, also known as nanomedicine, 1 is the development of nanoscale drug delivery vehicles, also known as nanoparticles. It can be seen as the development of mechanisms to both increase efficacy and reduce toxicity associated with a given drug by targeting the delivery to the desired tissue, through either active or passive means; a specific dose of the drug can thus have an increased efficacy with reduced side effects. The liposome-based delivery system (LDS) is, so far, the most successful form of the nanoparticle, representing more than half of all currently approved nanomedicine-based drug therapies. 2−4 An LDS is composed of a phospholipid membrane formed into an enclosed sac; use of phospholipids and other biocompatible molecules for the membrane possesses the advantage of automatic biocompatibility. As a nanoparticle, the LDS is extremely versatile: it can carry hydrophobic drugs within the membrane 5 or hydrophilic drugs within the internal pocket. 6 Many aspects of the formulation can be altered to tune the properties of the LDS. A subset of the phospholipids can have polymers conjugated to their headgroup to create a protective polymer corona; poly(ethylene glycol) is currently the gold standard in this capacity. Other amphiphilic biocompatible molecules can be added to the membrane to tune its properties; the most commonly used of these is cholesterol (Chol). A common component of the LDS formulation, Chol is present in 9 out of 15 clinically approved LDS-based drugs and an additional 12 products currently in clinical trials.7
As an important component that modulates the properties of biomembranes, Chol can play the same role in an LDS; it has the ability to modify the physical properties of a lipid membrane (for extensive reviews, see, e.g., refs 8−12). For example, the presence of Chol can increase the mechanical strength of the lipid bilayer, 13−16 leading to both increased stability and decreased passive permeability to water, ions, and small polar molecules, e.g., glucose and drugs. 17−22 Liposomes with a high Chol concentration are also biocompatible since Chol is present in a high concentration in biomembranes; in particular, the cell membrane of erythrocytes has a Chol level as high as 50%. 23 There are, however, also disadvantages regarding the use of Chol; for example, Chol is prone to oxidation. 24,25 Not surprisingly, oxidized derivatives of cholesterol have been found in a wide range of cosmetics, 26 processed foods, 27 and liposomal pharmaceutics. 28 The drug itraconazole (ITZ) (Figure 1), used to treat mycotic infections, is an ideal candidate for delivery via LDSs. With a solubility of only 1 ng/mL, 29 ITZ bioavailability is a severe problem in terms of its efficacy. The incorporation of water-insoluble drugs into an LDS has seen considerable success as a strategy to solve this problem. 30 In fact, the incorporation of ITZ into multilamellar vesicles, a form of LDS, was shown to increase efficacy in the treatment of pneumonia in comparison to the same drug provided orally, dissolved by PEG or incorporated into cyclodextrin. 31 In previous work, we have shown that ITZ can be incorporated into conventional and PEGylated liposomes at a concentration level of up to 15 mol %. 32 We have now, as a next step, investigated the effect of the addition of Chol into an LDS that already carries ITZ using a combined analysis platform that includes both computational molecular dynamics (MD) simulations of the LDS membrane and Brewster angle microscopy (BAM) of monolayers in a Langmuir balance (LB). Both computational and experimental results are in agreement and present a surprising result: ITZ and cholesterol do not coexist in the membrane; rather, they separate within the membrane. The separation of ITZ and Chol is the reason for the lower affinity of the drug for the lipid membrane containing Chol, as shown by fluorescence measurements. We thus propose that our results indicate that inclusion of Chol into the lipid membrane is probably not beneficial for the case of ITZ delivery. Langmuir Balance and Brewster Angle Microscopy (BAM) Measurements. The measurements were performed using a KSV 2000 Langmuir trough (KSV Instruments Ltd., Helsinki, Finland) equipped with an ultraBAM (Accurion GmbH, Goettingen, Germany) microscope, as previously described. 32,33 The BAM microscope was equipped with a 50 mW laser emitting p-polarized light at 658 nm, a 10× objective, and a CCD camera. The spatial resolution of the BAM images was 2 μm. To prepare stock solutions, POPC and Chol were dissolved in chloroform/methanol (4:1 v/v), and ITZ was dissolved in chloroform. Phosphate buffer (pH 7.4) was used as the subphase. All experiments were repeated at least twice to ensure consistent results. Surface pressure−area (π−A) isotherms were reproducible within an error of ±0.02 nm² molecule⁻¹. Liposome-Binding Constant Measurements. POPC and POPC/Chol 4:1 liposomes were prepared by sonication using the modified procedure described previously. 34 Briefly, POPC and Chol were dissolved in chloroform to form stock solutions.
The appropriate volumes of the stock solutions were combined in a volumetric flask, and chloroform was evaporated under vacuum. Water was added to reach a lipid concentration of 2.5 mg/mL, and the sample was vortex mixed for several minutes. The lipid dispersion was subjected to five freeze−thaw cycles from liquid nitrogen temperature to 60 °C and sonication at ice temperature for 10 min using a titanium-tip SONICS VC 130 sonicator. Binding constants (K_b) of the drug to liposomes were determined using a fluorescence titration technique. 35 Fluorescence spectra were measured using a LS-55 PerkinElmer fluorimeter. Molecular Dynamics (MD) Simulations. MD simulations were performed for four systems containing hydrated lipid bilayers composed of POPC and Chol (20 mol %) (the POPC/Chol bilayer) and ITZ, at the concentrations listed in Table 1. In system S1, a single ITZ molecule was inserted into the water phase and allowed to spontaneously insert into the lipid bilayer. System S2 was constructed by replicating the frame of system S1 three times in each direction of the bilayer plane (XY) over periodic boundary conditions, creating a lipid bilayer nine times larger than that of S1. System S1* was constructed from model S1 by decreasing the number of water molecules. A physiological salt concentration (140 mM of NaCl) was used. The results were averaged over simulated replicates and molecules present in the bilayer. To parametrize all molecules and ions, we used the OPLS-AA force field. 36−38 We used lipid models derived in our prior studies 39−42 (molecular topologies of POPC and Chol are provided in Supporting Materials of ref 40). Partial charges for the ITZ molecule were derived in previously published work. 32 To model water we used the TIP3P parameter set. 43 All simulations were performed using the GROMACS software package. 44 The LINCS algorithm was used to constrain covalent bond lengths between hydrogens and heavy atoms, allowing for a 2 fs time step. 45 Simulations were performed at constant temperature (300 K) and pressure (1 atm). Temperature and pressure were controlled using the Nosé−Hoover 46,47 and Parrinello−Rahman 48 algorithms, respectively. The temperatures of the solute and solvent were controlled independently, and semi-isotropic pressure coupling was used. The long-range electrostatic interactions were calculated using the particle mesh Ewald algorithm with a real-space cutoff of 1 nm. 49,50 The neighbor lists were updated every 10 steps. ■ RESULTS AND DISCUSSION MD Simulations. Figure 2 shows snapshots of the systems taken at various simulation times. For the case of system S1, the ITZ molecule, placed initially in the water phase, entered the membrane after approximately 350 ns of the simulation. Insertion was observed in two out of the six replicas, and partial insertion was observed in only one replica. For comparison, in our previous studies using a pure POPC bilayer, all ITZ molecules entered the bilayer after less than 450 ns. 32 This observation is consistent with the known reduction in permeability of the bilayer containing Chol. 9 On the other hand, the insertion process is similar for both POPC and POPC/Chol bilayers. During insertion, ITZ is oriented perpendicular to the bilayer surface.
The difference lies in the ITZ orientation after insertion: in the POPC bilayer, the drug molecules orient their long axis parallel to the bilayer surface, while in the POPC/Chol membrane, ITZ molecules remain perpendicular to the membrane surface. For the case of system S2, nine ITZ molecules were partially inserted into the bilayer at the beginning of the simulation and separated from each other (Figure 2). During the simulation, all ITZ molecules fully entered the bilayer core. Although the drug/lipid ratio of systems S1 and S2 is identical, the behavior of ITZ in these two systems is significantly different. In system S2, the drug molecules adopt an orientation parallel to the bilayer surface, similar to the behavior of ITZ in the pure POPC bilayer (Figure 3). In addition, the ITZ molecules form aggregates of three molecules, and the drug tends to accumulate in Chol-depleted regions (Figure 3). The qualitative differences in the behavior of the ITZ molecules in the two systems (S1 and S2) can be attributed either to the limited size of the bilayer in system S1 or to the need for the collective action of ITZ molecules to form local clusters oriented parallel to the bilayer surface. Thus, the behavior of ITZ molecules in system S1 would be representative of the highly diluted system where isolated ITZ molecules adopt the orientation of cholesterol molecules. On the other hand, the simulations of system S2 clearly demonstrate a tendency of ITZ molecules to aggregate even at low concentrations in Chol-containing lipid bilayers. The cause of ITZ aggregation in the lipid bilayer is probably the drug−Chol separation, which significantly reduces the volume available for the drug in cholesterol-containing membranes. As a result, the drug concentration increases locally. Figures 4 and 5 provide quantitative results regarding the location and orientation of ITZ in the lipid bilayers. Figure 4 shows the density profiles of ITZ and selected POPC atoms. In system S2, ITZ locates preferentially between the headgroup and the double bonds in the sn2 chain of POPC, with the maximum at 1.4 nm from the bilayer center. In system S1, ITZ spreads over the entire leaflet, reflecting its orientation parallel to the membrane normal. Figure 5 shows the time development and the distribution of the angles of the long molecular axes of Chol and ITZ. [Figure 5 caption fragment: angles between the long molecular axes (see Figure 1) and the bilayer normal in systems S1 (dashed lines) and S2 (solid lines) over the last 600 ns of the trajectories; the data were averaged over all Chol or ITZ molecules present in the system.] The long axis of Chol molecules makes an average angle of 17.2 ± 0.8° with the bilayer normal. Thus, Chol adopts an orientation approximately parallel to the bilayer normal. The POPC acyl tails have a similar orientation (21.6 ± 0.6°), considering the vector connecting the first and last atom in the sn1 chain. In contrast, ITZ molecules predominantly adopt a perpendicular orientation (72 ± 3°), thus parallel to the membrane plane. However, in system S1, in which only one ITZ molecule was inserted into the bilayer, the ITZ orientation is more similar to the Chol orientation (the ITZ long axis makes an average angle of 33 ± 5° with the bilayer normal). To characterize the interactions between ITZ and Chol in the lipid membrane, we calculated the number of contacts between heavy atoms of both molecules. We assumed that a contact occurred when the distance between two (non-hydrogen) atoms was smaller than 0.6 nm.
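A minimal sketch of the heavy-atom contact count just described (cutoff 0.6 nm), for a single trajectory frame whose coordinates are assumed to be already extracted into arrays; periodic boundary conditions, which a production analysis would need, are omitted for brevity.

```python
import numpy as np
from scipy.spatial import cKDTree

CUTOFF = 0.6  # nm, the contact criterion used in the text

def count_contacts(itz_xyz, chol_xyz):
    """Count ITZ-Chol heavy-atom pairs closer than CUTOFF in one frame."""
    tree = cKDTree(chol_xyz)
    # For each ITZ atom, indices of Chol atoms within the cutoff
    neighbors = tree.query_ball_point(itz_xyz, r=CUTOFF)
    return sum(len(n) for n in neighbors)

# Example with random coordinates standing in for one frame (units: nm)
rng = np.random.default_rng(1)
itz_atoms = rng.uniform(0.0, 5.0, size=(60, 3))    # ITZ heavy atoms
chol_atoms = rng.uniform(0.0, 5.0, size=(300, 3))  # Chol heavy atoms
print(count_contacts(itz_atoms, chol_atoms))
```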
Figure 6 shows the time development of the number of contacts during the simulation. In the first 400 ns of the simulation, the ITZ−Chol contacts were insignificant. After this time, the number of contacts increased quickly due to the change in the ITZ orientation from parallel to the bilayer normal (the initial arrangement) to parallel to the membrane surface. With this arrangement of ITZ, the increase in contacts is expected, since ITZ is a long molecule and thus interacts with many lipids while adopting an orientation parallel to the membrane surface. In the next part of the simulation, we observed a decrease in the number of contacts with Chol; this can be interpreted as a sign of separation. This process is, however, not completed within the simulation time. To further evaluate the ITZ−Chol interactions, we calculated the radial distribution functions (RDFs) for heavy atoms of ITZ and lipids according to the equation g(r) = (V/N) ⟨n(r)/(4πr² dr)⟩, where n(r) is the number of atoms β in the spherical ring with radius r and width dr around the atom α, 4πr² dr is the ring volume, V is the volume of the system, N is the number of atoms, and ⟨ ⟩ denotes averaging over time and ensemble. The RDF (Figure 7) for the ITZ−POPC pair shows a narrow maximum located at 0.5 nm, which indicates the preference of ITZ for interacting with POPC. In the case of an ITZ−Chol pair, the RDF has a broad maximum centered at 2.8 nm, demonstrating that Chol tends to be located away from the drug. A small maximum can be noticed at approximately 0.6 nm, which indicates that Chol−ITZ interactions are also possible. To gain insight into the local impact of ITZ on POPC properties, we calculated the order parameter, S_CD, for the sn1 chains of POPC molecules located at distances up to 1 nm, between 1 and 2 nm, and above 2 nm from the drug. The distance was calculated between the center of mass of the acyl tails and the center of mass of three ITZ fragments (see Figure 1) in the plane of the bilayer (only lipids in the same leaflet were included in the calculation). The S_CD order parameter is defined as S_CD = ⟨(3cos²θ_i − 1)/2⟩, where θ_i is the angle between the C−H bond of the ith carbon atom and the bilayer normal. The angle brackets denote averaging over time and over the appropriate C−H bonds in the bilayer. The S_CD profiles along the POPC sn1 chains (Figure 8) show a decrease in the order of the hydrocarbon tails of the lipids in the vicinity of the drug molecules. Therefore, the presence of ITZ in the membrane should increase its fluidity. The compression moduli calculated for the monolayers (Figure S1, Supporting Information) indicate that both monolayers are in the liquid-condensed (LC) phase at larger surface pressures. We introduced a variety of concentrations of ITZ (5, 10, and 15 mol %) into these films. The addition of ITZ in the investigated concentration range does not drastically affect the position of the isotherms; it does, however, significantly alter their slopes. This indicates that the incorporation of the drug, even at low concentrations, into the POPC/Chol membranes disturbs their structures. The fact that the isotherm slope for ITZ-containing monolayers is less steep than that for the POPC/Chol films indicates that the addition of itraconazole increases the fluidity of the model membranes. This is confirmed by the compression modulus values calculated for the ITZ-containing monolayers (Figure S1, Supporting Information). The BAM images obtained during the compression of the films are shown in Figure 10.
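Returning briefly to the bilayer analysis: the order-parameter definition above can be made concrete with a short numerical sketch. Given C−H bond vectors pooled over frames and lipids (trajectory extraction not shown; the data here are synthetic), S_CD for one carbon position is:

```python
import numpy as np

def s_cd(ch_vectors, normal=np.array([0.0, 0.0, 1.0])):
    """S_CD = <(3 cos^2(theta_i) - 1)/2> over C-H bond vectors of one carbon.

    ch_vectors: (n_samples, 3) array of C->H bond vectors pooled over
    time and lipids; `normal` is the bilayer normal (here the z axis).
    """
    v = ch_vectors / np.linalg.norm(ch_vectors, axis=1, keepdims=True)
    cos_theta = v @ normal
    return 0.5 * np.mean(3.0 * cos_theta**2 - 1.0)

# Sanity check: perfectly in-plane C-H bonds give the limiting value -0.5
phi = np.random.default_rng(2).uniform(0.0, 2.0 * np.pi, 1000)
in_plane = np.stack([np.cos(phi), np.sin(phi), np.zeros_like(phi)], axis=1)
print(s_cd(in_plane))  # ~ -0.5
```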
BAM images taken for both two-component (POPC/Chol) monolayers are similar. At low surface pressure (π = 1 mN/m), brighter oval domains of the liquid-condensed (LC) phase that coexist with the gaseous (G) phase (darker areas in the images) are visible. When compressing the films, the LC domains merge and the LC phase covers the whole interface up to the collapse point. This is reflected in the homogeneous BAM images that confirm the miscibility of POPC and Chol. Our results are consistent with previous studies showing that the excess Gibbs energy of mixing (ΔG_exc) for the POPC/Chol binary system is negative for the entire range of monolayer compositions. 52 The only difference that can be noticed between the POPC/Chol films is that the LC domains observed for the POPC/Chol 4:1 monolayer are smaller than those for the POPC/Chol (1:1) monolayer. This indicates a higher condensation of the latter monolayer. The observed effect is due to the higher Chol concentration, whose condensing effect on phospholipids is well known. 52,53 For the case of the ternary POPC/Chol/ITZ monolayers in which the ratio of POPC to Chol was 4:1, their morphology does not change up to 10 mol % of ITZ in the mixed film (BAM images are practically identical to those for the POPC/Chol monolayer, data not shown). The higher content of ITZ (15 mol %), however, causes the condensed-phase domains observed at lower surface pressure (π = 1 mN/m) to be smaller than those observed for the two-component POPC/Chol (4:1) monolayer. This confirms that the POPC/Chol/ITZ monolayers have a more liquid character than the POPC/Chol films. In addition, at higher surface pressures (π ≥ 10 mN/m), the monolayers are heterogeneous, and small condensed domains can be observed in the BAM images, suggesting phase separation. ITZ exerts a similar effect on the POPC/Chol 1:1 film; however, the morphology of the monolayer changes at a lower itraconazole content (10 mol %), and the monolayers are inhomogeneous over the entire range of surface pressures. Furthermore, at a higher concentration of ITZ (15 mol %), the domains observed at higher surface pressures are very bright. This suggests that multilayer (3-D) structures are present, indicating that, at higher surface pressures, the film-forming molecules (most probably ITZ) are squeezed out from the monolayer. In our previous studies with the pure POPC bilayer, we showed that the POPC/ITZ monolayers were homogeneous at ITZ concentration levels up to 15 mol % over the entire range of surface pressures. 32 It can therefore be concluded that the higher the level of Chol in the lipid monolayer, the lower the concentration of ITZ at which the membrane morphology starts to be disturbed. Fluorescence Measurements. To assess the effect of Chol on ITZ partitioning between the liposomal and aqueous phases, we determined the so-called binding constant, K_b, defined in ref 54 in terms of c_L and c_w, the ITZ concentrations in the liposomal and aqueous phases, respectively, and [L], the lipid concentration in the system. Two sets of samples containing a constant ITZ concentration and increasing content of the POPC or POPC/Chol 4:1 liposomes were prepared, and emission spectra were measured. We observed an increase in ITZ fluorescence intensity after the addition of the liposomes. Figure 11 presents a typical dependence of the fluorescence intensity (F) on [L] for the ITZ solution titrated with the POPC and POPC/Chol 4:1 vesicles.
K_b was determined by fitting the experimental data to the formula of ref 54, where F_init, F, and F_comp are the fluorescence intensities of the drug measured without lipid, after adding lipid to the concentration [L], and the asymptotic intensity achieved at complete binding, respectively; the fitted line is shown in Figure 11. [Figure 10. BAM images taken for the investigated films at different stages of compression.] The average binding constants of ITZ to the POPC and POPC/Chol liposomes were found to be 32.0 ± 2.0 and 64.9 ± 5.2 mg mL⁻¹, respectively. These results indicate that the presence of Chol in the lipid membrane can significantly reduce the affinity of the drug for the membrane. This is in line with the results of the MD simulations, which show that Chol hinders ITZ penetration into the lipid bilayer. ■ CONCLUSIONS Our results clearly demonstrate that Chol and ITZ do not mix in lipid bilayers but rather separate into different domains, thus reducing membrane stability. The orientation of the ITZ molecules in the bilayer results from the shape and distribution of the polar groups in the molecule, and this orientation clashes with that of Chol. Cholesterol is evolutionarily optimized to increase the order of the lipids in biological membranes and adopts a slightly tilted orientation toward the normal to the bilayer. This orientation is maintained by its (1) hydroxyl group that locates to the interface between the polar and hydrophobic regions, (2) rigid steroid ring that neighbors the ordered section of the lipid tails, and (3) isooctyl tail spanning the most disordered section of the bilayer. While also a rigid molecule, ITZ is, however, longer than the cholesterol molecule or the POPC acyl tails; thus, ITZ, in an orientation parallel to the bilayer normal, can span the disordered region of the bilayer or even protrude into the opposite leaflet. These two situations are entropically unfavorable due to the ordering effects that result from the presence of a rigid molecule in the highly disordered region of the bilayer. In addition, polar groups are distributed along the entire length of the molecule; thus, in an orientation parallel to the bilayer normal, some of them would be buried in the hydrophobic core of the membrane. These two factors lead to the strong preference of ITZ to locate at the interface between the hydrocarbon chain region and the polar region and to orient parallel to the membrane surface. ■ ACKNOWLEDGMENTS For financial support, we thank the Academy of Finland Center of Excellence program (Grant 307415 (PC, TR)). CSC-IT Centre for Science (Espoo, Finland; Project tty3995) and the Finnish Grid and Cloud Infrastructure (persistent identifier urn:nbn:fi:research-infras-2016072533) are acknowledged for excellent computational resources.
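To illustrate how K_b can be extracted from titration data of the kind shown in Figure 11, the sketch below fits a generic one-site saturation curve, F = F_init + (F_comp − F_init)·[L]/(K + [L]). This functional form and the data points are assumptions for illustration; the paper's exact expression is the one given in ref 54, which is not reproduced in the text above.

```python
import numpy as np
from scipy.optimize import curve_fit

def binding_curve(L, F_init, F_comp, K):
    # Generic one-site saturation: F rises from F_init toward F_comp with [L];
    # K (lipid-concentration units) is an assumed stand-in for the paper's K_b.
    return F_init + (F_comp - F_init) * L / (K + L)

# Illustrative titration data: lipid concentration (mg/mL) vs fluorescence (a.u.)
L = np.array([0.0, 0.1, 0.25, 0.5, 1.0, 2.0, 4.0])
F = np.array([10.0, 12.4, 15.0, 18.1, 21.8, 25.1, 27.3])

popt, _ = curve_fit(binding_curve, L, F, p0=[F[0], F[-1], 1.0])
print(f"F_init = {popt[0]:.1f}, F_comp = {popt[1]:.1f}, K = {popt[2]:.2f} mg/mL")
```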
2020-02-27T09:16:49.297Z
2020-02-26T00:00:00.000
{ "year": 2020, "sha1": "15d1b311f73eb133756b3d91138b4f202943bcb4", "oa_license": "CCBY", "oa_url": "https://pubs.acs.org/doi/pdf/10.1021/acs.jpcb.9b11005", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "2b11a0e9daf2c08fde0b1fbb0e192f7645105d24", "s2fieldsofstudy": [ "Chemistry", "Medicine", "Materials Science" ], "extfieldsofstudy": [ "Chemistry", "Medicine" ] }
258786527
pes2o/s2orc
v3-fos-license
Data Sharing Platform for MIMIC-IV and MIMIC-ED Data Marts: Designing a Data Retrieving System Based on the Intra-Hospital Patient Transfer Pathway. Accessibility to high-quality historical data for patients in hospitals may facilitate related predictive model development and data analysis experiments. This study provides a design for a data-sharing platform based on all possible criteria for the Medical Information Mart for Intensive Care (MIMIC) IV and the emergency-department mart MIMIC-ED. Tables containing columns of medical attributes and outcomes were studied by a team of 5 experts in Medical Informatics. They completely agreed about connecting the columns using subject-id, HADM-id, and stay-id as foreign keys. The tables of the two marts were considered along the intra-hospital patient transfer path with its various outcomes. Using the constraints, queries were generated and applied to the backend of the platform. The suggested user interface was drawn to retrieve records based on various entry criteria and present the output in the frame of a dashboard or a graph. This design is a step toward platform development that is useful for studies aimed at patient trajectory analysis, medical outcome prediction, or studies that require heterogeneous data entries. Introduction Retrospectively collected medical data provide the opportunity to improve patient care through algorithm development and knowledge discovery through modeling and outcome prediction. A model is of higher quality when developed with proper inputs in terms of sufficient quantity and dimensionality 1, 2. Thus, data sharing for model development should provide the required records based on the defined problem 2, 3. Despite the advances in patient data collection through electronic health records (EHR), registries, and self-care apps 2, data access remains a challenge, particularly concerning big data analysis. Sharing a whole big dataset when only a part of the data is required has adverse consequences in terms of research ethics and time wasted on data understanding and preparation. Well-adjusted access to medical data based on the defined study questions may overcome these multifaceted concerns. Platforms are a solution to narrow the data based on the set queries and criteria 2. The MIMIC-IV and MIMIC-ED marts are currently shared via the Physionet website 4 in the frame of several separate CSV files 3, 5. They contain the data of common records with specified IDs from the emergency department (MIMIC-ED) to hospital wards and the ICU with clinical details (MIMIC-IV), providing the possibility of following each case's transfer across the hospital and the final outcome. Currently, data for cases at hospital departments and reports for radiology, laboratory, medication, clinical notes, vital signs in chart events, history of the disease, and demographic and clinical data are provided. MIMIC-IV contains the medical information for over 40,000 patients admitted to intensive care units (ICUs). Newer versions of the data have been published with even more features and a greater volume of data 3. Although the MIMIC-III database adopted a permissive access scheme that allowed for broad reuse of the data 5, there is no data-sharing platform to manage the records according to queries. The mechanism of schemas has already been used for the prediction of key patient outcomes such as mortality, clinical deterioration, and sepsis 6; however, data access for a record in two different marts with the same id, to follow the patient's pathway through hospital wards or to discharge, remains lacking.
Furthermore, learning the metadata of the MIMIC marts and manually extracting records from them prolongs the accessing process. Designing a platform is a step forward to having a tool for overcoming these limitations and retrieving the data for clinical and research purposes. Given the structure of the available data in these marts, with connected records of tables via subject-id starting from ED admission to ICU discharge, and to have a logical design for the data-sharing platform, the intra-hospital patient transfer pathway is suggested. It starts from the emergency department (ED), where the patient presents or is transferred by ambulance for further care 1, 7. Transfer from the ED to inpatient wards is a common event, with over 12 million events annually 1. Additionally, four million patients are admitted to the ICU each year, either from hospital wards or the ED. Most of these patients transfer from the ICU to a general ward (GW) 8. At each point of care, including the ED, GW, or ICU, there are possible outcomes, three of which are covered in MIMIC-ED and MIMIC-IV: transfer or admission to the next station, discharge, and death. Hence, using the patient pathway structure may facilitate patient trajectory analysis and outcome prediction by retrieving the corresponding cases. This study aimed to design a system for a data-sharing platform for MIMIC-IV and MIMIC-ED based on the intra-hospital patient transfer pathway. Material and Methods In MIMIC-ED and MIMIC-IV, tables are linked by identifiers which usually have the suffix 'ID'. For example, SUBJECT-ID refers to a unique patient, HADM-ID refers to a unique admission to the hospital, and ICU STAY-ID refers to a unique admission to an intensive care unit. They are unique across the patient transfer pathway and can be used to connect the columns of the marts' tables 1. By joining Chartevents and outcome data such as death records via these ID items, it is possible to create the constraint and derive a tailored new table. To fulfill the idea of designing a SQL-based platform, a technical expert team at PLRI of TU Braunschweig in Germany was created. After one year of working with these marts for experimental purposes, the technical team started designing a platform for easier and maximum usage of the available data as a preliminary step for system development. Seven focus group meetings, each lasting 2 hours, were conducted by 5 experts in the Medical Informatics and data engineering field. After identifying the IDs as primary and foreign keys of the tables' columns, the queries based on SQL were studied 9. That is, tables based on the mart structures are defined and connected. The experts agreed on the possible constraints for the backend of the platform; based on them, the features were considered for the front end of the platform. With the experts' complete concurrence regarding the front-end design, the following functions for the platform were considered (a sketch of the resulting query pattern is given after this list):
- Storing the data in an SQL database, as it is more suitable for our use case (dynamic search);
- Using the subject-id and HADM-id as foreign keys to relate an attribute, such as a vital sign in chart time while staying in a specific department, with a given outcome;
- Using SQL queries to get the relevant information based on the search criteria;
- Adding the option of exporting the results to a new CSV file;
- Including a graph and dashboard section to visualize the data.
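The sketch below illustrates the query pattern implied by these functions: join records across the marts on the shared IDs, apply a constraint (here, the 'died in the ICU with blood pressure above 140' example worked through in the next section), and export the result to CSV. Table and column names follow the public MIMIC-IV schema, but the local SQLite database and the itemid used for systolic blood pressure are assumptions.

```python
import sqlite3  # assumption: the CSV marts have been loaded into a local SQLite file
import pandas as pd

QUERY = """
SELECT icu.subject_id, icu.hadm_id, icu.stay_id,
       ce.charttime, ce.valuenum AS sbp, adm.hospital_expire_flag
FROM icustays AS icu
JOIN chartevents AS ce
  ON ce.stay_id = icu.stay_id        -- stay_id links ICU-level tables
JOIN admissions AS adm
  ON adm.hadm_id = icu.hadm_id       -- hadm_id links to hospital-level data
WHERE ce.itemid = 220179             -- assumed itemid for systolic blood pressure
  AND ce.valuenum > 140              -- the BP > 140 constraint
  AND adm.hospital_expire_flag = 1;  -- patient died during the admission
"""

with sqlite3.connect("mimic.db") as conn:
    result = pd.read_sql_query(QUERY, conn)

result.to_csv("icu_death_bp_over_140.csv", index=False)  # CSV export option
```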
Experts completely agreed on these functions, to be further customized according to the patient transfer pathway in hospitals in the development step.

Results. The result of the first step of checking the common data elements in MIMIC-ED and MIMIC-IV is presented in Table 1. These columns of the tables in the two marts could be connected via the subject-id, HADM-id, and ICU stay-id of the tables. To design the backend of the platform, the information schemas were depicted. Based on Table 1 and the identified platform functions, the frontend was designed, as shown in Figure 2.

Conclusion. According to the structure of MIMIC-IV and MIMIC-ED, composed of the data of patients in the ED, triage, and admitted to the hospital (wards and ICU), designing a platform based on patient flow may be a solution for easy and quick data sharing. It is useful when the amount of data is continuously increasing. As Figure 1 shows, there are several IDs, presented in the same color, that could connect data elements from different tables in the marts. These connections could be used for criteria creation. As an example, to get the records of a patient who died in the ICU with blood pressure greater than 140, the following steps should be done by the system:

- Getting the subject-id and stay-id of patient A to search the ICU table of MIMIC-IV;
- Creating the constraints of death with BP>140 in the ICU for the subject-id and stay-id;
- Searching the outcome column for the patients with the subject-id of patient A;
- Searching the death cases in the outcome table and picking the related record;
- Using the constraints to bring up the required data for patient A in CSV (a worked query sketch follows this section).

This will be used to create all possible criteria to develop the data-retrieving platform toward efficient data management and supporting researchers with heterogeneous data requirements. Examples of such studies could be predicting the ICU length of stay for patients transferred from the ED with comorbidity and unstable vital signs, developing an emergency triage tool to estimate the risk of ICU transfer for elderly patients affected with diabetes type II, or predicting the trajectory after hospital admission for pregnant women with unstable BP. Designing a platform as an electronic tool might be an essential need for easing data analysis and knowledge discovery. However, it may face challenges regarding limitations in data accuracy, unorganized timing, and unit-dependent lab event data. In the next steps, the research team plans system development and evaluation to facilitate the marts' usage.
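As a worked illustration of the five steps above (again with hypothetical table and column names, and a made-up subject-id), the platform's backend could realize the example as a single constrained query followed by a CSV export:

import csv
import sqlite3

con = sqlite3.connect("mimic_platform.db")
query = """
    SELECT icu.subject_id, icu.stay_id, icu.mean_bp, icu.outcome
    FROM icu_stays AS icu
    WHERE icu.subject_id = ?        -- step 1: IDs of patient A
      AND icu.outcome = 'death'     -- steps 3-4: death cases in the outcome data
      AND icu.mean_bp > 140         -- step 2: constraint of death with BP>140
"""
rows = con.execute(query, (10001,)).fetchall()  # 10001: illustrative subject-id

# Step 5: bring up the required data for patient A as a CSV file.
with open("patient_A.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["subject_id", "stay_id", "mean_bp", "outcome"])
    writer.writerows(rows)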
2023-05-20T06:17:11.121Z
2023-05-18T00:00:00.000
{ "year": 2023, "sha1": "87e8d85de2d4c4ecc67874e249ee35dc3ec5ac6c", "oa_license": "CCBYNC", "oa_url": "https://ebooks.iospress.nl/pdf/doi/10.3233/SHTI230072", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "c3f579b99a3ed5c076b128f9a1a8b864c470e5f4", "s2fieldsofstudy": [ "Medicine", "Computer Science" ], "extfieldsofstudy": [ "Computer Science", "Medicine" ] }
215769131
pes2o/s2orc
v3-fos-license
The Circumbilliard: Any Triangle can be a 3-Periodic. A Circumconic passes through a triangle's vertices. We define the Circumbilliard, a circumellipse to a generic triangle for which the latter is a 3-periodic. We study its properties and associated loci.

Introduction. Given a triangle, a circumconic passes through its three vertices and satisfies two additional constraints, e.g., center or perspector 1. We study properties and invariants of such conics derived from a 1d family of triangles: 3-periodics in an Elliptic Billiard (EB): these are triangles whose bisectors coincide with normals to the boundary (bounces are elastic), see Figure 1. Amongst all planar curves, the EB is uniquely integrable [8]. It can be regarded as a special case of Poncelet's Porism [3]. These two properties imply two classic invariances: N-periodics have constant perimeter and envelop a confocal Caustic. The seminal work is [17] and more recent treatments include [11,16]. 1 Where reference and polar triangles are perspective [19].

We have shown 3-periodics also conserve the Inradius-to-Circumradius 2 ratio, which implies an invariant sum of cosines, and that their Mittenpunkt 3 is stationary at the EB center [14]. Indeed many such invariants have been effectively generalized for N > 3 [1,2]. We have also studied the loci of 3-periodic Triangle Centers over the family: out of the first 100 listed in [9], 29 sweep out ellipses (a remarkable fact on its own) with the remainder sweeping out higher-order curves [6]. Related is the study of loci described by the Triangle Centers of the Poristic Triangle family [12].

Summary of the paper: given a generic triangle T we define its Circumbilliard CB: a Circumellipse to T for which the latter is a 3-periodic. We then analyze the dynamic geometry of Circumbilliards for triangles derived from the 3-periodic family, such as the Excentral, Anticomplementary, Medial, and Orthic, as well as the loci swept by their centers. Additional results include:

• Proposition 5 in Section 3 describes regions of the EB which produce acute, right-triangle, and obtuse 3-periodics.
• Theorem 1 in Section 3: The aspect ratio of Circumbilliards of the Poristic Triangle Family [4] is invariant. This is a family of triangles with fixed Incircle and Circumcircle.

A reference table with all Triangle Centers, Lines, and Symbols appears in Appendix B. Videos of many of the experiments are assembled in Table 2 in Section 4.

The Circumbilliard. Let the boundary of the EB satisfy:

f(x, y) = (x/a)^2 + (y/b)^2 = 1, (1)

where a > b > 0 denote the EB semi-axes, and c = sqrt(a^2 - b^2) throughout the paper. Below we use aspect ratio as the ratio of an ellipse's semi-axes. When referring to Triangle Centers we adopt Kimberling X_i notation [9], e.g., X_1 for the Incenter, X_2 for the Barycenter, etc., see Table 3 in Appendix B. The following five-parameter equation is assumed for all circumconics not passing through (0, 0):

c_1 x^2 + c_2 xy + c_3 y^2 + c_4 x + c_5 y + 1 = 0. (2)

Proposition 1. Any triangle T = (P_1, P_2, P_3) is associated with a unique ellipse E_9 for which T is a billiard 3-periodic. The center of E_9 is T's Mittenpunkt.

Proof. If T is a 3-periodic of E_9, by Poncelet's Porism, T is but an element of a 1d family of 3-periodics, all sharing the same confocal Caustic 4. This family will share a common Mittenpunkt X_9 located at the center of E_9 [14]. Appendix A shows how to obtain the parameters for (2) such that it passes through P_1, P_2, P_3 and is centered on X_9: this yields a 5x5 linear system.
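For concreteness, here is a small numerical sketch of that 5x5 system under the normalization written in (2): three equations force the conic through the vertices, and two force its gradient to vanish at the prescribed center (the Mittenpunkt, taken here as given).

import numpy as np

def circumconic_through(P1, P2, P3, center):
    """Coefficients (c1..c5) of c1 x^2 + c2 xy + c3 y^2 + c4 x + c5 y + 1 = 0
    passing through three points with a prescribed center (e.g., X_9)."""
    rows, rhs = [], []
    for (x, y) in (P1, P2, P3):
        rows.append([x*x, x*y, y*y, x, y])   # conic passes through the vertex
        rhs.append(-1.0)
    x0, y0 = center
    # Center conditions: the gradient of the conic vanishes at (x0, y0).
    rows.append([2*x0, y0, 0.0, 1.0, 0.0]); rhs.append(0.0)
    rows.append([0.0, x0, 2*y0, 0.0, 1.0]); rhs.append(0.0)
    c1, c2, c3, c4, c5 = np.linalg.solve(np.array(rows), np.array(rhs))
    assert c2*c2 - 4*c1*c3 < 0, "discriminant check: the conic is an ellipse"
    return c1, c2, c3, c4, c5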
Solving this system, one obtains a quadratic whose discriminant implies the conic is an ellipse. E_9 is called the Circumbilliard (CB) of T. Figure 2 shows examples of CBs for two sample triangles.

3.1. Excentral Triangle. The locus of the Excenters is shown in Figure 3 (left). It is an ellipse similar to the 90-degree-rotated locus of X_1, and its axes a_e, b_e are given in [5,6].

Proposition 2. The locus of the Excenters is the stationary MacBeath Circumellipse of the Excentral Triangle family, centered on X_9.

Proof. The MacBeath Circumellipse of a triangle is centered on its Symmedian point X_6, and the Excentral Triangle's X_6 coincides with the Mittenpunkt X_9 of the reference [9]. Since over the 3-periodics the vertices of the Excentral lie on an ellipse and its center is stationary, the result follows.

Proposition 3. The Excentral CB is centered on X_168, whose trilinears are irrational, and whose locus is non-elliptic.

Proof. X_168 is the Mittenpunkt of the Excentral Triangle [9] and its trilinears are irrational 5 on the sidelengths. To determine if its locus is an ellipse we use the algebro-numeric techniques described in [6]. Namely, a least-squares fit of a zero-centered, axis-aligned ellipse to a sample of X_168 positions of the 3-periodic family produces finite error, therefore it cannot be an ellipse. 5 No Triangle Center whose trilinears are irrational on sidelengths has yet been found whose locus under the 3-periodic family is an ellipse [6].

This had been observed in [6] for several irrational centers such as X_i, i = 13-18, as well as many others. Notice a center may be rational but produce a non-elliptic locus, the emblematic case being X_6, whose locus is a convex quartic. Other examples include X_j, j = 19, 22-27, etc.

Figure 3. Left: the CB of the Excentral Triangle (solid green), centered on the latter's Mittenpunkt X_168 [9]. Its locus (red) is non-elliptic. Also shown (dashed green) is the elliptic locus of the Excenters (the MacBeath Circumellipse E'_6 of the Excentrals [19]), whose center is X_9 [6]. Top right: the CB of the Anticomplementary Triangle (ACT) (blue), axis-aligned with the EB. Its center is the Gergonne Point X_7, whose locus (red) is elliptic and similar to the EB [6]. The locus of the ACT vertices is not elliptic (dashed blue). Bottom right: the CB of the Medial Triangle (teal), also axis-aligned with the EB, is centered on X_142, whose locus (red) is also elliptic and similar to the EB, since it is the midpoint of X_9 X_7 [9]. The locus of the medial vertices is a dumbbell-shaped curve (dashed teal). Video: [13, PL#03].

3.2. Anticomplementary Triangle (ACT). The ACT is shown in Figure 3 (top right). The locus of its vertices is clearly not an ellipse. The ACT is perspective with the reference triangle (3-periodic) at X_2, and all of its triangle centers correspond to the anticomplement 6 of the corresponding reference ones [19]. The center of the CB of the ACT is therefore X_7, the anticomplement of X_9. We have shown the locus of X_7 to be an ellipse similar to the EB, with axes given in [6].

Remark 1. The axes of the ACT CB are parallel to the EB and of fixed length. This stems from the fact that the ACT is homothetic to the 3-periodic.

3.3. Medial Triangle. The locus of its vertices is a dumbbell-shaped curve, which at larger a/b is self-intersecting, and therefore clearly not an ellipse, Figure 3 (bottom right). Like the ACT, the Medial is perspective with the reference triangle (3-periodic) at X_2. All of its triangle centers correspond to the complement 7 of the corresponding reference ones [19]. The center of the CB of the Medial is therefore X_142, the complement of X_9.
This point is known to sit midway between X_9 and X_7.

Remark 2. The locus of X_142 is an ellipse similar to the EB. This stems from the fact that X_9 is stationary and the locus of X_7 is an ellipse similar to the EB (above). Therefore its axes will be half those of the X_7 locus: (a_142, b_142) = (a_7/2, b_7/2). This phenomenon is shown in Figure 4. Also shown is the fact that X_i, i = 7, 142, 2, 9, 144 are all collinear and their intermediate intervals are related as 3 : 1 : 2 : 6. In [10] this line is known as L(X_2, X_7) or L_663. X_144 is the perspector of the ACT and its Intouch Triangle (not shown).

Figure 4. Construction for both ACT and Medial CBs, centered on X_7 and X_142, respectively. The Incircle of the ACT (resp. 3-periodic) is shown blue (resp. green). The former touches the ACT at the EB and the latter touches the 3-periodic sides at the Medial CB. Also shown is the line L(2,7) = L_663, which contains X_i, i = 7, 142, 2, 9, 144. Their consecutive distances are proportional to 3 : 1 : 2 : 6. X_144 was included since it is the perspector of the ACT and its Intouch Triangle (not shown) [19]. Video: [13, PL#04,05].

3.4. Acute, right, and obtuse 3-periodics. Computing the right-angle condition <P_2 - P_1, P_3 - P_1> = 0, after careful algebraic manipulations, it follows that x_1 satisfies a quartic equation (4), whose positive root yields x_perp, with y_perp obtainable from (1).

Proposition 5. A 3-periodic is obtuse iff one of its vertices lies on the top or bottom halves of the EB between the P_i^perp; see Figure 5. Namely, consider the elliptic arc along the EB between (+/-x_perp, y_perp): when a vertex of the 3-periodic lies within (resp. outside) this interval, the 3-periodic is obtuse (resp. acute).

3.5. Orthic Triangle.

Proposition 6. When a/b > alpha_4, the locus of the center of the Orthic CB has four pieces: 2 for when the 3-periodic is acute (equal to the X_6 locus), and 2 for when it is obtuse (equal to the locus of X_6 of T'' = P_2 P_3 X_4).

Proof. It is well known [9] that an acute triangle T has an Orthic whose vertices lie on the sidelines. Furthermore the Orthic's Mittenpunkt coincides with the Symmedian X_6 of T. Also known is the fact that:

Remark 4. Let triangle T' = P_1 P_2 P_3 be obtuse on P_1. Its Orthic has one vertex on P_2 P_3 and two others exterior to T'. Its Orthocenter X_4 is also exterior. Furthermore, the Orthic's Mittenpunkt is the Symmedian Point X_6 of T'' = P_2 P_3 X_4. To see this, notice the Orthic of T'' is also 8 T'. T'' must be acute since its Orthocenter is P_1.

The CB of the Orthic is shown in Figure 6 for four 3-periodic configurations in an EB whose a/b > alpha_4.

Proposition 7. The coordinates (+/-x*, +/-y*) where the locus of the center of the Orthic's CB transitions from one curve to the other can be computed explicitly.

Proof. Let P_1 = (x_1, y_1) be the right-triangle vertex of a 3-periodic, given by (x_perp, y_perp) as in (4). Using [5], obtain P_2 = (p_2x/q_2, p_2y/q_2) and P_3 = (p_3x/q_3, p_3y/q_3). It can be shown that the Symmedian point X_6 of a right triangle is the midpoint of the altitude from its right-angle vertex. Computing X_6 using this property leads to the result.

Let alpha_eq = sqrt(4 sqrt(3) - 3) ~= 1.982 be the only positive root of x^4 + 6x^2 - 39. It can be shown, see Figure 7: at a/b = alpha_eq, the locus of the Orthic CB is tangent to the EB's top and bottom vertices. If a 3-periodic vertex is there, the Orthic is equilateral.

Proof. Let T be an equilateral with side s_eq and center C. Let h be the distance from any vertex of T to C. It can be easily shown that h/s_eq = sqrt(3)/3. Let T' be the Excentral Triangle of T: its sides are 2 s_eq. Now consider the upside-down equilateral in Figure 7, which is the Orthic of an upright isosceles 3-periodic.
h is clearly the 3-periodic's height and 2 s_eq is its base. The height and width of the upright isosceles are obtained from explicit expressions for the vertices [5]. Setting h/s_eq = sqrt(3)/3 and solving for alpha yields the required result for alpha_eq.

3.6. Summary. Table 1 summarizes the Circumbilliards of the derived triangles studied above and the loci of their centers.

3.7. Circumbilliard of the Poristic Family. The Poristic Triangle Family is a set of triangles with fixed Incircle and Circumcircle [4]. It is a cousin of the 3-periodic family in that, by definition, its Inradius-to-Circumradius ratio r/R is constant. Weaver [18] proved the Antiorthic Axis 9 of this family is stationary. Odehnal showed the locus of the Excenters is a circle centered on X_40 and of radius 2R [12]. He also showed that over the family, the locus of the Mittenpunkt X_9 is a circle whose radius and center are given in [12, page 17]. 9 The line passing through the intersections of reference and Excentral sidelines [19].

Let rho = r/R and a_9, b_9 be the semi-axis lengths of the Circumbilliard of a poristic triangle. As shown in Figure 8:

Theorem 1. The ratio a_9/b_9 is invariant over the family and is determined by rho alone.

Proof. The following expression for r/R has been derived for the 3-periodic family of an a, b EB [7, Equation 7]. Solving it for a/b yields the result.

Figure 6. Orthic CB for an EB with a/b = 1.5 > alpha_4, i.e., containing obtuse 3-periodics, which occur when a 3-periodic vertex lies on the top or bottom areas of the EB between the P^perp. Top left: the 3-periodic is a sideways isosceles and acute (vertices outside the P^perp), so the 3 Orthic vertices lie on the sidelines. The Orthic CB center is simply the Mittenpunkt of the Orthic, i.e., X_6 of the 3-periodic (blue curve: a convex quartic [6]). Top right: the position when a vertex is at a P^perp and the 3-periodic is a right triangle: its Orthic and CB degenerate to a segment. Here the CB center is at the first (of four) transition points, shown in the other insets as Q_i, i = 1, 2, 3, 4. Bottom left: the 3-periodic is obtuse, the Orthic has two exterior vertices, and the center of the CB switches to the Symmedian of T'' = P_2 P_3 X_4 (red portion of locus). Bottom right: the 3-periodic is an upright isosceles, still obtuse; the center of the Orthic CB reaches its highest point along its locus (red). Video: [13, PL#06].

Conclusion. Videos mentioned above have been placed on a playlist [13]. Table 2 contains quick-reference links to all videos mentioned, with column "PL#" providing the video number within the playlist [13].

Figure 8. A poristic triangle family, whose Incircle (green) and Circumcircle (purple) are fixed. Here R = 1, r = 0.3625. Over the family, the Circumbilliard (black) has invariant aspect ratio, in this case a_9/b_9 ~= 1.5. Also shown is the circular locus of X_9 [12, page 17]. Video: [13, PL#07].

Table 2. Videos mentioned in the paper.
PL# | Title | Section
01 | Mittenpunkt stationary at EB center | 1
02 | Circumbilliards (CB) of Various Triangles | 2
03 | CBs of Derived Triangles and Loci of Centers | 3
04 | CBs of ACT and Medial (separate) | 3
05 | CBs of ACT and Medial (superposed) | 3
06 | CB of Orthic and Locus of its Mittenpunkt | 3
07 | Invariant Aspect Ratio of Circumbilliard of Poristic Family | 3
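As a quick numerical sanity check (ours, not the paper's) of the two aspect-ratio thresholds used above, note that alpha_eq = sqrt(4 sqrt(3) - 3) indeed annihilates the quartic x^4 + 6x^2 - 39, and that alpha_4 = sqrt(2 sqrt(2) - 1) ~= 1.352 is consistent with the a/b = 1.5 > alpha_4 setting of Figure 6:

import math

alpha_4 = math.sqrt(2 * math.sqrt(2) - 1)
alpha_eq = math.sqrt(4 * math.sqrt(3) - 3)

print(f"alpha_4  ~= {alpha_4:.4f}")   # ~1.3522, below 1.5
print(f"alpha_eq ~= {alpha_eq:.4f}")  # ~1.9823, matching the stated 1.982

# alpha_eq should be a root of x^4 + 6x^2 - 39.
residual = alpha_eq**4 + 6 * alpha_eq**2 - 39
assert abs(residual) < 1e-12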
Table 4. Symbols mentioned in the paper.
Symbol | Meaning | Note
a, b | EB semi-axes | a > b > 0
P_i, s_i | Vertices and sidelengths of the 3-periodic | perimeter sum of s_i invariant
P'_i | Vertices of the Excentral Triangle |
a_9, b_9 | Semi-axes of the Poristic Circumbilliard |
r, R, rho | Inradius, Circumradius, r/R | rho is invariant
alpha | EB aspect ratio a/b |
alpha_4 | a/b threshold for obtuse 3-periodics | sqrt(2 sqrt(2) - 1)
alpha_eq | a/b for equilateral Orthic | sqrt(4 sqrt(3) - 3)
P^perp | Obtuse 3-periodic limits on the EB |
x*, y* | where X_6* detaches from the X_6 locus | occurs when some P_i is at P^perp
2020-04-16T01:00:40.430Z
2020-04-14T00:00:00.000
{ "year": 2020, "sha1": "79fb279284d5a13a3695e6ce3dcb021d2212dd3d", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "79fb279284d5a13a3695e6ce3dcb021d2212dd3d", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics", "Computer Science" ] }
4836895
pes2o/s2orc
v3-fos-license
Efferent Copy and Corollary Discharge Motor Control Behavior Associated with a Hopping Activity.

Hoppers respond not only to stimuli from the ground surfaces but also to cues generated by their own behaviors. This leads to desensitization because, although the afferent and reafferent signals have distinct causes, they are carried by the same sensory channels. From a behavioral viewpoint, it may be necessary to distinguish between signals from the two causes, especially when monitoring changes in the external environment separate from those due to self-movement. We were able to separate afferent sensory stimuli from self-generated, reafferent signals using an action-oriented perception system and a dynamic programming approach. This effort addressed the question of how the nervous system selects which particular degree of freedom (DOF) to use to cancel reafferent input. We have proposed an internal one-DOF model characterizing the motor control system during hopping, allowing the generation of an estimated ground reaction signal to drive natural shock absorption of the leg.

Introduction. The French physiologist Claude Bernard first described the difference between an organism's internal environment and its external environment [1]. Bernard concluded that the internal environment served as a kind of buffer between living cells and the fluctuating external environment. Later, the American researcher Walter Cannon revised Bernard's suggestive ideas into a much more detailed form. In his popular book, The Wisdom of the Body [2], Cannon coined the term homeostasis for the modern concept of biological self-regulation. Cannon's idea became a central concept in physiology, later borrowed by scientists in several other fields as well.

The problem described herein is a popular formulation of biomechanical regulation whereby the motivation is to keep the state of the system close to equilibrium [3]. Such problems are common in the theory of optimal control of a motion or a process. The basic task of a control system is to manage the relationship between sensory variables and motor variables. There are two basic kinds of transformations that can be considered: sensory-to-motor transformation and motor-to-sensory transformation. The transformation from motor variables to sensory variables is accomplished through the environment and musculoskeletal systems; these physical systems transform efferent motor actions into reafferent sensory feedback [4,5]. While the notion goes back to von Helmholtz [6], the modern concept of "corollary discharge" and "efference or efferent copy" was put forward previously [7].

Action-oriented perception system. Optimization theory provides a computational framework which is natural for a selection process such as motor planning [4]. In the optimal control approach, movement trajectories are not explicitly planned but are a consequence of the objective function and the system's dynamics. The problem, however, of coordinating the different muscle groups involved in the repetitive sequence of skeletal activities, such as locomotion, has not been given sufficient attention from the perspective of control theory. In fact, the musculoskeletal system with all of its physiologic actuators represents a large control system.
Therefore, it is interesting to explore a functional characterization obtained by using a systematic approach to muscle control. Model-based, representational control strategies are those that rely on accurate internal models of the environment. These are constructed from a combination of perceptual information and prior knowledge. However, prior knowledge represents the primary source of information for planning and executing actions even in the absence of perceptual information [8]. Forward models are a predictive internal approach for motor control that takes the available perceptual information, combined with a particular motor program including dynamic optimization [9] or static optimization [10], and tries to predict the outcome of the planned motor movement [11]. Most work in this vein of thinking has focused on the nature of the objective function, such as minimizing energy, motor variance, or performance errors. This approach requires complete information in the control algorithm about the controlled objects. In this application it would be necessary to account for the underlying physiological properties of muscle in order to obtain more accurate estimates of muscle force.

An alternative to model-based control is that of information-based control. Informational control strategies organize movements and actions based on perceptual information about the environment, rather than on cognitive models or representations of the external world. The actions of the motor systems are organized by environmental information and information about the current state of the functioning agent [12]. A core assumption of information-based control strategies is that perceptions of the environment are rich in information and veridical for the purpose of producing actions. This approach runs counter to the assumption of indirect perception made by model-based control strategies. In this study, we consider an information-based control approach with partial information about the objects and active information storage in the control process. Central to the information-based approach will be observed experimental sensory data.

Suppose that we observe an output function x in the time interval t, where a 'black box' comprises subsystems, each subject to differential equations whose parameters are unknown or, more generally, of unknown type. It is desired to determine the nature of the black box and, if we have control over the system, this may be regarded as an adaptive feedback problem, in which the gathered information gives an indication of what input will now provide the most helpful additional information (Figure 1). The objectives here are similar to previously described dual control theory [13], which suggests that adaptation can become specifically tuned to identify task-specific parameters in an optimal manner.

Model Development. The motor behavior of each subject's hopping activity is analyzed in terms of an action-perception cycle rather than stimulus and response. The hopping individual uses an internal model of the world to predict, not infallibly, the results of interacting with the surface under an optimal policy. In this section, we establish the way in which action and perception are integrated, whether in intelligent human behavior, the activity of an animal, or the smooth functioning of an adapted robot.
We applied the feedback control approach (Figure 1) to a three-DOF leg spring chain in a position of stable equilibrium under the influence of leg stiffness (Figure 2 and Table 1). If the leg spring receives a displacement during hopping, the forces will no longer equilibrate; the system will instead be exposed to the action of a restoring force on the leg spring chain. The basic idea is to use the very deviation of the system from its desired performance as a restoring muscle force to guide the leg spring back to proper functioning. The performer will commence to move with an optimal policy when the central nervous system (CNS) is permitted to select its own forward model from the available modes defining the freedom of action. A forward model uses the efference copy [7] to anticipate and cancel the sensory effects of movement (Figure 1).

Sensory signals arise in the periphery due to two causes: those resulting from environmental disturbances on the body (afference) and those resulting from self-generated movement (reafference), the sensory consequences of the movement itself [14]. From a behavioral viewpoint, it may be necessary to distinguish between signals from the two causes, especially to monitor changes in the external world separate from those resulting from self-movement. The internal sensory signals needed to cancel reafference have been labeled corollary discharges [7]. This approach demands constant perception of the system. Alternatively, an additional extraneous force y(t), determined a priori as a known function of time, can be applied to the system in such a way as to keep x(t) as close as possible to z(t). This provides an alternate concept, a model-based approach to determine the trajectories without further reference to the behavior of the subject.

The Single-DOF Internal Model and Policy. In this study, we assume that the nervous system must have its own internal model as a simple representation of the body's dynamic interaction with the surrounding environment. When observing hopping individuals, they learn the movement first by reducing their DOFs through muscle stiffening in order to have focused control over the activity; they then tune themselves to the surface, gradually "loosening up" and exploring the available DOFs as the task becomes more comfortable, and from there find an optimal repeated motion similar to a natural vibration mode. As introduced, the dynamics of a leg spring may be simplified to three DOFs consisting of three interconnected masses with three springs (Figure 2). The equivalence between an active motor synergy and a passive system in this spring model has been previously verified and illustrated [15].

The problem we wish to consider here is how a performer gradually adapts his/her behavior to the information that he/she receives in an environment possessing unknown features during hopping. First, approximation in policy space leads to quasilinearization to define the influencing parameters. The policy depends on the particular choice of harmonic modes representing the activity. Dynamic programming then determines the exact control forces to match the desired ground reaction forces (GRFs). Previous research has described a computational technique for the determination of interaction parameters based on the observation of performers interacting with their environment [15][16][17][18][19][20].
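The three-mass, three-spring chain just described has vibration modes given by a generalized eigenproblem of its stiffness and mass matrices; the following brief sketch illustrates this, with made-up parameter values rather than those of Table 1.

import numpy as np
from scipy.linalg import eigh

M = np.diag([1.0, 4.0, 8.0])           # foot, shank, thigh masses (kg), illustrative
k1, k2, k3 = 8.0e3, 6.0e3, 4.0e3       # spring constants (N/m), illustrative

# Stiffness matrix of the serial chain ground-k1-m1-k2-m2-k3-m3.
K = np.array([[k1 + k2, -k2,      0.0],
              [-k2,      k2 + k3, -k3],
              [0.0,     -k3,       k3]])

# Generalized eigenproblem K v = w^2 M v yields the vibration modes.
w2, modes = eigh(K, M)                 # eigenvalues sorted ascending
omega = np.sqrt(w2)                    # natural frequencies (rad/s)

# The slowest mode is the candidate "efference copy" mode; its shape
# (deflection ratios) is the first column of `modes`.
print("natural frequencies:", omega)
print("first mode shape:", modes[:, 0] / np.abs(modes[:, 0]).max())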
Human movement control can be seen as a process that is distributed systemically over the performer-environment system, rather than being localized within an internal structure associated with the performer [21]. The learner and decision-maker are identified as the performer. The objects the performer interacts with, comprising everything external to the performer, are called the environment (Figure 3). The environment also gives rise to rewards, special functions or values that the performer tries to maximize over time. During hopping, the reward is to suppress vibration as a functional or natural shock absorber [15]. We have postulated that rather than having the motor control be localized either as an internal model of the performer or in his/her environment (for example, the foot-surface interaction), control of the shock absorption is distributed over the performer-surface system. Thus, we predict that the performers respond not only to stimuli generated as a by-product of the performer's own behavior, but also to the environmental, or afferent, sensory stimuli. Such self-generated, or reafferent, sensory information is used to update or fine-tune the ongoing motor act [22] and, in active sensory systems, to help monitor the environment [23]. Movements and postures are controlled and coordinated to realize functionally specific acts that are themselves based on the perception of affordances, i.e., possibilities for actions [22].

More specifically, the performer and environment interact at each of a sequence of discrete time steps, t = 0, 1, 2, 3, ... At each time step t, the performer receives some representation of the environment's state and on that basis selects an action. Sensory signals arise in the limb periphery from two causes: those resulting from environmental influences on the body, and those resulting from self-generated movement. The first are termed afference, while the second type of sensory signals are known as reafference, as they are the sensory consequences of movement. Although the afferent and reafferent signals have distinct causes, they are carried by the same sensory channels. From a behavioral viewpoint it may be necessary to distinguish between signals from the two causes, especially when monitoring changes in the external world separate from those resulting from self-movement.

An analogy can be drawn between the control function and an individual who interacts with their environment. The individual studies their surroundings in order to influence them in a useful manner. But in order to direct their actions better, they must understand the environment better; therefore the individual may act on the environment not in order to obtain a direct advantage, but with the aim of improving their environmental understanding. Thus the influence on the environment and the study of it are closely linked. We propose an information-based control theory whereby human locomotion is neither triggered nor commanded, but controlled. The basis for this control is the information derived from perceiving oneself in the world. Control therefore lies in the human-environment system. Human movement control can thus be seen as a process that is distributed over the performer-environment system, rather than being localized in an internal structure within the performer [22]. The performer and his/her environment (reaction surface) may be said to be co-participants in any resulting action. In this way, actions are specific to function rather than to mechanism [24].
Movements and postures are controlled and coordinated to realize functionally specific acts that are themselves based on the perception of affordances, i.e., possibilities for actions [21]: "The rules that govern behavior are not like laws enforced by an authority or decisions made by a commander; behavior is regular without being regulated. The question is how this can be (p. 225)." Although the statement asserts Gibson's belief that behavior is regular without being centrally controlled, the question of how this exists remains unanswered.

Motor behavior may be viewed as a problem of maximizing the utility of movement outcome in the face of sensory, motor, and task uncertainty [25]. Viewed in this way, Bellman pointed out (having previously developed dynamic programming) that in order to understand how an organism gradually adapts its behavior to the information that it receives in an environment possessing unknown features, there is the issue that learning and performing occur simultaneously [26]. The probabilistic aspect of this issue is not a "strange" element or an addition to the basic "regular" theory. The issue introduces constraints into the structure of automatic control theory, being an essential part while not explicitly addressed in this study. The conclusion to which we are slowly wending our way is that a new genre of mathematical problem has arisen in the last few years: that of controlling a large system, here the musculoskeletal system. The human skeleton represents a mechanical linkage system with many degrees of freedom (DOFs); furthermore, we do not address the ambitious question of control; rather, we introduce a feasible operation theory [27]. In this study we consider a vertical hopping activity as an application of the control theory within a large self-regulating biological system. This application is a special case of action-oriented perception control of human movement and can be addressed via the dynamic programming algorithm.

Proponents of direct perception [21] suggest that the relevant information encoded in sensory signals is not derived from the physical properties of interacting objects, but rather from the action potentials the environment affords. These affordances are directly perceivable without ambiguities, and preclude the need for internal models (states) or representations of the world. An irony of science is that in order to understand complexity, we must often throw away information. In order to reduce a complex system to its simplest form, we have introduced an approach based on a single DOF. Discovering the basic principles that underlie the reduction of DOFs is one of the major challenges in understanding the motor control of limb movement. We bring forward a hypothesis proposing that the reduction of the number of DOFs serves a direct perceptual purpose. The general aim of this study is therefore to indicate that the perception of possibilities for action, i.e., affordances, may be represented as action potentials associated with how the nervous system selects which particular DOF to use. We apply a forward modeling approach for the single DOF because we assume its action potential is directly related to the concept of affordance. An important new "policy" is embedded in a performer's internal model in that a single DOF can execute no movement that is not a motion about one definite mode.
The purpose of this paper is therefore to identify the forward model for a hopping activity that uses a copy of the motor command, an "efference copy", to anticipate and cancel the sensory effects of the movement, the "reafference." We therefore also show that, in the case of limb control during hopping, the efference copy has an additional sub-function of canceling the effects on sensation induced by self-motion, distinguishing self-produced motion from the sensory feedback caused by disturbances from objects in the environment.

Multistage decision process. We shall consider the use of dynamic programming (DP) as a computational algorithm capable of yielding numerical answers to a multistage decision process. Bellman recognized that many biological systems display a number of characteristics in motor behavior similar to a decision process [25]. He proposed an optimization policy having the property that whatever the initial state and initial decision are, the remaining decisions must constitute an optimal policy with regard to the state resulting from the first decision. The term dynamic programming refers to a collection of algorithms that can be used to compute optimal policies given a perfect model of the environment via a Markov decision process (MDP). Thus, DP opens up a whole new class of decision-making solutions and has paved the way for modern control theory approaches.

Having introduced a mathematical definition of a system, let us now precisely define the intuitive concept of a process. Analytically, we conceive of a system as a state vector x(t) and a rule for determining its value at any time t. We shall replace the symbol x(t) by the symbol p, and think of p as a point in a set or space R. Next we consider a function T(p) as a transformation with the property that the transformed point p_1 = T(p) belongs to R for all p in R. Intuitively, p represents the initial state of a system, and p_1 = T(p) is the state one time unit later. Generally, the set of vectors is established as:

p, p_1, p_2, ..., p_n, ..., (1)

where p_0 = p and p_{n+1} = T(p_n), n = 0, 1, 2, ..., represents the time history of a system observed at the discrete times n = 0, 1, 2, ..., the successive states of the system.

Let us begin with a generalized version of the control process. Let p be a point in a phase space S specifying the state of a system and q be a point in a decision space D. To define a multistage decision process of the simplest type, we start with the previously established notion of a multistage process p_1 = T(p). We now enlarge this concept by taking the transformation T to depend on another vector as well, T = T(p, q). If the decision equivalent to q is made when in state p, then the system is transformed into the state p':

p' = T(p, q). (2)

In this application, we wish to concentrate upon policies having the simpler form

q = q(p, n), (3)

a function of the current state and stage of the process. A return function g(p, q) is then produced. Employing the Principle of Optimality, we see that the problem of obtaining the maximum total return for an unbounded process leads to the functional equation for N iterations:

f_N(p) = max_q { g(p, q) + f_{N-1}(T(p, q)) }. (4)

Here the maximum value of the return function, dependent only upon the initial state p and the number of stages N, is denoted by f_N(p). In other words, f_N(p) = the total N-stage return obtained starting in state p using an optimal policy.
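A minimal value-iteration sketch of Equation (4) on a toy finite state and decision space may help fix ideas; everything named here (the clamped transition, the return favoring the equilibrium state 0, the zero initial return) is illustrative rather than the paper's actual model.

def value_iteration(states, decisions, T, g, N):
    """Return f_N and an optimal policy q(p) for every state p."""
    f = {p: 0.0 for p in states}          # toy choice of f_0
    policy = {}
    for _ in range(N):
        f_next = {}
        for p in states:
            # Principle of Optimality: best immediate return plus the
            # optimal return of the state the decision leads to.
            best_q = max(decisions, key=lambda q: g(p, q) + f[T(p, q)])
            f_next[p] = g(p, best_q) + f[T(p, best_q)]
            policy[p] = best_q
        f = f_next
    return f, policy

# Toy usage: states on a line, decisions move left or right, and the
# return favors staying near 0 (standing in for vibration suppression).
states = list(range(-3, 4))
decisions = (-1, +1)
T = lambda p, q: max(-3, min(3, p + q))   # clamped transition
g = lambda p, q: -abs(T(p, q))            # closer to 0 is better
fN, q_opt = value_iteration(states, decisions, T, g, N=5)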
In geometric parlance, we can say that the classical view is that of a curve as a locus of points, while dynamic programming considers a curve to be an envelope of tangents (Figure 3). Dynamic programming has the potential for dealing with problems of control and sequential decision making under uncertainty [28]. In this theoretical approach, we now possess the tool defining 'approximation in policy space'. To study the structure of the solution associated with the foregoing equations, we can employ various kinds of successive approximation. We can approximate the solution of the functional equation f(p), or we can approximate the optimal policy function q(p). The important point here is that Equation (4) involves two functions: f(p), the return function, and q(p), the optimal policy function.

So far we have assumed that the result of the transformation T is to take the state vector p into the state vector p_1, where p_1 is uniquely determined by p with perfect state information. In this study, however, since the parameters are unknown, T is not completely known. This means that a stochastic process must replace the deterministic models we have been using. In place of the statement that p_1 is uniquely determined, we suppose that T is a stochastic transformation, which produces a random vector p_1 whose probability distribution is determined by p. To illustrate this idea, let us begin with the three unknown spring constants k, where p_n is determined by the relation:

p_n = T(p_{n-1}, k_n), n = 1, 2, ..., (5)

where p_0 = p and the k_n are independent random variables with the identical probability distribution dG(k), and:

f_N(p) = exp_k { g(p) + f_{N-1}(T(p, k)) } = g(p) + integral of f_{N-1}(T(p, k)) dG(k), (6)

with f_0(p) = g(p). Here the notation "exp_k" indicates that the expected value is to be taken with respect to the random variables k.

Applied biomechanics. Eleven healthy, well-trained subjects (4 women and 7 men) gave their written informed consent to participate in this study (via Institutional Review Board approval). All the subjects performed a sequence of unilateral hops on her/his dominant lower limb until voluntary exhaustion. To determine the 60% peak hop height as a control parameter for the hopping activity, each subject performed a squat jump (SQJ) before and after the hopping routine. The minimum height for each hop during the fatigue regime was set at 60% of the maximum height achieved in the first SQJ. Motion capture was collected with an optoelectronic system of ten cameras (Oqus-300, Qualisys AB, Sweden) operating at 200 Hz. Three vertical displacements at the center of mass of each segment (thigh, shank, and foot) were also processed (Visual 3D, C-Motion, Inc., Canada). The trajectories at the centers of mass of each segment were collected (Figure 4 indicates the surface marker location information). The electromyographic (EMG) activities of the tibialis anterior (TA), gastrocnemius medialis (GM), soleus (SOL), vastus lateralis (VL), and the biceps femoris (BF) were recorded (Figure 4). The surface electrodes (Ambu Blue Sensor N-00-S/25) were placed with an interelectrode distance of 20 mm, in accordance with the SENIAM project recommendations [29], as follows. TA: electrodes placed at 1/3 on the line between the tip of the fibula and the tip of the medial malleolus. GM: electrodes placed on the most prominent bulge of the muscle in the longitudinal direction of the leg.
SOL: electrodes placed at 2/3 of the line from the medial condyle of the femur to the medial malleolus. VL: electrodes placed at 2/3 on the line from the anterior spina iliaca superior to the lateral side of the patella. BF: electrodes placed at 50% of the line between the ischial tuberosity and the lateral epicondyle of the tibia, in the direction of the line between the ischial tuberosity and the lateral epicondyle of the tibia. A ground electrode was placed over the C7 vertebra. The EMG data were transmitted by telemetry (Biotel 88, Glonner, Germany) and collected at 1 kHz. The EMG signal was first band-passed with a 30 Hz high-pass and a 500 Hz low-pass filter. The EMG signals were then rectified and low-pass filtered with a fourth-order Butterworth filter with a cut-off frequency of 6 Hz to build the linear envelope [30], and normalized to peak activity recorded during the hop exercise. The RMS amplitude was calculated within a window of 125 milliseconds after the linear envelope. Statistical comparisons within and between subjects were made with commercial software (PASW Statistics v18, SPSS, USA).

Results. The leg stiffness parameters were found for a representative subject (001-F), showing a fast convergence rate, such that three iteration steps are sufficient to take the parameters to converged values (Figure 5). The rule of the policy-iteration method is quite simple: the optimal policy has been reached when the policies on two successive iterations are identical. A modal analysis was then applied to the leg spring model. The modal analysis allows us to reconstruct the overall response of the 3-DOF leg spring within the stance phase as a superposition of the responses of three single-DOF modes of the system. As an elastic system, the leg spring vibrates with three distinct frequencies (Figure 6). Thus, we postulate that the CNS can select the slowest component, and we may assume it to represent the efference copy of the internal model chosen from the available DOFs. The resulting mode shapes and vibrational modes can be compared with known physiological processes to identify the efference copy mode regimes, which we assume to be the first mode (6.5 rad/sec) that contributed 97.6% of the total motion of the leg spring. High-frequency modes that have little contribution to the system dynamics can be eliminated, here the second and third modes. We may build a "reduced" model where only the most significant mode, i.e., the first mode, is retained. It must be observed that the first mode alone can reasonably represent the mechanics of hopping, which is invariant over hopping frequencies [15].

The selected single-DOF model, assumed to be an internal model, has another distinct feature. Consider an elastically suspended leg spring model (Figure 6). If a force is applied to the body, it will deflect. Clearly, the deflection and its direction will be different for different forces. It may be of interest to know the direction of the unit force which will cause the largest translation and that which will cause the smallest. Equivalently, the three available single-DOF modes are presented (Figure 6). A decision maker (subject 001-F) would then interpret this by perceiving the largest deflection direction as the most compliant direction [31] and the smallest deflection as the stiffest direction. The performer would take the most compliant direction ratio as an optimal policy and copy its information into his/her internal model.
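The 'most compliant direction' reading can likewise be made concrete: for a symmetric stiffness matrix K, a unit force f deflects the body by x = K^{-1} f, so the extreme deflection directions are the eigenvectors of the compliance matrix C = K^{-1}. The stiffness values below are placeholders, not the subject's identified parameters.

import numpy as np

K = np.array([[9.0e3, 1.5e3, 0.0],
              [1.5e3, 6.0e3, 0.5e3],
              [0.0,   0.5e3, 3.0e3]])   # symmetric stiffness matrix (N/m)

C = np.linalg.inv(K)                    # compliance matrix
evals, evecs = np.linalg.eigh(C)        # eigenvalues in ascending order

stiffest = evecs[:, 0]                  # unit force here -> smallest deflection
most_compliant = evecs[:, -1]           # unit force here -> largest deflection
print("most compliant direction:", most_compliant)
print("deflection ratio (compliant/stiff):", evals[-1] / evals[0])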
We have demonstrated that an efference copy generated in terms of three numbers, as a specific ratio during the stance phase, can be used to explain the interaction between the performer and the external environment. Therefore, the shape of the compliant direction in the efference copy can be regarded as the minimal unit of analysis of the hopping pattern under all load-bearing physiologic conditions. Our investigation demonstrated that, for the range of the studied performers, there existed statistically significant differences (p<0.01) in the ratio over subjects, showing subject-specific patterns (Figure 7).

An important result from this investigation demonstrates an approach to separate 'afferent' sensory stimuli (Figure 8) from self-generated, 'reafferent' signals (Figure 9) using an action-oriented perception system and dynamic programming. The reflex action used to neutralize the disturbances between reafferent and afferent stimuli represents their difference. EMG signals obtained from the five monitored muscles within two representative subjects (001-F and 006-M) represent the muscle activities during the stance phase of hopping and produced contrasting timing patterns (Figure 10). The two subjects also produced contrasting mechanisms of limb stiffness as physical adjustments during the five-hop regime (Figure 11).

Discussion. We have demonstrated that when the leg spring chain has been displaced during the first harmonic mode, the forces in the leg spring no longer equilibrate. They create resultant force modes, the intensities of which are decomposed into muscle tuning activities about the respective leg spring components [32]. Hence, if the equilibrium is stable during the stance phase, the evoked forces within the leg spring will tend to create motion for the leg to spring back to the position of equilibrium, and thus produce oscillation about the same harmonic mode. Our study showed that the mechanism of stiffness adjustment operates via motor synergy about the leg joints. Generating the subject-specific internal mode by motor synergy can be used as a possible strategy to affect the natural shock absorption ability during hopping. Contrasting muscle activities were observed from the two representative performers: namely, pre-loading and post-loading (as indicated) muscle activity generated during the first and last hops (Figures 10 and 11). Subject 001-F produced the post-loading pattern at the first hop and the pre-loading pattern at the last hop, along with increased stiffness in her leg spring, whereas subject 006-M produced the opposite pattern, along with decreased stiffness in his leg spring. The muscle activities that were observed at the first and last hop can be directly related to the mechanism of stiffness adjustment. We assume that the control forces within muscle activation simply added the new stiffness value to the existing leg spring system, which resulted in an increase in leg stiffness. In other words, the active control system with the control input is equivalent to the passive leg spring system with an increased stiffness (as illustrated in Figure 2).
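The stated equivalence between the active control input and an added passive stiffness admits a one-line derivation; the notation below (single-DOF mass m, stiffness k, control gain k_c) is ours, not the paper's.

% With a stiffness-proportional control force u(t) = -k_c x(t) added to a
% single-DOF leg spring driven by the external force F(t):
\[
  m\,\ddot{x}(t) + k\,x(t) = F(t) + u(t)
  \;\Longrightarrow\;
  m\,\ddot{x}(t) + (k + k_c)\,x(t) = F(t),
\]
% i.e., the actively controlled system behaves exactly as a passive spring
% of stiffness k + k_c, matching the equivalence illustrated in Figure 2.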
The results may be explained in terms of synaptic enhancement through an increased probability of synaptic terminals releasing transmitters in response to pre-synaptic action potentials [33]. Such synaptic enhancement can, as modeled during repeated trials here, tend to keep the performer's motor system on a customary path in a sensory feedback or corollary discharge system. Since memories are postulated to be represented by vastly interconnected networks of synapses in the brain, synaptic enhancement is one of the important neurochemical foundations of learning and memory [34]. Thus the efference copy reduces the order of the system, where specific modes are selected as approximations. The selection of a small number of mechanical modes which approximate the assumed state variables is referred to as Bernshtein's reduction of DOFs [35].

The sensory system responds not only to stimuli from the environment but also to cues generated by a performer's own behavior. This leads to problems in sensory processing, because self-generated information can occur at the same time as external sensory information is gathered. This sensory information can also desensitize the performer's own sensory pathways and become confused with external afferent information of the same modality. Although the afferent and reafferent signals have distinct causes, they are carried by the same neuronal channels. From a behavioral viewpoint it may be necessary to distinguish between signals originating from the two causes, especially when monitoring changes in the external environment separate from those resulting from self-movement [36]. Feedforward signals or corollary discharges are necessary to cancel reafferent inputs, so that the self-generated, or reafferent, sensory information is used to update or fine-tune the ongoing motor act. In an active sensory system this helps to monitor the environment.

The efference copy of hopping performers drives the motor activities and produces the corollary discharges that estimate the sensory feedback of the GRF during the stance phase. However, an efference copy cannot itself provide this information, as it is a motor signal predictive of muscle activation, rather than of the sensory input (GRF). By generating an estimate of the sensory consequences of a motor command, an internal forward model can be used to cancel reafferent sensory signals, and thus allow the external environment-related signals to be recovered. Here, the self-generated GRF was compared to the measured GRF. The estimated sensory response, however, was limited by the fact that it did not reproduce the first spike (impact force) accurately in the simulation of the GRF data (Figure 9).

Our work assumed that the parameters defining the leg spring remain constant over the entire stance phase within the internal model. If additional motor components are identified, hoppers may respond not only to stimuli from the cues generated by their own behaviors but also to disturbances produced by the external surfaces. We call this phenomenon a 'corollary discharge inhibition of the proprioceptive system in striding performers.' As such, corollary discharge tends to induce an inhibition in the hoppers' proprioceptive and cutaneous receptors during the hopping activity. In this case the coordination of the motor components is 'hardwired,' consisting of fixed neuromuscular pathways as a reflex. The reflex action is a brief stereotyped movement carried out in automatic fashion in response to some sensory stimulation. Performers avoid, escape from, or minimize the effects of noxious stimuli. It has also been perceived as a disturbance, which assumes the sensory discrepancy signals as a training model [37]. Differences between the afferent and reafferent signals are evident here (Figures 8 and 9). This signal is regarded as a compensatory stabilizing reflex, which keeps the body fixed in space.
These reflexes are regulated by sensory feedback from proprioceptors, which signal relative body segment motion. In addition, vestibular, visual, and tactile receptors signal head or body motion with respect to the vertical position during hopping. Reflexes are typically characterized as automatic and fixed motor responses, and they occur on a much faster time scale, with a higher frequency, than what is possible for reactions that depend on perceptual processing [38]. Reflexes play a fundamental role in stabilizing the motor system, providing almost immediate compensation for small perturbations and maintaining fixed execution patterns that are predefined by the efference copy. Thus they do not require attention or conscious control. The concept of the motor unit, first introduced by Sherrington [39], can be divided into 'fast twitch' (policy in Figure 8) and 'slow twitch' (policy in Figure 9) response types.

We have demonstrated that performers continually need to make adjustments in order to maintain a predefined policy in the efference copy. These movements consist of a series of regulating reflexes aimed at restoring the body's equilibrium, brought into play whenever the body deviates from its desired orientation. In addition to regulation responses, there is a second class of equilibrium responses, termed compensatory responses. These responses do not correct deviation from equilibrium; instead they compensate for or neutralize the influences of external noise. This compensation is not a feedback path, since the value of the input, and not the output variable of the object, is transmitted along it. The rapid response mode may have some clinical implications, as it directly affects the reflex proprioceptive system as a compensation mechanism. Therefore, we demonstrated that by separating self-generated motion from externally influenced motion, filtered information could be acquired for diagnosis. Recent research has identified neural pathways and their sensory processing as being highly dynamic, taking the behavioral state of the individual into account [40]. This indicates that the analysis of sensory pathways in anaesthetized or resting preparations might not provide the full picture of sensory processing. Therefore, our study suggests that future work should focus on unraveling the dynamics of sensory processing in active performers.

Figure 1: A diagram of the feedback loop whereby the very deviation of the system from its desired performance is a restoring force to guide the system back to its proper functioning. An efference copy is used to generate the predicted sensory feedback (corollary discharge), which estimates the sensory consequences of a motor command. The actual sensory consequences of that motor command are compared with the corollary discharge to inform the central nervous system (CNS) about the external actions. Schematically, this is shown here as the actual leg spring system S with state x(t), while S' is another hypothetical system with state z(t). The "driving action" z(t) is supplied as the input to the system S, representing the instruction of what the output variable x of the object should be. These two states, x(t) and z(t), are compared, resulting in a forcing influence g(z - x), which is then exerted upon S.

Figure 2: The three-DOF leg spring model demonstrates how the system can perform the vibration suppression function. The model consists of a single DOF (SDOF) for the foot-ground contact and an attached multi-DOF (MDOF) dynamic controller (two degrees).
The equivalent active and passive systems are illustrated (with parameters listed in Table 1). The control force simply adds stiffness to the system. In other words, the active control system with the control input is equivalent to the passive system with springs of stiffness k_i.

Figure 3: In our approach, at each point, we seek a direction which is optimal; the solution is obtained in the form of a policy, a set of instructions for carrying out the process.

Figure 5: Convergence of the leg segment stiffness parameters as determined for a representative subject.

Figure 6: Mode shape schematics and plot for the first vibrational mode (6.5 rad/sec), where all masses move in phase with less stress within the connecting springs (tissues).

Figure 7: Specific deflection ratios of 0.16, 0.57, and 0.81 are shown in a three-dimensional space with Cartesian coordinates.

Subject 006-M also produced similar patterns in the involved muscles, with pre-loading at the first hop and post-loading at the last hop. The EMG signals were low-pass filtered using a fourth-order Butterworth filter with a cut-off frequency of 6 Hz and normalized to peak activity recorded during the hop exercise.
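For reference, the EMG conditioning chain repeated in these captions (30-500 Hz band-pass, rectification, 6 Hz fourth-order Butterworth linear envelope, 125 ms RMS window) can be sketched as below on a synthetic signal; since the text's 500 Hz upper edge equals the Nyquist frequency at 1 kHz sampling, the sketch uses 450 Hz instead.

import numpy as np
from scipy.signal import butter, filtfilt

fs = 1000.0                             # sampling rate, 1 kHz
t = np.arange(0, 2.0, 1.0 / fs)
emg = np.random.randn(t.size) * np.hanning(t.size)   # toy raw EMG burst

# Band-pass 30-450 Hz (4th-order Butterworth, zero-phase).
b_bp, a_bp = butter(4, [30 / (fs / 2), 450 / (fs / 2)], btype="band")
emg_bp = filtfilt(b_bp, a_bp, emg)

# Rectify, then 6 Hz low-pass for the linear envelope.
b_lp, a_lp = butter(4, 6 / (fs / 2), btype="low")
envelope = filtfilt(b_lp, a_lp, np.abs(emg_bp))

# RMS amplitude within a 125 ms moving window of the envelope.
win = int(0.125 * fs)
rms = np.sqrt(np.convolve(envelope**2, np.ones(win) / win, mode="same"))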
The extremal genus embedding of graphs

Let W_n be a wheel graph with n spokes. How does the genus change if we add a degree-3 vertex v, which is not in V(W_n), to the graph W_n? In this paper, through the joint-tree model we obtain that the genus of W_n + v equals 0 if the three neighbors of v are in the same face boundary of P(W_n); otherwise, γ(W_n + v) = 1, where P(W_n) is the unique planar embedding of W_n. In addition, via the independent set, we provide a lower bound on the maximum genus of graphs, which may be better than both the result of D. Li and Y. Liu and the result of Z. Ouyang et al. in Europ. J. Combinatorics. Furthermore, we obtain a relation between the independence number and the maximum genus of graphs, and provide an algorithm to obtain a lower bound on the number of distinct maximum genus embeddings of the complete graph K_m, which, in some sense, improves the results of Y. Caro and S. Stahl, respectively.

Introduction

Graphs considered here are all finite and connected. If the graph M can be obtained from a graph G by successively contracting edges and deleting edges and isolated vertices, then M is a minor of G. The minimum genus γ_min(G) (or, simply, the genus γ(G)) of a graph G is the minimum integer g such that there exists an embedding of G into the orientable surface S_g of genus g, and the maximum genus γ_M(G) of a connected graph G is the maximum integer k such that there exists an embedding of G into the orientable surface of genus k. The difference between the maximum genus and the minimum genus of a graph G is called the genus range of G. A graph G is said to be upper embeddable if γ_M(G) = ⌊β(G)/2⌋, where β(G) is the cycle rank (or Betti number) of G. A one-face embedding (two-face embedding) ψ(G) of a graph G means an embedding whose face number is one (two). An odd vertex is a vertex whose degree is an odd number. For n ≥ 3, the wheel with n spokes is the graph W_n obtained from the n-cycle C_n by adding a new vertex (called the center of the wheel) and joining it to all vertices of C_n. For example, W_3 = K_4. A subdivision of an edge e ∈ E(W_n) means inserting a vertex of degree two into e, where the inserted vertex is called a subdividing-vertex of W_n. Let v be a degree-three vertex which is not in V(W_n); then the graph W_n + v, which is called the near-wheel graph, is the connected graph obtained from W_n by joining v to v_i (i = 1, 2, 3), where each v_i may be a subdividing-vertex of W_n or a vertex which belongs to V(W_n). Furthermore, the vertices v_1, v_2, v_3 are called the antennal vertices of the graph W_n + v.

Surfaces considered here are compact 2-dimensional manifolds without boundary. An orientable surface S can be regarded as a polygon with an even number of directed edges such that both a and a^- occur exactly once on S for each letter a, where the power "-" means that the direction of a^- is opposite to that of a on the polygon. For convenience, a polygon is represented by a linear sequence of lowercase letters. An elementary result in algebraic topology states that each orientable surface is equivalent to one of the standard forms O_0 = a a^- and O_p = a_1 b_1 a_1^- b_1^- … a_p b_p a_p^- b_p^- (p ≥ 1), which are the sphere (p = 0), the torus (p = 1), and the orientable surfaces of genus p (p ≥ 2). The genus of a surface S is denoted by g(S). Let A, B, C, D, and E be possibly empty linear sequences of letters. Suppose A = a_1 a_2 … a_r, r ≥ 1; then A^- = a_r^- … a_2^- a_1^- is called the inverse of A.
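As a small illustration of the quantities just defined, the following sketch (plain Python, with the wheel as an example) computes the cycle rank β(G) = |E| − |V| + 1 of a connected graph and the upper-embeddability target ⌊β(G)/2⌋.

```python
# Minimal sketch: cycle rank and the upper-embeddability target for a
# connected graph, using the wheel W_n as an example. Plain edge lists;
# no graph library assumed.
def cycle_rank(num_vertices, edges):
    """beta(G) = |E| - |V| + 1 for a connected graph G."""
    return len(edges) - num_vertices + 1

def wheel_edges(n):
    """W_n: an n-cycle v_1..v_n plus a hub v_0 joined to every rim vertex."""
    rim = [(i, i % n + 1) for i in range(1, n + 1)]
    spokes = [(0, i) for i in range(1, n + 1)]
    return rim + spokes

n = 5
beta = cycle_rank(n + 1, wheel_edges(n))   # |V| = n + 1, |E| = 2n
print(beta, beta // 2)                     # beta(W_5) = 5; target is 2
```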
If {a, b, a^-, b^-} appear in a sequence of the form A a B b C a^- D b^- E, then they are said to be an interlaced set; otherwise, a parallel set. Let S be the set of all surfaces. For a surface S ∈ S, we obtain its genus g(S) by applying elementary transforms (Transforms 1-4) to determine its equivalence to one of the standard forms. In these transforms, the parentheses stand for cyclic order. For convenience, the parentheses are always omitted when it is unnecessary to distinguish cyclic from linear order. For more details concerning surfaces, the reader is referred to [1] and [2].

Let T be a spanning tree of a graph G = (V, E); then E = E_T ∪ Ê_T, where E_T consists of all the tree edges and Ê_T = {ê_1, ê_2, …, ê_β} consists of all the co-tree edges, where β = β(G) is the cycle rank of G. Split each co-tree edge ê_i into two semi-edges, both labeled with the same letter, and denote the resulting graph by T̂. Obviously, T̂ is a tree. A rotation at a vertex v, denoted by σ_v, is a cyclic permutation of the edges incident on v. A rotation system σ = σ_G for a graph G is a set {σ_v | ∀v ∈ V(G)}. The tree T̂ with a rotation system of G is called a joint-tree of G, and is denoted by T̂_σ. Because T̂_σ is a tree, it can be embedded in the plane. By reading the lettered semi-edges of T̂_σ in a fixed direction (clockwise or anticlockwise), we get an algebraic representation of the associated surface, represented by a 2β-polygon. Such a surface, denoted by S_σ, is called an associated surface of T̂_σ. A joint-tree T̂_σ of G and its associated surface are illustrated in Fig. 1, where the rotation at each vertex of G complies with the clockwise rotation. From [1], there is a 1-1 correspondence between the associated surfaces (or joint-trees) and the embeddings of a graph. The joint-tree originated from the early works of Liu [3], and more detailed information about the joint-tree can be found in [1]. Terminology and notation not defined here can be found in [4] for graph theory and [5] for topological graph theory.

The following lemmas are essential in the whole paper.

Lemma 1.1 (Whitney) Every 3-connected planar graph has a unique embedding in the plane.

Lemma 1.2 The minimum genus of a minor of a graph G can never be larger than γ(G).

Proof Let the graph G be embedded in a surface S; then contracting an edge e of G on S gives an embedding of the contracted graph G/e on S. Moreover, edge deletion can never increase the embedding genus. Thus, the lemma is obtained.

Proof According to Transform 4, it is obvious.

The genus of the near-wheel graphs

It is obvious that W_n is 3-connected and γ(W_n) = 0. So, according to Lemma 1.1, W_n has a unique embedding in the plane. We denote this unique planar embedding of W_n by P(W_n).

Lemma 2.1 Let P(W_n) be the planar embedding of the wheel W_n with n spokes, and let v be a degree-three vertex which is not in W_n. Then the genus γ(W_n + v) of the graph W_n + v equals 0 if the three antennal vertices of W_n + v are in the same face boundary of P(W_n).

Proof Let v_1, v_2, v_3 be the three antennal vertices of W_n + v, and let f_1 be the face of P(W_n) with v_1, v_2, v_3 on its boundary; then we get a planar embedding of W_n + v by placing v in the interior of f_1 and joining v v_i (i = 1, 2, 3).

Lemma 2.2 Let P(W_n) be the planar embedding of the wheel W_n with n spokes, and let v be a degree-three vertex which is not in W_n. Then the genus γ(W_n + v) of the graph W_n + v equals 1 if the following two conditions are satisfied: (i) the three antennal vertices of W_n + v are in the boundaries of two different faces of P(W_n); (ii) there is no face of P(W_n) whose boundary contains all three antennal vertices.
Proof It is easy to see that K_{3,3} is a minor of W_n + v. According to Lemma 1.2, we get γ(W_n + v) ≥ 1. Let v_1, v_2, v_3 be the three antennal vertices of W_n + v. Because the three antennal vertices of W_n + v are in the boundaries of two different faces of P(W_n), without loss of generality we may assume that v_1, v_2 are in the boundary of f_1 and v_3 in that of f_2, where f_1 and f_2 are two different faces of P(W_n). Putting v in the interior of f_1 and joining v v_i (i = 1, 2, 3), we get a torus embedding of W_n + v by adding a handle to the plane with the edge v v_3 on it. So γ(W_n + v) ≤ 1. From the above we get γ(W_n + v) ≥ 1 and γ(W_n + v) ≤ 1, so γ(W_n + v) = 1.

Lemma 2.3 Let P(W_n) be the planar embedding of the wheel W_n with n spokes, and let v be a degree-three vertex which is not in W_n. Then the genus γ(W_n + v) of the graph W_n + v equals 1 if no pair of the three antennal vertices of W_n + v is in a same face boundary of P(W_n).

Proof It is not difficult to see that K_{3,3} is a minor of W_n + v. According to Lemma 1.2, γ(W_n + v) ≥ 1.

Case 1: The three antennal vertices of W_n + v are all subdividing-vertices of W_n. Let v_1, v_2, v_3 be the three antennal vertices of W_n + v. Since no pair of the three antennal vertices of W_n + v is in a same face boundary of P(W_n), the vertices v_1, v_2 and v_3 must belong to one of the following two subcases: (1) v_1, v_2 and v_3 are on three different spokes of W_n and, furthermore, no pair of these three spokes is in a same face boundary of P(W_n); (2) one of {v_1, v_2, v_3} is on the boundary of the unbounded face of P(W_n), and the other two are on two different spokes of P(W_n), where the two spokes are not on a same face boundary of P(W_n).

In the first subcase, the graph W_n + v and one of its joint-trees are shown in Fig. 2 and Fig. 3 respectively, where we denote the edge (v, v_2) by x and (v, v_3) by y. In Fig. 2, the edges of the n-cycle in W_n, according to the clockwise rotation, are denoted by a_1, a_2, …, a_n. The surface associated with the joint-tree in Fig. 3 has genus 1, so γ(W_n + v) ≤ 1; on the other hand, γ(W_n + v) ≥ 1. Therefore, in the first subcase, γ(W_n + v) = 1.

In the second subcase, the graph W_n + v and one of its joint-trees are shown in Fig. 4 and Fig. 5 respectively, where we denote the edge (v, v_2) by x and (v, v_3) by y. In Fig. 4, the edges of the n-cycle in W_n, according to the clockwise rotation, are denoted by a_1, a_2, …, a_{m−1}, b, a_m, …, a_n. The surface associated with the joint-tree in Fig. 5 likewise has genus 1. According to the above, we get that, in Case 1, γ(W_n + v) = 1.

Case 2: The three antennal vertices of W_n + v consist of both subdividing-vertices of W_n and vertices which belong to V(W_n). Because no pair of the three antennal vertices of W_n + v is in a same face boundary of P(W_n), among these three antennal vertices there is one and only one vertex belonging to V(W_n), and the other two are both subdividing-vertices of W_n. It is not difficult to see that the graph W_n + v in Case 2 is a minor of the graph W_n + v in Case 1. So, according to Lemma 1.2, we get that, in Case 2, γ(W_n + v) ≤ 1. On the other hand, γ(W_n + v) ≥ 1 because K_{3,3} is a minor of W_n + v. So, in Case 2, γ(W_n + v) = 1.

From Case 1 and Case 2 we obtain Lemma 2.3. The following theorem can be easily obtained from Lemma 2.1, Lemma 2.2 and Lemma 2.3.
Theorem A Let P(W_n) be the planar embedding of the wheel W_n with n spokes, and let v be a degree-three vertex which is not in W_n. Then the genus γ(W_n + v) of the graph W_n + v equals 0 if the three antennal vertices of W_n + v are in the same face boundary of P(W_n); otherwise, γ(W_n + v) = 1.

Remark (i) From Theorem A we can see that there are many planar or toroidal graphs whose genus range can be arbitrarily large. (ii) How does the genus of a cubic planar graph G change if we add a degree-three vertex v, which is not in V(G), to G? We believe its genus to be 0 or 1, so a proof or disproof of this would be interesting.

Lower bound on the maximum genus of graphs

A set J ⊆ V(G) is called a non-separating independent set of a connected graph G if J is an independent set of G and G − J is connected. In 1997, through the independent set of a graph, Huang and Liu [7] studied the maximum genus of cubic graphs, and obtained the following result.

Lemma 3.1 [7] The maximum genus of a cubic graph G equals the cardinality of a maximum non-separating independent set of G.

But for general graphs, which are not necessarily cubic, there is no result on the maximum genus characterized by the independent set of the graph. In the following, we provide a lower bound on the maximum genus, characterized via the independent set, for general graphs. Furthermore, examples show that the bound may be tight and, in some sense, may be better than the result obtained by Li and Liu [8] and the result obtained by Z. Ouyang et al. [9].

Theorem B Let G be a connected graph whose minimum degree is at least 3. If v_1, v_2, …, v_m are independent odd vertices of G such that G − {v_1, v_2, …, v_m} is connected, then γ_M(G) ≥ γ_M(G − {v_1, …, v_m}) + (1/2) Σ_{j=1}^{m} (d_G(v_j) − 1).

Proof Without loss of generality, let H be the graph obtained from G by successively deleting v_1, v_2, …, v_m from G, and let ψ(H) be a maximum genus embedding of H. We first add the vertex v_m back to H. Without loss of generality, let d_G(v_m) = 2i + 1, and let x_1, x_2, …, x_{2i+1} be the 2i + 1 neighbors of v_m in G. According to whether or not the 2i + 1 neighbors of v_m are in the same face boundary of ψ(H), we discuss the following two cases.

Case 1: Let f_0, bounded by B_0, be a face of ψ(H) with x_1, x_2, …, x_{2i+1} on its boundary. Firstly, we put v_m in f_0 and connect each of {x_1, x_2, x_3} to v_m, and denote the resulting graph by H_1. Through the manner depicted in Fig. 7, where each vertex rotation is the same as that of ψ(H) except at v_m, we get an embedding ψ(H_1) of H_1 whose face number is the same as that of ψ(H). From the equation V − E + F = 2 − 2g, it is easily deduced that the maximum genus of H_1 is at least one more than that of H. Now connect each of {x_4, x_5} to v_m, and denote the resulting graph by H_2. Through the manner depicted in Fig. 8, we get an embedding ψ(H_2) of H_2 with the same face number as ψ(H). From the equation V − E + F = 2 − 2g, it is easily deduced that the maximum genus of H_2 is at least two more than that of H.

Case 2: The 2i + 1 neighbors of v_m are not all on the boundary of a single face of ψ(H). Through the manner depicted in the left part of Fig. 9, we get an embedding ψ(H_1) of H_1 whose face number is the same as that of ψ(H). If x_1, x_2, x_3 are in three different face boundaries of ψ(H), say f_1, f_2, and f_3, then through the manner depicted in the right part of Fig. 9, we get an embedding ψ(H_1) of H_1 whose face number is two less than that of ψ(H).
From the equation V − E + F = 2 − 2g, it is easily deduced that the maximum genus of H_1 is at least one more than that of H. From Case 1 and Case 2 we get that γ_M(H + v_m) ≥ γ_M(H) + (d_G(v_m) − 1)/2. Similarly to v_m, we can add v_{m−1}, v_{m−2}, …, v_1, one by one, to H + v_m. Eventually we obtain an embedding of G, and it is not hard to see that the maximum genus of G is at least γ_M(H) + (1/2) Σ_{j=1}^{m} (d_G(v_j) − 1).

Noticing that the upper embeddability of a graph is not changed by adding an odd vertex to it, we can get the following theorem, whose proof is similar to that of Theorem B.

Theorem C Let G be a connected graph and A_1, A_2, …, A_s a sequence of disjoint independent vertex sets which satisfy the conditions of Theorem B at each stage. In particular, if one of the graphs G_1, G_2, …, G_s is upper embeddable, then G is upper embeddable.

Remark In 2000, through the girth g and connectivity of graphs, D. Li and Y. Liu [8] obtained a lower bound on the maximum genus of graphs, displayed in a table whose first row and first column represent the girth and the connectivity, respectively. Ten years later, Z. Ouyang, J. Wang and Y. Huang [9] studied this parameter too, for a k-edge-connected (or k-connected) simple graph with minimum degree δ and girth g. There are many examples showing that the lower bound in Theorem B may be best possible. Furthermore, it may be better than the result obtained by Li and Liu [8] and the result of Z. Ouyang et al. [9]. The following are two examples, with girth 3 and connectivity 2, and with girth 4 and connectivity 3, respectively.

Independence number and the maximum genus of graphs

Caro [10] and Wei [11] independently showed that the independence number of a graph G satisfies α(G) ≥ Σ_{v∈V(G)} 1/(d(v) + 1). Later, Alon and Spencer [12] gave an elegant probabilistic proof of this bound. But, up to now, little is known about the relation between the independence number and the maximum genus of graphs. Let N_G(v) denote the set of all neighbors of the vertex v in G. The following theorem remedies this deficiency.

Theorem D Let G = (V, E) be a connected 3-regular graph (loops and multi-edges are permitted) with A = {x_1, x_2, …, x_{γ_M(G)}} a maximum non-separating independent set of G. Then its independence number satisfies α(G) ≥ γ_M(G) + α(G − N_A).

Proof From Lemma 3.1 we get that there exists a maximum non-separating independent set A = {x_1, x_2, …, x_{γ_M(G)}} such that G − A is connected. Let I be an arbitrary independent set of G − N_A. It is obvious that every vertex in A is not adjacent to any vertex in I. So A ∪ I is an independent set of G, and the theorem is obtained.

Remark In the graph G depicted in Fig. 11, we may select A = {x_1}. Then N_A = {x_1, x_2, x_6} and α(G − N_A) = 2. Noticing that α(G) = 3 and γ_M(G) = 1, we get that α(G) = γ_M(G) + α(G − N_A). So the lower bound in Theorem D may be best possible, and may be better than that of Caro [10] and Wei [11] in the case of cubic graphs. (Fig. 11)

5. Estimating the number of maximum genus embeddings of K_m

The enumeration of distinct maximum genus embeddings plays an important role in the study of the genus distribution problem, which may be used to decide whether two given graphs are isomorphic. But up to now, except for [13] and [14], there are few results on the number of maximum genus embeddings of graphs. In this section, we provide an algorithm to enumerate the number of distinct maximum genus embeddings of the complete graph K_m, and offer a lower bound which is better than that of S. Stahl [13] for m ≥ 10.
Furthermore, the enumerative method below can be applied to any maximum genus embedding, unlike the method in [13], which is restricted to upper-embeddable graphs. A 2-path is called a V-type-edge, denoted by V. If the V-type-edge consists of the 2-path v_i v_j v_k, then it is denoted by V_j^{i,k} for simplicity. Let ψ(G) be an embedding of a graph G. We say that a V-type-edge is inserted into ψ(G) if the three endpoints of the V-type-edge are inserted into the corners of the faces in ψ(G), yielding an embedding of G + V. The following observation is easily obtained and is essential in this section.

Observation Let ψ(G) be an embedding of a graph G. We can insert a V-type-edge V into ψ(G) to get an embedding ρ(G + V) of G + V so that the face number of ρ(G + V) is not more than that of ψ(G).

Lemma 5.1 Let ψ(G) be a one-face embedding of the graph G, and let v_j, v_i and v_k be vertices of G. If the numbers of face-corners containing v_j, v_i and v_k are r_1, r_2 and r_3 respectively, then there are r_1 × r_2 × r_3 different ways to add the V-type-edge V_j^{i,k} to ψ(G) to get a one-face embedding of the graph G + V_j^{i,k}.

Proof Let the graph depicted in the middle of Fig. 12 denote a one-face embedding ψ(G) of the graph G. Because the numbers of face-corners containing v_j, v_i and v_k are r_1, r_2 and r_3 respectively, we can insert the V-type-edge V_j^{i,k} into ψ(G) so that there are r_1 different ways to put the edges v_j v_k and v_j v_i in the same face-corner containing the vertex v_j, r_2 different ways to put the edge v_j v_i in a face-corner containing the vertex v_i, and r_3 different ways to put the edge v_j v_k in a face-corner containing the vertex v_k. For any one of the r_1 × r_2 × r_3 different ways to insert the V-type-edge V_j^{i,k} into ψ(G), we can always get a one-face embedding of G + V_j^{i,k} by one and only one of the two ways depicted in the left and right parts of Fig. 12. So the lemma is obtained.

The following algorithm, together with Lemma 5.1, provides a maximum genus embedding of K_m and a lower bound on the number of maximum genus embeddings of K_m.

Step 1. Embed the tree v_2 v_3 … v_m v_1 on the plane.

Step 3. If the one-face embedding of the complete graph K_m is obtained, then stop. Otherwise, go to Step 4.

Using the above algorithm, we can get a maximum genus embedding of K_m except when m = 1 + 8i or m = 6 + 8i (i = 0, 1, 2, …). Furthermore, for m ≥ 10, our result is much better than that of Stahl [13]. For simplicity, we give some symbols that are used below. Let E be a one-face embedding of a graph. Then the symbol (V_j^{i,k} : r_1 × r_2 × r_3) means that there are r_1 × r_2 × r_3 different ways to add the V-type-edge V_j^{i,k} to E to get a one-face embedding of E + V_j^{i,k}, and the symbol (e_{j,k} : r_1 × r_2) means that there are r_1 × r_2 different ways to add the edge v_j v_k to E to get a two-face embedding of E + v_j v_k.

Result 1 The number of maximum genus embeddings of the complete graph K_8 is at least 2^26 × 3^11 × 5^5.

Proof Let V = {v_1, v_2, …, v_8} be the vertex set of the complete graph K_8. There is only one way to embed the tree T = v_2 v_3 … v_8 v_1 in the plane; this is a one-face embedding, denoted by E_1. In E_1, the numbers of face-corners containing the vertices v_1, v_2, v_3 are 1, 1 and 2 respectively.
So, according to Lemma 5.1, there are 2 different ways to add the V-type-edge V_1^{2,3} to E_1 to get a one-face embedding of T + V_1^{2,3}. Let E_2 be any one of the one-face embeddings of T + V_1^{2,3}. In E_2, the numbers of face-corners containing the vertices v_1, v_4, v_5 are 3, 2 and 2 respectively. So, according to Lemma 5.1, there are 3 × 2 × 2 (= 12) different ways to add the V-type-edge V_1^{4,5} to E_2 to get a one-face embedding of T + V_1^{2,3} + V_1^{4,5}. Similarly, for each one-face embedding of T + V_1^{2,3} + V_1^{4,5}, there are 5 × 2 × 2 different ways to add the V-type-edge V_1^{6,7} to obtain a one-face embedding of T + V_1^{2,3} + V_1^{4,5} + V_1^{6,7}. Similarly, we can add further V-type-edges, one by one in a suitable order, to T + V_1^{2,3} + V_1^{4,5} + V_1^{6,7} to eventually get a two-face embedding of K_8.

Result 2 The number of distinct maximum genus embeddings of the complete graph K_10 is at least 2^52 × 3^15 × 5^7 × 7^6, which is obtained from the unique one-face embedding of the tree T = v_2 v_3 … v_10 v_1 by successively adding suitable V-type-edges.

Result 4 The number of distinct maximum genus embeddings of the complete graph K_5 is at least 432, which is obtained from the unique one-face embedding of the tree T = v_2 v_3 v_4 v_5 v_1 by successively adding the following V-type-edges: (V_1^{2,3} : 1 × 1 × 2), (V_4^{1,2} : 2 × 3 × 2), (V_5^{2,3} : 2 × 3 × 3).

The algorithm does not work for K_6 and K_9, but maximum genus embeddings of K_6 and K_9 can be obtained by other means.
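The multiplicative counting in Lemma 5.1 can be checked mechanically; the sketch below (illustrative Python) reproduces the K_5 count of Result 4 from its listed triples.

```python
# Sketch of the counting in Lemma 5.1 / Result 4: each V-type-edge
# insertion multiplies the count by r1*r2*r3. The triples below are the
# ones listed for K_5; the function itself is generic.
from math import prod

def embedding_lower_bound(triples):
    """Product of r1*r2*r3 over the successive V-type-edge insertions."""
    return prod(r1 * r2 * r3 for (r1, r2, r3) in triples)

k5_triples = [(1, 1, 2), (2, 3, 2), (2, 3, 3)]
print(embedding_lower_bound(k5_triples))  # 432, matching Result 4
```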
Automated Geometric and Computer-aided Non-Circular Gear Formation Modeling

In non-circular gears of hydromachines used for the transit of oil, fuel-oil residue and water, differences arise not only between gear teeth profiles, but also between the lateral profiles of the same tooth. Profiling such gears requires a complex mathematical apparatus, and the solution of the inverse task of formation is even more complex in this case. This paper proposes a geometrical model for profiling a non-circular gear whose centroid consists of interconnected arcs; it is effective when all the conditions of formation are met. The proposed automated solid computer-aided modeling of the direct and inverse tasks of formation, realized on a virtual imitation level, allows us not only to acquire the envelope of the profile, but also to observe the possible formation of transition curves and undercuts. It reveals the constructive and technological conditions under which they appear and presents an opportunity to conduct the respective research in order to introduce the required corrections into the kinematic scheme of formation, resulting in a high-grade solution to the profiling task. Automated solid modeling of the inverse task of formation allows us to validate the solution to the allocated task and, if necessary, to refine the initial data, thereby excluding the need for expensive full-scale experiments. The results of the study can be used in non-circular gear design in the field of hydromachines used for the transit of oil, fuel-oil residue and water.

Introduction

Items comprising non-circular gears are applied in numerous fields of industry. They have found the widest application in looms, measuring devices, flow meters and various other mechanisms and machines [1][2][3][4][5]. The application of non-circular gears appears promising in planetary rotary hydromachines used in pumps that transit oil, fuel-oil residue and water, especially contaminated water, e.g. on drill sites, as well as in dosing pumps for various liquids [6,7]. The two main problems that restrain the spread of non-circular gear drives are the complexity of non-circular gear tooth machining and the complexity of tooth profile calculation. At present, the first problem is effectively solved by the use of CNC machines, e.g. EDM machines. The solution to the second problem is acquired by various approaches. One such approach is based on the classical theory of envelopes [1]. This approach is characterized by substantial complexity in case the centroid line is represented by an ellipse. In this case, differences arise not only between all of the teeth of the same gear, but also between the lateral profiles of the same tooth. The calculations require a complex mathematical apparatus or the use of mathematical and design CAD [8][9][10][11][12]. In case the precision of the tooth profile is not subject to strict requirements, the elliptic centroid line is approximated by arcs [6,[13][14][15] and the possibility of applying classical profiling methods emerges. However, in this case complications emerge from the fact that the centroid consists of a number of interconnected sections. At present, the field of workpiece profiling based on computer-aided modeling of the actual process of workpiece formation by an instrument [9][10][11], [16][17][18], with further integration into CAM systems, is being successfully developed.
Since the teeth profiles acquired as a result of formation modeling do not always consist of the envelope of the corresponding family of profiles, but may instead contain undercuts and transition curves, it is required to verify the result. In this case, it is required to solve the inverse task of formation. As shown by the analysis of the existing scientific literature, such an approach to non-circular gear formation has not yet been considered.

Problem Definition

The aim of this paper is the automation of the process of geometric and computer-aided non-circular gear teeth formation modeling. The objectives of the study are: acquiring a mathematical model of the gear tooth profile as an envelope of a family of instrument profiles; development of an algorithmic kinematic formation model that provides the solution to the direct and inverse tasks of non-circular gear profiling as well as the possibility of acquiring the actual profile, including specific elements such as transition curves and undercuts; realization of said model in automated mode on the basis of computer-aided solid modeling methods in order to acquire a digital model of the workpiece adapted to CAM systems; establishment of an automated solid modeling technology for solving the inverse task of formation, including the creation of a removed-stock model, which excludes the need for expensive full-scale experiments.

Formation of initial data for modeling. A number of applications [6,7] provide the ability to approximate the elliptic centroid with elementary geometric primitives: circles and lines. In this case the kinematic scheme of formation of a workpiece with an instrument is reduced to the rolling motion of one centroid along the other. In the first stage, two tasks are to be solved: 1) to construct, out of primitives, a centroid that sufficiently closely approximates the initial centroid; 2) to select the parameters of the primitives so that the total length of the centroid is divisible by the tooth pitch of the generating gear. Consider a centroid of a non-circular gear outlined by four coupled arcs of radii R_1 and R_2 (fig. 1). In order to acquire the points A of coupling of the circles of radii R_1 and R_2, let us write the equation of a circle with center at the point O_2 (equation (1)). The intersection of the circle (1) with the axis OY defines the point B, which is the center of the circle of radius R_2. The angle β can then be acquired from the corresponding relation; this angle is further used to define the boundaries of the rolling motion of the centroid of the circular gear along the non-circular gear. Furthermore, the point A of coupling of the two arcs is defined, and its coordinates are acquired from the corresponding sine and cosine relations with respect to O_2.

Geometric modeling constitutes the realization of the algorithm of the kinematic formation scheme, in which the centroid of the generating gear rolls along the centroid of the workpiece without slipping. One such algorithm is performed in the following sequence: first, the generating gear turns around the axis of the modeled gear by the angle φ_2, then around its own axis by the angle φ_1. The angles φ_1 and φ_2 are linked by the no-slip rolling condition. This geometric formation modeling algorithm is described by formulas of transition from the moving coordinate system of the generating gear to the coordinate system of the workpiece. As a result of the mentioned turns, the family of instrument profiles appears.
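A numeric sketch of generating the family of instrument profiles by the two successive turns follows. It is illustrative only: the no-slip link between φ_1 and φ_2 is taken here as equal arc length along a circular centroid segment of radius R (an assumption of this sketch; the paper's centroid alternates between two radii), and the sign conventions depend on whether the rolling is internal or external.

```python
# Sketch (illustrative, not the paper's code): family of instrument
# profiles produced by turning the generating gear by phi2 about the
# workpiece axis and by phi1 about its own axis, with R*phi2 = r*phi1
# taken as the no-slip condition along a circular segment of radius R.
import numpy as np

def rot(a):
    return np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])

def profile_family(tool_pts, R=60.0, r=20.0, steps=50, dphi2=0.01):
    """tool_pts: (N, 2) tooth profile in the generating gear's frame."""
    family = []
    for k in range(steps):
        phi2 = k * dphi2                  # turn about the workpiece axis
        phi1 = (R / r) * phi2             # own-axis turn from rolling
        center = rot(phi2) @ np.array([R + r, 0.0])  # instrument axis position
        pts = (rot(phi2 + phi1) @ tool_pts.T).T + center
        family.append(pts)                # one member of the profile family
    return family
```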
Its envelope represents the sought profile of the workpiece. The analytical solution of gear profiling comes down to determining the correlation between the parameters of the initial profile and its family, which for the case under consideration has the form of the envelope equation of [19], where the initial profile is defined by parametric equations and the derivatives involved are partial derivatives with respect to the parameter t. The modeling programs were realized in the AutoCAD environment in the AutoLISP programming language. The algorithm of this technical sequence includes the following stages: 1) Initialization of the solid models of the generating instrument and the workpiece. 2) Execution of the direct task of formation in accordance with the developed modeling programs, which constitutes acquiring the non-circular gear profile using the initial model of a cylindrical gear with intermittent teeth. 3) Solution of the inverse task of formation, which constitutes acquiring the generating gear model given the acquired non-circular gear model; this task is solved in case it is necessary to confirm the significance of the result of the direct task solution. 4) If necessary, execution of removed-stock modeling in order to appoint optimal technological parameters of formation.

The contents of the suggested automated computer-aided modeling are demonstrated by the conducted experiments.

Results of Experiments. In a computer experiment, the parameters of the generating gear and the non-circular gear centroid have been selected according to fig. 3. The diameter of the centroid of the gear with intermittent teeth has been calculated so that the length of the non-circular gear centroid is divisible by the tooth pitch of the generating gear. In order to achieve this, the points of coupling of the arcs of the non-circular gear centroid have been calculated beforehand using the correlations (1)-(3), and their lengths have been acquired subsequently. Solid models of the generating gear and the workpiece with non-circular centroid have been created with said parameters. The solid models constitute the initial data for the automated computer-aided modeling of the direct task of formation.

Figure 3. Centroids of circular and non-circular gears and their parameters

At the subsequent stage of the task solution, in accordance with the developed programs, formation modeling of the workpiece upon rolling motion of the generating gear centroid along the non-circular gear is performed on the basis of Boolean operations. First, the rolling motion occurs along the arc of radius R_1, then along the arc of radius R_2. Since in the considered example the length of the non-circular gear centroid is divisible by four lengths of the instrument centroid, the modeling is performed for only one quarter of its length. The acquired teeth profiles of various sections of the non-circular gear are compared in fig. 4. The result of solid modeling of a non-circular gear by means of an instrument with intermittent teeth is rendered in fig. 6. The acquired digital model of the workpiece can be used on a CNC-operated machine without further preparation in order to produce non-circular gears of the required quality. The developed software allows us to solve the inverse task of formation, i.e. acquiring the instrument tooth model given the acquired non-circular gear model. Fig. 7 depicts a fragment of the modeling of the inverse formation task. It validates the acquired results and the drawn conclusions.
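The divisibility condition on the centroid length can be checked as below; the radii, arc angles, tooth count and tool radius are placeholder values, and the four-arc centroid is assumed to consist of two arcs of each radius.

```python
# Sketch of the initial-data check described above: the length of the
# non-circular centroid (four coupled arcs of radii R1, R2 spanning
# angles a1, a2) should be divisible by the generating gear's tooth
# pitch p. All values here are placeholders, not the paper's data.
import math

def centroid_length(R1, a1, R2, a2):
    """Four coupled arcs: two of radius R1, two of radius R2."""
    return 2 * (R1 * a1 + R2 * a2)

R1, a1 = 80.0, math.radians(110.0)   # hypothetical arc radius / angle
R2, a2 = 30.0, math.radians(70.0)
L = centroid_length(R1, a1, R2, a2)

z, r_tool = 12, 20.0                 # tooth count and tool centroid radius
pitch = 2 * math.pi * r_tool / z
ratio = L / pitch
print(ratio, abs(ratio - round(ratio)) < 1e-6)  # True iff divisible
```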
The developed software executes removed-stock modeling in the process of gear teeth formation modeling, which allows us to assign the optimal tool advance and number of passes as well as to detect the cutting edge workload on the basis of its attribute-based and quantitative parameters.

Consideration of the Results

The more complex the form of the non-circular gear centroid, the more complex the geometry of the teeth profile formation of such a gear. In general, differences arise not only between all the teeth of the non-circular gear, but also between the lateral profiles of the same tooth. The solution of such a task by analytic methods requires a complex mathematical apparatus and presents significant difficulties. The inverse task of formation is even more complex in this case. Computer-aided solid modeling, realized on a virtual imitation level in automated mode, allows us to observe the formation of transition curves and undercuts. It presents an opportunity to envision the design and manufacturing conditions under which they appear, and allows us to conduct the respective research and introduce the required corrections into the kinematic scheme of formation. Since a complex form of the non-circular gear centroid corresponds to a complex kinematic formation scheme, it is essential to refine and develop new approaches and algorithms of solid computer-aided formation.

6. Conclusions

1. A geometrical model for the task of profiling a non-circular gear whose centroid comprises interconnected arcs is proposed. A calculation of the parameters of the generating wheel with intermittent teeth, as well as of the workpiece centroid, is performed. Such a calculation is effective only in case the sought tooth profile consists exclusively of envelopes of families of circular gear profiles.

2. The automated solution to the task of non-circular gear profiling is acquired on the basis of solid modeling. The profiling is executed on a virtual imitation level on the basis of algorithms which model the formation of a workpiece with intermittent teeth by means of a gear cutter. Such a solution presents an opportunity not only to acquire the sought profile as an envelope of a family of curves, but also to model the undercuts and transition curves, which allows us to achieve a high-grade solution to the profiling task.

3. The proposed solid modeling also executes the solution of the inverse task of formation in automated mode. The inverse task constitutes acquiring a profile of a gear with intermittent teeth given the previously acquired model of a non-circular gear. This modeling presents an opportunity to validate the solution of the assigned task, to adjust the initial data if needed, and to exclude the need for expensive full-scale experiments.

4. The developed software presents an opportunity to create solid models of the removed stock, by analyzing which it is possible to solve technological tasks, including the assignment of the optimal tool advance and number of passes.
RAM-VO: Less is more in Visual Odometry

Building vehicles capable of operating without human supervision requires the determination of the agent's pose. Visual Odometry (VO) algorithms estimate the egomotion using only visual changes from the input images. The most recent VO methods implement deep-learning techniques using convolutional neural networks (CNN) extensively, which add a substantial cost when dealing with high-resolution images. Furthermore, in VO tasks, more input data does not mean a better prediction; on the contrary, the architecture may have to filter out useless information. Therefore, the implementation of computationally efficient and lightweight architectures is essential. In this work, we propose RAM-VO, an extension of the Recurrent Attention Model (RAM) for visual odometry tasks. RAM-VO improves the visual and temporal representation of information and implements the Proximal Policy Optimization (PPO) algorithm to learn robust policies. The results indicate that RAM-VO can perform regressions with six degrees of freedom from monocular input images using approximately 3 million parameters. In addition, experiments on the KITTI dataset demonstrate that RAM-VO achieves competitive results using only 5.7% of the available visual information.

I. INTRODUCTION

Autonomous vehicles have attracted significant attention in the last few years. These vehicles require a proper perception and understanding of the world to determine their localization. In these scenarios, Visual Odometry (VO) methods provide a solution by estimating the egomotion using only visual changes from the input images. These methods require the environment to have sufficient light, the objects to have texture, and the subsequent images to overlap. However, traditional VO methods still present severe issues in real-world environments due to sudden changes in the agent's speed and changes in the scene such as illumination, shadows, occlusions, and simultaneous motion of numerous objects [1]. In recent years, Deep Learning (DL) appeared as a novel way to learn, directly from the data, the various nonlinear factors that influence scene generation and motion [2]. DL methods commonly outperform either direct or indirect hand-crafted solutions and traditional learning methods that usually suffer from non-linearities [3], [4], [5]. However, deep learning methods for visual odometry make extensive use of convolutional neural networks (CNN), which add a substantial cost when dealing with high-resolution images. Further, more input data does not mean a better prediction; on the contrary, the network may have to learn how to filter out useless information. Therefore, the implementation of computationally efficient and lightweight architectures, especially for mobile devices, has attracted significant interest in approaching the problem from a new perspective. Though capturing only the necessary information is fundamental, learning where to look requires elaborating several cognitive concepts, such as attention. In this context, the Recurrent Attention Model (RAM) [6] has emerged as a novel architecture, which implements a recurrent attentional glimpse, incorporating the attention concept by incrementally selecting the essential pieces of information. One of RAM's main advantages is employing Reinforcement Learning (RL) to guide the glimpse sensor through the image; the RL paradigm allows the model to learn a more robust and efficient policy by trial and error.
However, RAM was introduced mainly as a proof of concept, implemented only for classification tasks on the MNIST dataset [7]. RAM also uses the REINFORCE rule [8] to guide the glimpse sensor, but this algorithm presents convergence problems and slowness in challenging scenarios. Therefore, in this work, we propose a monocular end-to-end visual odometry architecture, RAM-VO, that employs the reinforcement learning paradigm to train an attentional glimpse sensor over time. The proposed RAM-VO architecture extends RAM by introducing spatial and temporal elements to enable 6-degree-of-freedom (6-DoF) pose regression in real-world sequences. Furthermore, RAM-VO is more computationally efficient than similar VO methods due to the addition of attentional mechanisms and the use of Proximal Policy Optimization (PPO) [9] to learn a robust policy. To the best of our knowledge, this is the first architecture for visual odometry that implements reinforcement learning in part of the pipeline.

A. Contributions

This work provides the following contributions: • A lightweight VO method that selects the important input information via attentional mechanisms; • The first visual odometry architecture that implements reinforcement learning in part of the pipeline; • Several experiments on KITTI [1] sequences demonstrating the validity and efficiency of RAM-VO.

II. RELATED WORK

The visual odometry field has seen a massive increase in architectures and publications in recent years. In 2015, Konda et al. [10] proposed the first architecture in the field, implemented as an end-to-end CNN model to estimate direction and velocity from raw stereo images. In the same period, extracting the optical flow was a common practice to initialize the models [11]. Subsequent architectures coupled LSTM layers to provide temporal representation [12]. In 2017, Wang et al. [13] proposed DeepVO, an end-to-end monocular architecture capable of extracting visual features directly from the raw input images with a CNN and determining temporal relations with an LSTM. After that, several supervised learning methods appeared to tackle distinct issues in the field. Peretroukhin and Kelly [14] proposed the DPC-Net architecture, which aims to integrate the representation capabilities of deep neural networks with the efficiency of geometric and probabilistic algorithms. DPC-Net implements a CNN-based architecture to learn corrections for the pose estimator. Zhao et al. [15] propose the L-VO architecture, which predicts the 6-DoF pose from 3D optical flow for monocular VO. Valada et al. [16] proposed the VLocNet architecture, which is capable of estimating the 6-DoF pose in a monocular setup; their architecture fuses relative and global RCNN-based architectures to improve accuracy. Saputra et al. [17] proposed to distill knowledge from a pose regressor, employing concepts like Knowledge Distillation (KD) to train two networks jointly. Saputra et al. also proposed the first Curriculum Learning (CL) architecture, called CL-VO [18], aiming to learn the scene geometry for monocular VO by gradually increasing the task's difficulty. Several methods started to use concepts of attention in their pipeline in recent years, mainly for unsupervised learning. In 2020, Damirchi et al. [19] explored the concept of self-attention to extract meaningful features in complex scenarios, which usually have many moving objects and low texture. Also, Kuo et al.
[20] proposed the DAVO architecture, a dynamic attention-based visual odometry composed of two attentional networks. The first network generates semantic masks that determine the weight each piece of the input image should have, while the second network uses a squeeze-and-excite attentional block. Attentional concepts are also employed to support salient feature extraction from the input images. Liang et al. [21] propose the SalientDSO architecture, which applies attention in the Direct Sparse Odometry (DSO) [22] algorithm. The proposed method runs in two distinct modules: the first detects visual salience using SalGAN, and the second performs visual odometry with DSO. The drift error is substantially decreased compared to the original DSO due to the improved sampling. Chen et al. [23] also proposed an end-to-end CNN+LSTM salient-feature attention and context-guided network for robust visual odometry. Their architecture can be trained using only monocular images and aims to decouple rotational and translational motion. Deep learning methods demand the adoption of large and representative datasets; conversely, lightweight and efficient methods are fundamental to the field. Reinforcement Learning (RL) and attention applied to visual odometry can provide a solution in such scenarios. The architecture becomes efficient by selecting only the necessary input data, and learning a robust policy can mitigate the drift error. However, we have not found any method that implements RL in any part of the visual odometry pipeline.

A. The Recurrent Attention Model (RAM)

The Recurrent Attention Model (RAM) [6] implements a hard attention mechanism similar to the biological visual system, which iteratively builds an informative vector through multiple observations of the input image. Hard attention forces the model to consider only the relevant elements, discarding the others entirely [24]. The observations are iteratively stored in a latent space, providing the knowledge to perform the task. The location of each observation is determined by a policy learned through REINFORCE [8]. RAM is composed of four distinct networks (Figure 2). The glimpse network f_g represents the attentional system and comprises a glimpse sensor to extract meaningful patches from the input images, and fully connected layers to encode the visual information. First, the glimpse sensor receives an image x_t and a location l_{t−1} as input. Then, several patches are extracted at different resolutions, centered at the location l_{t−1}. This process builds a pyramidal-like structure ρ(x_t, l_{t−1}) similar to biological vision, representing what was observed in the image. Finally, the glimpse network concatenates ρ_t and l_{t−1} to include the location where the information was extracted, resulting in the final vector g_t. The core network f_h stores the multiple observations by receiving the glimpse feature vector g_t and the previous internal state h_{t−1} as input at every time step t. Through fully connected layers, the core network outputs the current internal state h_t, which condenses all the sequential information provided by the glimpse network. The location network f_l generates the location l_t for the subsequent observation by sampling a two-dimensional (x, y) Gaussian distribution with a fixed standard deviation. For each subsequent observation, a new Gaussian distribution is generated by using the internal state h_t to parameterize the mean μ_t.
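A minimal PyTorch sketch of such a location network follows; the hidden size, the tanh squashing, and the clamping to the image range are choices of this sketch, not specifics given in the text.

```python
# Minimal sketch of the location network described above (PyTorch): the
# internal state h_t parameterizes the mean of a 2-D Gaussian over (x, y);
# the standard deviation is fixed, as in the original RAM. Sizes are
# illustrative.
import torch
import torch.nn as nn

class Locator(nn.Module):
    def __init__(self, hidden=256, std=0.1):
        super().__init__()
        self.fc = nn.Linear(hidden, 2)
        self.std = std

    def forward(self, h_t):
        mu = torch.tanh(self.fc(h_t))          # mean location in [-1, 1]^2
        dist = torch.distributions.Normal(mu, self.std)
        l_t = dist.sample()                    # stochastic glimpse location
        log_prob = dist.log_prob(l_t).sum(-1)  # used by the policy gradient
        return l_t.clamp(-1.0, 1.0), log_prob
```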
After all observations, the action network f_a consumes the internal state h_t to predict the class a_t, which is the ultimate goal. The hard attention mechanism requires reinforcement learning strategies to train the model. The RL setup is an instance of a Partially Observable Markov Decision Process (POMDP), in which the true state of the environment is unobserved; hence the model needs to learn a stochastic policy π(l_t | h_{1:T}; θ) that maps the environment history h_{1:T} = {x_1, l_1, …, x_t, l_t} to a distribution over the actions at time step t, restricted by the sensor. In this sense, the glimpse sensor is the agent, the whole image is the environment, and the rewards are defined according to the success of the classification. From the ground-truth values, the agent receives r_t = 1 if the class is classified correctly after T time steps, and 0 otherwise. The goal is to maximize the return G = Σ_{t=1}^{T} r_t, which is sparse and delayed. The architecture parameters θ are optimized by maximizing the return G when the agent interacts with the environment. The agent's policy, in combination with the environment's dynamics, produces a distribution over the possible interaction sequences h_{1:N}, and the goal is to maximize the return under that distribution via J(θ) = E_{p(h_{1:T}; θ)}[G]. Maximizing J exactly is not trivial because it involves an expectation over high-dimensional interaction sequences, which may involve unknown environment dynamics. However, we can obtain an approximation of the gradient with the REINFORCE rule [8] as

∇_θ J(θ) ≈ (1/M) Σ_{i=1}^{M} Σ_{t=1}^{T} ∇_θ log π(l_t^i | h_{1:t}^i; θ) (G_t^i − b_t),

where h_{1:t}^i are the sequences obtained by running the current agent policy π_θ for i = 1, …, M episodes, G_t^i is the accumulated reward obtained after executing action l_t^i, and b_t is the baseline value, which reduces the variance of the gradient updates. The baseline value b_t depends on the sequence h_{1:t}^i but not directly on the action l_t^i. As a result, the algorithm increases the log-probability of actions that generate a high cumulative reward and diminishes the probability of actions that generate a low cumulative reward. RAM is a hybrid architecture in which the location network is trained by the REINFORCE rule [8], while the other networks employ supervised learning. Although bringing innovative ideas from biology, RAM was proposed mainly as a proof of concept, lacking the necessary complexity to deal with high-resolution images and regression tasks.

III. THE RAM-VO ARCHITECTURE

The Recurrent Attentional Model for Visual Odometry (RAM-VO) was constructed by extending RAM [6] and changing the architecture's goal from classification to regression. We propose several modifications to learn a robust policy, deal with complex visual information, and enable 6-DoF pose regression; the next sections detail each modification. RAM-VO is shown in Figures 3 and 4.

A. Glimpse Network

The visual odometry task requires processing two consecutive frames in the glimpse network, allowing the detection of the same features in both images and their correspondence. In this sense, the glimpse network receives two temporally consecutive images (x_t, x_{t+1}) and a location of interest l_t as input. The images are stacked and cropped at the location provided, generating three patches of 32, 64, and 128 pixels. An average pooling operation is performed on the larger patches to reduce their size to 32 × 32 pixels. Then, the three final patches P_{1,2,3} are processed through convolutional layers, flattened, and concatenated into a temporary internal vector.
The patch locations are encoded and multiplied by the internal vector, originating the final glimpse vector g_t. In this sense, the glimpse network implements a top-down attentional mechanism by capturing small portions of visual information, fused with the information locations, to generate a final vector encoding the correspondence between the two input images.

Fig. 3: RAM-VO is composed of five subnetworks that perform specific functions. The Glimpse allows efficient data consumption by successively observing regions of interest l_t on the input images. The Core sequentially integrates these observations g_t in nested LSTMs, the Locator generates the next observation's location l_{t+1} by sampling a Gaussian distribution parametrized by the internal state h_t, and the Regressor predicts the 6-DoF pose regression in the end.

Generating an efficient representation g_t for the input images is closely associated with learning the geometry present in the scene. Therefore, based on FlowNetS [25], we propose to apply the convolutional operations on both images simultaneously to learn the optical flow, considerably reducing the computational cost compared to extracting the features separately. In this sense, the glimpse network uses 6 CNN layers with 32 to 128 channels for the patch P_1, and 4 CNN layers with 32 to 64 channels for the other two patches P_{2,3}. We observed that convolutional operations with smaller kernels significantly improve the representation; therefore, we process the patch P_1 with a kernel size of 3×3 pixels and the others with a 5×5 kernel. Zero-padding is used to avoid reducing the input size across the CNN layers, and we remove pooling operations entirely. We perform dimensionality reduction by varying the kernel stride between 1 and 2.

Fig. 5: The glimpse sensor extracts three patches from the input image at a given location (top row). Each patch has an increased size, forming a pyramidal-like structure. The first patch P_1 is cropped at 32 × 32 pixels; the others, P_{2,3}, are each twice as large as the previous one (bottom row).

Glimpse Scales. The extraction of the image patches is performed at three different scales, simulating the human visual system [6]. The first scale corresponds to a central, high-resolution region of smaller dimension, while the second and third scales present larger dimensions but lower resolutions (Figure 5). This pyramidal-like structure provides a trade-off between the amount of information and the computational cost. Therefore, the agent can observe the environment's details at the center and also the most salient elements present at the boundaries. Moreover, the peripheral information helps the agent determine the next location of interest l_{t+1} for subsequent observations.

B. Core Network

The core network is responsible for integrating all observations from the input images and providing the internal state h_t for the pose regression and for generating the next location of interest l_{t+1}. Therefore, the observation g_t received from the glimpse network is recurrently integrated, updating the internal state h_t at every step. This process is repeated according to the number of steps previously defined. In this sense, to track long-term dependencies and better represent the internal state, we adapted the core network to include two stacked LSTM layers with 1024 hidden units. LSTMs also diminish the problem of vanishing gradients during training, stabilizing the model.
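A minimal PyTorch sketch of the pyramidal extraction described above; the zero-padding at the borders and the integer pixel location are implementation choices of this sketch.

```python
# Sketch of the pyramidal glimpse extraction: three concentric patches of
# 32, 64, 128 pixels around (cx, cy), taken from the stacked frame pair,
# with the larger two average-pooled down to 32x32.
import torch
import torch.nn.functional as F

def glimpse_pyramid(frames, cx, cy, base=32):
    """frames: (C, H, W) tensor holding the two stacked grayscale frames."""
    patches = []
    for scale in (1, 2, 4):                      # 32, 64, 128 pixels
        size = base * scale
        pad = size // 2
        padded = F.pad(frames, (pad, pad, pad, pad))   # zero-pad borders
        patch = padded[:, cy:cy + size, cx:cx + size]  # centered crop
        if scale > 1:                            # reduce to 32x32
            patch = F.avg_pool2d(patch.unsqueeze(0), scale).squeeze(0)
        patches.append(patch)
    return patches                               # three (C, 32, 32) patches
```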
The ability to generate efficient RL policies is directly dependent upon the representation of the internal state h_t, especially during the first epochs when high exploration is desired; hence, we initialize the weights for h_t orthogonally.

C. Locator and Baseliner Networks

The locator network provides the next location of interest l_{t+1} by sampling a Gaussian distribution whose mean and standard deviation are parametrized by the internal state h_t. The first location is defined randomly, and the others are sampled according to the learned policy. Unlike the original RAM, the policy's standard deviation is also learned during training, promoting exploration in the first epochs. The locator network is detached from the supervised graph and trained separately by the REINFORCE rule [8]. We jointly train a baseliner network to provide the state value b_t for each step, reducing the variance between the returns. Both the locator and baseliner networks are composed of fully connected layers with 256 to 32 hidden units.

D. Regressor Network

The regressor network generates the pose prediction after the last observation is integrated into the internal state h_t. The predictions are decoupled into rotational and translational components, comprising the orientation ϕ with the Euler angles roll φ, pitch θ, and yaw ψ, and the position p, composed of the coordinates x, y, and z. The main goal is to regress the 6-DoF pose vector [φ, θ, ψ, x, y, z]^T for every pair of frames. In this sense, the regressor network comprises three fully connected layers from 256 to 32 hidden units; the last layer provides linear outputs for the prediction.

E. Loss and Reward Function

RAM-VO is a hybrid architecture where the regressor, core, glimpse, and baseliner networks are trained in a supervised learning fashion, while the locator is trained by reinforcement learning. The supervised loss and the reward function are both defined in terms of the MSE to minimize outliers as much as possible, since a single poor prediction can harm the entire trajectory. Therefore, the supervised loss L is defined as

L = MSE(p̂, p) + k · MSE(φ̂, ϕ),

where p̂ and φ̂ are the position and orientation predictions, respectively; p and ϕ are the ground-truth values; and k is the constant factor weighting the two losses, favoring the rotational or the translational component; we kept the ground-truth values normalized and k = 1. The reward function R is defined as R = −L. We prefer not to bias the RL agent towards a specific behavior; therefore, only the visual odometry error L is employed in the reward function.

F. Proximal Policy Optimization (PPO)

The REINFORCE rule [8] is known for presenting convergence issues and slowness; this occurs due to sudden updates to the policy's parameters, which can harm the entire training by converging to suboptimal solutions. Proximal Policy Optimization (PPO) [9] aims to attenuate these problems by updating the policy inside trusted regions. Therefore, PPO's surrogate function determines that the current policy must be close to the last one, avoiding large parameter shifts. In practice, the PPO implementation consists of replacing the locator and baseliner networks with a similar structure in terms of layers and hidden units. We also use memory replay to enable policy refinement with already-sampled data, improving the architecture's efficiency. The policy refinement proportion is 20:1 with respect to the supervised network; we want the best policy to control the input information flow.
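For reference, a minimal sketch of PPO's clipped surrogate objective follows; the clipping threshold of 0.2 is the common default rather than a value stated here, and the advantage corresponds to G_t − b_t from the baseliner.

```python
# Sketch of PPO's clipped surrogate (PyTorch). ratio is
# pi_theta(l_t|h_t) / pi_theta_old(l_t|h_t), computed from log-probs.
import torch

def ppo_loss(log_probs, old_log_probs, advantages, eps=0.2):
    ratio = torch.exp(log_probs - old_log_probs)
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1.0 - eps, 1.0 + eps) * advantages
    # Maximize the surrogate => minimize its negative
    return -torch.min(unclipped, clipped).mean()
```

Clipping keeps each update inside a trusted region around the previous policy, which is what stabilizes training relative to plain REINFORCE.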
A. Dataset

In this work, we used the KITTI dataset [1], one of the most popular datasets for evaluating visual odometry methods. The entire dataset consists of 22 sequences (39.2 km) of real-world traffic data captured by a car moving across urban and rural areas in Germany. However, only the first eleven sequences (00-10) have ground-truth information. We used the grayscale images provided by the left camera, resized to 1200 × 360 pixels. In order to compare our results with other methods, we chose sequences 0, 2, 4, 5, 6, 8, and 9 for training, sequence 10 for validation, and sequences 3 and 7 for testing. We did not use sequence 1 due to the higher average variation in the translational component. The data used comprise 18,990 images for training, 1,200 for validation, and 1,902 for testing. We did not use data augmentation. We preprocessed each image by equalizing the pixel intensity histogram in small windows of 8×8 pixels with the Contrast Limited Adaptive Histogram Equalization (CLAHE) method. In this way, we can highlight the image's features without increasing noise. We also normalized the images with the z-score function before they enter the glimpse network.

B. Evaluation Metrics

The most common evaluation metrics for visual odometry compute the agent's absolute trajectory error (ATE) and the relative pose error (RPE). These metrics are commonly reported for the entire trajectory by computing the root mean square error (RMSE) over all frames. The Absolute Trajectory Error (ATE) is used to assess the method's global consistency. We compare the estimated pose with the ground-truth pose for each frame. However, the poses are usually specified in arbitrary coordinate frames and must be aligned before being compared. The absolute pose error at instant i is given by

E_i = G_i^(-1) A H_i,

where G_i is the ground-truth pose at instant i, H_i is the estimated trajectory pose at instant i, and A is the best alignment transformation. The Relative Pose Error (RPE) computes the error using only the relative relationship between frames, which avoids the issues associated with a global frame comparison. In this sense, RPE measures the local consistency of the trajectory and is a reliable metric for the drift. The relative pose error at instant i is given by

F_i = (G_i^(-1) G_(i+k))^(-1) (H_i^(-1) H_(i+k)),

where k is a fixed time interval that determines the accuracy of the trajectory consistency; for visual odometry, k = 1 is usually used. We compute RPE by averaging over all sub-sequences ranging from 100 to 800 meters in KITTI's sequences.

C. Hardware and Hyperparameters

We implemented this work in Python 3 with PyTorch. The hardware used for building and training the models was an Intel Core i7-10700KF @ 3.80 GHz, an Nvidia RTX 2060 with 6 GB, and CUDA v11.1.

Fig. 6: The temporal sequence of eight observations on the first frame of sequence 2. The glimpse sensor exploits high-gradient regions in most captures, such as corners and edges.

The model configuration consisted of 3 image patches of 32×32 pixels, a batch size of 128, a supervised learning rate of 1 × 10^-4, and an RL learning rate of 1 × 10^-6; we employed the Adam optimizer for both networks. We trained the models for 400 epochs without early stopping or learning rate decay. The average training time was around 13 hours. The inference time is 35 ms for a pair of frames.

V. EXPERIMENTAL RESULTS

We conducted several experiments (Table I) to provide a better understanding of the RAM-VO behavior on complex sequences in the KITTI dataset [1].
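To illustrate the RPE computation above, here is a small NumPy sketch over 4×4 homogeneous pose matrices; it reports the RMSE of the translational part only and is meant as an illustration of the formula, not as the KITTI evaluation tooling.

```python
import numpy as np

def relative_pose_error(gt, est, k=1):
    """RMSE of the translational part of the relative pose error.

    gt, est: sequences of 4x4 homogeneous pose matrices.
    k: fixed frame interval (k = 1 for visual odometry).
    """
    errors = []
    for i in range(len(gt) - k):
        # Relative motion over k frames for ground truth and estimate.
        gt_rel = np.linalg.inv(gt[i]) @ gt[i + k]
        est_rel = np.linalg.inv(est[i]) @ est[i + k]
        # Error transform F_i = (G_i^-1 G_{i+k})^-1 (H_i^-1 H_{i+k}).
        f = np.linalg.inv(gt_rel) @ est_rel
        errors.append(np.linalg.norm(f[:3, 3]))  # translational component
    return np.sqrt(np.mean(np.square(errors)))
```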
First, we validated the number of glimpses/observations needed to build a usable internal state h_t for visual odometry. Thus, we varied the number of observations from 1 to 12 and also tested with random observations. Second, we replaced the REINFORCE algorithm with PPO to evaluate the impact of the policy on generalization; finally, we varied the internal state h_t capacity from 1024 to 256 hidden units to evaluate the impact on the drift error.

A. Number of Observations

The first experiment consisted of making a single observation at the image's center. The error metrics indicate that the model overfits the training sequences and cannot generalize, possibly by learning appearance instead of geometry. The information captured from only a central region facilitates learning the scale, since it tends to be constant for most frames; however, some frames show different behaviors (e.g., obstructions, reflections) that quickly degrade the trajectory prediction. Also, a single capture at the same location reduces the data diversity necessary to learn complex behaviors; the model tends to memorize the single observation instead of learning the general dynamics. The second experiment consisted of a single observation at a random location. Although a single random observation provides more data diversity during training, it still captures little, sparse information, affecting scale learning and hampering robust predictions. Furthermore, good predictions with random observations require learning the general dynamics, since the input space is ample and the model's capacity is limited. Finally, we highlight that these experiments did not use reinforcement learning, since the location is already determined; and a single observation did not provide enough information for generalization. The subsequent experiments aim to determine the impact of the observations and, consequently, of the learned policy's ability to select informative patches and the core network's ability to integrate them. Therefore, we set the number of observations to 4, 8, and 12; all locations are determined by the policy. The model achieved the best generalization results with 4 and 8 glimpses; 12 glimpses did not provide better results, as more data does not necessarily mean better predictions. More observations demand more from the core and locator networks, since a single poor observation can harm the entire internal state and further delay the already sparse reward. Considering the experiment with 8 observations, the totality of input information is 5.7% of the total available. We conclude that the agent is retrieving highly informative patches for most observations, consisting of edges and corners (Figure 6). Also, the learned policy displayed the traditional Gaussian pattern, indicating a preference for observing the right-center portion of the image (Figure 7).

B. Experiments with PPO

The following experiments (Table II) evaluate the results achieved by replacing the REINFORCE algorithm with PPO. We also investigated the impact of reducing the internal state h_t capacity from 1024 to 256 hidden units. The core network accounts for most of the model's parameters; therefore, knowing the minimum capacity required to achieve good results is crucial for delivering lightweight models. For all experiments, we captured 8 glimpses and computed the statistics over three distinct executions. PPO with 1024 hidden units provided the best generalization, mainly due to the increased capacity.
We observed that decreasing the number of parameters increases the relative error during training, and the generalization is strongly affected on average. Although PPO 1024 had the best performance, the difference in the results' quality may not justify a threefold increase in parameters. The PPO algorithm is sensitive to initialization; therefore, we selected the best models of three executions to predict the trajectories (Figure 8). PPO 256 has only 2.92 million parameters and provides results compatible with PPO 1024 on the best execution. In conclusion, we found that the PPO algorithm provided a slightly better generalization capacity than the REINFORCE algorithm. PPO learned a more centered policy, although very similar to the one learned by the REINFORCE algorithm.

C. Comparison with Literature

Our best RAM-VO with PPO obtained competitive results (Table III) using less input information than similar methods, around 5.7% of the total available, considering 8 glimpses with a size of 32 × 32 pixels. While RAM-VO uses top-down attention to capture regions of interest, methods like ORB-SLAM [26] need to analyze an entire image to detect keypoints. Besides, ORB-SLAM is a geometric method that depends on high-texture regions for an accurate keypoint match between frames. DeepVO [13] and ESP-VO [27] are both based on the FlowNet [25] architecture and therefore have more convolutional layers and channels than RAM-VO. These architectures perform direct VO by determining the frames' correspondence from the pixel values; consequently, they are more robust to outliers than ORB-SLAM. However, direct methods are costly, especially DeepVO and ESP-VO, which use the entire image as input. In visual odometry, the motion is present in the whole image; hence, more data does not necessarily bring novel information. The model's capacity is also a relevant factor to be considered. Although DeepVO and ESP-VO may present similar results on average, RAM-VO with 256 hidden units in the core network achieves comparable results with only 2.7 million parameters, which is much lower than the 17 million parameters reached by RAM-VO with 1024 hidden units. Concurrent methods regularly exceed 32 million parameters, especially when they are extensions of architectures like AlexNet [28] and FlowNet [25]. These CNN networks are considerably deep and add a high cost in terms of trainable parameters, making training slow and deployment a complicated task for mobile devices. Learning-based methods tend to be slow and costly to run because they require high-end devices with large processing power, GPUs, and better batteries; these issues make adopting learning methods problematic. By contrast, the results presented by ORB-SLAM are worse than those of learning-based methods, but it is fast enough to support online applications on mobile devices. In this context, RAM-VO represents an alternative capable of providing results similar to large models but with a smaller cost in trainable parameters and data consumption.

D. Limitations

RAM-VO has difficulty obtaining the world scale and determining the translational motion with high precision. This issue probably arises from the small image patches, which, depending on the vehicle's velocity, can prevent overlap between frames, compromising the regression since it relies on the features' correspondence. We aim to address these problems in future formulations.
VI. CONCLUSION

In this work, we proposed the RAM-VO model for monocular end-to-end visual odometry. Our model extends RAM [6] and therefore implements attention and reinforcement learning to optimize the selection of visual information for regression tasks. RAM-VO innovates with the addition of 6-DoF pose regression, a robust glimpse network to learn optical flow, an improved core network to store the temporal relations between observations, and the replacement of REINFORCE by the PPO algorithm to learn better policies. To the best of our knowledge, RAM-VO is the first architecture for visual odometry that implements reinforcement learning in part of the pipeline. The experimental results indicate that RAM-VO can predict 6-DoF poses on the real-world KITTI dataset [1] with generalization to unseen sequences. The comparison with the literature indicated that RAM-VO can achieve competitive results using significantly fewer trainable parameters and less input information. Similar learning methods consume the whole input image to determine the pose, while RAM-VO uses a small fraction, around 5.7% of the total input data.
Canonical Wnt Signaling Ameliorates Aging of Intestinal Stem Cells

SUMMARY

Although intestinal homeostasis is maintained by intestinal stem cells (ISCs), regeneration is impaired upon aging. Here, we first uncover changes in intestinal architecture, cell number, and cell composition upon aging. Second, we identify a decline in the regenerative capacity of ISCs upon aging because of a decline in canonical Wnt signaling in ISCs. Changes in expression of Wnts are found in stem cells themselves and in their niche, including Paneth cells and mesenchyme. Third, reactivating canonical Wnt signaling enhances the function of both murine and human ISCs and, thus, ameliorates aging-associated phenotypes of ISCs in an organoid assay. Our data demonstrate a role for impaired Wnt signaling in physiological aging of ISCs and further identify potential therapeutic avenues to improve ISC regenerative potential upon aging.

INTRODUCTION

Aging is a complex process, ultimately leading to a decline in tissue regenerative capacity and organ maintenance. A decline in stem cell function upon aging might be one underlying factor for aging-associated changes in stem cell-driven tissues (Rando, 2006). The intestine is a stem cell-based organ. Already in the late 1990s, Martin et al. (1998a, 1998b) reported a functional decline in the regenerative potential of aged mouse small intestine during physiological aging and in response to irradiation. These studies reported delayed proliferation and increased apoptosis in aged small intestinal crypts (Martin et al., 1998a, 1998b). However, at that time, a lack of markers for stem cells within the intestinal epithelium prevented more detailed analyses of the role of stem cell aging in aging-associated changes in the intestine. New marker systems now allow the prospective identification, purification, and analysis of intestinal stem cells (ISCs) upon aging. ISCs are located adjacent to differentiated Paneth cells at the base of cup-shaped invaginations called crypts. Above the crypt base is a highly proliferative transient amplifying zone that leads to protrusions called villi, which are primarily composed of enterocytes with intermingled secretory goblet cells and enteroendocrine cells (Barker et al., 2008). Evidence exists for a decline in regenerative function of the intestinal epithelium upon DNA damage induced by short telomeres and reactive oxygen species (ROS) (Jurk et al., 2014; Nalapareddy et al., 2010). However, the extent to which ISC function alters during physiological aging is still a matter of debate. Wnt signaling in the intestinal epithelium is well studied and critical for tissue homeostasis in young mice (Pinto et al., 2003; van der Flier et al., 2009b). Whether changes in Wnt signaling pathways contribute to changes in ISC function upon aging has so far not been determined. In this study, we show that aging results in a decline in ISC function and impaired regenerative capacity of the intestinal epithelium. Aged ISCs present with a decline in canonical Wnt signaling, and expression of canonical Wnts themselves declines in both ISCs and stroma.
This decline in canonical Wnt signaling is causative for the decline of ISC function, and, furthermore, reactivation of canonical Wnt signaling ameliorates the impaired function of aged ISCs, demonstrating that ISC aging is reversible.

Aging Alters Small Intestinal Crypt and Villus Architecture and Crypt Cell Proliferation

We first investigated changes in small intestinal architecture and histology upon aging, including crypt number, crypt size, and villus length. Histological H&E analysis of intestinal tissue from young (2-3 months old) and aged mice (20-22 months old) showed a decrease in crypt number accompanied by an increase in crypt length and width in aged compared to young intestine in both the proximal and distal regions (Figures 1A-1H). Interestingly, the length of villi and the number of cells per crypt were also elevated in aged mice (Figures S1A-S1D). Aging thus results in changes in the architecture of the small intestine. We next evaluated the extent of differences in cell proliferation in young and aged intestinal crypts and ISCs. Changes in proliferative potential have, for example, been associated with aging in muscle and hematopoietic compartments (Nalapareddy et al., 2010; Rando, 2006). Analyses of the mitotic index by phospho-histone H3 staining, which marks cells undergoing mitosis, revealed a decline in the number of mitotic cells in aged compared to young crypts (Figures 1I and 1J). To gain additional insight into the proliferative status upon aging, we performed a bromodeoxyuridine (BrdU) tracing experiment. It takes approximately 4-5 days for a progenitor cell derived from an ISC division at the crypt base to reach the tip of the villus (Barker et al., 2008). The distance of migration is determined by the speed of cell migration and the proliferation status of the ISC, as newly emerging crypt cells, upon ISC division, push older cells toward the tip of the villus. Our data revealed that the average distance BrdU-positive cells had traveled from the crypt base into the villi 72 hr post BrdU administration was larger in young mice compared to aged mice (Figures 1K and 1L), consistent with fewer mitotic events of ISCs and/or reduced transit-amplifying cell proliferation rates upon aging. We further investigated changes in the expression of cell cycle regulators and the level of apoptosis upon aging, both of which might be linked to the decline in the number of mitotic crypts. Consistent with a reduced rate of mitotic progression, expression of CDKN1C (p57) was reduced in aged intestinal crypts, whereas other cell cycle regulators like CDKN1A (p21), CDKN1B (p27), CDKN2A (p16), and Cyclin D1 showed no significant change upon aging (Figure S1E) (all FAM-labeled real-time PCR primers are listed in Table 1). We also observed an increase in terminal deoxynucleotidyl transferase dUTP nick end labeling (TUNEL)-positive (apoptotic) cells in aged mouse intestinal crypts (Figures S1F and S1G). Taken together, these results indicate that aging alters the crypt and villus architecture and that aged crypts exhibit reduced cell divisions and reduced survival.

Aging Affects ISC Markers and Canonical Wnt Signaling

To investigate whether there are changes in ISC number upon aging, we analyzed intestinal tissue from young and aged Lgr5-eGFP-IRES-CreERT2 reporter mice (Barker et al., 2007), as Lgr5 is an established marker of ISCs (Barker et al., 2007).
Interestingly, we did not observe a significant change in the number of crypts positive for Lgr5-EGFP (Figures 2A and 2B), in the percentage of Lgr5-EGFPhi cells (as determined by flow cytometric analyses) (Figure S2A), or in the total number of Lgr5-EGFPhi ISCs (Figure 2C) at the crypt base at positions 0 to +4 (Barker et al., 2007) in aged compared to young animals. Our observation of no changes in ISC numbers upon aging is consistent with a previous report investigating aging-associated changes in crypts based on continuous clonal labeling approaches (Kozar et al., 2013). Because of the mosaic nature of the Lgr5-EGFP marker, which might hamper detailed quantitative analyses, we used another well-established ISC marker, Olfm4 (van der Flier et al., 2009a), to study ISCs upon aging. In situ hybridization experiments indicated that Olfm4-positive cells are present at the crypt base in both young and aged intestinal crypts. Olfm4 RNA expression levels were also similar in young and aged intestinal crypts (Figures 2D and 2E). In addition, we determined the expression levels of other markers more recently assigned as specific for ISCs in crypts. Expression analyses of published +4 quiescent ISC markers (Lrig1, Hopx, Sox9, Tert, and Bmi1) revealed that the expression of Lrig1 and Tert was reduced (Figure 2F), whereas the expression levels of Hopx, Sox9, and Bmi1 (Figure 2F) did not change upon aging. Lrig1 controls ISC proliferation, and Tert is involved in the maintenance of stemness of stem cells in the intestine and other stem cell compartments (Montgomery et al., 2011; Nalapareddy et al., 2008). Together, these data imply that aging does not result in a change in the absolute number of ISCs, though some intestinal stem cell markers (Lrig1 and Tert) present with a decrease in expression. To initially evaluate the extent of changes in ISC function upon aging, we performed a short-term lineage tracing experiment on young (2-3 months old) and aged (22-24 months old) Lgr5-eGFP-IRES-CreERT2:Rosa26-YFP mice (Figures 1K and 1L). One week after yellow fluorescent protein (YFP) activation, Lgr5-EGFP-positive cells from crypts of young animals presented with YFP tracing into the villus, whereas the YFP-marked villus was subtle in aged mice (Figures S2B-S2D). Analyses 4 weeks after YFP activation in ISCs showed results similar to those at the 1-week time point (Figures 2G and 2H), indicating impaired ISC function upon aging. Similar results were obtained with 3 days of YFP activation followed by analysis 1 week after YFP activation (Figures S2E-S2G). In aggregate, these data imply that, rather than the number, the function of ISCs might be altered upon aging. To delineate the molecular mechanisms of aging associated with ISCs or niche cells (Paneth cells), RNA sequencing (RNA-seq) analyses were performed on isolated Lgr5-EGFPhi-positive ISCs (Figure S3A) and CD24hi-positive Paneth cells (Figure S3B; Kim et al., 2014; Sato et al., 2011). RNA-seq analysis revealed changes in the gene expression profiles of both Lgr5 intestinal stem cells and CD24hi Paneth cells (Figures S3C and S3D). Molecular processes identified by Gene Ontology (GO) terminology that were downregulated in aged ISCs included, as anticipated, cell proliferation but also extracellular matrix, PPAR and SMAD signaling, and Wnt signaling pathways (Figure 3A; Figure S4A).
We subsequently focused on changes in Wnt signaling upon aging because of its prominent role in the regulation of young ISCs. As expected and known from young animals (Farin et al., 2012), the level of expression of Wnt ligands was higher in Paneth cells irrespective of their age, whereas the expression of Wnt target genes was higher in ISCs compared to Paneth cells irrespective of their age (Figure 3B). Quantitative real-time RT-PCR analyses for expression of Wnt1, 2, 2b, 3, 3a, 8a, 8b, 10a, and 10b (Farin et al., 2012) in ISCs demonstrated reduced Wnt3 levels (Figure 3C) in aged ISCs. Paneth (niche) cells also presented with reduced Wnt3 expression levels (Figure 3D). However, expression of other Wnts (like Wnt1, 2, 3a, 8a, 8b, 10a, and 10b) in both ISCs and Paneth cells was below our threshold level (data not shown). Mesenchyme has recently been identified as an ISC niche and support system for ISCs (Farin et al., 2012; Smith et al., 2012). Quantitative analyses of the expression of canonical Wnts in mesenchyme (in the absence of crypt epithelium) from young and aged mouse small intestine also revealed a decline in Wnt3 but not, for example, in Wnt2b and Wnt2 (Figures 3E and 3F; Figure S4B), whereas the other Wnts tested were, again, below our threshold level of detection (data not shown). Finally, canonical Wnt signaling target genes and genes regulating ISC function, like β-catenin, Ascl2, Lgr5, Myc, Ephb2, and CD44 (van der Flier et al., 2009b), presented with a decline in the level of expression in ISCs but, interestingly, not CyclinD1, Axin2, or Olfm4 (Figure 3G). Similar data were obtained when young and aged crypts were analyzed for levels of gene expression, like a reduction in the expression of Wnt3, β-catenin, Ascl2, and Lgr5 upon aging; the exception was Axin2, which was also downregulated in the whole-crypt analysis (Figures S4C and S4D). At the protein level, Ascl2 and nuclear β-catenin were reduced (Figures S4E and S4F) in aged intestinal crypts. Together, these data imply a decline of canonical Wnt signaling in ISCs upon aging, which is linked to a decline in expression of canonical Wnts like Wnt3 in both Paneth cells and the mesenchyme as well as in ISCs themselves. Notch signaling, together with Wnt signaling, regulates ISC differentiation (Tian et al., 2015). We also observed a decline in the expression of Notch1 (Figure 3H) (expression of Notch2, 3, and 4 was not detected in ISCs; data not shown) and an increase in Atonal homolog 1 (Atoh1) (Figure 3I) gene expression. Atoh1 is a secretory-specific transcription factor described to control lateral inhibition through Delta-like Notch ligand genes in young crypts and also to drive the expression of numerous secretory lineage genes (Kim et al., 2014). Therefore, these data also suggest altered ISC differentiation upon aging.

Aging Affects Differentiation in the Intestinal Compartment

The aging-associated changes in canonical Wnt and Notch signaling might result in changes in the differentiation potential of aged ISCs. We thus quantified the number of goblet and Paneth cells in the aged intestine. Goblet and Paneth cells stem from ISCs. The number of Paneth cells per crypt, as determined by lysozyme or MMP7 staining, in both the proximal and distal mouse intestine was increased upon aging (Figures 4A and 4B; Figures S4G and S4H). The number of goblet cells (determined by Alcian blue staining), a secretory cell type,
was also increased in aged compared to young mouse intestine (Figures 4C and 4D). Our finding of an increase in the number of differentiated secretory cells upon aging is consistent with reduced Notch signaling along with an increase in the expression levels of Atoh1, which all favor ISC differentiation. Aging thus also results in an increase in the number of secretory lineage cells, including Paneth cells and goblet cells, most likely driven by changes in the differentiation potential of aged ISCs. This increase in Paneth cell number, though, does not compensate for the overall lower level of expression of Wnt3 upon aging, because overall Wnt signaling is reduced in aged ISCs.

Aging Attenuates the Regenerative Capacity of ISCs

Aged muscle and hematopoietic stem cells present with reduced regenerative potential (Geiger et al., 2013; Rando, 2006; Rossi et al., 2008). Whether aging also results in a decline in ISC regenerative potential in vivo was addressed in assays determining the regenerative response to ionizing radiation. Irradiation of the intestinal epithelium is accompanied by crypt shrinking because of apoptosis, followed by a burst of proliferative response predominantly from existing/surviving ISCs (Metcalfe et al., 2014), usually leading to an increase in crypt depth followed by crypt fission. Analysis of the mouse small intestine 5 days after 10 Gy of irradiation (Figure 5A) showed a higher number of Ki67-negative, or non-proliferative, crypts (Figures 5B and 5C) in aged compared to young intestine. These findings can be explained by an increase in apoptosis (Figures S5A and S5B) or by a delayed regenerative response. To further delineate the extent of a change in regenerative function of ISCs upon aging, we employed two consecutive doses of 10-Gy irradiation 24 hr apart (Figure 5D; referred to as 10+10 Gy) to model additional serial stress and induction of regeneration (Geiger et al., 2013). We observed a strong decline, 3 days after 10+10 Gy irradiation, in the number of viable crypts in the young intestinal epithelium compared to the non-irradiated control (Figures S5C and S5D). We also detected a marked increase in crypt depth and crypt fission in young but not in aged intestines on day 5 after irradiation (Figures 5E-5H). In addition, ~50% of aged mice died by day 5 in response to 10+10 Gy compared to only 12% of young animals (data not shown). There was no difference in the number of viable crypts between young and aged mouse intestines on day 5 after irradiation (Figures S5C and S5D). These data support that young ISCs exhibit a greater regenerative potential than aged ISCs.

Restoring Canonical Wnt Signaling Ameliorates Aging of ISCs

To further investigate the extent of altered ISC regenerative potential upon aging, we determined the frequency of organoid formation of young and aged duodenal (proximal) crypts (Sato et al., 2009). The organoid system is an accepted ex vivo assay that is reflective of stem cell function in vivo (Boj et al., 2015). The ability to form organoids depends primarily on ISC function (Barker et al., 2007; Sato et al., 2009). Organoids derived from both young and aged intestinal epithelium were initially formed with equal efficiency (Figure S6A). Organoids from aged mice, though, had a reduced rate of organoid formation after the third passage (Figures 6A and 6B). In addition, the number of lobes or buds per crypt, another indicator of stem cell function, was lower in replated organoids from aged intestine (Figure 6C).
Finally, organoids derived from crypts of young mice were able to form organoids through the termination of the assay at the eighth replating, whereas organoids from aged mice showed a severe decline in replating efficiency after the fourth split. These data demonstrate a decline in the regenerative potential of aged ISCs and are reminiscent of the loss of repopulating activity of aged hematopoietic stem cells (HSCs) upon serial transplantation (Kamminga et al., 2005). If this decline in ISC function upon aging were a consequence of the decline in canonical Wnt signaling (Figures 3C and 3G), then restoration of canonical Wnt signaling in aged ISCs might improve their regenerative potential. Addition of Wnt3a, an inducer of canonical Wnt signaling, to aged organoid cultures resulted in an increase in the number of organoids and an increase in the number of lobes/buds in organoids derived from aged animals compared to non-treated aged control organoids (Figures 6D-6F), almost to the level seen in cultures of young organoids. Addition of Wnt3a to aged organoid cultures resulted, as expected, in elevated levels of expression of the canonical Wnt target genes Axin2 and Ascl2 in aged organoids (Figure S6B). Re-activation of canonical Wnt signaling in aged crypts thus reestablished a more youthful regenerative potential in aged ISCs. We finally investigated the extent of the aging-associated decline in human ISC function. Organoid cultures from young (12-16 years old) and aged (62-77 years old) human subjects showed, similar to the mouse, a decline in the frequency of organoid formation upon aging (Figures 6G and 6H) that was ameliorated by addition of Wnt3a (Figures 6G and 6H). These data demonstrate an important role of reduced canonical Wnt signaling in aged ISCs in mice and humans that is tightly linked to reduced regenerative potential upon aging. Enhancing canonical Wnt signaling to a youthful level might thus be one approach to restore the function of aged ISCs to a more youthful level.

DISCUSSION

Stem cell aging is one underlying cause of aging in tissues that depend upon stem cell activity in the adult (Nalapareddy et al., 2008; Rando, 2006). The laboratory of Chris Potten pioneered the field of intestinal cell biology with the finding that aging impairs regeneration of mouse intestinal epithelium (Martin et al., 1998a; Potten et al., 1974). Our studies further substantiate but also significantly extend these findings by demonstrating that, upon aging, there is decreased ISC function and regenerative potential. Although the number of ISCs was not reduced upon aging, aged ISCs showed reduced regeneration upon serial radiation exposure (10+10 Gy) and a decline in organoid formation. Phospho-histone H3 staining demonstrates that the number of cells ultimately entering mitosis (mitotic index) is reduced in aged compared with young mouse intestinal crypt cells. This conclusion is further supported by BrdU tracing experiments in which BrdU-positive cells from young intestinal crypts travel farther into the villus compared with aged ones. Ultimately, ISC lineage tracing experiments with young and aged Lgr5-EGFP-IRES-CreERT2:Rosa26-YFP animals further substantiate a decline in stem cell function and turnover upon aging. A previous report (Kozar et al., 2013) indicated that the number of ISCs and the number of stem cells replaced are age-independent, based on a specific technique labeling intestinal epithelial cells at a young age with a follow-up out to 2 years.
This is distinct from our experiments of ISC-specific labeling and tracing at distinct ages (8-10 weeks and 22-24 months of age). However, further experiments are necessary to evaluate changes in intestinal clonality upon aging in more detail, comparing whole intestinal epithelial cell-specific, ISC-specific, and niche- (Paneth cell) and enterocyte-specific labeling protocols. The exclusive decline in p57 levels among a panoply of cell cycle regulators in the aged crypt is somewhat surprising, although low levels of p57 have been associated with stem cell hibernation (Yamazaki et al., 2006), and low levels of p57 upon aging were also detected in aged hematopoietic stem cells (Florian et al., 2012). Besides a decline in Wnt signaling in aged ISCs, we also observed a decline in genes regulating extracellular matrix proteins and genes that alter cell proliferation, as well as changes in PPAR and SMAD signaling (Figure 3). Tissue damage upon aging affects the extracellular matrix, which further alters the stem cell niches regulating stem cell regenerative functions (Blau et al., 2015); changes in extracellular matrix proteins might thus further contribute to changes in the niche, although this remains to be determined. However, downregulation of cell proliferation genes could also be a consequence of Wnt signaling deregulation in ISCs (Figure 3G) and of deregulation of genes affecting stem cell function, such as Lrig1 (Figure 2F) and Ephb2 (Figure 3G), in ISCs. Another possibility is that changes affecting cell adhesion and molecular changes affecting proliferation are actually directly linked. Whether SMAD and PPAR signaling are linked to changes in Wnt signaling upon ISC aging, or vice versa, will also need further investigation. Our data imply that primarily reduced canonical Wnt signaling in ISCs causes impaired ISC function upon aging. We report here a decline in canonical Wnt signaling in ISCs and reduced canonical Wnt expression (primarily Wnt3) in both Paneth cells and mesenchyme, which might also contribute to the reduced activity of canonical Wnt signaling in aged ISCs. Because it was recently reported that Wnt3 transfer requires direct cell contact and has only a very limited range (Farin et al., 2016), the most likely cellular sources of Wnts that influence Wnt signaling upon aging are thus mostly Paneth cells and, in part, mesenchyme. Our data also support a likely contribution of an ISC-intrinsic mechanism of changes in Wnt expression to reduced canonical signaling in ISCs upon aging. The precise molecular mechanisms that lead to reduced expression of canonical Wnts and Wnt signaling upon aging will require further investigation. It is interesting to note that Ascl2, and not primarily Axin2, seems to be the "aging" target gene of canonical Wnt signaling in ISCs, because changes in the expression of Ascl2 are closely correlated with the aging and rejuvenation phenotypes reported here. Although Axin2 is a prototypical target gene of canonical Wnt signaling, as demonstrated in multiple publications, in our study Axin2 is not the gene linked to changes in Wnt signaling in aged ISCs. It is thus a surprising finding that canonical Wnt signaling components like β-catenin, Myc, and Ascl2 present with little or no correlation to Axin2 expression. Our experiments reveal a critical role of a decline in Ascl2 expression at both the RNA and protein levels upon aging in both crypts and ISCs.
A contribution of changes in Ascl2 expression to aging is consistent with a report demonstrating a central role for Ascl2 in determining ISC fate (van der Flier et al., 2009b). This finding is also in line with recent reports suggesting Ascl2 as a central Wnt-responsive transcription factor (Schuijers et al., 2015). The increase in the number of secretory lineage cells, namely Paneth and goblet cells, also implies deregulation of stem cell differentiation pathways in aged ISCs, most likely as a response to deregulated Notch signaling, such as elevated levels of Atoh1 expression (Kim et al., 2014; Koch et al., 2013; Tian et al., 2015). Additional studies are required to delineate the detailed interplay between Notch and Wnt signaling upon aging of ISCs. Wnt signaling plays a prominent role in ISC maintenance in young animals (van der Flier et al., 2009b), and our data provide evidence that changes in Wnt signaling upon aging are causative for aging of intestinal ISCs, because a youthful level of organoid formation can be achieved by re-activating canonical Wnt signaling in aged ISCs. Because mutations leading to hyperactivation of canonical Wnt signaling are linked to intestinal tumorigenesis (Gregorieff and Clevers, 2005), lowering Wnt signaling upon aging might be, among others, a mechanism to counter hyperactivation in the case of mutations in aged intestine, though at the expense of an overall reduced regenerative potential of ISCs. Our finding of a lower level of Wnt signaling upon ISC aging is distinct from reports of aging in the muscle stem cell compartment (Brack et al., 2007), in which elevated levels of Wnts were reported to cause aging of muscle stem cells. Our findings are in line with mechanisms of aging in the HSC compartment, in which aging of HSCs is also associated with low canonical Wnt signaling (Reya et al., 2003). In summary, we demonstrate impaired ISC function in aged intestinal epithelium because of a decline in canonical Wnt signaling. Restoration of a more youthful phenotype of aged ISC function is achieved by reactivation of canonical Wnt signaling in both murine and human intestinal organoid cultures. These data suggest reactivation of canonical Wnt signaling to a youthful level as a potential therapeutic approach to restore youthfulness of ISC function and increase the regenerative capacity of aged intestinal epithelium.

EXPERIMENTAL PROCEDURES

Experimental Mice

Young (2-4 months old) male and female C57BL/6 mice were purchased from Charles River Laboratories, and aged (18-22 months old) female C57BL/6 mice from the NIA. Lgr5-eGFP-CreERT2 mice were purchased from Jackson ImmunoResearch Laboratories (C57BL/6x129/SvEv mice). Lgr5-eGFP-CreERT2 mice were crossed with Rosa26-YFP mice to obtain Lgr5-eGFP-CreERT2:Rosa26-YFP mice (both male and female) and aged for up to 2 years. All analyses were performed on the proximal (8-9 cm from the start of the small intestine) or distal part (the last 5-6 cm) of the small intestine. Animals were housed under specific pathogen-free conditions and handled in accordance with protocols approved by the Animal Care and Use Committee of Cincinnati Children's Hospital Medical Center.

Histology and Microscopy

Histological analysis was performed using H&E staining with standard histological protocols. Measurements of crypt depth and villus height were taken using ImageJ software on pictures taken with an Olympus CX41 microscope with QCapture software.
For goblet cell analysis, paraffin-embedded, 6-μm-thick samples were rehydrated, stained with Alcian blue solution (pH 2.6), and counterstained with nuclear fast red. 2- to 3-month-old young mice and 20- to 22-month-old aged mice were irradiated by the Cincinnati Children's Hospital Research Foundation (CCHRF) Comprehensive Mouse and Cancer Core Facility using the Mark I-68A cesium-137 irradiator (JL Shepherd & Associates) with one dose of 10 Gy and harvested 5 days after irradiation, or irradiated with 10 Gy followed by a second 10-Gy dose 24 hr later and harvested 3 days and 5 days after irradiation.

Immunohistochemistry and Immunofluorescence

Tissues were fixed overnight in 4% paraformaldehyde at 4°C and washed three times in PBS. Fixed tissues were dehydrated and embedded in paraffin by the Cincinnati Children's Hospital Medical Center (CCHMC) Pathology Core. Immunofluorescence was performed on 6-μm-thick paraffin sections. Sections were deparaffinized, rehydrated, and permeabilized in 10 mM sodium citrate buffer. Primary antibodies were incubated overnight at 4°C: Ki67 (Thermo Scientific, SP6, 1:100 dilution in PBS), lysozyme (Dako, 1:400 dilution in PBS), MMP7 (R&D Systems, 1:250 dilution in PBS), anti-BrdU antibody (Santa Cruz Biotechnology, 1:100 dilution in PBS), and phospho-histone H3 (Cell Signaling Technology, 1:100 dilution in PBS). This was followed by washes with PBS and incubation with secondary antibodies, anti-mouse fluorescein isothiocyanate (FITC) (Jackson ImmunoResearch Laboratories, 1:200) and anti-rabbit Cy3 (Jackson ImmunoResearch Laboratories, 1:200), for 1 hr at room temperature. For cryo-embedding, fixed tissues were incubated overnight in 30% sucrose in PBS at 4°C and then embedded in optimal cutting temperature (OCT) compound (Sakura); sections were cut at 7-μm thickness. Tissues were permeabilized in 0.3% Triton X for 10 min and washed three times with PBS, and then the same protocol as above was followed.

Crypt Isolation, Organoid Culture, ISC Isolation, and Gene Expression Analyses

Mouse small intestine was dissected and washed in cold PBS. Villi were removed by scraping with glass slides. Intestinal pieces were transferred to 5 mM EDTA in PBS (pH 8), followed by three 1-min shakings by hand with a 10-min incubation at 4°C. Intestinal pieces were removed and centrifuged at 800 rpm for 5 min, and the pellet was resuspended in PBS followed by centrifugation at 600 rpm for 2 min. Isolated crypts were used for organoid culture or frozen at −80°C for further experiments. 500 or 1,000 crypts/well were mixed with Matrigel and plated in a 24-well plate, polymerized in an incubator at 37°C for 15 min, and overlaid with 500 μL of intestinal stem cell medium: DMEM/F12 (Invitrogen), 2 mM GlutaMAX (Invitrogen), 10 mM 4-(2-hydroxyethyl)-1-piperazineethanesulfonic acid (HEPES) buffer (Sigma), 0.5 U/mL penicillin/streptomycin (Invitrogen), N2 and B27 supplements 1× (Invitrogen), 50 ng/mL mouse recombinant epithelial growth factor (Invitrogen), 100 ng/mL mouse recombinant Noggin (PeproTech), and 500 ng/mL human recombinant R-spondin1 (PeproTech). 100 ng/mL mouse recombinant Wnt3a (PeproTech) was added only to cultures for rescue experiments. For regular use, our intestinal stem cell medium did not contain Wnt3a. The medium was changed 2 days after initial plating. On the sixth day after initial plating, organoid numbers and the number of lobes per organoid were counted.
Organoids were passaged by removing the medium and dissolving the Matrigel in ice-cold DMEM; the medium with the dissolved Matrigel was then pipetted 25 times with a 200-μL pipette tip. Organoids were disrupted by passage through a 26G needle five times and replated in complete Matrigel. Crypts were counted the next day, and the numbers of organoids and lobes per organoid were counted on day 6. The percentage of organoids formed was calculated based on the number of enterospheres observed on day 1 after passaging. Crypts were passaged the same way every time, and the medium was changed every third day. Isolated crypt epithelial cells from the proximal part of mouse small intestine were used for gene expression analysis by qRT-PCR. For ISC and Paneth cell isolation, after crypt isolation, crypts were treated with 30 mL of 4% TrypLE (Invitrogen) for 30-40 min at 37°C, followed by centrifugation at 800 rpm for 5 min. The pellet was resuspended in 15 mL DMEM and centrifuged again at 600 rpm. We discarded the supernatant and sorted Lgr5-eGFPhi cells by FACS. For Paneth cell isolation, after centrifugation, cells were treated with CD24 antibody, incubated for 30 min on ice, washed with DMEM, and sorted by FACS. RNA was isolated using the QIAGEN RNeasy mini or micro kit. Fifty nanograms of RNA were used per well in a single-step TaqMan assay. For quantification of Wnt1, 2, 3, 3a, 8a, 8b, 10a, and 10b, we used primers from Farin et al. (2012). Normalization was done using β-Actin, Gapdh, or Hprt. All qRT-PCR data are from RNA isolated from crypts of the proximal part of mouse intestine. Differences were actually more severe in RNA isolated from distal crypts. Because the organoid experiments in this study were performed on the proximal part of the mouse intestine, unless otherwise mentioned, all data generated in this study are from the proximal part of mouse small intestine.

Human Organoid Cultures

Human organoids were grown as described in Mahe et al. (2015). For the medium without Wnt3a, conditioned medium was prepared without adding Wnt3a. Data are from the sixth day after initial plating of human organoids. All experimentation using human tissues described here was approved by an institutional review board (IRB) at CCHMC (IRB #2014-0427) and the University of Cincinnati (UC) (IRB #2012-4147). Informed consent for tissue collection, storage, and use of the samples was obtained from the donors at CCHMC or UC. Young refers to 12-16 years of age, and aged refers to 62-72 years of age.

TUNEL Staining

A TUNEL assay (in situ cell death detection kit, Roche) was used to measure the rate of apoptosis on 6-μm paraffin sections. The number of apoptotic cells per crypt was counted from 15-20 low-power fields (10× magnification).

BrdU Administration and Staining

BrdU (Sigma-Aldrich) was injected at 100 mg/kg body weight, and the intestine was harvested 72 hr after BrdU injection. 6-μm-thick paraffin-embedded tissue sections were deparaffinized, rehydrated, permeabilized by heating in 10 mM sodium citrate buffer, stained with BrdU primary antibody (Santa Cruz, 1:100 dilution in PBS), incubated overnight at 4°C, and incubated with anti-rat FITC-conjugated secondary antibody (Jackson ImmunoResearch Laboratories, 1:200 dilution) for 1 hr at room temperature. Pictures were taken at 10× magnification on a Zeiss Apotome microscope. The distance traveled by BrdU-positive cells was measured using ImageJ software from the crypt base to the midpoint of BrdU-positive cells in a villus.
RNA-Seq and Real-Time PCR on Isolated ISCs and Paneth Cells

RNA from FACS-sorted Lgr5-eGFPhi-positive cells and Paneth cells was isolated using the QIAGEN RNeasy micro kit (#74004) following the manufacturer's instructions. Libraries for Lgr5-eGFP RNA-seq were prepared using standard Illumina protocols. For Paneth cell transcriptome profiling, the SMARTer Stranded Total RNA-Seq Pico kit from Clontech Laboratories (#635005) was used. The kit generates Illumina-compatible RNA-seq libraries. The cDNA library construction was done as recommended by Clontech, which includes cDNA synthesis, addition of Illumina adapters and barcodes using only limited-cycle PCR, followed by depletion of ribosomal cDNA, further amplification, and purification. The generated libraries were quantitated using an Agilent Technologies Bioanalyzer, pooled, and subjected to next-generation sequencing on a HiSeq 2500 under paired-end 75-bp sequencing conditions. The data were analyzed with Strand NGS (Agilent). Following removal of primers and barcodes, raw reads were aligned to the mm10 mouse genome with annotations provided by the University of California Santa Cruz (UCSC). Quantified reads were normalized using the differential expression analysis for sequence count data (DESeq) algorithm. Reasonably expressed transcripts (at least three reads per transcript under more than one experimental condition) were assessed for differential regulation using two-way ANOVAs (p < 0.05) and fold change (FC > 1.5). Ontological enrichments were identified through the Database for Annotation, Visualization and Integrated Discovery (DAVID) Gene Ontology (GO). For quantitative real-time PCR of Lgr5-GFPhi cells, RNA was amplified and cDNA was prepared using the NuGEN Ovation RNA amplification system V2 (#3100-12). For quantitative real-time PCR of Paneth cells, RNA amplification and cDNA preparation used the SMART-Seq v4 Ultra Low Input RNA kit for sequencing (#634898).

Figure legend excerpts: (L) Distance from the crypt base to the middle of the BrdU-positive stripe in the proximal part of young and aged mouse small intestine 72 hr after BrdU treatment; n = 3-4 mice/experimental group; *p < 0.05, **p < 0.01, ***p < 0.001; error bars indicate SD. All qRT-PCRs were performed on RNA isolated from crypts of the proximal part of mouse small intestine; n = 3-5 mice/experimental group. (H) Representative pictures of organoids derived from young and aged human intestine; scale bar, 100 μm; young, n = 4; aged, n = 5.
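For orientation, the transcript filtering and differential-regulation criteria described above (at least three reads in more than one condition, two-way ANOVA p < 0.05, FC > 1.5) could be expressed in Python roughly as follows; the data layout, column names, and use of statsmodels are our assumptions, since the original analysis was performed in Strand NGS rather than with this code.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

def differentially_regulated(counts, meta, p_cutoff=0.05, fc_cutoff=1.5, min_reads=3):
    """Apply the filtering and differential-regulation criteria described in the text.

    counts: DataFrame of DESeq-normalized reads (transcripts x samples).
    meta:   DataFrame indexed by sample with 'age' and 'celltype' columns (assumed layout).
    """
    conditions = meta['age'] + '_' + meta['celltype']
    hits = []
    for transcript, row in counts.iterrows():
        # Keep transcripts with at least `min_reads` reads in more than one condition.
        if (row.groupby(conditions).mean() >= min_reads).sum() < 2:
            continue
        df = pd.DataFrame({'expr': row, 'age': meta['age'], 'celltype': meta['celltype']})
        # Two-way ANOVA on age and cell type (type II sums of squares).
        model = ols('expr ~ C(age) * C(celltype)', data=df).fit()
        p_age = sm.stats.anova_lm(model, typ=2).loc['C(age)', 'PR(>F)']
        young = df.loc[df['age'] == 'young', 'expr'].mean()
        aged = df.loc[df['age'] == 'aged', 'expr'].mean()
        fold_change = max(young, aged) / max(min(young, aged), 1e-9)
        if p_age < p_cutoff and fold_change > fc_cutoff:
            hits.append(transcript)
    return hits
```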
Co-creating Agroecological Symbioses (AES) for Sustainable Food System Networks

Critics of modern food systems argue for the need to shift from a consolidated and concentrated, often monoculture-based agro-industrial model toward diversified, post-fossil, and nutrient-recycling food systems. The abundance of acute and obvious environmental problems in the agricultural sub-systems of the broader food system(s) has resulted in a focus on technological and natural scientific research into "solving" these point-of-production problems. Yet, there are many facets of food systems that are vital to sustainability which would not be addressed even if the environmental problems were solved. In this article, we argue for agroecological symbiosis (AES) as a generic arrangement for re-configuring the primary production of food in agriculture, the processing of food, and the development of a food community to work toward system-level sustainability. The guiding principle of this concept was the desire to base farming and food processing on renewable bioenergy, to close nutrient cycles, to break away from the consolidated food chain, to be more transparent and connected with consumers, and to revitalize the rural spaces where farms generally operate. Through a consistent and robust collaboration and co-creative process with transdisciplinary actors, ranging from food producers and processors to policy actors, we designed a food system model based on networks of AES (NAES). The NAES would form place-based food networks, replacing the consolidated commodity chains. The NAES supports sustainable interactions from a biophysical and socio-cultural perspective. In this paper, we explain the AES concept, give an overview of the process of co-creating the pilot AES, and present a proposal for the extension of the AES, as NAES, to create sustainable food systems. Overall, we conclude that the AES model holds potential for creating place-based food systems that further the sustainability agenda.

INTRODUCTION

Critics of the current dominant food system argue for the need to shift from a centralized, agro-industrial model toward diversified, post-fossil, and circular food systems (Pimbert, 2009; Monteleone, 2015). This type of shift would mean a reversal of the trend of globalization and consolidation in food systems in favor of (re)localization (IPES-Food, 2016, 2017). There are well-justified arguments for abandoning the productionist agricultural model (Lang and Heasman, 2004), which include environmental, public health, socio-cultural, and economic reasoning (Marsden and Sonnino, 2012; Willett et al., 2019). Along with the loss of important structural characteristics, such as local adaptations and diversity, the agro-industrialization of food systems has resulted in loss of the essential functional properties of stability and resilience. The excessive environmental impacts of these agro-industrial systems include the wasteful use of, and associated pollution and emissions from, extracted natural resources, such as plant nutrients. In addition, the agro-industrial system contributes to loss of biodiversity and loss of services from ecosystems, such as pollination and carbon capture in soils. It also contributes to the pollution and ecosystem impacts of plant protection chemicals. Globally, the current modes of food production are a major cause of exceeding the known planetary boundaries, particularly those for biological diversity and for nitrogen and phosphorus cycling (Steffen et al., 2015).
The misconception of industrializing food and agriculture has resulted in extreme environmental degradation and destruction (Campbell et al., 2017; Willett et al., 2019). The failure to recycle the nutrients used in agricultural production is striking (Buckwell and Nadeu, 2016; Sherwood, 2020). The present system is highly dependent on external and excessive energy inputs, especially in the form of fossil fuels (Sherwood, 2020). From a socio-cultural perspective, the agro-industrial model (Figure 1) contributes to the homogenization of food supplies and diets (Khoury et al., 2014), and to the fragmentation and homogenization of rural landscapes (Jongman, 2002). The fundamental setup of the industrial agricultural model renders the products of primary production placeless, as they move through middlepersons and into vast storage facilities. Food produced through the processes of the industrial agricultural chain has been likened to being from "nowhere," as the links between producer, processor, and consumer are complicated and difficult to trace (Schermer, 2015). There are also externalized costs of agro-industrial food systems, as they do not serve public health and create imbalance and inequity in entitlement to food. On one hand, these agro-industrial food systems contribute to diet-linked, non-communicable diseases; on the other hand, they contribute to hunger and malnutrition (Tilman and Clark, 2014; Willett et al., 2019). One is justified in asking whether agribusiness and the consolidated food industry on their own can make the transformations needed to transition to more sustainability-oriented systems. Global and national food policies seem to be needed, and at the same time, transformative initiatives formed at the grassroots level need to be enabled. These challenges appear to be as pressing as the need to reverse the food system's disproportionate contribution to, and impact from, global climate change (Wheeler and von Braun, 2013; IPCC, 2019). The globally shared commitment to every person's entitlement to food and adequate nutrition is derived from Article 25 of the Universal Declaration of Human Rights (UN, 1948), which provides a clear goal for improving food systems. The stark failure of the conventional food chain in addressing human rights is well documented, but largely ignored even in (food) policies, not to mention in commodity-based agribusiness (De Schutter, 2010).

FIGURE 1 | Schematic model of the conventional food system, wherein production, processing, and consumption function primarily as a "food chain," in which the product flows and economic exchange are the focus, with little regard to externalities or contextual factors, whether biophysical or socio-cultural. The size of the boxes symbolically illustrates the number of participants in that level of the system, and the size of the arrows represents the volume of the flows.

The dominance of consolidated food chains threatens food security and leaves the food system vulnerable, with little resilience to external disturbance. In the context of the Covid-19 pandemic, this concern was publicly brought up by news media, as the centralized meat chains in several countries stumbled (see, e.g., van der et al., 2020). The socio-cultural impacts of the globalized food system revolve around the homogenization of food cultures, the physical and cultural distancing of an increasing majority of "consumers" from the producers, and the associated loss of sense of food.
By the loss of the sense of food, we mean a loss of understanding about the food one consumes in its full place-based context (Wilkins, 2005; Kneafsey et al., 2008; Spiller, 2012). These developments have had the alarming consequence of a lack of public interest in food policy, or of insufficient policies. Calls for increased food sovereignty (food systems that are designed to accommodate the context and needs of the participants in the system; Rosset, 2008; Patel, 2009; Clapp, 2016) and for agroecology as a movement (Wezel et al., 2009) have emerged as a response, but often represent resistance and alternatives rather than full systemic transformation. From an economic perspective, the industrial food system, and the "cheap food" it produces, creates imbalance and dysfunction (Patel and Moore, 2017). It has contributed to the decline of rural livelihoods and farmer incomes, and to a vicious cycle of an ever-increasing need for intensification to maintain yields from agricultural land (Tilman et al., 2002; IPES-Food, 2016). In the context of addressing the need for transformative change of the food system, many if not most of the scientifically well-founded analyses focus only on parts of the food system, which amounts to only a partial optimization, or is even redundant. This is especially true of attempts to improve the sustainability of agriculture by tinkering with the details of the agricultural system while taking the rest of the system for granted. In other words, agriculture cannot achieve sustainability separately from the wider food system, of which it is a foundational building block. This understanding is emerging, even if it is still only partially addressed, in the ongoing debate about "sustainable intensification" of agriculture (Rockström et al., 2017). The abundance of acute and obvious environmental problems in the agricultural sub-systems of the broader food system(s) has resulted in a focus on technological and natural scientific research directed at "solving" these point-of-production problems. Within the agricultural sciences, agroecology, with its sustainability science orientation and multiple facets (that is, as a science, a practice, and a socio-cultural movement), serves to address sustainability at the food system level (Francis et al., 2003; Helenius et al., 2019). Developing food system(s) to support sustainability is a typical "wicked problem." The problems of the food system cannot be directly "solved" by science alone (Rittel and Webber, 1973). The systematic integration of other types of knowledge is needed to begin to approach the sustainable transformation of food systems. There is also a need for citizen-led initiatives and for scientific processes supported and augmented by food system participants at multiple levels. Persons living and working within the agricultural system carry knowledge about the system that cannot always be gleaned from top-down science and policy (Schillo and Robinson, 2017). Yet, the introduction of new actors and modes of collaboration has the potential to create tension and must be administered thoughtfully, in a way that respects the context of the transformation (Keune et al., 2015). Even with the introduction of co-creative processes and the engagement of transdisciplinary actors and citizen scientists, there are no simple solutions when it comes to food system redesign. Each facet of the food system has many sub-facets that must be taken into consideration when seeking transformational change.
For example, this becomes obvious when looking at how the challenge of transforming almost any aspect of the food system links (FAO, 2018a) to the 17 Sustainable Development Goals of the United Nations (SDGs: UN, 2015). Yet there are some emergent and promising food system models which speak to food system redesign and support a sustainable, holistic food system. Transformative change requires supportive policy mixes and governance (Geels and Schot, 2007; Diercks et al., 2019), which are outside the scope of this article. However, we witnessed such transformative change emerging through a co-creative process with the involved non-science actors, for example the farmers, entrepreneurs, and consumers in place. All these parties came together and participated in the development of a food system model that, as we argue, deserves full attention for supportive and enabling policies and governance. In this article, we argue for agroecological symbiosis (AES: see Figure 2) (Koppelmäki et al., 2016, 2019; Helenius et al., 2017) as a generic model for re-arranging the primary production of food, from the agricultural and processing perspective, toward sustainability. Furthermore, we propose that using AES as the organizing principle to form networks of agroecological symbioses (NAES: Figure 3) would serve sustainable transformation at the food system level. In this paper, we will: (1) explain the concept of AES; (2) propose a network of AES (NAES) as a foundation for a sustainable food system; (3) discuss the sustainability of the NAES concept based on an analysis of Huber's (2000) generic framework of transformational strategies toward sustainability in the context of industrial ecology; and ultimately, (4) describe the co-creation process from the first AES pilot case to the further implementation of the concept.

AGROECOLOGICAL SYMBIOSIS (AES)

By our definition, an AES is a food production and processing industrial symbiosis that runs on renewable energy derived from its own feedstocks (Figure 2). The term agroecological symbiosis (Koppelmäki et al., 2016) stems from the concept of industrial symbiosis, which, with extensions, we applied to the food chain. Chertow (2000) describes how mutually beneficial inter-firm cooperation, as an application of industrial ecology (Frosch and Gallopoulos, 1989; Graedel and Allenby, 2010), can be organized to form "industrial symbiosis," such as eco-industrial parks. Chertow (2000) argues for the benefits of the spatial proximity of the industrial partners who seek to maximize resource efficiency by minimizing the waste of materials and energy through forming a symbiosis. In the pilot AES, described in section Co-creation in the Palopuro Pilot Project below, the biophysical range was within a radius of approximately 15 km, but this may vary widely from one agroecological region to another. As we describe in the following sections, agroecology as a prefix refers not only to the ecological outcomes of the redesigned food system model, but also to socio-economic and cultural outcomes.

FIGURE 2 | Schematic model of an agroecological symbiosis (the AES itself is represented within the dotted box). It is a recycling, bioenergy-self-sufficient industrial symbiosis of farm(s), an energy producer, and food processor(s). It produces contextual food identifiable to consumers, either directly or via the market, with an emphasis on localized production, processing, and consumption. The AES brings the people who eat into the community it creates, bolstering the creation of a food community. The arrows within the AES represent primary product flows, recycling of plant nutrients, and bioenergy. The arrows from the AES represent flows of products: food and any excess bioenergy to the market.
Organizing Principles and Functions

Food production inseparably relies on ecological primary production through the photosynthesis of plants and, not obligatorily, on the secondary production of livestock fed with plants. Diverse food products are produced through the industrial processing of agricultural plant or animal "raw materials," but the energy, the proteins, and the nutrients (some mineral or synthetic vitamin additives being exceptions) of food originate from farmed crop plants grown in farmland soil. From the ecosystem origin of food, it follows that all that is required for the ecological sustainability of the use of ecosystems in general applies to food production and agricultural ecosystems specifically. An essential condition for ecosystem functioning is ecological integrity, which depends on biological diversity within the ecosystem (Hooper et al., 2005). This integrity is, in principle, similar to what is required for the functioning of mechanical machines as systems with many subsystems and parts, for example engines or computers. The difference is that ecosystems, and life, are orders of magnitude more complex than anything humans have ever manufactured. The lack of understanding of the structural details, the role of species diversity, the feedbacks, and the fine-tuning that exist in life-supporting systems, i.e., the ecosystems, must at least partly explain their neglect in decision making. The social psychology of continuous ecological destruction (Oskamp, 1995) is outside the scope of this article, but it must be closely linked to the growing loss, in increasingly urbanized societies, of the sense of food and of the understanding of the ecosystem as the origin of food. Awareness of place and of the embeddedness of agriculture goes hand in hand with the concept of the sense of food, and is a necessary component in developing a (re)localized production and consumption system (Murdoch et al., 2000; Feagan, 2007). Place is a concept that is essential to both the producer and consumer sides of food systems, as it transcends both the physical and the socio-cultural valuation of any specific food product (Feagan, 2007; Cresswell, 2013). Every single agricultural product that is grown in the world has a physical location, a discrete space where it came into being. In addition, every food item that is consumed in the world is rooted in the physical action of biological primary production (that is, growth), which takes place in a real physical space. Even as the ease of transportation has created a smaller-seeming world, technology still has not created a provision for "wireless" calories, or "landless food." The social disconnection from food production continues to happen at multiple levels, including the biophysical and the social (Dorninger et al., 2017). This disconnection has been articulated as the metabolic rift (Foster, 1999; Wittman, 2009; Schneider and McMichael, 2010), which extends across both the biophysical and social metabolisms of food production, processing, and consumption.

FIGURE 3 | Schematic model of agroecological symbioses (see Figure 2 for a detail of an individual AES) forming a localized food production and processing system, an AES network (NAES). The NAES is an open system. The AESs can serve in neighboring NAESs, and together, the NAESs form a regional grid that connects to a national, and even a global, meta-system. It represents a circular economy, runs largely on its own bioenergy with high climate-efficiency, and forms a foundation for a cyclical, adaptive, and resilient food system. In this system, the consumers become sovereign members of a food community created through the shared NAES. They gain an increased sense of food, and a sense of place, in the agroecological context of the NAES.
The number one consideration for an AES is that while the agroecosystems are managed to serve production needs, the needs of the ecosystems must be served at the same time. In anthropocentric terms, serving ecosystems aims at the maintenance of their ecological integrity, as an essential condition for continuous productivity. In AES thinking, ecosystem services are a reciprocal rather than a one-directional concept (Comberti et al., 2015). The ecosystem has multiple functions in the mosaic that comprises the biosphere; while it is still used by humans to extract products and value, humans are obliged to return these services. The number two consideration is the recognition of agroecosystems as subsystems of the wider food systems. City dwellers living solely in metropolitan areas may well hold escapist illusions of being decoupled from agroecosystems, yet with every mouthful of food they most concretely and physically link upstream to the material and energy flow of food from the farmland field ecosystems that comprise their foodsheds (in analogy to watersheds; Kloppenburg et al., 1996). Spiritually, if this aspect can be acknowledged, eating is an everyday sacrament, devoted to the food's ecosystems of origin. This sacrament includes acknowledging the work fellow citizens do in the food chain, but essentially, it represents a personal and essential biophysical linkage to the ecosystems, and to the life-supporting integrity of the biosphere at large. Food systems need to be adaptive and resilient. It follows from their place-bound ecosystem foundation that adaptiveness and resilience must emerge at each place of production, down to the most local farm scale. From the local scale, these properties can then be expanded to wider system scales. From the above considerations, we propose that an AES maintains and, as needed, increases and improves:

1. biological diversity, the ecological community essential for ecosystem function;
2. the abiotic soil, water, and atmospheric conditions required by the ecological community;
3. the recycling of elements, the plant nutrients, that the primary production of crop plants takes up but needs again for the next harvest;
4. the energy self-sufficiency of the system through its primary production by photosynthesis of solar energy;
5. the psychological, socio-cultural (mental, spiritual) connection of the people who eat to the food ecosystem, through fostering a sense of food and food citizenship.

NETWORKS OF AES (NAES) AS A FOUNDATION FOR A SUSTAINABLE FOOD SYSTEM

As complementary modules in an interacting network of AESs, the AESs form a foundation for a transformative food system. Conceptually, a network of agroecological symbioses (NAES) represents a distributed model for the food processing industry.
It redefines the vertical integration between the processor and the primary producer: the farmers in the AES sell primary products directly to their processing AES partners, which increases the transparency of the production system, as the journey of particular primary products into processed foods can be tracked. The communication is direct. In the conventional system, the farmer usually sells the commodity to an anonymous commodity market, often to middlepersons running centralized storage facilities. Farm products are not often sold directly to a specific processor and are often mixed into a bulk "commodity," which results in losing knowledge about the origin of specific primary products during the journey through consolidated industrial processing. NAES also adds horizontal integration, which is lacking in the conventional system. This integration is between the AES units of production and processing. It can be visualized as working within the context of the rural landscape, as the specific configuration of the integrated entities is malleable within each AES. The key is spatial proximity and a scale consistent with the requirements of the ecosystems' economy (not just the bioeconomy) and the circular economy. In practice, spatial proximity is determined by the extent to which it is economical to transport biomasses such as manures, (other) recycling fertilizers, or feedstock for bioenergy. Within a NAES, each AES contributes, with its own food and energy production, to the total production of the NAES. The individual AESs specialize in seeking optimal roles within the reality of their individual production capacities. These capacities converge at the NAES level. By definition, a NAES is a network of many AESs. A NAES forms a foundation for a local food system when it produces food products from its agroecological context to the market and to the people who eat those products (Figure 3). When forming a national and global grid, at the meta-NAES level, the NAESs are building blocks for a sustainable food system. Wezel et al. (2016) proposed "agroecology territories" as territorial sustainable food systems. We find that the NAES would be a food system model for such a transformation. Wezel et al. (2016) criticize the narrow emphasis on the sustainability of a single agricultural commodity production, or of a single food product chain. With its emphasis on the adaptation of agricultural practices to local and regional agroecological conditions, and on embedded food systems, the agroecology territories concept is consistent with the NAES concept. Wezel et al. (2016) list the within-territory conservation of biodiversity and natural resources as conditions for biophysical adaptation. NAES adds reliance on renewable energy produced within-territory, and the recycling of plant nutrients. Owen et al. (2020, p. 2) propose "geographical indications" (GIs) as a rural development mechanism that can serve in delivering transitions to agroecology territories, to "quality-led, place-based food systems." In the GI scheme, a value-adding geographical indication can be administratively granted to a product (EU, 2020). Owen et al. (2020) cite Bowen's (2011, p. 326) definition of a territory as "a space that is socially constructed, culturally marked, and institutionally regulated." They call upon stakeholders to adopt a territorial governance approach consistent with the Food and Agriculture Organization's "10 elements of agroecology" (FAO, 2018b). GIs are consistent with, and would serve in supporting, the transition to NAES.
As an organizing principle for the food system, NAES contrasts with the current industrial consolidation and the type of vertical integration, the monocultural concentration, characteristic of globalizing food chains. These treat food as a manufactured product, and farmed products as commodities, without recognition of food systems' unique biosphere base in agricultural ecosystems and their socio-cultural foundation in the rural landscape. Industrialization of the food system goes hand in hand with discourses of "feeding the world." The principle of adapting the food system to a safe operating space set by the (agro)ecosystem directly challenges the idea of feeding the world at any cost. This position is echoed in other strands of the discourse, for example in the polarized debate concerning whether food security is only possible through further intensified industrial agribusiness, or only through the widespread uptake of organic farming (Connor, 2013; Eyhorn et al., 2019). It is obvious that planetary boundaries exist, which set a ceiling on how big a population can "be fed" (Rockström et al., 2017). Food policies need to be explicit about their positioning regarding the underlying balance between population size and quality of life, including the quality of food and nutrition. Population increase reinforces drivers that may push the system toward tipping points, result in loss of resilience, and generate reactive rather than proactive regime shifts (Pereira et al., 2020). In advocating the principles of circularity, reciprocity of ecosystem services, reliance on self-produced non-fossil energy, and engagement of the people who form the food community, NAES suggests a discourse of ensuring entitlement to food and nutrition: more a "right to eat" than a "right to be fed." With any combination of farming practices, diets, food cultures, and population size, there is a ceiling set by the carrying capacity of the biosphere. However the key questions of who produces what, where, how, and for whom are answered, a planetary boundary on increasing production still looms. This speaks to the far-reaching, future-oriented vision of NAES of a dynamic but harmonic equilibrium between population and the use of the biosphere for food production. It reinforces the idea of food sovereignty, but not individuality: the NAES food communities define their own food, but are also entitled to their food production systems. In contrast to the conventional, increasingly delocalized or globalized, and centralized food production chains of the industrialized countries (IPES-Food, 2016, 2018; Ellen MacArthur Foundation, 2019), NAES as a generic model would result in a "glocalized" (e.g., Quaye et al., 2010) and distributed system of food production. In terms of food cultures, it would result in diversification, as opposed to the current trend of homogenization (Ritzer, 2013; Clapp, 2016). Such a reorganization would boost rural livelihoods and would have implications for structural developments in society, including the current unsustainable and fossil-fueled trend of urbanization toward metropoles. Without exploring the issue of urbanization further, we express our deep concern about the possibility of "feeding the big cities" in any sustainable way at the same time as people are abandoning the regions where food is produced. Without prior planning or control, the cities simply mushroomed as products of the fossil fuel era.
The metropoles are comparable to feedlots in animal farming: highly unsustainable and highly dependent on continuous feeding from the global rural. Food communities around NAES are best when local; the NAES-based food system offers a possibility to sustainably de-structure the big cities. To achieve this goal, in addition to other supports for sustainable food systems, policies for "ruralization" need to be linked with food policies. NAES promises increased food sovereignty and resilience in terms of food security. It promises transformative change from extractive food capitalism toward sustainable, ecology-based food systems. This is a functional model of human-scale agriculture that is flexible enough to be adapted to the local contexts it inhabits (Condon et al., 2010).

EFFICIENCY, SUFFICIENCY, AND CONSISTENCY OF NAES

In the following sections, we use Huber's (2000) framing of efficiency, sufficiency, and consistency to explore the promise of sustainable transformation in the NAES food system model. We took the liberty of interpreting what Huber presented as complementary strategies as criteria for sustainable transformation. All three criteria need to be met to achieve a sustainable transformation in a production and consumption system. By consistency, Huber (2000) refers to coherence with the wider goals of environmental sustainability. We found this framing useful because it speaks to the viewpoints and driving motivations of multiple actor groups within sustainable transformations. In discussing industrial symbiosis, Chertow and Ehrenfeld (2012) point out the need for explicit recognition and institutional support as enabling factors if such symbiosis is adopted as an organizing principle for sustainability transformation. There is the pitfall of eco-efficiency being a winning strategy for business through financial savings while ignoring the rebound effect and hence not resulting in ecological savings (Hukkinen, 2001; Heikkurinen et al., 2019). In any case, technologies and policies enabling eco-efficiency are surely welcomed by industry. At the same time, there is a public interest in policies that control the rebound effects, ensure sufficiency as a ceiling to material growth, and govern for consistency, in meeting both societal goals and the grand planetary challenges.

Efficiency of NAES

In generic terms, ecological efficiency simultaneously allows further economic growth and the ecological adaptation of industrial production (Huber, 2000). In the context of food production systems, increasing efficiency means producing more food per unit of resource used. In crop production, efficiency is commonly measured by the ratio of the quantity of product (harvest) to the area of agricultural land harvested. The emphasis on land productivity tends to leave other natural resource efficiencies unnoticed, even though water, nutrient, and energy efficiencies are equally important. For example, nutrient use efficiency (NUE) measures how well crop plants use the available nutrients for the harvestable product (Reich et al., 2014). Similarly, in livestock production, the feed conversion ratio measures the ratio of feed inputs to food outputs (Garnett et al., 2015). Nevertheless, these are all efficiencies measured at the process level, or at the subsystem level within a system, rather than indicators of system-level efficiency.
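Written schematically, in our own notation rather than any standard taken from the cited sources, the three process-level measures above are simple input-output ratios:

\[
\text{land productivity}=\frac{\text{harvested product}}{\text{area harvested}},\qquad
\mathrm{NUE}=\frac{\text{nutrients in harvested product}}{\text{nutrients applied}},\qquad
\mathrm{FCR}=\frac{\text{feed input}}{\text{food output}}
\]

Each ratio evaluates a single process in isolation, which is precisely why, as argued next, none of them can stand in for system-level efficiency.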
For understanding system-level efficiencies, it is essential to understand through what kinds of feedbacks the processes within sub-systems operate, and how the sub-systems are connected to other parts of the food system at different spatial and temporal scales. Field-scale efficiency is not equal to farm-scale efficiency. Similarly, farm-scale efficiency does not guarantee the efficient use of resources at regional or wider geographical scales. This disconnect is demonstrated by the following example. A crop farm using mineral fertilizers may produce high yields of cereals with relatively little fertilization; in other words, its ratio of outputs to inputs is high. A livestock farm, located next to the crop farm, produces moderate yields by applying high quantities of manure as fertilizer, which results in a much lower ratio of outputs to inputs compared to the crop farm. A simple conclusion is that the crop farm has the better NUE. However, when considering efficiency, it is essential to also take into account what happens after harvest. If the cereals harvested on the crop farm are used as feed on the livestock farm, the NUE looks different when both farms are considered as a single, continuous feed and animal production system (see the numerical sketch below). Furthermore, the origin of the inputs and the quality of the outputs vary between these farms. This implies that conclusions about efficiency cannot be derived by observing efficiencies at the sub-system level only, or only at a small spatial scale, when the feedbacks reach larger scales. In the current conventional agricultural sub-system of the food chain, two trajectories have had a substantial impact on efficiency. First, low-cost feed transport has enabled livestock farms to concentrate and to spatially disconnect animal husbandry from local feed production; second, mineral fertilizers have enabled farms to increase crop per-unit-area productivity while simultaneously releasing farms from the need (or possibility) to recycle the plant nutrients in crop production. As a result of this specialization at the farm and regional levels, nutrients are concentrating spatially; nutrients are dislocated and recycling is disrupted (Buckwell and Nadeu, 2016; Schulte et al., 2019; Parviainen and Helenius, 2020; Koppelmäki et al., 2021). What has looked like increasing efficiency in crop and in animal production has in fact been a dramatic decline in the efficiency of the use of plant nutrients at the food system level, and an inefficiency in producing food. Instead of increasing efficiency at the sub-system level while sacrificing it at the whole-system level, the aim should be the system's efficiency. This is what NAES provides: it allows for explicit system-level efficiency indicators and improvement (Koppelmäki et al., 2021, submitted manuscript). The requirement of circularity alone is a strong incentive, for example, for the farms of the NAES to match the number of animals with the local feed production, where the NAES produces foods of animal origin. Feed imports from outside the agroecological region where the AES functions do not fit the concept, and if made, require costly arrangements for recycling the plant nutrients back to the feed-producing farms. By-products of the food system, such as plant nutrients recovered from food waste and from municipal sewage, represent recycled resources within a NAES-based food system.
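Returning to the two-farm example above, a minimal numerical sketch with purely hypothetical figures (not drawn from the Palopuro case or any cited study) shows why sub-system NUE misleads. Suppose the crop farm applies 100 kg of mineral nitrogen per hectare and harvests grain containing 80 kg N, while the livestock farm applies 200 kg of manure-N and sells animal products containing 40 kg N:

\[
\mathrm{NUE}_{\text{crop}}=\frac{80}{100}=0.8,\qquad
\mathrm{NUE}_{\text{livestock}}=\frac{40}{200}=0.2
\]

If the crop farm's grain is the livestock farm's feed and the manure returns to the fields, the manure-N becomes an internal, recycled flow; the combined system then imports only the 100 kg of mineral N and exports 40 kg N in food:

\[
\mathrm{NUE}_{\text{system}}=\frac{40}{100}=0.4
\]

Neither farm's own figure describes the whole: the "efficient" crop farm and the "inefficient" livestock farm are two halves of a single feed and animal production system.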
The requirement of reliance on internally sourced bioenergy, linked with the system's property of biological nitrogen fixation, makes NAES far more climate-efficient than systems that rely on fossil fuels and on industrial nitrogen fixation, such as present industrial farming. In addition, the requirements for increased rotational diversity, an increased share of leys in the rotation, and the use of organic recycling fertilizers, such as the digestate, serve to stock carbon in the soil and to reverse the current loss of carbon from farmland. In the context of sustainability, efficiency as a system's output per unit of negative environmental impact generated also needs to be quantified, or at least qualitatively assessed. For example, at what rate per unit of product does the food system cause biodiversity loss? Expressed this way, an expected increase in biodiversity would return a negative value for a positive trend. We argue that redesigning the system of primary production and processing of food along the lines of the NAES concept increases efficiency at the food system level. As a food web rather than a food chain, NAES can produce more food energy and protein per unit of farmland area, with lower nutrient loading and lower atmospheric emissions per unit of farmland and per unit of food produced, than would be the case if production continued conventionally. Compared to current conventional practice, the agroecological benefits include increased organic matter input to farmland soil, diversification of crop rotations, maintenance of soil organic matter and soil fertility, increased or even full self-sufficiency in biologically fixed nitrogen, practically full recycling of phosphorus and other mineral plant nutrients (Koppelmäki et al., 2021, submitted manuscript), and radically improved climate efficiency per hectare of farmland and per unit of product. NAES makes it possible not only to enhance ecosystem services to production, but also to serve the ecosystems in maintaining their biological diversity, integrity, and function. Huber (2000) argues that efficiency can only be an intermediate for sufficiency. The concept of sufficiency encompasses a strategy involving consumption patterns and lifestyle, explicitly asking the question: how much is enough? (Huber, 2000). The need to ask this question follows from the limited planetary operating space. In food systems, the most critical factors in what becomes "too much" are population and diet.

Sufficiency of NAES

Increasing efficiency in agricultural land use seems to give temporary relief, while simultaneously, global analyses already emphasize the need for controlling diets (Foley et al., 2011), and even population (Crist et al., 2017). During the last decades, the area of agricultural land necessary to feed one person has decreased, but population growth and dietary change have offset the potential land savings from this increased productivity (Kastner et al., 2012). In NAES, the volume of primary production is limited by the agroecosystem's biophysical potential to produce biomass without substantially relying on external nutrient and biomass inputs. In the "feeding the world" discourse there is a lively and persistent side-stream: the land sparing vs. land sharing debate (Loos and von Wehrden, 2018). The proponents of land sparing argue for increasing the productivity of existing farmland as a means to save nature (which, in this thinking, is found outside of farmland). The productivity would be increased by increasing input intensity.
As a rule, this camp ignores the fact that the path of intensification has come to an end (Tilman et al., 2002), hitting the wall of ecological sustainability. The proponents of land sharing argue for farming that would allow wildlife to share the farming environments with crops and cattle. This sharing would aim at wider biodiversity goals than simply the maintenance of the "ecosystem services" of farming (Zhang et al., 2018). Obviously, "sustainable intensification" (Rockström et al., 2017) would be sustainable, and wherever there is ecological space for it, it may push the population-times-diet limit further. In our theory of NAES, while we find that it provides means for sustainable intensification, we rely on the idea of sharing. As the human impact reaches all ecosystems in the biosphere, it is best to learn to live decently with our fellow species. With this thinking, the focus is on adjusting the intensity to ecological sustainability. For industrial, input-intensive farming, this would mean lowering the intensity, and even lowering productivity per unit of land area in order to increase productivity per unit of other inputs, including biological diversity. In subsistence farming, in which the insufficiency of sustainable inputs, e.g., recycling fertilizers, coupled with a high rate of population growth often results in land degradation, there is space for agroecological intensification (Pretty et al., 2006). In terms of sufficiency, what is enough must not exceed what is too much for the ecosystems that the human species shares with other species, both presently and in the future. In NAES thinking, agroecological contextualization brings a geographical dimension to sufficiency: what is sufficient in what place? NAES food systems would favor adapting diets to local ecological provisioning and limits (knowing that such an adaptive arrangement might not be politically achievable). This would ease the burden on the (still missing) global food governance of holding back the pressures that created the present commodified, agro-industrial system, which lacks inherent controls other than the destruction of land as a result of overexploitation. The idea of a food community in NAES implies participation by those who eat. Even though food production is localized (i.e., relying on locally integrated nutrient recycling and energy production, local feeds in livestock production, and local food processing), food is exported from NAESs to other regions and also globally. Participatory governance by the food community should also reach the production systems of origin of the exotic foods. Philosophically, these exotic foods may be geographically imported, but still not imported from outside of the NAES food community. Another diet-related aspect of sufficiency is the share of exotic, imported foods. In many cases, local food production could provide foods with the same function. For example, in the Nordic countries several berries, as horticultural or non-wood forest products, are available to anyone willing to pick them. Re-engaging with locally available foods would reduce the need to import exotic fruits and berries. In NAES thinking, local products rather than imported ones would add value, as the production system and its possible externalities would be internalized. Rather than merely seeing added value in local production, the efficient utilization of locally available resources should be seen as a value choice. The composition of the diet is a sensitive cultural issue, but prone to value-driven changes.
Some of the material flows in industrial systems are incompatible with sustainability (Huber, 2000). This also applies to current food systems. This incompatibility relates to land use, food consumption, and the inputs used in food production. From the land use perspective, food production must be compatible with the supply of other ecosystem services. For example, on peatlands the cultivation of annual crops produces greenhouse gas emissions in quantities substantially higher than the use of these lands for perennial leys (Maljanen et al., 2007). In the NAES model, these peatlands would be used, for example, to produce grass to feed cattle or as a feedstock for biogas production, instead of cereal production. In NAES thinking, land use should not be incompatible with sustainability, but rather adapted to growing the biomass that is suitable for that specific agroecosystem. Material flows are currently largely based on non-renewable resources (Haas et al., 2015). In the conventional food chains, agriculture relies heavily on external inputs such as mineral fertilizers and fossil energy. Many of these flows are related to intensive livestock production, which has created a need for massive biomass imports to feed cattle, resulting in nutrient concentration on livestock farms (Buckwell and Nadeu, 2016; Uwizeye et al., 2016; Spiegal et al., 2020). Food production that relies so heavily on inputs from non-renewable resources is not compatible with sustainability. This leads to the fundamental principle that sustainable food systems must be based on the use and maintenance of renewable resources.

Consistency of NAES

In Huber's (2000) framing, consistency relates to the production processes in a system and their ecological functioning in support of the development of balance and compatibility between the natural and industrial metabolisms of the system in question. It should be noted that while Huber (2000) does not make a direct reference to Marx's concept of the metabolic rift, the balance between the industrial and ecological metabolisms is in line with the academic work that revolves around healing the metabolic rift (Schneider and McMichael, 2010). The NAES model speaks to Huber's conceptualization of consistency through its development and implementation of new system-level material flows, which serve to change the underlying qualities of the industrial ecology of the agricultural system, and of the food system based on NAES. The innovative material flows in the NAES model are fundamentally aimed at the sustainable transformation of the overarching system, rather than simply minimizing the impacts of the traditional material flows within industrial farming. Within the NAES model, the focus remains on integrated environmental solutions, rather than on piecemeal solutions or solely downstream remediation measures. We argue that NAES is consistent with the goal of circularity, as each AES in it is designed to recycle, and within the network, the AESs can co-operate in recycling. The aim of NAES is not to mimic a natural ecosystem, as it remains a food production and processing system that does require inputs and produce outputs. However, it does bring the industrial and natural ecologies into a more harmonious metabolism by respecting and working with the biophysical and socio-cultural realities of each individual place. In addition, the NAES model is not a top-down or rigid interpretation of what constitutes a sustainable agricultural system.
Rather, it is a co-creative model focused on utilizing the creativity and motivation of the people participating in the discrete system. Too often, system models are designed in the academic or policy sphere with insufficient deference to the challenges faced on the ground. The NAES model overcomes this problem through its flexible approach to the goal of creating local and regional food systems. An important aspect of the consistency strategy is to foster an innovation process that utilizes the productive capacity and creativity of modern society (Huber, 2000). We interpret the role of co-creation as an expression of citizen science, which fills this facet of consistency. In the next sections we will discuss the role of non-academic participants in the design and implementation of the pilot AES and the subsequent expansion to the NAES concept. While Huber (2000) refers to consistency within environmental sustainability, any suggested transformative food system needs to meet wider sustainability goals. A framework through which integrated solutions are accessible and widely understood is the Sustainable Development Goals (SDGs) of the United Nations (UN, 2015). Each of the goals represents an approach to sustainability that transcends siloed approaches and seeks holistic solutions to the wicked problems that are a barrier to transition (Rittel and Webber, 1973). We agree with the caution raised by Randers et al. (2018), and with their concern that the socio-economic goals in the SDGs are not compatible with the aim of not exceeding planetary boundaries. We find that the NAES approach to food systems is consistent with the idea underlying the SDGs, given that the socio-economic goals need to be consistent with the environmental goals, and that the systems operate within the planetary boundaries.

CO-CREATION IN DEVELOPING THE AES AND NAES CONCEPTS

Bringing industrial symbiosis to the food production arena creates some additional challenges and opportunities. The AES model asks not only for a transformation of spatially detached production systems, but for a redevelopment of the physical spaces where the involved entrepreneurs live and produce food. This is because one feature of the farms involved is that they often serve the dual purpose of being production spaces and human spaces where people live within the landscapes. All the farms in the pilot project were homes as well as productive spaces. This dual use of the land requires fundamental buy-in from the people who live within the symbiosis; this is one of the reasons why the co-creative model and the involvement of the farm-based entrepreneurs were so fundamental to our development of the AES and NAES concepts. For the food processing partners in an AES, the mental step is different, but equally big. In the present system, the agricultural products they use to make food products are commodities from the general market, and the location of their processing plants does not depend on where these commodities are produced. In an AES, the food processor, with their processing plant, comes physically to the location of the agroecosystem.

Co-creation in the Palopuro Pilot Project

The term agroecological symbiosis (AES) was first used in the development of a redesigned production system in Palopuro village, Finland (Koppelmäki et al., 2016; Helenius et al., 2017). The co-creation process was integral to the Palopuro case and to the expansion of AES into the NAES concept.
The entrepreneurs in Palopuro came together naturally to figure out a model for integrating their operations for mutual benefit. This was a result of their everyday interaction and shared goals for the development of their respective businesses. At the start of the co-creative endeavor, there were three farmers based in Palopuro village and a bakery owner from the Helsinki capital region. An energy company, represented by its CEO, joined at a later date. It was these entrepreneurs who developed the first proposal for what this cooperation might look like in practice, and they contacted the scientists at the University of Helsinki to assist with moving from idea to practice. The entrepreneurs and other transdisciplinary actors, such as civil servants from the relevant municipality and the ministries, served as transformative agents in this project and were active in asking the scientific participants to investigate issues pertinent to their community (Shirk et al., 2012). In practice, this project would not exist without the cooperation of both the academic and non-academic actors. Both types of knowledge were needed to identify the problems and solutions that went into designing the pilot project AES. It should be noted that the farmers and the other entrepreneur actors at the heart of the pilot had the base motivation of improving the livelihoods of their lived environment. They were the initiators of the transformative process. The farms and the bakery were already practicing organic production when the pilot project was planned. Alternative production methods implemented in isolation, such as organic production, do not change the entrepreneurs' position in the food system. In that sense, the substantial change from the actors' perspective is the re-design of the roles of the actors and of their respective agency within the food system. The entrepreneurs played a key role as food system innovators. A grain farmer living in Palopuro led the charge to develop a redesigned food and farming system, as he was not happy being an anonymous supplier to the industrialized grain supply chain, serving equally anonymous consumers. The development of the AES model could be characterized as taking back agency over the functioning of the local food system. The collaboration was also born of the idea of being able to add value to the grain produced, as the farmers shared the problem of the increasing margin between the farm price of agricultural products and the market price of food for consumers. This general phenomenon in the commodity chain means a decreasing share for farmers and is the main cause of the loss of farm income (Peltoniemi and Niemi, 2016). For example, the grain farmer saw that a shift from solely supplying a raw commodity to the grain food chain to producing an added-value local product with the bakery serves as insurance against the ups and downs of the global grain market. While there was also an economic aspect to the development of this idea, focusing solely on the economic component does not capture the scope of the motivation. There were considerations that extended beyond the financial, including quality of life and the development and maintenance of a vibrant local community. Additionally, in the co-creation of the AES model, the producers sought an avenue to step back from the fossil-based industrialized food system.
After listening to the goals of the entrepreneurs in Palopuro, it was relatively straightforward for us as scientists to match their vision to the concept of a circular, localized bioeconomy. For example, our previous theoretical work on producing biogas from nitrogen-fixing leys and using the digestate as a recycling fertilizer (Tuomisto and Helenius, 2008) matched the case perfectly. Neither farm-scale biogas production nor localized small-scale food processing was a novel idea [for farm-scale biogas in the Nordic context, see Berglund and Börjesson (2006); Raven and Gregersen (2007); Ahlberg-Eliasson et al. (2017)]; rather, what is unique in AES is the combination of existing ideas to develop a symbiosis that explicitly addresses several facets of sustainability. Existing spatial and social connections significantly lowered some potential barriers to this co-creative collaboration. The academic side of the co-creative endeavor served to support the actualization of the entrepreneurs' initial ideas, rather than directing the project. Thus, the initial motivation and design ideas came from the bottom up and were led by the persons in place. This helped in developing ideas that were appropriate for the place and the people who would be implementing them in practice. There was a mutual decision to apply for public funding to further explore the validity and feasibility of the proposed system, which led to the development of the Palopuro AES pilot project. It should be noted that the name agroecological symbiosis itself was coined by a policy actor who was invited into the grant writing process as an advisor. The inclusion of policy actors, for example from the municipal and ministerial levels, was an important step in actualizing the pilot project, as they were integral to accessing the funding mechanisms that made the implementation possible. In discussions of food system change there is a focus on consumer behavior, usually centered on what consumers do and do not buy (e.g., Kneafsey et al., 2008). Understanding this dynamic is important; however, consumer behavior alone is not enough for systems-level change, as the farmer and the food processor must be willing to participate in a system that steps back from the conventional system long before the food reaches the consumer. Farmer-level and food-processor-level buy-in is vitally important for designing contextually appropriate and actionable food systems. It is very difficult for policy players and other non-farm-based actors to design a place-based model to support food system redesign, as place-based food systems are intimately tied to the context of the individual place where they operate (Murdoch et al., 2000; Feagan, 2007; Woods, 2012). In addition, their dual role as farmers and residents of the physical space of the food system gave the farming partners in the symbiosis unique insight into what would work for their iteration of the AES. There were parallel goals in the AES pilot of designing a sustainability-based production and processing model and revitalizing the surrounding rural area. In the face of other socio-spatial changes in the area, opening a social space on the farm, through the farm market and other activities, filled a void in the fabric of the Palopuro community, as many of the publicly accessible social spaces in the area were defunct.
The opening of social spaces within the production landscape of the farm served the function of bringing the "people who eat" quite literally to the farm. Please note that the widely used term "consumers" does not fully capture the range of roles that play out in a food system based on the principles of agroecology; however, for the sake of clarity, we will continue to use this term in this paper as needed. Bringing non-farming actors into the food system in the AES pilot project served to lessen the distance between producers and consumers, both physically and mentally. It served in building the consumer side of the food community within the AES. The farmers of the Palopuro AES specifically wanted their farms to be more than remote places; they wanted their farms to be more accessible, shared spaces where citizens can get in touch with their local food system. One of the goals in bringing the consumer participants to the farm was exposing functions of the food system that are not part of the consumer experience in an industrialized food chain. For example, the baker was excited about the possibility of making concretely visible to consumers how the grain flows from the farm to the bakery and is turned into bread, through the use of transparent piping in a production area visible to visitors. This acquaintance takes place on multiple levels, both through a growing familiarity with the process of turning raw materials into retail food products and through developing one-on-one social ties with local farmers and food processors. The farmers and food processors are a central feature of the farm markets held in the farmyard of the grain farm in Palopuro. In addition to the strictly food-system-based participants, these markets also support the participation of other types of food retailers and local craftspeople. Creating a consistent space where these various types of local makers could come together allowed the farm to serve as a point of connection where social relationships were formed and information was shared. In addition to the farm markets, the social space has also served as an education space for information exchange, hosting numerous visits by other farmers, academics, and policy players to learn about the AES model and share their own experiences in redesigning local food systems. Creating platforms for this level of knowledge exchange supports the ethos of continuing opportunities to engage in citizen science (Ryan et al., 2018). The way in which the scientific and non-scientific participants came together was both co-creative and contractual, as the members of the community in question were the drivers in identifying the key themes pertinent to their community (Shirk et al., 2012). For example, the food producers and processors decided that they wanted to change their positioning within the food system, rather than an entity outside the community indicating that there should be a change to serve a broader purpose. The level of buy-in in the pilot project was high, most likely as a result of the core ideas emanating from the participants themselves. "Science" in isolation can design a tight and interesting model, but if it is not functional for the people who aim to live with it, then ultimately it will not work in practice (Poulsen et al., 2014). The Palopuro AES has been a grassroots effort, rather than an innovation that came from the top down.
While there were scientists involved in the process from very early on, they came to the table on an equal footing with the entrepreneurs. Multiple forms of knowledge were explored and respected in the formation of the AES model. The AES idea, the pilot AES, and, to a lesser degree, the subsequent extension to a networked food system model are all manifestations of citizen science in action: regular people in place working with scientists to design a food production and processing system that served to improve the local foodscape while fostering sustainability and livelihoods. Citizen science and knowledge co-production are the vital links between designing a sustainable food system in theory and in practice (Poulsen et al., 2014).

Co-creation in the NAES Concept

The successful collaboration over the AES pilot project laid the ground for the continued co-creation of knowledge that has led to the expanded concept of NAES. It should be noted that both these concepts support the development of place-based food systems that are biophysically, socially, and culturally appropriate for the area where they operate (Feagan, 2007; Woods, 2012). Having the entrepreneurs as the initial drivers of this relocalization-driven transformation of the food system was vital to creating robust buy-in to the project. In addition, by bringing many different types of actors to the table, each actor was able to lean into their strengths and expertise. This aided in bringing the system from initial concept to functioning pilot in a relatively short period of time. The NAES concept builds on the AES concept by proposing networks of AES that form the production and processing foundation for transformative change from food chain to sustainability. The continued development of the more generalized food system model moved beyond the direct work with the on-the-ground actors. The extension from AES to NAES, which addresses a higher system level, made it obvious that new stakeholder groups must be included in the co-creation process. We are working on this in our current project, "Eco-Industrial Symbioses for Food Production Chain-Feasibility for South-Savo" (2020-2021, Regional Council of South-Savo, Finland). We aim to engage key people representing regional administration, policymakers, marketing channels, food processing companies, and action groups among farmers committed to the creation process. Redesigning a food system beyond the local level is an endeavor that requires a range of actors, including those close to or within the existing system, to be able to accurately reflect the reality on the ground. It is necessary to have a sufficiently deep level of co-creation between the stakeholders to achieve systemic transformation. Transformative change is more than simple societal intervention; it requires co-creation beyond citizen science, and it involves contributions from, to, and between the micro, meso, and macro levels (Schäfer and Kieslinger, 2016). Our experience encourages such an endeavor even if enabling policies are not (yet) in place. This is because scientists, as public servants, may rather underestimate than fully appreciate and tap into the skills, enthusiasm, and ability of the entrepreneurs especially, to creatively solve emerging challenges as they appear. The scientists' role becomes one of process facilitators, especially with regard to analytically cross-checking the system model proposal against sustainability criteria (Horlings et al., 2020).
A system can be co-creative yet still very linear and conventional in its manifestation. The motivation of the producers and processors revolves most directly around the economic sphere; an AES must ultimately allow the entrepreneurs to maintain, with a prospect of improving, a livelihood while making commitments to participate. For co-creating a NAES, it is important to find further support for the maintenance and improvement of livelihoods through the network. The scientific actors are more directly able to keep the detailed environmental and wider sustainability goals in mind and in play within the development of the system, while the non-academic actors are able to keep track of what is functional within their community. Co-creation is not just about different groups reporting what they want. Rather, it is the activation, enthusiasm, and personal involvement of the parties at each level (producers, policy players, science practitioners, and the citizenry), all working together in the interest of sustainability and local food.

CONCLUDING REMARKS

In this paper, we argued that rearranging farming, food processing, and energy systems to follow the concept of AES would result in a shift to sustainable food production at the systems level. Such a transformative change would require networks of AES, NAES, which would serve as the foundation of emerging agroecology-based, geographically and culturally contextualized food systems. We propose NAES as a generic principle for a transformative change in food systems toward sustainability. The NAES concept offers a systems-level alternative to the industrial and globalized food chains. NAES are distributed rather than consolidated, and entrepreneurial rather than centralized agribusiness. NAES-based food systems are adaptive and resilient, ecologically more efficient, inherently more sufficient, and more consistent with sustainability goals than the present conventional agribusiness-based food chains. We argue that food systems based on NAES grids are able to produce enough food for a healthy diet at the local level. This may require the de-intensification of farming systems in some regions, while intensifying food production in others. The NAES food system(s), like any other system, is explicitly not proposed for "feeding" any population at any cost; rather, we propose NAES for a transformative change in which the population-times-diet-times-sustainability equation is explicit. The AES model supports agency for the participating farmers, food processors, and energy producers engaged in developing place-based food production systems. At the wider system level, the NAES invites the food market and the people who eat the food from the NAES to participate in forming a food community, and in regaining an agroecosystem-based sense of food. There are benefits to the system from both biophysical and socio-cultural perspectives. As AES and NAES represent a circular bioeconomy that runs on, and in some cases can even produce an excess of, renewable bioenergy, the obvious environmental benefits include plant nutrient recycling and balanced nutrient flows, as well as unforeseen climate efficiency. To give an example, we have not yet quantified the carbon sinks or the offsets of emissions of our pilot AES; this needs to be done. Diversification of agricultural land use gives some benefits to biodiversity, but further guidelines need to be developed, following the principle of land sharing.
An obvious danger is biofuel production supplanting food production; in the AES concept, biofuel production is integrated into, and primarily serves, the primary production, processing, and delivery of the food that the AES produces. From a social perspective, there are benefits for the entrepreneurs through their direct involvement in the co-creation of the NAES. These include creating sustainable and viable livelihoods in place, while creating a food and energy infrastructure that supports a robust local food system. Under the NAES model, both farming and food processing can move away from the fossil-fuel-based, industrial model. In addition, the producers are better able to develop food systems that speak to their own needs, rather than being solely at the mercy of the globalized market. The NAES concept also allows for the potential of community development in rural spaces, as evidenced by the use of the social space in the AES pilot project. We based our concept development on the co-creation of the first pilot AES, the Palopuro symbiosis (in Hyvinkää, Finland). It cannot serve as a universal model; rather, we used it to propose design principles and a system vision. We have not studied the issues of the food market, for example, how best to organize the purchasing procedures for distributed food production. We have no direct evidence of the higher (environmental and social) value of the products being mirrored in relative prices, compared to products from the conventional chains. How can the challenge be met of food processing tending to industrialize and consolidate, rather than staying entrepreneurial at small and medium scales? We are aware that the bulk of food presently originates from only a small number of food industry giants. For the NAES model to be realized, it might be essential to get the present consolidated industries involved, and their production distributed to emerging NAESs. This requires a new business model for the industry. However, the Palopuro symbiosis grew from a grassroots effort, so it appears that there is space for entrepreneurial food producers to initiate AESs and facilitate the formation of NAESs. NAES-based food systems seem able to grow in parallel to the conventional consolidated chains, both competing with and complementing them. Finally, based on our experiences in developing the Palopuro AES pilot project, we conclude that co-creation is a productive and rewarding, if not essential, mode of research for systemic transformations in the food sector. The farmers, the food processors, and the associated energy producers, as entrepreneurs, have the knowledge, the motivation, and the vision for improving not only their own businesses but, especially, their lives and the livelihoods of their clientele and their social communities. Our experience reflects the importance of reciprocity between non-science actors and scientists in the development of the AES model. The increased buy-in from non-scientific actors who are invited and welcomed into the innovation process works in favor of sustainability transformation. However, it should be emphasized that the non-science actors' welcoming of the scientists into the space was highly important to the success of the project. The bottom-up design of the AES pilot served to build a foundation and is an important facet in developing place-based food system redesign.
The AES, and by extension the NAES model, depend on the local and context-based knowledge that the food system entrepreneurs brought to the discussion. A robust localized food system cannot be designed by scientists and policy actors alone; it must be inclusive of the non-science actors living and working within that system. Based on the experiences we had in the co-creation of the Palopuro symbiosis, we find that there is huge potential in tapping into co-creation as a method for transforming the food system.

DATA AVAILABILITY STATEMENT

The original contributions presented in the study are included in the article. Further inquiries can be directed to the corresponding author/s.

AUTHOR CONTRIBUTIONS

SH-A contributed by reviewing, editing, and writing the consistency and co-creation sections. KK contributed by reviewing, editing, and writing the efficiency and sufficiency sections. KK is also a farmer in the Palopuro symbiosis, which was to the benefit of the co-creation process. KK and JH contributed to the original biophysical AES model development, with the entrepreneurs, since 2015, in the context of the Palopuro symbiosis. SH-A has also been working with the AES concept and the pilot case since 2015, specifically looking at the social dimensions and implications of a relocalized food system. SH-A did the formatting for the article. All authors contributed to the article and approved the submitted version.
A novel SMOC1 pathogenic homozygous variant in a fetus with mesomelia of the lower limbs, micrognathia and hypertelorism and an incidental finding of CYP21A2-related congenital adrenal hyperplasia

Trio exome sequencing was performed on a fetus with bilateral mesomelia of the lower limbs with significant angulation of the tibial bones, micrognathia and hypertelorism detected on ultrasound scan at 19 + 0 weeks gestation. The couple is consanguineous. A homozygous pathogenic frameshift variant in the SMOC1 gene (c.339_340del p.(Phe114Cysfs*40)) was detected and both parents were shown to be heterozygous. Pathogenic variants in the SMOC1 gene are associated with microphthalmia with limb anomalies, which multidisciplinary team discussion determined to be causal of the scan anomalies detected. The fetus was also a compound heterozygote for CYP21A2 pathogenic variants, confirming a second diagnosis of non-classical congenital adrenal hyperplasia, which was felt to be incidental to the scan findings. The risk that this couple's next pregnancy would be affected by either of these disorders is 1 in 4 (25%), which demonstrates the importance of genetic diagnoses for the family and the implications for future pregnancies.

REPORT

Fetal phenotype

Ultrasound scan at 19 + 0 weeks gestation showed lower limb mesomelia, an abnormal right tibia and fibula, and severe bowing of the left lower limb (Figure 1). Facial views suggested micrognathia and hypertelorism, and the eyes appeared small. Both kidneys appeared present, although the right kidney was more difficult to visualize. No anomalies were detected by fetal echo. The parents are consanguineous.

Diagnostic method

Fetal DNA was extracted from amniocytes and genomic DNA was extracted from parental blood samples. Trio exome sequencing and analysis using the fetal anomalies panel [1] was carried out as previously described [2]. QF-PCR for common aneuploidies and microarray analysis showed a female chromosome complement with no pathogenic copy number variants.

Diagnostic results and interpretation

Trio exome analysis identified a novel homozygous pathogenic frameshift variant in the SMOC1 gene with biparental inheritance (Table 1). Biallelic pathogenic variants in SMOC1 are associated with microphthalmia with limb anomalies (MIM 206920), and prenatal presentations have been reported previously, with features including depression of the frontal bone, posterior fossa anomalies, cerebral ventricular enlargement, clefting of the sacral and lower-lumbar vertebrae, and bilateral microphthalmia identified sonographically, with micrognathia, oligodactyly and tibial bowing identified after fetal autopsy [3]. The SMOC1 frameshift variant identified in this study is predicted to undergo nonsense-mediated decay causing loss of function, which is a known disease mechanism. This variant was also absent from the gnomAD population database, resulting in a pathogenic variant classification. To the best of our knowledge, this variant has not previously been reported in the literature or in variant databases. SMOC1 is essential for ocular and limb development in both humans and mice [4]. Multidisciplinary team discussions determined that this variant is likely to fully explain the imaging findings: angulated tibiae suggesting limb anomalies, and hypertelorism with underdeveloped globes, which could reflect microphthalmia.
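The pathogenicity call above rests on the variant being frameshifting: a deletion whose length is not a multiple of three shifts the reading frame downstream of the deletion and typically introduces a premature stop codon, flagging the transcript for nonsense-mediated decay. A toy check of that arithmetic (illustrative only; no actual SMOC1 sequence is used, and the helper function is a hypothetical name):

```python
def is_frameshift(deleted_bases: int) -> bool:
    """A deletion shifts the reading frame unless its length is a multiple of 3."""
    return deleted_bases % 3 != 0

# c.339_340del removes 2 nucleotides (coding positions 339-340 inclusive).
deleted = 340 - 339 + 1
print(deleted, is_frameshift(deleted))  # 2 True -> frameshift, cf. p.(Phe114Cysfs*40)
```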
In addition, the fetus was found to be a compound heterozygote for two previously published CYP21A2 pathogenic variants (Table 2) that are associated with the non-classical form of congenital adrenal hyperplasia (CAH) [5], which does not present prenatally. Because of the presence of the pseudogene, long-range PCR testing was utilized to confirm that both variants were present in the functional gene. This was therefore deemed to be an incidental finding, which was agreed to be of clinical significance following multidisciplinary team discussions, and was reported. Both genetic findings had implications for future offspring and provided the option of prenatal testing and carrier testing for appropriate family members following genetic counseling.

Pregnancy outcome

The parents were committed to the pregnancy, and a further scan at 24 + 0 weeks showed bilateral severe micro/anophthalmia, mild micrognathia, a cross-fused left kidney, and lower limb shortening and bowing. The parents were offered counseling and continued ultrasound surveillance; a fetal brain MRI at a later gestation was recommended; however, the pregnancy ended with fetal demise. After birth, external examination confirmed significant shortening and bowing of the left leg and possibly a missing toe, and the eyes could not be visualised. The parents declined a full post-mortem.

Discussion

The future offspring of this couple are at 1 in 4 risk of being affected by the SMOC1-related disorder or CYP21A2-related CAH, and at 1 in 16 risk of being affected by both conditions. Following genetic counseling, the couple opted for invasive prenatal testing in their next pregnancy, in which the fetus was an unaffected carrier of both conditions. This case provides additional information on the prenatal phenotype of SMOC1-related disorders, with features in common with the prenatal phenotype previously reported [3]. The additional scan findings seen at 24 + 0 weeks were consistent with the diagnosis, as bilateral severe micro/anophthalmia and limb anomalies were detected. The cross-fused left kidney seen in this case has not specifically been reported in affected individuals, although horseshoe kidneys have been seen on imaging in 4/8 probands with biallelic SMOC1 variants [6]. This is considered a similar anomaly to the cross-fused kidney seen in this case and is consistent with the phenotype reported for this condition in 25 families [6]. This case also highlights that any type of biallelic variant can be seen in consanguineous couples, and the importance of considering all inheritance patterns when analysing trio exome data. It also demonstrates how exome sequencing can reveal unexpected findings which may not be related to the scan features but can have other health implications for the baby and future pregnancies.
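The recurrence figures quoted above follow directly from Mendelian arithmetic: for each autosomal recessive condition with both parents heterozygous carriers, each pregnancy carries an independent 1-in-4 chance of being affected, and the joint risk of both conditions is the product of the two. A minimal sketch of this calculation (the 1/4 fractions come from the report; the joint and combined risks are simple consequences, shown for illustration):

```python
from fractions import Fraction

# Each autosomal recessive condition: both parents are heterozygous carriers,
# so each pregnancy has a 1-in-4 chance of inheriting two pathogenic alleles.
# The two loci are assumed to segregate independently.
risk_smoc1 = Fraction(1, 4)
risk_cah = Fraction(1, 4)

# Probability the next pregnancy is affected by BOTH conditions:
# independent events multiply.
risk_both = risk_smoc1 * risk_cah  # 1/16, as stated in the Discussion

# Probability of being affected by AT LEAST ONE of the two conditions:
risk_either = risk_smoc1 + risk_cah - risk_both  # 7/16

print(f"SMOC1-related disorder: {risk_smoc1} ({float(risk_smoc1):.0%})")
print(f"CYP21A2-related CAH:    {risk_cah} ({float(risk_cah):.0%})")
print(f"Both conditions:        {risk_both}")
print(f"At least one condition: {risk_either}")
```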
FIGURE 1 Scan images showing (A) short, angulated right tibia; (B) short, bowed left tibia and fibula; (C) small orbits; and (D) micrognathia. [Colour figure can be viewed at wileyonlinelibrary.com]

Parental heights: mother 165.1 cm and father 167.6 cm, limbs proportional with no deformities and no history of fractures. Pregnancy loss: fetal Doppler changes were seen on ultrasound in the week preceding the demise, and the patient was seen a week later with reduced fetal movements; an intra-uterine demise was diagnosed at 25 + 1 weeks. In keeping with a significant genetic diagnosis associated with structural anomalies, the patient was counseled about increased risks of in-utero demise. The anomalies themselves do not directly explain the demise, but this outcome was clearly a possibility in this context, and perinatal or early postnatal death has been previously reported (10/35 cases) with SMOC1-related disorders, consistent with this outcome.
Solar Photocatalysis for Emerging Micro-Pollutants Abatement and Water Disinfection: A Mini-Review

This mini-review article discusses the critical factors that are likely to affect the performance of solar photocatalysis for environmental applications and, in particular, for the simultaneous degradation of emerging micro-pollutants and the inactivation of microbial pathogens in aqueous matrices. Special emphasis is placed on the control of specific operating factors, such as the type and form of the catalysts used throughout those processes, the intriguing role of the water matrix, and the composition of the microbial load of the sample in each case. The interplay among the visible-light-responsive catalyst, the target pollutants/pathogens (including various types of microorganisms) and the non-target water matrix species dictates performance in an unpredictable and case-specific way. Case studies referring to lab- and pilot-scale applications are presented to highlight such peculiarities. Moreover, current trends regarding the elimination of antibiotic-resistant bacteria and resistance genes by means of solar photocatalysis are discussed. The dispersion of antibiotic resistance into the aquatic environment, and how advanced photocatalytic processes can eliminate antibiotic resistance genes in microbial populations, are documented, with a view to investigating the prospect of using those purification methods to control the resistant microbial populations found in the environment. Understanding the interactions of the various water components (both inherent and target species) is key to the successful operation of a treatment process and to its scaling up.

Introduction

The current trends in water and wastewater treatment are focused on the development and exploration of environmentally friendly and low-cost technologies. The occurrence of emerging micro-pollutants in the aquatic environment, as well as the presence of various pathogenic microorganisms, call for the application of effective purification methods in order to maintain high hygiene standards and to protect public health. In this context, advanced oxidation processes (AOPs) have been well studied during the last decades and have proven to be quite promising for the chemical treatment and disinfection of aqueous samples [1][2][3]. The beneficial action of AOPs is attributed firstly to the in situ generation of highly reactive oxygen species (ROS), such as hydroxyl radicals (HO•, E0 = 1.8-2.7 V), which have the potential to mineralize various organic contaminants contained in waters that are classified as bio-recalcitrant [4]. Also, they are capable of causing oxidative stress to target microorganisms, exhibiting remarkable biocidal action, as they can lead them to irreversible inactivation [3,5,6].

Encountering the Challenge

TiO2 Photocatalysis

Emerging micro-contaminants such as pharmaceuticals and endocrine disruptors are treated ineffectively in conventional wastewater treatment plants (WWTPs), where they are only partially removed through sorption onto the activated sludge, hydrolysis, and biodegradation; because of the low concentrations of these micro-contaminants, at the ng/L to µg/L levels, WWTP operators have not paid particular attention to removing such compounds. Regarding drinking water supply companies, the use of granular activated carbon, alone or combined with ozone, a traditional technique for removing pesticides from waters, can also be effective for other micro-contaminants [17].
The vast majority of solar photocatalytic studies highlight the use of titanium dioxide as an effective catalyst for the degradation of a wide range of emerging micro-pollutants and the destruction of microorganisms [18]. The advantages of heterogeneous semiconductor photocatalysis using TiO2 include its operation at ambient conditions, while among the assets of the catalyst are its low cost, photochemical stability, structural properties, and the fact that it is non-toxic [4]. However, the excitation of this semiconductor requires exposure to irradiation with energy greater than its high band-gap energy (~3.2 eV). This feature makes titania active mainly in the UV spectral range, which is a small fraction of solar light [19]. Nevertheless, and despite this limitation, numerous studies have investigated the efficiency of pure titania regarding the oxidation of chemical compounds and the inactivation of pathogens under solar light (Tables 1 and 2) [3,20]. Fanourgiakis et al. (2014) studied the simultaneous elimination of the synthetic estrogen 17α-ethynylestradiol (EE2) and inactivation of Escherichia coli in wastewater, applying simulated solar light and TiO2. According to their findings, the removal rates of EE2 and the bacterium were quite satisfactory, underlining the possible use of pure titania for wastewater purification under solar irradiation [21]. Other attempts at the degradation of emerging micro-pollutants are reported in investigations dealing with antibiotics. Carbajo et al. (2016), who studied the degradation of multiple antibiotics in water, recorded firstly the effectiveness of TiO2 upon exposure to solar light and, secondly, the dependence of the catalyst's activity on the concentration of organic pollutants. The total removal of various pharmaceutical compounds occurred in very short periods of time (t30W < 35 min), revealing the beneficial use of titania under specific operational conditions [13]. Similarly, Méndez-Arriaga et al. (2009) used TiO2 for the removal of ibuprofen from water, but noted an overall enhancement of the process when adding H2O2. Nevertheless, the degradation of ibuprofen was significant with TiO2 alone, independently of the solar device employed [22]. The biocidal effect of UVA irradiation has long been documented and is attributed to its absorption by cellular components called chromophores, causing further damage and oxidative stress to the microorganisms [4,54]. During photocatalysis, the progressive generation of ROS results in detrimental effects on microbial components and on the cellular layers, beginning from the cell wall and the outer membrane. Afterwards, the lesions expand toward the inner proteins (enzymes) and the nucleic acids (genetic material) [43,55]. The primary species responsible for microbial destruction is the hydroxyl radical (HO•), followed by the superoxide radical anion (O2•−), the hydroperoxyl radical (HO2•), and hydrogen peroxide (H2O2) [56]. Selected applications of TiO2 for solar disinfection purposes may be seen in Table 2. The bactericidal action of titania proved to be significant when E. coli was the target organism, either in water or in wastewater [6,52]. Catalyst concentrations up to 0.5 g/L seem to be sufficient for the complete removal of the bacterium at an initial concentration of 10^6 CFU/mL.
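The ~3.2 eV band gap quoted above translates directly into the absorption edge that limits pure titania to the UV tail of sunlight, via λ (nm) ≈ 1240/Eg (eV). A quick sketch of the conversion (the TiO2 and ZnO band gaps are the values cited in this review; the snippet itself is illustrative):

```python
# Absorption-edge wavelength from band-gap energy: lambda = h*c / E_g.
# With h*c expressed in eV*nm, h*c ~= 1239.84 eV*nm.
HC_EV_NM = 1239.84

def absorption_edge_nm(band_gap_ev: float) -> float:
    """Longest wavelength (nm) a semiconductor can absorb."""
    return HC_EV_NM / band_gap_ev

for name, eg in [("TiO2 (anatase)", 3.2), ("ZnO", 3.37)]:
    edge = absorption_edge_nm(eg)
    region = "UV" if edge < 400 else "visible"
    print(f"{name}: Eg = {eg} eV -> edge ~ {edge:.0f} nm ({region})")

# TiO2 gives an edge near 387 nm, i.e., only the small UV fraction of the
# solar spectrum can excite it, which motivates the visible-light-active
# doped and composite catalysts discussed below.
```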
Comparable performance has been observed with other microorganisms as well, such as fungi (Fusarium species) [45,52] and heterotrophic bacteria in dairy wastewater [43]. During recent years, though, the trend in the broader solar photocatalytic area has been to develop and explore newly synthesized materials that could potentially serve as efficient catalysts for the process. The general concept is to improve the activity of titania and to expand its absorption spectrum toward the visible light region. Screening the current literature, various materials emerge that show promising performance in terms of the elimination of emerging micro-pollutants and waterborne pathogens (Tables 1 and 2). Many different strategies have been adopted for either morphological or chemical modification of the catalyst [57,58]. Those include modifications of the TiO2 surface with noble metals or other semiconductors, or the incorporation of additional components into the catalyst structure, such as non-metal and/or noble and transition metal deposition [57]. The performance of modified titania is greatly improved under simulated and natural solar light, and better removal rates are achieved for various contaminants/pathogens. In this perspective, several attempts have been made using doped-titania materials for the degradation of hazardous and emerging micro-contaminants in water and wastewater. For example, Dimitroula et al. (2012) addressed the removal of bisphenol-A (BPA) and 17α-ethynylestradiol (EE2) from wastewater, using various TiO2 photocatalysts doped with N, P, Ca, Ag, Na, K, and Pt. The overall photoactivity of modified titania under visible light was enhanced, but the treatment performance was not improved substantially [2]. This outcome verified that modified titania does not always work well under the operating conditions of each case. Besides, many possible limitations of metal-doped titania materials have been reported, such as photo-induced corrosion and promoted charge recombination at some metal sites [57,59]. On the other hand, more applications have been reviewed in recent studies regarding the use of doped titania in disinfection processes. For instance, various metal-doped TiO2 catalysts, such as Fe-, Mn-, Co- or Al-TiO2, have been used successfully for the inactivation of bacteria (E. coli, Klebsiella pneumoniae, Staphylococcus aureus) and viruses (bacteriophage MS2) in water and wastewater [19,41,47]. In all those cases, microbial inactivation was 2-3 times faster than that occurring with pristine P25 TiO2. The improved activity of metal-doped titania was credited to the shift of the optical absorption toward the visible region and to the delayed recombination of the electron-hole pair. Also, Sreeja et al. (2017) investigated the performance of Ag core-TiO2 shell-structured (Ag@TiO2) nanoparticles and found that those catalysts were quite efficient for the inactivation of E. coli in water under solar light irradiation [42]. Complete disinfection (an 8 Log reduction of the bacterial population) was achieved within 15 min of treatment at a 0.4 g/L Ag@TiO2 catalyst loading. Interestingly, similarly promising results were obtained testing other species, such as Bacillus cereus, with N-doped TiO2 photocatalysts prepared with various nitrogen precursors (urea, triethylamine (TEA), and NH3) [9]. Although B. cereus exhibited high resistance, the N-doped TiO2 catalysts were more active than pure titania in water samples under simulated solar irradiation.
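Disinfection performance throughout this review is quoted in "Log reductions". The figure is simply log10(N0/N), where N0 and N are the viable counts (e.g., CFU/mL) before and after treatment, so the "8 Log reduction" above means the population fell by a factor of 10^8. A minimal illustration (the counts below are made-up inputs, not data from the cited studies):

```python
import math

def log_reduction(n0: float, n: float) -> float:
    """Log10 reduction value between initial and final viable counts."""
    return math.log10(n0 / n)

# Hypothetical example: 1e6 CFU/mL reduced to 10 CFU/mL.
print(log_reduction(1e6, 1e1))  # 5.0 -> a "5 Log" reduction

# "Complete" inactivation is usually reported against the detection
# limit of the plating method, since log10 of zero is undefined:
detection_limit = 1.0  # CFU/mL (assumed)
print(log_reduction(1e8, detection_limit))  # 8.0, cf. the 8 Log case above
```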
Generally, the use of modified titania, at least in the case of water/wastewater disinfection under solar irradiation, seems to be faster than other treatment techniques, highlighting the competitive nature of the proposed process against more conventional disinfection systems.

Slurry or Immobilized Catalysts?

One of the major concerns, or debates, refers to the choice of the catalyst form to be used for environmental applications. Catalysts in the slurry phase are well known for their effective performance and rather popular; however, such processes require further treatment steps to remove the catalyst from the treated sample (water or effluent). The other option is to immobilize the catalyst onto appropriate surfaces, obviating the need for post-treatment handling [37]. Nevertheless, even then, other issues affect the oxidation process, such as the decrease of the catalyst surface area available for the photocatalytic reactions [60]. This feature results in lower degradation rates of chemical compounds and slower inactivation of microbial pathogens in aqueous matrices when immobilized catalysts are employed, compared with suspended systems [61]. The choice of the catalyst form should be weighed carefully, based on the treatment that is to be applied and on the special requirements of each case (type of pollutant/microorganism, initial concentration, water matrix, etc.). Salaeh et al. (2016) investigated the possibility of removing diclofenac from water using an immobilized TiO2-based zeolite composite photocatalyst (TiO2-FeZ) and simulated solar light. Diclofenac was removed by 80.1% after 15 min of exposure, with adsorption of the pharmaceutical playing the most significant role in the overall treatment efficiency [26]. Also, in another study, TiO2 supported on glass beads was tested for the tertiary treatment of residual pesticides, achieving removal rates over 90%, but only with the additional contribution of hydrogen peroxide as an electron acceptor [37]. Respective attempts have been made in the field of disinfection. Khan et al. (2012) worked with a thin-film fixed-bed reactor (TFFBR) for the inactivation of the aquaculture pathogen Aeromonas hydrophila, demonstrating that high sunlight intensities (>600 W/m2) and low flow rates (4.8 L/h) play a key role in the inactivation of this fish pathogen [20]. Another study achieved a 6 Log reduction of E. coli within 90 min, using TiO2 immobilized on Ahlstrom paper in a compound parabolic collector (CPC) reactor, highlighting that low flow rates contribute to a more efficient photocatalytic disinfection [50]. In an attempt to improve the photocatalytic activity when TiO2 films are used, and to counterbalance any loss that may occur, many researchers propose the application of an external electric bias. Dunlop et al. (2008), who worked with spores of Clostridium perfringens and TiO2/Ti films (working electrode), proved that applying an external bias of 1 V led to 60-70% higher inactivation rates, whereas when no bias was applied the disinfection efficiency was inadequate [62]. Based on their research, the potential gradient forces the electrons toward the cathode, thus minimizing the rate of electron-hole recombination.

Photocatalysts Other than TiO2

Titania nanoparticles and their composites show remarkable results during solar photocatalysis of water and wastewater.
In particular, the metal- and non-metal-doped nanoparticles have been used extensively for multiple applications, demonstrating promising prospects for a "clean and green" aquatic environment. Nevertheless, we should not overlook some other semiconductors that have emerged as alternative approaches in this field of treatment and disinfection. Zinc oxide nanoparticles, with a wide band gap of 3.37 eV, appear to be a good option, considering some recorded assets, such as good optoelectronic, piezoelectric, and catalytic properties [28]. However, photo-corrosion may worsen the performance of ZnO, causing limited stability. Therefore, some researchers have tested the use of supplementary materials as support for ZnO nanoparticles. For example, ZnO-supported clays have been prepared for photocatalytic applications, like ZnO/sepiolite heterostructured materials. Akkari et al. (2018) used those composites for the solar photocatalytic degradation of pharmaceuticals in wastewater. According to their findings, ibuprofen, acetaminophen, and antipyrine were readily degraded in wastewater, indicating the superiority of those materials compared with other catalysts used for solar photocatalysis [28]. ZnO nanocomposites have also been used successfully for the disinfection of various bacterial species like E. coli, Vibrio cholerae, and multi-drug-resistant Bacillus sp. [5,63,64]. Given that the solar photocatalytic activity of metal oxide nanostructures is increased by the formation of metal/metal oxide hybrid structures, Das et al. (2015) synthesized Ag@ZnO core-shell nanocomposites and tested their potential to inactivate V. cholerae in water. The results showed that this highly pathogenic bacterium may be reduced by up to 98% after 40-60 min of sunlight exposure with a catalyst loading of 0.5 mg/L [5]. The same group worked with Ag@SnO2@ZnO core-shell nanocomposites and Fe-doped ZnO nanoparticles as well, studying their biocidal properties against Bacillus sp. and E. coli, respectively. In both cases, the synthesized materials exhibited satisfactory performance in terms of the inactivation of pathogens in water (Tables 2 and 3) [63,64]. In all those cases, the catalysts had a stable structure and no silver leaching was observed. Further attempts have been made to explore more catalysts with acceptable solar performance. In this sense, cadmium sulfide (CdS) seems to be quite effective for the disinfection of aqueous matrices with high concentrations of E. coli and S. aureus under visible light [65]. Silver orthophosphate (Ag3PO4) is a low band-gap photocatalyst with enormous potential for harvesting solar energy. Importantly, this catalyst is characterized by a low electron-hole recombination rate, but also by low long-term stability, as it decomposes in the absence of a sacrificial agent [66]. In this case, leaching of silver into the liquid phase may contribute to disinfection through homogeneous reactions. This drawback may be surpassed by synthesizing various Ag3PO4-based composites. Ag3PO4 and Ag3PO4/TiO2 materials have the potential to achieve good inactivation rates of E. coli under solar irradiation, while other studies present the disinfection efficiency of several Ag3PO4/TiO2 composites against multiple pathogens [67][68][69]. Among the numerous visible-light-active photocatalysts, bismuth vanadate (BiVO4) has received attention, despite the fact that very few water disinfection studies have been reported.
Its activity alone is not that significant, as the recombination of the photo-induced electron-hole pairs is very fast. Metal deposition on the surface of the catalyst seems to overcome this drawback, leading to enhanced activity under solar light. In this perspective, silver deposition on the surface of BiVO4 made this catalyst capable of inactivating three waterborne pathogens, namely E. coli (a Gram-negative bacterium), Enterococcus faecalis (a Gram-positive bacterium) and spores of Fusarium solani (a phytopathogen), under natural sunlight [12]. Finally, one more catalyst reported in the current literature is Bi2WO6, which has the advantage of absorbing more solar photons. This catalyst has the potential to accelerate the bactericidal action of solar irradiation, given that a concentration of 0.5 g/L is sufficient for a 6 Log reduction of E. coli in water within 105 min [6].

Heterogeneous Photo-Fenton Systems

Among the AOPs applied for water and wastewater treatment, the photo-Fenton process has become very popular as an eco-friendly choice for the mineralization of organics and microbial inactivation. This process takes place in the presence of ferrous or ferric salts and hydrogen peroxide in acidic media, and hydroxyl radicals are generated through the Fe2+/Fe3+ redox cycle. The production of hydroxyl radicals is greatly enhanced under UV-vis irradiation, as the transformation of Fe3+ to Fe2+ is promoted. The main challenge when applying this method is to operate at neutral or near-neutral conditions, and not in the pH range of 2.5-3.5, which is optimal for this AOP [70]. The latter pH range is prohibitive for environmental applications, and further actions would need to be taken post treatment, prior to the disposal of treated streams into aquatic bodies (e.g., neutralization). In this view, current research studies have proposed heterogeneous Fenton-like systems, which operate efficiently at neutral or near-neutral conditions (Table 2). New organic and inorganic supports have been tested for the catalysts used in photo-Fenton processes, especially biopolymers like sodium alginate, which is biocompatible, inexpensive, and can easily be assembled into spherules or beads. Barreca et al. (2015) synthesized iron-enriched montmorillonite alginate beads for the inactivation of E. coli and recorded a 7 Log reduction at pH 7 after 60 min under solar irradiation with 10 mg/L H2O2 [44]. Other materials have also served as efficient catalysts for the removal of MS2 coliphage from water at neutral conditions [48]. This phage was inactivated successfully in water in the presence of hematite (α-Fe2O3), goethite (α-FeOOH), and magnetite (Fe3O4) under solar light, and all materials exhibited stability with negligible iron leaching. Promising results have also been obtained for wastewater treatment by means of the photo-Fenton process. De la Obra Jiménez et al. (2019), who worked with raceway pond reactors, observed the total inactivation of total coliforms, E. coli and Enterococcus sp. in secondary wastewater effluents, in continuous flow and at neutral pH, within 60 min in the presence of 50 mg/L H2O2 [53]. Similar studies can be found regarding the degradation of emerging micro-pollutants (Table 1). Solar photo-Fenton reactions are capable of removing endocrine disruptors (EDCs) and various antibiotics from water and wastewater, either alone or combined with other processes [29,31,33,38].
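The Fe2+/Fe3+ redox cycle mentioned above is conventionally summarized by the dark Fenton step and its photo-assisted regeneration step; writing the two reactions out makes it clear why UV-vis irradiation enhances radical production (standard textbook Fenton chemistry, not specific to the cited studies):

```latex
\begin{align}
\mathrm{Fe^{2+} + H_2O_2} &\rightarrow \mathrm{Fe^{3+} + OH^- + HO^{\bullet}} \\
\mathrm{Fe^{3+} + H_2O} + h\nu &\rightarrow \mathrm{Fe^{2+} + H^+ + HO^{\bullet}}
\end{align}
```

Both steps generate hydroxyl radicals, and the photochemical step closes the cycle by regenerating Fe2+, which is why irradiated systems sustain radical production far longer than dark Fenton systems at the same iron dose.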
For instance, Sirtori et al. (2009) investigated the degradation rate of nalidixic acid, which belongs to the quinolone group of antibiotics, by means of photo-Fenton and biological treatment. Photo-Fenton was found to successfully enhance the biodegradability of the wastewater, acting as a supplementary technique to an immobilized biomass reactor in order to achieve mineralization and detoxification of industrial wastewater [31]. Moreover, Soriano-Molina et al. (2019) accomplished the removal of 80% of the concentration of chemicals of emerging concern from wastewater after 15 min of photo-Fenton at circumneutral pH in solar raceway pond reactors [38]. Based on all those results, heterogeneous photo-Fenton systems at neutral pH seem to be a feasible solution for water/wastewater treatment, with acceptable results and without causing disturbance or toxicity to the surrounding environment.

Transformation By-Products

Solar photocatalysis of contaminants may result in the formation of transformation by-products (TBPs) that are less biodegradable and/or more toxic than the original compound. This is more likely to happen if the experiments are performed in environmental matrices rather than pure water (as is mainly the case for the studies shown in Table 1), since less biogenic TBPs may also be generated from photocatalytic transformations involving the non-target species inherently present in the matrix (i.e., the effluent organic matter typically found in treated wastewaters and the natural organic matter found in groundwaters) [31]. The level of toxicity induced by the generation of by-products is often unpredictable and sometimes related to the duration of the process. In short treatments, the toxicity usually decreases gradually in the course of the photodegradation [4]. The effect of photocatalysis on the properties of the effluent is usually assessed by means of biodegradability and/or toxicity tests. The standard BOD (biochemical oxygen demand) test is commonly employed as a measure of aerobic biodegradability, which is also assessed by means of shake flask tests, respirometry, and the Zahn-Wellens test [24]. Anaerobic biodegradability tests are less popular and usually measure the rate of biogas production. Acute toxicity is usually assessed against freshwater and marine microorganisms, and the results are usually quoted in the form of EC50 values [26]. It should be pointed out that the identification of TBPs, although conceptually advantageous, may not be feasible even when sophisticated analytical tools are available. This is due to the fact that the concentration of micro-contaminants may be 2-3 orders of magnitude lower than that of the organic and inorganic non-target matrix components; therefore, interferences mask the presence of TBPs in the matrix [31].

The Intriguing Role of the Water Matrix

The water matrix mainly used in research studies dealing with AOPs and water/wastewater treatment is ultrapure water. This choice is based primarily on the need to gain a fundamental understanding of the process (degradation kinetics, mechanisms, and pathways) without accounting for the interference of the water matrix effect. Nevertheless, the latter may be extremely influential on the overall performance of each technique and has the potential to lead to unreliable conclusions. It is well established that a high level of water matrix complexity causes deterioration of the efficacy of AOPs.
This occurs because the pollutants/microorganisms and the constituents of the matrix (e.g., dissolved organic matter, inorganic constituents, etc.) compete for the generated ROS, or for the active sites of the catalysts/activators when heterogeneous processes are applied [71]. In this sense, for example, in a case of sulfamethoxazole degradation using solar photocatalysis over WO3/TiO2 suspensions, the pseudo-first-order kinetic constant decreases as the matrix shifts from ultrapure water to drinking water (DW: containing bicarbonates and other ions) and finally to secondary treated wastewater (WW: containing residual organics and various ions) [72]. On the other hand, exactly the reverse behavior may take place under different operating conditions and when other contaminants are to be degraded, such as bisphenol-A (BPA); the highest rates of BPA degradation are recorded when the sample is wastewater, compared with other matrices like ultrapure water [73]. Apparently, the target micro-pollutants/microorganisms to be degraded/inactivated, the constituents of the matrix, the ROS, and the catalysts/activators, if present, develop tricky and challenging interactions with unpredictable results. Eventually, the nature of those interactions defines the reaction kinetics and mechanisms through the synergy or antagonism that may be generated. Moreover, the relative contribution of each individual effect may depend on the specific treatment system in question and, for a certain system, on the specific operating conditions. Nevertheless, several cases underline the fact that the effect of the water matrix on photocatalytic disinfection/degradation is case specific. The mechanisms and kinetics of photocatalytic disinfection are highly affected by the presence of inorganic ions (e.g., bicarbonates, chlorides, etc.), organics (e.g., natural organic matter (NOM)) and suspended solids. Those components aid the resistance of microorganisms, as they act as physical shields that interfere with the whole process [74]. That is why wastewater has always been treated as an aqueous matrix deserving special attention, given its complexity and intrinsic features. Zuo et al. (2015) reported the deterioration of the photocatalytic disinfection of E. coli due to the presence of ammonia and nitrites in the matrix. The overall effect was attributed to the partial consumption of hydroxyl radicals during the conversion of inorganic nitrogen to nitrates [75]. Similar observations were made by Marugán et al. (2010), who recorded the unfavorable effect of carbonates, phosphates, and humic acid on the inactivation of E. coli [76]. However, they highlighted the positive effect of chlorides on disinfection, which may, though, further contribute to the production of toxic organochlorinated by-products. The latter may counterbalance the loss of hydroxyl radicals, leading to an improvement of disinfection efficiency. What was even more surprising was that the same components seemed to slow down the photocatalytic degradation of dyes, making the whole issue of "the water matrix effect" rather a "brain teaser" with an unpredictable outcome. The main suggestion in the literature is the careful standardization of the operating conditions in each case, based on the special features of the chemical pollutants and microbial pathogens contained in the sample.
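Matrix effects like the one above are usually quantified by comparing apparent pseudo-first-order rate constants, obtained from ln(C0/Ct) = k_app·t. A small sketch of how such a constant is extracted from concentration-time data (the numbers are synthetic, for illustration only):

```python
import numpy as np

# Synthetic concentration-time data for a micro-pollutant (C in uM, t in min).
t = np.array([0.0, 5.0, 10.0, 20.0, 30.0])
c = np.array([10.0, 7.2, 5.1, 2.6, 1.3])

# Pseudo-first-order model: ln(C0/Ct) = k_app * t.
y = np.log(c[0] / c)

# Least-squares slope through the origin gives k_app.
k_app = float(np.sum(t * y) / np.sum(t * t))
half_life = np.log(2) / k_app

print(f"k_app ~ {k_app:.3f} 1/min, t1/2 ~ {half_life:.1f} min")
# Repeating the fit in ultrapure water, drinking water, and treated
# wastewater quantifies the matrix effect as a ratio of k_app values.
```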
Type of Waterborne Pathogens Tested in Solar Photocatalysis

Water and wastewater contain a remarkably extensive variety of microorganisms, belonging to different groups with diverse structures and features. These inevitably affect the microbial response and overall behavior during a disinfection process, as well as the specific mode of inactivation. According to the recent literature, many studies have been conducted to provide insight into the principles and mechanisms of microbial inactivation. However, there is still a lot to be revealed and clarified. Screening the published data, it is quite obvious that most of the disinfection studies related to solar photocatalysis focus on the investigation of bacterial species and spores (Table 2), leaving out other virulent pathogens that are important to public health. What is more, although multiple bacterial species are contained in water and wastewater, the one almost always employed in disinfection applications is the well-known E. coli [6,42,44,51]. Nevertheless, focusing on just one bacterial indicator poses the risk of extracting biased conclusions about the effectiveness of solar photocatalytic applications. The extent to which cell (or other) damage occurs varies greatly, depending on the type of microorganism tested each time. In the case of bacteria, the level of damage and cell permeability caused by ROS is defined, among other parameters, by the thickness of the cell wall. The main differences are identified between Gram-positive and Gram-negative species, as the former possess a thick cell wall that contains many layers of peptidoglycan and teichoic acids. Those components provide the potential to preserve viability during photocatalytic treatment, as the penetration of free radicals is rather obstructed [40]. However, the higher resistance of Gram-positive bacteria is not always confirmed, as the operational conditions and the bacterial indicators employed in each case may reverse this order of precedence [9]. In this sense, there are cases where high catalyst concentrations, up to 300 mg/L, may be required for the complete inactivation of Gram-negative bacteria [77]. The role of cell wall structure and complexity in the overall behavior of bacteria during photocatalysis is still under investigation, and many parameters are yet to be explored. It is commonly accepted, though, that the disinfection efficiency of a process should be assessed using representative indicators of both groups of bacteria, in order to obtain reliable and accurate results and an objective overview of the limits of the process. Another issue under consideration is the cellular form of the target microorganism. For instance, some pathogenic bacteria are found in the aquatic environment in the form of endospores, which are considered highly resistant under the stress conditions of disinfection. Endospores contain a thick protein coating and usually require prolonged treatment and exposure to solar irradiation. García-Fernández et al. (2015) studied the effect of microorganism type on solar photocatalytic treatment and found that vegetative cells are much more sensitive than spores. In that specific case, Fusarium spores (a fungus) were tested, which showed remarkable resistance to TiO2 photocatalysis due to rigid structures composed of polymeric sugars, proteins, and glycoproteins.
Also, their wall contains an outer xylan layer that confers significant resistance to oxidative stress [52]. In another case, Clostridium perfringens spores, with a dipicolinic acid-calcium-peptidoglycan complex, could be harmed only by hydrogen peroxide, which can be further activated by the ferrous ion incorporated into the spore coating; this process is called the in vivo Fenton reaction [62]. The waterborne protozoa constitute another group of pathogenic microorganisms, found in aquatic bodies in the resistant form of cysts/oocysts. Cryptosporidium parvum and Giardia lamblia are considered very virulent, with extremely low infectious doses, and yet they have not been mentioned frequently in the literature in relation to disinfection techniques. Generally, both protozoan species show significant tolerance to conventional methods, like chlorination, but also to many AOPs [74]. Oocysts of C. parvum require up to 5 h for substantial decay and removal from distilled water during TiO2 solar photocatalysis [10]. Moreover, the authors stated that, because of the robustness of the oocysts, C. parvum's inactivation would probably ensure the elimination of other, less resistant pathogens. Even if oocysts remain as residual microorganisms after treatment, they are not considered infective, as excystation occurs with the subsequent release of sporozoites. The combination of solar light with a catalyst causes destruction of the oocyst cell walls, and the final picture is one of empty cells characterized as "ghosts," which remain after the process [70]. Much less research has been conducted on the photocatalytic inactivation of viruses, whose significant presence in the aquatic environment verifies their resistant nature and their tolerance of conventional disinfection methods. Studies have demonstrated the existence of such viruses in treated effluents, highlighting the inadequacy of conventional purification methods [78]. Viruses are traditionally known to maintain their structural properties and infectivity under hostile conditions in the surrounding environment [79]. Upon application of a photocatalytic process, viral inactivation may occur only when substantial oxidizing power is provided, which is necessary for the deformation of the protein capsid and the development of lesions in the nucleic acid. The absence of enzymes or other typical cellular structures leaves the capsid and the genetic material as the only targets of the ROS generated during AOP techniques [47]. Viral adsorption and general adherence onto the catalyst nanoparticles is the first step of inactivation in photocatalytic processes, followed by the attack on the protein capsid and other binding sites of the viruses [80]. On the other hand, certain studies have proposed a different mode of action and mechanism of photocatalysis against viruses: what mostly occurs is the interaction between free hydroxyl radicals in the bulk phase and the viruses, as electrostatic repulsion does not allow close contact between the catalyst and the virus [46]. The application of a positive potential to an immobilized TiO2 electrode may induce an electrostatic attraction between the catalyst and the viral capsid, which is mostly negatively charged. Also, Fenton's reagent and metal-doped titania seem to successfully eliminate MS2 coliphages, as reductions of up to 5 Logs may occur within 60 min of treatment [47].
The final target of ROS in the course of photocatalysis is the genetic material of microorganisms and viruses (DNA or RNA). Nucleic acids are rather susceptible to the produced oxidative power through attacks either at the sugar or at the base [81]. Damage and lesions in the microbial genetic material are subject to restoration in some bacterial species, according to their properties. This feature, the so-called "photoreactivation," is the main disadvantage of photocatalytic treatment and, generally, of processes that utilize UV irradiation. Some bacteria have the potential to repair destruction sites or "mistakes" in their genetic material through special enzymatic activity. Such enzymes act mainly under light (300-500 nm) and split the dimers formed as a consequence of irradiation [82]. Although restoration activity usually takes place after exposure to UV-C irradiation, it has also been reported when UV-A is employed, involving not only bacteria but other microbes, like protozoan cysts [83]. Therefore, bearing in mind that solar photocatalytic processes have no residual action, it is crucial to design such applications properly, in order to ensure the durability of the disinfection and the inability of waterborne pathogens to proliferate post treatment. Catalyst loading, light energy, and time of irradiation are some of the parameters that must be defined and standardized properly to cause irreversible damage to microbial components and structures. The possibility of microbial reactivation always remains, but it should be minimized for the protection of public health if solar photocatalysis is to be applied for water disinfection purposes.

Antibiotic-Resistant Bacteria (ARB) and Antibiotic Resistance Genes (ARGs)

A special group of microorganisms contained in water and wastewater comprises the antibiotic-resistant bacteria (ARB), which have already attracted much scientific attention. The effective application of a disinfection process should always address this specific microbial group, as it raises many concerns about human health. The uncontrolled use of antibiotics in medical, veterinary, and even agricultural practices, and their incomplete removal in WWTPs, has led to their continuous release into the environment, resulting in an excessive rise of antibiotic resistance in various bacteria through the dissemination of antibiotic resistance genes (ARGs) [84]. ARB and ARGs seem to prevail in the aquatic environment, inducing further resistance within microbial communities, and they have also been documented as emerging contaminants. Many different kinds of ARGs have already been detected in aquatic systems, including WWTPs, and their presence in effluents verifies their persistence during treatment (Table 3). Water bodies, and particularly WWTPs, are ideal settings for the proliferation of ARB and the dissemination of ARGs through the horizontal transfer of genetic elements, conferring resistance to multiple antibiotic compounds [16]. The main concern is whether current treatment processes and disinfection approaches are capable of removing all ARB and ARGs present in water/wastewater, preventing their revival in effluents. According to the current literature, such revival is quite common, and many multi-drug-resistant bacteria, as well as ARGs, have been detected in the end streams of WWTPs [85,86]. Moreover, in some cases, ARGs increase in the course of treatment, resulting in extremely high concentrations in the effluents [87].
Therefore, what is mostly needed is the establishment and application of effective technologies for the control of ARB and the elimination of ARGs from water/wastewater. Failure to limit their dispersion into the aquatic environment threatens public health and contributes to a further increase of resistant populations. The extent to which treatment and disinfection processes inactivate ARB and eliminate the genes relevant to resistance is still under discussion [88]. The question that arises is to what extent, and under which operational conditions, disinfection eliminates ARB and ARGs. While the risk still exists, solar photocatalysis seems to work well in this direction, providing promising results regarding the inactivation of ARB (Table 3). As already mentioned, this method overcomes many disadvantages of conventional purification processes, such as the toxic by-products of chlorination or certain limitations in the action of UV irradiation, which otherwise have the potential to remove ARB from water and wastewater [89,90]. Doped-titania materials have the potential to sufficiently inactivate antibiotic-resistant E. coli or K. pneumoniae [15,91]. Metal- and non-metal-doped TiO2 under solar irradiation led to up to 6 Log bacterial reductions within 60 min of treatment of urban wastewater. Also, Venieri et al. (2016) studied the possible changes in the antibiotic resistance profile of K. pneumoniae post treatment and found that, in some cases, the residual cells after disinfection were more susceptible to specific antibiotic compounds [15]. The same authors documented the simultaneous loss of K. pneumoniae's ARGs in the course of photocatalysis. Fe-doped ZnO nanoparticles and Ag@SnO2@ZnO core-shell nanocomposites exhibited similar performance, adequately inactivating E. coli and Bacillus sp. in water, respectively [63,64]. Neither bacterium regrew after treatment, and Bacillus sp. lost substantial resistance. Also, comparing the effectiveness of the Ag@SnO2@ZnO core-shell nanocomposites with traditional chemical disinfectants and UV-250 nm irradiation, it was found that they had a lesser impact on the resistance profile of the bacteria. The elimination of ARGs during solar photocatalysis has been underreported in recent studies. Although there are data regarding their prevalence in water and wastewater (Table 3), more information is needed about their response in the presence of a semiconductor and solar light. Furthermore, given that ARGs are mostly carried on bacterial plasmids, special attention should be paid to the persistence of plasmids and their level of integrity during treatment. According to Mao et al. (2015), the optimum removal of ARGs from wastewater requires high irradiation intensities or the combination of UV with a photocatalytic treatment [87]. Up to now, the point of agreement is that wastewater is an important repository of ARGs that needs more effective treatment than conventional applications.

Pilot-Scale Application

Although solar photocatalytic treatment of water and wastewater has been tested successfully in the laboratory, information regarding pilot- or large-scale applications is scarce (Tables 1 and 2). The pilot-scale systems that have mainly been tested are compound parabolic collectors (CPCs) and raceway ponds. Generally, both systems prove to be effective for the removal of persistent micro-contaminants of emerging concern and the elimination of waterborne pathogens. Key aspects for successful water treatment applications are the design and configuration of the photo-reactor.
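Because solar irradiance varies between experiments, pilot-scale results (such as the t30W < 35 min figure quoted earlier for antibiotic removal) are commonly reported against the standardized illumination time t30W, which normalizes the elapsed time to a reference UV irradiance of 30 W/m2 and to the illuminated fraction of the reactor volume. A sketch of the usual bookkeeping (the formula follows the convention commonly used in the CPC literature; the sample numbers are invented):

```python
def t30w(intervals, v_illuminated, v_total, ref_uv=30.0):
    """Standardized illumination time (min).

    intervals: list of (delta_t_min, avg_uv_w_per_m2) per sampling interval.
    Normalizes to a reference solar UV irradiance (default 30 W/m2)
    and to the illuminated-to-total volume ratio of the reactor.
    """
    vol_ratio = v_illuminated / v_total
    return sum(dt * (uv / ref_uv) * vol_ratio for dt, uv in intervals)

# Hypothetical CPC run: three 20-min intervals at varying UV irradiance,
# with 22 L illuminated out of a 35 L total loop volume.
intervals = [(20.0, 28.0), (20.0, 33.0), (20.0, 31.0)]
print(f"t30W ~ {t30w(intervals, 22.0, 35.0):.1f} min for 60 min elapsed")
```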
CPC solar reactors are one of the best approaches for enhancing the efficacy of solar photocatalytic purification and disinfection of water [12]. These reactors are easy to use, cost-effective, and appropriate for point-of-use applications, since they can be constructed in various sizes. Raceway pond reactors were originally developed for micro-algal mass culture and are applied for the degradation of emerging micro-contaminants, like pharmaceuticals, and for disinfection via the solar photo-Fenton process [38,53]. Although they have less efficient optics than CPCs, they have a low construction cost and a large volume-to-surface ratio, which make them a quite competitive option for the treatment of secondary effluents [53]. Recent studies have highlighted the prospect of scaling up solar photocatalytic applications for water and wastewater treatment, considering those pilot-scale reactors as a post-secondary treatment step in WWTPs. This trend was followed by Barwal and Chaudhari, who designed and tested a hybrid bio-solar system combining a moving bed biofilm reactor and a CPC for the purification and disinfection of municipal wastewaters [94]. Based on the above, large-scale applications of solar photocatalysis can serve as an advanced tertiary treatment of wastewater and as an effective disinfection step in the water industry, especially in cases where other techniques are not suitable or feasible.

Future Perspectives

Although several AOPs have demonstrated supreme performance in water/wastewater treatment and disinfection over the last decades, solar photocatalysis is a relatively new area and there is a lot yet to be explored and developed. The challenges are still numerous and many problems have to be overcome; however, the prospect of using solar light and energy combined with newly developed materials stands out as a sustainable alternative for environmental applications. Environmental protection and economic cost are among the most important driving forces for the development of new methods that will remain feasible in the course of time. In this respect, solar processes have all the characteristics and potential to be applied on a routine basis as efficient disinfection/decontamination treatment technologies. Also, they offer an ideal setting for the synthesis of new, environmentally friendly materials that will serve as photocatalysts. Finally, the process scale-up, which has already begun, is a challenging task that will add to the overall science of water/wastewater treatment in an era where public health and environmental protection are the ultimate values for human beings. In a nutshell, water and wastewater purification and disinfection are listed among the topics balanced at the interface of science and engineering, and different disciplines must cooperate to deal with them successfully and constructively.

Funding: This research received no external funding.

Conflicts of Interest: The authors declare no conflict of interest.
Epigenetic reshaping through damage: promoting cell fate transition by BrdU and IdU incorporation

Background: Thymidine analogs have long been recognized for their ability to incorporate randomly into DNA. However, the precise mechanisms through which thymidine analogs facilitate cell fate transition remain unclear.

Results: Here, we discovered a strong correlation between the dosage dependence of thymidine analogs and their ability to overcome the reprogramming barrier. The extraembryonic endoderm (XEN) state seems to be a cell's selective response to DNA damage repair (DDR), offering a shortcut to overcome reprogramming barriers. Meanwhile, we found that the homologous recombination repair (HRR) pathway causes an overall epigenetic reshaping of cells, enabling them to overcome greater barriers. This response leads to the creation of a hypomethylated environment, which facilitates the transition of cell fate in various reprogramming systems. We term this mechanism Epigenetic Reshaping through Damage (ERD).

Conclusion: Overall, our study finds that BrdU/IdU can activate the DNA damage repair pathway (HRR), leading to increased histone acetylation and genome-wide DNA demethylation, thereby regulating somatic cell reprogramming. This offers valuable insights into the mechanisms underlying cell fate transition.

Supplementary Information: The online version contains supplementary material available at 10.1186/s13578-024-01192-x.

Introduction

Thymidine analogs are often considered to incorporate randomly into DNA sequences, introducing inherent unpredictability and randomness into their mechanisms of action [1]. Despite this inherent nature, they play a critical role in the precise regulation of cell fate [2][3][4][5][6][7]. In the study conducted by Xie et al., the utilization of BrdU, widely employed to label proliferating cells in vivo, demonstrated significant potential in facilitating the chemical induction of pluripotency [3]. The study by Cao et al. sheds light on how BrdU can impact the reorganization of nuclear architecture and influence cell fate decisions [5]. IdU, which is also a thymidine analog, was reported to cause stochastic fluctuations in gene expression that facilitate cellular reprogramming [8]. However, the precise mechanisms through which BrdU/IdU facilitate cell reprogramming are not yet fully understood. BrdU/IdU may be involved in cell reprogramming through mechanisms beyond transcriptional fluctuations. These could include the impacts of thymidine analogs on DNA structure, repair mechanisms, or other cellular processes, providing a potential shortcut for cells to overcome reprogramming barriers. As a result, thymidine analogs may play a more crucial and multifaceted role in regulating cell fate, extending beyond the induction of transcriptional fluctuations alone. This study links DNA damage repair with the genome, epigenetics and cell fate regulation, providing new understanding and helping to advance the field of cell fate decision and regenerative medicine.

BrdU and IdU: essential in overcoming two barriers during chemical induction of pluripotency

To determine the optimal duration of BrdU treatment for chemical induction of pluripotency (CIP), a series of time windows were optimized and observed for their impact on CIP (Fig.
Two main barriers were found in the CIP reprogramming process, and BrdU was found to play a crucial role in overcoming them. BrdU dropout experiments showed that removing BrdU (while keeping all other culture conditions unchanged) at barrier I resulted in a failure to form colonies at Day 22, while dropout at barrier II, between Days 12-22, resulted in a failure to form Oct4-GFP+ colonies at Day 40 (Fig. 1B). Meanwhile, we established a time gradient for each barrier (Additional file 1: Fig. S1A), in increments of two days, to investigate the minimum duration of BrdU action and the effective time window of its action. Breaking through the barrier requires at least 6 days of BrdU incorporation, which must be initiated within the first 4 days (Additional file 1: Fig. S1B). Increasing the duration of BrdU treatment led to an increase in the number of Oct4-GFP+ colonies: a 0-22 day treatment resulted in over 240 Oct4-GFP+ colonies at Day 40 from a starting population of 20,000 cells, an efficiency of 1.2%, whereas a 0-12 day treatment yielded only about 20 Oct4-GFP+ colonies (Fig. 1C and Additional file 1: Fig. S1C). Furthermore, we tested three base analogs, namely IdU, EdU, and 5Aza, but only IdU was capable of replacing the function of BrdU during CIP reprogramming (Fig. 1D and Additional file 1: Fig. S1D). The combination of both thymidine analogs increases the selectivity of the process, so that only correctly reprogrammed cells survive, hence accelerating reprogramming (Fig. 1D, E and Additional file 1: Fig. S1E). Thus, it can be concluded that BrdU and IdU are essential in overcoming the two main barriers during CIP.

Gene expression dynamics and chromatin accessibility dynamics during CIP with or without BrdU/IdU

To identify the two barriers in which BrdU and IdU are involved during CIP reprogramming, RNA-seq and ATAC-seq were performed on MEFs undergoing CIP with or without BrdU, IdU and I + B (IdU + BrdU) at D8, 14, 20, 26 and 40 (Fig. 2A). It was found that loss of BrdU and IdU at barrier I resulted in incomplete reprogramming of MEFs to XEN-like cells (Fig. 2B). The expression of XEN-like genes such as Gata4, Sox17 and Aqp8 increased with increasing duration of thymidine analog treatment and peaked at D20. Conversely, dropout of BrdU resulted in a failure to express XEN-like genes (Additional file 1: Fig. S2A, C), suggesting that MEFs undergoing reprogramming without BrdU/IdU follow a different fate path. Previous studies have reported that CIP goes through an XEN-like intermediate stage [2]. Additionally, loss of BrdU and IdU at barrier II resulted in incomplete reprogramming of XEN-like cells to chemically induced pluripotent stem cells (CiPSCs). Pluripotency genes such as Oct4, Esrrb, Tfcp2l1, Nanog and Sox2 were highly activated in a BrdU/IdU-dependent manner during stage 2. In addition, I + B treatment showed a faster transition towards iPSCs at the RNA level compared to BrdU or IdU alone (Additional file 1: Fig. S2B, D).
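As a quick arithmetic check on the efficiency figures quoted above, a minimal sketch (the colony and cell counts come from the text; the helper function itself is purely illustrative, not from the paper):

```python
def reprogramming_efficiency(colonies: int, seeded_cells: int) -> float:
    """Fraction of seeded cells that gave rise to an Oct4-GFP+ colony."""
    return colonies / seeded_cells

# Values reported in the text: >240 colonies from 20,000 seeded MEFs.
print(f"{reprogramming_efficiency(240, 20_000):.1%}")  # -> 1.2%
```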
The functional state of a cell is determined by its genome architecture. To investigate the role of BrdU/IdU in CIP, we mapped chromatin accessibility dynamics (CADs) during CIP with or without BrdU/IdU and found that many loci opened during CIP failed to open without BrdU/IdU. Moreover, detailed analysis of the ATAC-seq datasets revealed many similarities between the BrdU and IdU treatments. Chromatin accessibility increased when MEFs undergoing CIP were treated with I + B compared to BrdU or IdU alone (Fig. 2C). In Fig. 2D, analysis of the loci near the fibroblast marker genes Twist2 and Fbn1 showed that they were more open under DMSO treatment than under BrdU/IdU treatment. Conversely, loci near XEN marker genes such as Gata4 and Sall4 were open only under BrdU/IdU treatment. Loci near the pluripotency marker genes Oct4 and Tfcp2l1 were likewise open only under BrdU/IdU treatment, and chromatin accessibility was much higher when samples were treated with I + B.

We compared the peaks at each locus between these samples and classified them into three categories: closed in MEFs but open in ESCs (CO), open in MEFs but closed in ESCs (OC), and permanently open (PO). Further, we divided the CO and OC peaks into subgroups based on the day of reprogramming to illustrate the progression of cellular-reprogramming-associated CADs, as shown in Fig. 2E. To further investigate the molecular mechanism of BrdU/IdU during CIP, we performed motif analysis, as illustrated in Fig. 2F. Loci containing motifs for OCT2, 4, 6, and 11 gradually opened, peaking at CO6, when treated with BrdU/IdU. Loci with motifs for GATA1, 2, 4, 6, KLF3-6, and SOX2-4, 6, 9, 10, 15, 17 opened from CO1-6 when treated with BrdU/IdU. However, the TF motif profile was similar with or without BrdU/IdU in OC loci, indicating that BrdU/IdU treatment tended to open loci enriched with motifs bound by TFs from the GATA, KLF, and SOX families.

We then statistically analyzed the total number of peaks in CO1-6 and OC1-6, respectively, and presented the results as Venn diagrams, as shown in Fig. 2G, H. The diagrams show that BrdU/IdU/I + B shared 25.2% (8,240/32,655) of CO peaks, but only 19.3% (7,243/37,575) of OC peaks. Additional file 1: Fig. S2E illustrates that a larger fraction of the CO peaks are located in promoter regions. In conclusion, the function of BrdU/IdU is biased towards opening chromatin and activating gene expression.
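The CO/OC/PO classification described above is, at its core, simple set logic over peak calls. A schematic sketch follows, assuming peaks have already been reduced to hashable interval identifiers per cell state; the actual analysis would intersect genomic intervals with tools such as bedtools rather than by exact string matching, so this is only an illustration of the logic:

```python
# Schematic of the CO/OC/PO peak classification described above.
# Peaks are represented as hashable identifiers (e.g. "chr1:100-600");
# real analyses would intersect intervals, not match strings exactly.

def classify_peaks(mef_peaks: set[str], esc_peaks: set[str]) -> dict[str, set[str]]:
    return {
        "CO": esc_peaks - mef_peaks,   # closed in MEFs, open in ESCs
        "OC": mef_peaks - esc_peaks,   # open in MEFs, closed in ESCs
        "PO": mef_peaks & esc_peaks,   # permanently open in both
    }

# Hypothetical peak sets for illustration only.
mef = {"chr1:100-600", "chr2:50-550"}
esc = {"chr1:100-600", "chr17:35500000-35500500"}
print({k: sorted(v) for k, v in classify_peaks(mef, esc).items()})
```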
Incorporation of BrdU/IdU leads to DNA damage repair

Based on the RNA-seq analysis in Fig. 2, we then constructed Venn diagrams and enriched GO functions (Fig. 3A, B and Additional file 1: Fig. S3A, B). Interestingly, the Venn diagrams for overlapping genes in the upregulated groups between BrdU, IdU, and I + B versus the control (without thymidine analogs) showed that DNA repair-related GO functions were highly enriched when MEFs were treated with BrdU/IdU during CIP (Fig. 3A, B). Specifically, DNA repair-related gene expression was highly upregulated in all three groups. The main repair pathways are base excision repair (BER), nucleotide excision repair (NER), mismatch repair (MMR), homologous recombination repair (HRR) and non-homologous end joining (NHEJ). In our study, we observed that Apex1, a gene related to the BER pathway, was significantly upregulated in the early stages of reprogramming after treatment with BrdU/IdU, but gradually decreased with longer exposure. The HRR-related genes Brca1 and Brca2 were upregulated as Apex1 expression decreased (Additional file 1: Fig. S3C). Additionally, a combination of both thymidine analogs (I + B) accelerated the DNA repair process, as shown by the early upregulation of DNA repair gene expression at D8 in this group compared to the BrdU-only or IdU-only groups (Fig. 3C).

To verify the relationship between thymidine analogs and DNA damage repair, alkaline comet assays were performed to compare the olive tail moment between samples with or without thymidine analog treatment (Fig. 3D, E). Interestingly, BrdU treatment showed an increased olive tail moment, indicating significant DNA damage. Furthermore, the degree of DNA damage was further increased by the combination of BrdU and IdU. Figure 3F shows two DNA damage peaks during the CIP process, at D14 and D24, which is consistent with the time windows of the two main barriers discussed in the previous sections. This evidence illustrates a close association between DNA damage/repair and thymidine analogs assisting CIP through the two barriers.

To verify whether BrdU treatment directly causes DNA damage, immunofluorescent analysis was performed to observe the spatial colocalization of BrdU (in red) and γ-H2AX (in green), a biomarker for DNA double-strand breaks (Fig. 3G). Dropout of BrdU showed a much weaker γ-H2AX signal, leading us to conclude that the accumulation of DNA damage during the CIP process is directly caused by BrdU binding to the DNA. Using ChIP-seq analysis targeting the DNA double-strand damage marker γ-H2AX, a correlation was found between its level and the ATAC-seq signal for Gata4 and Sox17 (Fig. 3H). This suggests that the activation of the Gata4 and Sox17 genes is mediated through the DNA damage repair pathway.
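For readers unfamiliar with the comet-assay readout used above: the olive tail moment is conventionally the fraction of DNA in the comet tail multiplied by the distance between the head and tail intensity centroids. A minimal sketch of that calculation (our own illustration, not the authors' analysis code; real values come from image analysis software such as the one cited in the Methods):

```python
def olive_tail_moment(head_intensity: float, tail_intensity: float,
                      head_centroid: float, tail_centroid: float) -> float:
    """Olive tail moment = fraction of DNA in tail x (tail centroid - head centroid).

    Intensities are summed pixel intensities of the comet head and tail;
    centroid positions are in the same length units (e.g. micrometres).
    Implementations vary in whether the DNA fraction is a percentage or a
    fraction; a fraction is used here.
    """
    pct_tail_dna = tail_intensity / (head_intensity + tail_intensity)
    return pct_tail_dna * (tail_centroid - head_centroid)

# Hypothetical comet: 30% of DNA in a tail whose centroid lies 20 um from the head's.
print(olive_tail_moment(700.0, 300.0, 0.0, 20.0))  # -> 6.0
```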
Mechanism of DDR in promoting somatic cell reprogramming

ATM (ataxia telangiectasia mutated) is a protein kinase that plays a critical role in the cellular response to DNA damage, particularly double-strand breaks (DSBs). When DSBs occur, the MRN complex (Mre11-Rad50-Nbs1) recognizes and binds to the site of the break, recruiting ATM. Subsequently, ATM undergoes autophosphorylation, resulting in its full activation. Once activated, ATM phosphorylates several downstream targets involved in cell cycle checkpoint control, DNA repair, and apoptosis. Additionally, ATM activation is sustained by a phosphorylation-acetylation cascade, which helps to maintain ATM activity and promote efficient DNA repair. Overall, ATM serves as a critical player in the cellular response to DNA damage, and its activation and downstream signaling are tightly regulated to ensure proper DNA repair and the maintenance of genomic integrity [9].

To understand how DDR is involved in regulating somatic cell reprogramming, we hypothesized that DDR can upregulate the acetylation levels of genes at the site of DNA damage, thereby regulating gene expression, as shown in Fig. 4A. First, we found that inhibiting the ATM signaling pathway with the ATM inhibitor KU-55933 prevented BrdU from functioning (Fig. 4B, C). Furthermore, we revealed that H3K27ac and H3K9ac at Gata4, Sox17 and Sall4 were significantly upregulated under treatment with BrdU and I + B (Fig. 4D). To further verify this hypothesis, we increased the concentration of VPA (a histone deacetylase inhibitor) to further promote acetylation accumulation. We found that this significantly accelerated the process of chemical reprogramming, and that its accelerating effect depended on the addition of BrdU (Fig. 4E, F).

Therefore, we believe that BrdU/IdU induces DNA damage by incorporating into the genome and recruits ATM to the site of damage. Subsequently, the acetylation levels of gene loci at the site of DNA damage are upregulated through a series of phosphorylation and acetylation cascades, thereby altering chromatin accessibility and activating gene expression.

Combining the results of the RNA-seq, ATAC-seq, motif, γ-H2AX, H3K9ac, and H3K27ac analyses, we found that Gata4 responds with a high degree of specificity to treatment with BrdU/IdU (Figs. 2D, F, 3H, 4D, Additional file 1: Fig. S2A). Therefore, GATA4 may be one of the specific downstream factors of BrdU/IdU involved in somatic cell reprogramming. Further Cut&Tag analysis showed that the binding sites of GATA4 were significantly increased under treatment with BrdU/IdU, and that the sites enriched for GATA4 were accompanied by higher chromatin accessibility (Fig. 4G), including pluripotency gene loci such as Oct4 and Sall4, as illustrated in Fig. 4H. This indicates that GATA4 can respond to DDR, thereby promoting somatic cell reprogramming.

BrdU/IdU creates a more open hypomethylated environment for the transformation of cell fate

BrdU and IdU have been demonstrated to play a crucial role in chemical reprogramming, exhibiting two distinct stages of action. Hence we postulated that BrdU not only activates intermediate XEN genes such as Gata4, but also potentially induces DNA demethylation, since DNA methylation is a major regulatory factor limiting GATA4 function [10]. To confirm this, we measured the methylation levels of the GATA4 downstream gene Aqp8 and the XEN gene Pth1r. We observed a significant decrease in DNA methylation levels with BrdU/IdU treatment (Fig. 5A), indicating that BrdU/IdU not only activates gene expression but also induces DNA demethylation, thus creating an environment conducive to pluripotency network activation.

We also integrated the aspect of DNA damage repair and hypothesized that BrdU and IdU act through double-strand break excision and new strand synthesis during DNA damage repair. Since new strand synthesis occurs without DNA methylation marks, it leads to DNA demethylation. To validate this hypothesis, we inhibited DNMT using CM272 and SGI1027 to prevent the re-methylation of new strands. The results indicated that this approach significantly accelerated the process of chemical reprogramming (Fig. 5B, C), and that this effect was dependent on the presence of BrdU. To summarize, as the degree of BrdU damage increases, the frequency of double-strand breakage and synthesis also increases, which leads to DNA demethylation, thereby creating a more open hypomethylated environment for the transformation of cell fate.

In the global genome methylation analysis using GM-seq, we found that the methylation levels of CpG sites after treatment with BrdU, IdU, and I + B were mostly concentrated below 50%, while the distribution was more even in the untreated (DMSO) group, and in MEFs it was biased towards above 50% (Fig. 5D). The average percentage of CpG sites with a methylated C base was below 30% after treatment with BrdU, IdU, and I + B, 50% in the DMSO group, and 60% in MEFs (Additional file 1: Fig. S4A).
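The per-CpG methylation summaries above (a methylation level per site, then a genome-wide histogram as in Fig. 5D) can be sketched from methylated/total read counts as follows; the counts, coverage cut-off and 25% bin width here are illustrative assumptions, not the paper's actual pipeline settings:

```python
# Illustrative: per-CpG methylation level = methylated reads / total reads,
# then bin sites into a coarse histogram. All counts below are made up.
from collections import Counter

sites = [  # (methylated_reads, total_reads) per CpG site
    (2, 10), (9, 10), (1, 8), (5, 10), (0, 12), (12, 15),
]

levels = [m / t for m, t in sites if t >= 5]  # minimum-coverage filter (assumed)
# 25%-wide bins; 100% is clamped into the top bin.
bins = Counter(min(int(level * 100) // 25 * 25, 75) for level in levels)

for lo in (0, 25, 50, 75):
    print(f"{lo:>3}-{lo + 25}% methylation: {bins.get(lo, 0)} sites")
```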
Furthermore, we used a circos plot to show the distribution of methylation density on the chromosomes. The size of each bin was 1,000,000 bp, and the number of methylated cytosines in each bin in different sequence contexts (CpG, CHG, CHH) was counted. The distribution density of methylated sites in different sequence contexts throughout the genome showed that, after treatment with BrdU/IdU, a significant global hypomethylation state was observed (Additional file 1: Fig. S4B).

We performed differential methylation analysis between cells treated and untreated with thymidine analogs. Interestingly, we found that the pluripotency-related gene loci Sall4, Oct4, and Prdm14 exhibited significant hypomethylation after thymidine analog treatment (Fig. 5E, G). This further confirms that BrdU/IdU creates a more open hypomethylated environment for the transformation of cell fate. In addition, we found that IdU exhibited the most widespread differential methylation distribution, followed by I + B, with BrdU exhibiting the least, which may be related to atomic size, where the atomic size of IdU > I + B > BrdU (Fig. 5F). Differences in atomic size lead to variations in the degree of DNA damage: using IdU alone causes severe damage, while using BrdU alone does not have a significant impact. Therefore, I + B may compensate for each other and generate a synergistic effect.

BrdU/IdU can participate in cell fate regulation with negligible mutations

As a substitute for thymidine, the safety of BrdU has always been a concern. We therefore performed mutation analysis on the samples treated with or without thymidine analogs and on MEFs in our system, covering SNPs (single nucleotide polymorphisms), InDels (insertions-deletions), CNVs (copy number variants), and SVs (structural variations). We found that the mutation level of BrdU/IdU-treated samples was almost the same as that of MEFs (Fig. 6A-E), while omitting thymidine analogs led to more mutations. In addition, we found that IdU introduced some insertion mutations. Overall, this suggests that the rational use of BrdU/IdU can participate in cell fate regulation with negligible mutations.

BrdU and IdU have functions in various reprogramming systems

To investigate whether the effects of BrdU and IdU apply in other reprogramming systems, we found that BrdU significantly improves reprogramming efficiency in the OKS system (Additional file 1: Fig. S5A-D, Fig. 7A, B). Additionally, we found that BrdU can substitute for OCT4 and plays a critical role in the KS system (Additional file 1: Fig. S5E-G, Fig. 7A, B). Incorporation of BrdU not only improves reprogramming efficiency but also accelerates reprogramming speed (Fig. 7C). Interestingly, by comparing the effects of BrdU on OKS, KS, and CIP (Fig. 7A, B), we found that their dependence on BrdU treatment time gradually increases. Specifically, in the OKS system, Oct4-GFP+ colonies reach their peak when BrdU is applied for 2 days; in the KS system, when BrdU is applied for 6 days; and in the CIP system, 22 days of BrdU treatment are required to reach the maximum number of Oct4-GFP+ colonies, as illustrated in Fig. 7B. This suggests that different degrees of DNA damage may be required to overcome various barriers to reprogramming.

We discovered that the addition of BrdU in the KS reprogramming system can specifically activate Gata4, Sox17, and certain developmental and stem cell pluripotency regulating genes, as shown in Fig. 7D.
Additionally, we observed that the most significant difference occurred at D12, which was precisely the time point when genes such as Gata4 were highly expressed. We found that inhibiting the ATM signaling pathway with an ATM inhibitor also prevented BrdU from functioning in the KS system (Fig. 7E). Using GATA4 Cut&Tag, we discovered that GATA4-enriched peaks were accompanied by ATAC opening (Fig. 7F). By overexpressing Gata4 in the KS system in the presence of BrdU, the reprogramming process can be accelerated and the cell state improved (Fig. 7G). Remarkably, our results bear a resemblance to the phenomenon observed in chemical reprogramming.

Discussion

In summary, we found that the use of the thymidine analogs BrdU/IdU causes epigenetic reshaping. This potentiates cells to enter a plastic state and accelerates the fate transition of cells by activating DNA damage repair and causing significant H3K27ac, H3K9ac and DNA demethylation changes. Additionally, the study discovered that the XEN state can be specifically activated after BrdU/IdU treatment. This suggests that the XEN state may be a selective response of cells to DNA damage repair, providing a shortcut for cells to overcome reprogramming barriers, and that different degrees of DNA damage may be required to overcome various barriers to reprogramming. Furthermore, a rational dosage of BrdU/IdU can participate in cell fate transition with negligible mutations.

Chemical small molecules interact in a complex and intricate manner during somatic cell reprogramming, forming a complex network of cell reprogramming. These molecules may exert synergistic or antagonistic effects on each other, with their influence varying across different stages of reprogramming, modulated by factors such as timing, concentration, and cell type. Investigating these interactions, and their comprehensive impact on the reprogramming process, is vital for advancing our understanding of cellular reprogramming mechanisms, enhancing pluripotency induction efficiency, and developing therapeutic applications. Our research on thymidine analogs sheds light on their synergistic effects with molecules targeting histone acetylation and DNA methylation.
In our study, through gene expression patterns, immunofluorescence of γ-H2AX (a marker for DNA double-strand breaks), ChIP-seq, and comet assays, we confirmed that prolonged incorporation of BrdU/IdU causes a more severe form of DNA damage. Importantly, we did not observe any significant mutations, ruling out a role for non-homologous end joining (NHEJ). Therefore, we concluded that the homologous recombination repair (HRR) pathway becomes more dominant with prolonged incorporation of BrdU/IdU. This emphasizes the importance of genomic stability in the reprogramming process and its potential impact on the overall efficacy and safety of iPSC generation. DNA single-strand breaks cause "Discordant Transcription through Repair (DiThR)" [8], while DNA double-strand breaks result in "Epigenetic Reshaping through Damage (ERD)". However, it is worth investigating the roles of other DNA repair pathways, such as MMR, NHEJ and NER, in the process of cellular fate transition. Additionally, not all thymidine analogs are equally effective in somatic reprogramming, and their specific mechanisms of action remain to be fully explored. Interestingly, although BrdU and IdU are structurally very similar, with only minor differences in their chemical structure, the difference in atomic size may lead to a more pronounced impact of IdU on the entire genome. This difference in impact may also explain why other thymidine analogs are not effective.

It is worth considering that the XEN-like intermediate state is not unique to chemical reprogramming, as cells may require multiple intermediate states to reach their final fate during the process of fate transition. Reprogramming barriers may limit the process of fate transition, and cells may thus choose different intermediate states as shortcuts. Gata4 and Sox17, among others, are representative of XEN states that have been selected as transitional states facilitating cell fate transition, because they promote both dedifferentiation and redifferentiation of cells and play essential roles in embryonic development and cell regeneration. Moreover, DNA damage repair may also drive cells into a plastic state. When DNA damage occurs, cells need to repair the damage to maintain genome integrity. DNA repair processes may lead to reprogramming events, resulting in cells entering a plastic state and providing an alternative pathway for cell fate transition.

The long-term use of thymidine analogs can have diverse effects on cells and organisms, contingent on the specific analog used, the dosage, and the exposure duration. For example, in our study, we observed that excessive application of BrdU can lead to cell death. However, this effect can be avoided by maintaining the dosage within a controlled range, thus effectively facilitating the reprogramming process.

A significant issue with current induced pluripotent stem cells (iPSCs) is epigenetic variation, including abnormal DNA methylation, chromatin conformation changes, or imbalances in epigenetic modification patterns. These variations can impact the quality and stability of iPSCs. Therefore, understanding epigenetic variation and reshaping in iPSC induction is crucial for ensuring their quality, stability, and safety. In our research, we discovered a new phenomenon of epigenetic reshaping through the DNA damage response (DDR) and interactions between genetics and epigenetics, which can optimize iPSC generation and enhance its potential in regenerative medicine and disease modeling.
Mice

Beijing Vital River Laboratory supplied the 129Sv/Jae and ICR mice, while the Jackson Laboratory provided the Oct4-GFP (OG2) transgenic allele-carrying mice (CBA/CaJ × C57BL/6J). All animal experiments were carried out in compliance with the Animal Protection Guidelines of the Guangzhou Institutes of Biomedicine and Health, located in Guangzhou, China, and affiliated with the Chinese Academy of Sciences.

Mouse embryonic fibroblasts (MEFs)

To isolate mouse embryonic fibroblasts (MEFs), E13.5 embryos were dissected and all internal organs, head, limbs, and tails were removed and discarded. The remaining tissue was sliced into small pieces and dissociated with a digestive solution (consisting of 0.25% trypsin and 0.05% trypsin in a 1:1 ratio, from GIBCO) for 15 min at 37 °C to obtain a single-cell suspension. The cells from each embryo were then plated onto a 6-cm culture dish coated with 0.1% gelatin and cultured in DMEM (Hyclone) supplemented with 10% FBS (GIBCO), 1% GlutaMAX (GIBCO), and 1% NEAA (GIBCO), referred to as fibroblast medium.

Chemical induction of iPSCs from mouse fibroblasts

Twelve-well plates were pre-coated with 0.1% gelatin and seeded with MEFs at a density of 20,000 cells per well in fibroblast medium containing 10% FBS. The following day, Stage 1 chemical reprogramming medium was added and refreshed daily. After day 22, the medium was changed to Stage 2 chemical reprogramming medium. On day 40, the Oct4-GFP colonies were either counted or detected via FACS.

Klf4 and Sox2 mediated MEF reprogramming

Plasmids carrying murine Klf4 and Sox2 cDNA were purchased from Addgene, and the fragments were ligated into the PMX vector to obtain recombinant plasmids. Plat-E cells were transfected with the individual plasmids using a modified calcium phosphate transfection method, conducted as follows: for each factor, 1,068 µl ddH2O, 25 µg plasmid, 156.25 µl 2 M CaCl2, and 1.25 ml 2× HBS (total 2.5 ml) were added to a 15 ml tube. Mix vigorously after adding the 2× HBS, and incubate for 5 min at room temperature. Then gently transfer the 2.5 ml mixture into a Plat-E cell dish with 7.5 ml fresh medium. After 48 h of transfection, the virus supernatant was collected and used to infect cells. Approximately 15,000 cells were seeded per well in 24-well plates containing DMEM medium supplemented with 10% FBS, and cultured for 12-24 h prior to MEF infection. The Klf4 and Sox2 virus solution was filtered through a 0.45 µm filter, and 8 µg/ml polybrene was added for cell infection. The cells were subjected to a second infection after 24 h. After 48 h of MEF infection, the cells were transferred to induction medium, with the day of medium change designated as day 0. KS reprogramming stage 1 medium was used for the first 6 days, stage 2 medium from day 6 to day 10, and stage 3 medium from day 10 onwards. The medium was changed every 24 h.
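A quick check that the calcium phosphate mix above sums to the stated 2.5 ml, assuming the component volumes are in microlitres (an assumption on our part, since the units in the source text are garbled; microlitres is the only reading consistent with the stated total):

```python
# Per-factor transfection mix; volumes assumed to be in microlitres.
mix_ul = {
    "ddH2O": 1068.0,
    "2 M CaCl2": 156.25,
    "2x HBS": 1250.0,
    # 25 ug plasmid contributes negligible volume and is omitted here.
}
total_ml = sum(mix_ul.values()) / 1000
print(f"total volume ~ {total_ml:.2f} ml")  # ~2.47 ml, i.e. the stated 2.5 ml
```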
Immunofluorescence staining

The coverslip-grown cells were washed twice with PBS and fixed with 4% paraformaldehyde (PFA) at room temperature for 30 min. Next, the samples were treated with 1 N HCl on ice for 10 min, followed by 2 N HCl at room temperature for 10 min and then 2 N HCl at 37 °C for 20 min. After washing three times with PBS for 5 min each, the cells were permeabilized with 0.1% Triton X-100 for 30 min. Subsequently, the cells were blocked with 3% BSA for an hour and washed three times with PBS for 5 min each while shaking. The cells were then incubated overnight at 4 °C with primary antibodies diluted in 3% BSA (1:250) in PBS. After four washes with PBS for 5 min each while shaking, the cells were incubated for an hour at room temperature with secondary antibodies diluted in 3% BSA (1:200) in PBS. The cells were then incubated in DAPI for 1 min and washed twice with PBS, and finally the coverslips were mounted on slides for observation under a confocal microscope (Andor Dragonfly 200). The following antibodies were used: anti-BrdU (Sigma), anti-γ-H2AX (Abcam), and the secondary antibodies Alexa Fluor 568 goat anti-mouse IgG (Invitrogen) and Alexa Fluor 488 goat anti-mouse IgG (Invitrogen).

Comet assay

During the chemical reprogramming process, the control and experimental group cells at D14, D16, D24, and D30 were digested with 0.25% trypsin into single cells and resuspended in PBS at a concentration of 10,000 cells/ml. The OxiSelect Comet Assay Kit (Cell Biolabs) was used as a reference for the experiment. The cell suspension was mixed with agarose and incubated in the dark at 4 °C for 15 min. The mixture was placed on a slide and immersed in lysis buffer, then incubated in the dark at 4 °C for 60 min. The slide was transferred to an alkaline solution and incubated in the dark at 4 °C for 30 min. The slide was then placed in an electrophoresis chamber with alkaline electrophoresis buffer and run at 300 mA for 15 min. After electrophoresis, the slide was transferred to water and soaked for 2 min, then transferred to 70% ethanol and soaked for 5 min. The slide was placed in a 37 °C oven overnight. Vista Green DNA dye was added for 10 min, and the cells were observed and recorded using an inverted fluorescence microscope (ZEISS). The CometScore software was used to analyze the tail moment of the cells.

FACS analysis

The cells were rinsed with PBS and then digested with 0.25% trypsin at 37 °C for 5 min. The digestion was stopped by adding 10% FBS DMEM medium. After filtering through a sieve and centrifuging at 250 g for 5 min, the supernatant was discarded and the cells were resuspended in PBS at a concentration of 100,000 to 1,000,000 cells/ml. The percentage of positive cells was detected using a BD Accuri™ C6 Plus flow cytometer. The flow cytometry data were analyzed using FlowJo 7.6.1.
Bisulfite genomic sequencing

Extract the DNA template: after centrifuging the cells, add 600 µl of Nuclei Lysis Solution (Promega) and mix by pipetting. Add 200 µl of Protein Precipitation Solution (Promega), mix by pipetting, and let sit on ice for 10 min. Centrifuge at 12,000 rpm for 10 min. Take the supernatant, add 600 µl of isopropanol, and let sit on ice for 10 min. Centrifuge at 12,000 rpm for 5 min. Discard the supernatant and add 600 µl of 70% ethanol. Heat in a metal bath at 60 °C for 5 min. Add RNase-free water and heat in a metal bath at 60 °C for 30 min. Take 2 µg of template DNA and bisulfite-treat it according to the EpiTect Bisulfite Kit (QIAGEN) manual. Amplify the Aqp8 and Pth1r promoter regions by PCR. Clone and sequence using the pMD18-T vector (TaKaRa).

RNA-seq and data analysis

Cells were treated with TRIzol to extract total RNA, which was then converted into cDNA using ReverTra Ace (Toyobo) and oligo-dT (TaKaRa). The resulting cDNAs were analyzed using Premix Ex Taq (TaKaRa) in qPCR experiments. For library construction, the TruSeq RNA Sample Prep Kit (RS-122-2001, Illumina) was employed, and RNA-seq was performed using the MiSeq Reagent Kit V2 (MS-102-2001, Illumina).

The original sequencing data were quality-controlled using FastQC, and low-quality bases and sequencing adapters were removed using trim_galore. HISAT2 [11] was used to align the filtered clean reads to the mouse mm10 reference genome. Samtools [12] was used to filter out unaligned or unpaired sequencing fragments and obtain the BAM file. featureCounts [13] was used to quantify gene expression levels. edgeR was used to identify differentially expressed genes. clusterProfiler [14] was used to perform GO and KEGG functional enrichment analysis to determine the molecular functions of the differentially expressed genes.
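The RNA-seq processing chain described above (trim_galore, then HISAT2, Samtools and featureCounts) can be scripted roughly as below. This is a sketch, not the authors' script: the sample name, index path, GTF file and thread counts are placeholders, and exact flags may need adjusting to the installed versions of each tool (e.g., newer featureCounts releases also expect --countReadPairs alongside -p):

```python
import subprocess

def run(cmd: list[str]) -> None:
    print(" ".join(cmd))
    subprocess.run(cmd, check=True)

sample = "CIP_D20_BrdU"   # placeholder sample name
index = "mm10/genome"     # placeholder HISAT2 index prefix

# 1. Adapter/quality trimming of paired-end reads.
run(["trim_galore", "--paired", f"{sample}_R1.fastq.gz", f"{sample}_R2.fastq.gz"])

# 2. Alignment to mm10 with HISAT2 (trim_galore's default output names assumed).
run(["hisat2", "-p", "8", "-x", index,
     "-1", f"{sample}_R1_val_1.fq.gz", "-2", f"{sample}_R2_val_2.fq.gz",
     "-S", f"{sample}.sam"])

# 3. Keep properly paired, mapped reads; sort to BAM.
run(["samtools", "view", "-b", "-f", "2", "-F", "4",
     "-o", f"{sample}.bam", f"{sample}.sam"])
run(["samtools", "sort", "-o", f"{sample}.sorted.bam", f"{sample}.bam"])

# 4. Gene-level counts for downstream edgeR analysis.
run(["featureCounts", "-p", "-T", "8", "-a", "mm10.gtf",
     "-o", f"{sample}.counts.txt", f"{sample}.sorted.bam"])
```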
ATAC-seq

The ATAC-seq procedure was carried out following previously published protocols [15,16] and the TruePrep DNA Library Prep Kit V2 for Illumina. Briefly, 50,000 cells were washed with 50 µl of cold PBS and resuspended in 50 µl of lysis buffer containing 10 mM Tris-HCl pH 7.4, 10 mM NaCl, 3 mM MgCl2, and 0.2% (v/v) IGEPAL CA-630. The cell suspension was centrifuged at 500 g for 10 min at 4 °C, followed by the addition of 50 µl of transposition reaction mix from the TruePrep DNA Library Prep Kit V2 for Illumina. The transposition reactions were incubated at 37 °C for 30 min. VAHTS DNA Clean Beads (Vazyme #N411) were used for purification and recovery of the DNA fragments, and the purified product was amplified by PCR. VAHTS DNA Clean Beads were then used to further purify and size-select 200-700 bp fragments. The ATAC library was finally sequenced on a NextSeq 500 using a NextSeq 500 High Output Kit v2 (150 cycles) (FC-404-2002, Illumina), following the manufacturer's instructions.

Cut&Tag

Library construction was performed according to the protocol of the Hyperactive Universal Cut&Tag Assay Kit for Illumina. 100,000 cells were collected and incubated with processed ConA beads. The sample was incubated with primary antibody overnight at 4 °C, followed by incubation with secondary antibody at room temperature for 1 h. The sample was then incubated with the pA/G-Tnp transposase at room temperature for 1 h, and the DNA was fragmented for 1 h before extraction. After amplifying the DNA fragments according to concentration, VAHTS DNA Clean Beads (Vazyme #N411) were used for purification and recovery of the DNA fragments.

ATAC-seq and Cut&Tag bioinformatics analysis

The original sequencing data were quality-controlled using FastQC, and low-quality bases and sequencing adapter sequences were removed using trim_galore. Bowtie2 [17] was used to align high-quality sequencing fragments to the mouse mm10 reference genome. Samtools was used to retain only paired and uniquely aligned sequences, exclude mitochondrial sequence fragments, and obtain the BAM file. Picard (http://broadinstitute.github.io/picard/) was used to remove duplicate fragments caused by PCR amplification during library preparation. Deeptools [18] was used to convert the BAM file to a bigWig file, and the results were visualized using IGV (https://igv.org/). The peak-calling software MACS2 [19] was used to detect enriched regions of open chromatin or bound DNA fragments. HOMER was used to identify transcription factor binding motifs enriched in chromatin regions. ChIPseeker [20] was used to annotate the genomic features (such as genes, promoters, enhancers, and transcription factor binding sites) of peak intervals. Bedtools [21] was used to analyze differential binding sites, and deeptools was used to generate signal matrix files and visualize the results. Other analyses were performed using glbase [22].

GM-seq

The GM-seq procedure was carried out following previously published protocols, in cooperation with the Geneplus company [23]. We utilized the Hieff NGS Ultima Pro DNA Library Prep Kit for Illumina (Yeasen, cat. no. 12201ES96) for end repair, A-tailing, and adaptor ligation. After magnetic bead purification, 5-methylcytosine (5mC) and 5-hydroxymethylcytosine (5hmC) were oxidized to 5-formylcytosine (5fC) or 5-carboxylcytosine (5caC) by TET enzyme; 5fC or 5caC was then treated with pyridine borane and reduced to dihydrouracil (DHU). Finally, PCR amplification was performed and barcodes were introduced to obtain the sequencing library. After this series of processes, methylated cytosines are read as thymine (T) by the MGI T7 sequencing platform. Samples for direct whole-genome sequencing went through the same procedures used for GM-seq, but without the oxidation and pyridine borane reduction steps. Whole-genome analyses were performed on the sequencing libraries constructed above.

GM-seq bioinformatics analysis

The raw FASTQ and BAM files were evaluated using BWA and SAMtools. Alignment and analysis of the DNA sequencing data were performed using asTair in automated mode, from raw FASTQ files to finalized genotyping data. This pipeline utilized GATK and ExomeDepth to perform base quality scoring, genotype calling and variant annotation, including SNPs and InDels, while ichorCNA was employed to analyze genome-wide CNVs and calculate insert sizes. The sequencing data were analyzed by three computational pipelines (Kraken, Kaiju and SNAP) to assess the reliability of the detection of fungi, bacteria, and viruses, and BLAST was used to compare the sequencing results against high-risk microorganisms. For the methylation analyses, diagrams of CpG coverage were generated using methylKit 1.4.0. The GC bias plot was generated using Picard's CollectGCBiasMetrics. Correlation analysis was performed using methylKit version 1.4.0 with default settings, except for a minimum coverage threshold of one read.
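Returning to the ATAC-seq/Cut&Tag arm described above (Bowtie2, Samtools filtering, Picard deduplication, MACS2 peak calling), a comparable sketch follows, with all paths and sample names as placeholders and the MACS2 genome-size shortcut "mm" standing in for mouse; tool flags are the commonly used ones and may differ from the authors' exact invocations:

```python
import subprocess

def run(cmd: list[str]) -> None:
    subprocess.run(cmd, check=True)

sample, index = "ATAC_D20_IB", "mm10/bt2_index"  # placeholder names

# 1. Align paired-end reads with Bowtie2.
run(["bowtie2", "-p", "8", "-x", index,
     "-1", f"{sample}_R1.fastq.gz", "-2", f"{sample}_R2.fastq.gz",
     "-S", f"{sample}.sam"])

# 2. Keep properly paired, confidently mapped reads, then sort.
#    (Mitochondrial reads would be excluded separately, e.g. by filtering chrM.)
run(["samtools", "view", "-b", "-q", "30", "-f", "2",
     "-o", f"{sample}.bam", f"{sample}.sam"])
run(["samtools", "sort", "-o", f"{sample}.sorted.bam", f"{sample}.bam"])

# 3. Remove PCR duplicates with Picard.
run(["picard", "MarkDuplicates", f"I={sample}.sorted.bam",
     f"O={sample}.dedup.bam", f"M={sample}.dup_metrics.txt",
     "REMOVE_DUPLICATES=true"])

# 4. Call open-chromatin peaks with MACS2 (mouse effective genome size).
run(["macs2", "callpeak", "-t", f"{sample}.dedup.bam",
     "-f", "BAMPE", "-g", "mm", "-n", sample, "--outdir", "peaks"])
```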
Quantification and statistical analysis

Sample size was not predetermined using statistical methods, and the experiments were not randomized. The investigators were not blinded during the experiments or outcome assessment. The data are presented as mean ± s.d. Statistical analysis was conducted using either a two-tailed unpaired Student's t-test or one-way ANOVA in GraphPad Prism 6, and a P-value of < 0.05 was considered statistically significant. The level of significance is indicated as *P < 0.05, **P < 0.01, ***P < 0.001 and ****P < 0.0001. The relevant figure legends provide information on the statistical test, precise P-values, exact sample sizes, and independent experiments.

Fig. 3 Incorporation of BrdU/IdU leads to DNA damage repair. A Venn diagrams for overlapping genes in the upregulated groups between BrdU, IdU, and BrdU + IdU versus the control. B Enriched GO functions in the upregulated groups. C Heatmap of DNA-repair-related genes from RNA-seq for CIP at D8, 14, 20 and 26 with or without BrdU or IdU. D Alkaline comet assays of samples with or without BrdU or IdU. Scale bar 50 µm. E Statistical analysis of the olive tail moment from D. F Average olive tail moment from E. G Immunofluorescent analysis of the spatial colocalization of BrdU (in red) and γ-H2AX (in green). Scale bar 40 µm. H Representative peaks from ATAC-seq aligned with γ-H2AX Cut&Tag signals for Gata4 and Sox17 with or without BrdU or IdU.

Fig. 4 Mechanism of DDR in promoting somatic cell reprogramming. A A model for the DNA damage repair (DDR) mechanism during BrdU/IdU incorporation. B Morphological changes during induction of CiPSCs treated with or without the ATM inhibitor (KU-55933). Scale bar 100 µm. C Number of Oct4-GFP+ CiPSC colonies generated under different treatment conditions. n = 3, **P < 0.01. D Representative Gata4, Sox17 and Sall4 peaks from H3K27ac Cut&Tag aligned with H3K9ac Cut&Tag signals for CIP at D20 with or without BrdU or IdU. E Morphological changes during induction of CiPSCs treated with different concentrations of VPA. Scale bar 100 µm. F Number of Oct4-GFP+ CiPSC colonies generated under different treatment conditions. n = 3, ****P < 0.0001. G GATA4 Cut&Tag analysis and ATAC-seq analysis for CIP at D20 and D26 under different treatment conditions. H Representative Oct4, Sall4 and Aqp8 peaks from ATAC-seq aligned with H3K27ac Cut&Tag and GATA4 Cut&Tag for CIP at D20 with or without BrdU or IdU.

Fig. 5 BrdU/IdU creates a more open hypomethylated environment for the transformation of cell fate. A The methylation patterns of Aqp8 and Pth1r when treated with or without BrdU or IdU. B Morphological changes at D30 during induction of CiPSCs treated with the DNMT inhibitors CM272 and SGI1027. C FACS analysis of Oct4-GFP colonies generated under different treatment conditions. D Histogram of percentage CpG methylation generated under different treatment conditions. E Methylation differential analysis between cells treated and untreated with IdU/BrdU. F Genomic panorama of differential methylation regional distribution. From outermost to innermost: the outer circle represents the chromosome ideogram; the second circle displays the methylation levels in the treatment group; the third circle shows gene density, where darker red indicates higher gene density; the fourth circle exhibits methylation differences between the treatment and control groups, with darker blue indicating greater differences; the innermost circle illustrates the methylation levels in the control group. G Differential DNA methylation genes and annotations.
Fig. 6 BrdU/IdU can participate in cell fate regulation with negligible mutations. A SNP (single nucleotide polymorphism) mutation analysis on the samples treated with or without BrdU/IdU and on MEFs. B InDel (insertion-deletion) analysis. C CNV (copy number variant) analysis. D SV (structural variation) analysis. E Genomic panorama variation analysis. The outermost circle displays chromosomal information: the lengths and karyotypes of each chromosome. The second circle shows gene density as a blue heatmap, with darker shades indicating higher gene numbers in that chromosomal area. The third circle exhibits a red heatmap of sequencing coverage, with deeper colors showing higher average sequencing coverage within the chromosomal region. The fourth circle is a scatter plot of SNP density against a light blue background: red dots lie above the average density, blue dots below. The fifth circle represents InDel density with a similar scatter plot: red dots above the average density, green dots below. The sixth circle displays the CNV distribution as a line plot: upward lines indicate amplifications (red), and downward lines indicate deletions (blue). The seventh circle signifies SVs (structural variations) in different colors: green for insertions, red for deletions, blue for duplications, purple for inversions, and orange for translocations.

Fig. 7 BrdU and IdU have functions in various reprogramming systems. A Images of GFP+ colonies taken by fluorescence microscope in situ. Scale bar 5 mm. B Number of Oct4-GFP colonies generated under the indicated conditions. n = 3, *P < 0.05, ****P < 0.0001. C FACS analysis of Oct4-GFP colonies generated under different treatment conditions. D Heatmap of RNA-seq for KS at D4, 8, 12 and 16 with or without BrdU or IdU. E Morphological changes during induction of KS treated with the ATM inhibitor, and the corresponding statistical analysis. Scale bar 50 µm. n = 3, ***P < 0.001. F GATA4 Cut&Tag analysis and ATAC-seq analysis for KS at D4, 8, 12 and 16 under different treatment conditions. G Morphological changes during induction of KS when overexpressing Gata4 in the system. Scale bar 50 µm.
Evaluation of differences in health-related quality of life during the treatment of post-burn scars in pre-school and school children

Objective. The aim of the research was to assess the differences in the self-evaluation of health-related quality of life during the treatment of post-burn scars on the upper limbs of pre-school and school children.

Materials and method. A group of 120 children was examined – 66 boys and 54 girls, divided into a pre-school group of 60 children (average age 4.3 ± 1.7) and a primary school group of 60 children (average age 10.4 ± 1.2). A structured interview and the adapted Visual Analog Anxiety Scale and Visual Analog Unpleasant Events Tolerance Scale were used to evaluate the level of plaster tolerance and the anxiety caused by the removal of dressings during treatment.

Results. In the first test, a low tolerance to the pressure plaster was noted in both groups, with the pre-school children obtaining worse results (x = 18.9, SD = 10.16) than those of school age (x = 33.65, SD = 13.21), regardless of gender. Pre-school children were afraid of having the plaster removed (x = 47.5, SD = 24.26), while school-aged children were not (x = 20.5, SD = 9.46). The differences between the groups were statistically significant. In the fourth and final test, the pre-school children's tolerance of the plaster had improved (x = 23.24, SD = 15.43), reaching a value somewhat lower than that of the school-aged children (x = 32.4, SD = 6.45); a fall in their anxiety level was also noted (x = 30.83, SD = 23.38), to an average value insignificantly higher than that recorded for the children of school age (x = 15.83, SD = 6.19).

Conclusions. The tests confirmed the appearance of differences in the self-evaluation of health-related quality of life in pre-school and school-aged children.

INTRODUCTION

It is difficult to find academic articles in the worldwide subject literature devoted to the evaluation of health-related quality of life in pre-school and school children treated for burns to the upper limbs which take into consideration the evaluations of the young patients themselves [1,2,3,4,5,6]. Authors describing the quality of life of these children have based their studies largely on the observations and perceptions of researchers and parents, rather than asking the children or young people themselves [7]. Cohort studies covering large groups of children show that children with hand burns have significantly worse outcomes than children with burns in other areas [1]. Post-burn scars can be devastating and disfiguring, because they are clearly visible, stigmatizing, and permanent reminders of the initial accident [2]. Van Loey et al. [6] described how scars might contribute to social anxiety and increased post-traumatic stress, since pressure garments or red and disfiguring scars can attract too much attention from other people, which may induce feelings of shame. Authors presenting the long-term treatment and psycho-social rehabilitation of children with burns most frequently analyse body image, changes in social functioning, and the raising of psycho-social rehabilitation standards [3,4], while at the same time clinical observations show that pre-school and school children complain about:
1. Problems resulting from the need for the long-term and arduous treatment of burn scars; chiefly states of anxiety and fear brought about by the pain experienced during the burn itself, as well as during the treatment of the burn wound, particularly the changing of plasters;
2. A lack of tolerance for treatment methods employing pressure plasters.

In the opinion of these young patients, as well as in the view of their parents and guardians, these problems are the main cause of a worsening in the children's quality of life during the course of the treatment of burn scars on the upper limbs. Therefore, there is a need for a more detailed evaluation of the significance of these problems for the quality of life of a burn patient during the course of the complicated and time-consuming process of treatment.

Although it is at times difficult, there is a need to assemble such information to allow the raising of treatment standards. In testing such children, it is essential to realise that a small child is often frightened and untrusting as a result of the stress caused by the treatment of burn wounds [2], and that it is difficult to gain sufficient trust to obtain reliable, relevant information. An additional problem is the two-year minimum period for post-burn scar treatment, which is often not understood and is difficult for the child to accept. This period is often connected with an inability for the child to function in a normal, active way, which results in frustration and impatience [5]. Attempts to explain the necessity of the series of activities connected with the long-term treatment of post-burn scars often bring no effect and heighten the sense of a deterioration in the quality of life [7].

There is no universal agreement among researchers as to how to define 'quality of life'. Studies involving adults have examined, for example:
• independence in activities of daily living;
• behavioural problems;
• social competence;
• academic performance;
• parental stress.

Studies on children have focused on the child's mental status, examination and developmental history, pain and symptoms, motor functioning, autonomy, and cognitive, emotional and social functioning [2,8,9].

The concept of Health-Related Quality of Life (HRQOL) refers to the state of health broadly understood, as well as to the specifics of a given disease. Such an approach has been proposed by Pąchalska et al. [10], who posit that the starting category for conducting tests on quality of life is a description of the specifics of a given disease (determined not merely on the basis of clinical tests, but also on the evaluation of post-burn scar maturity). Of significance here are biological factors (the process of scarring depends on the surface area, depth and location of the burn wound), psychological factors (including coping with the consequences of a deep burn), as well as the social consequences of the injury for the young patient (social isolation and the fear/anxiety associated with it). This model therefore enables quality of life to be combined with the pain and other negative emotions which appear following a burn and which, as a state, are adopted as the main factor shaping quality of life [1,2,10]. All these aspects create the profile by which the child perceives the scar; they shape the process of reaction to having a scar and its evolution over time, as well as the strategies for coping with the psychosocial consequences of being burnt.
In the presented work, the above model has been adopted for quality of life, narrowing the evaluation of health-related quality of life in the case of burns to the upper limbs down to the two most important parameters complained about by children, namely evaluation of: 1. the treatment method employing a pressure plaster; 2. the fear associated with plaster removal.

Clinical observations also indicate differences in the reactions connected with the treatment of pre-school and school children, as well as varied evaluations of the above-mentioned quality of life parameters.

The aim of the study was to evaluate the differences in health-related quality of life self-evaluation during the course of treating burn scars of the upper limbs, as perceived by pre-school and school children.

MATERIALS AND METHOD

The study encompassed a group of 120 children – 66 boys and 54 girls, divided into two equal groups according to age, treated at the University Children's Hospital in Kraków. The first, pre-school group consisted of 60 children (30 boys and 30 girls), average age 4.3 years ± 1.7. The second, primary school group consisted of 60 children (36 boys and 24 girls), average age 10.4 years ± 1.2 (Tab. 1).

The children in both groups had suffered deep burns to the upper extremities, classified as deep 2nd degree burns. The inclusion criterion for both groups was conservative wound treatment with a healing time of between 16-21 days. The exclusion criteria were delayed wound treatment with the necessity of conversion from conservative to surgical treatment (a split-thickness skin graft), and a healing time shorter than 16 days (minimal risk of a scarring process).

Method of treatment. All the children, from both the pre-school and school-aged groups, were treated by means of pressure therapy using a pressure plaster, commenced two months after the healing of the wound itself and stopped after 18 months of treatment. The pressure plaster was changed once a week, removing the old one and placing a new one on the scar together with the adjacent healthy skin, which ensured pressure being exerted on the immature scar; this did not limit the children in the performance of everyday activities, particularly taking a bath.

A clinical interview was used in the tests, together with an adapted 100 mm Visual Analog Unpleasant Events Tolerance Scale with a Sad Face-Happy Face scale (0 mm on the scale represented a lack of tolerance of the pressure plaster treatment method, while 100 mm represented total tolerance of the pressure plaster), as well as an adapted 100 mm Visual Analog Anxiety Scale with a Peaceful Face-Fearful Face scale (0 mm on the scale represented an absence of fear, while 100 mm represented intense fear of removal of the dressing).

Each child was examined 4 times: the 1st examination was conducted 3 months after applying the pressure plaster, the 2nd at 6 months, the 3rd at 12 months, and the 4th and final examination 18 months after the pressure plaster was initially applied.

Test procedure. The children were asked to indicate their tolerance of the pressure plaster treatment method, and their level of anxiety prior to the removal of the plaster, by marking a point on the adapted scales: the Visual Analog Unpleasant Events Tolerance Scale and the Visual Analog Anxiety Scale.
For data analysis, the Excel 2007 Statistics package (Microsoft Office) was used. Descriptive statistics were used, with the results in the respective groups shown as the mean, standard deviation, and minimal and maximal values. To detect statistically significant differences, Student's t-test for independent samples and for paired samples was used. The consent of the patients, as well as the approval of the local Bioethics Commission, was obtained to carry out this study.

RESULTS

Level of tolerance to treatment methods with a pressure plaster. The average level of tolerance to treatment with a pressure plaster for both tested groups is presented in Table 2. At the beginning of treatment, in the first test, a low tolerance to treatment with a pressure plaster was noted in both groups, regardless of gender. The average level of tolerance for pre-school children was x = 18.9 (SD = 10.16), while in children of school age it was close to twice as high, although still low (x = 33.65, SD = 13.21). The differences obtained between the groups were statistically significant (p = 0.001).

The tolerance to treatment in the form of a pressure plaster improved somewhat in pre-school children during the course of treatment, although it still remained low, while in school children it remained constantly at a similar level.

The difference obtained between the 1st test and the 4th in the group of pre-school children was significant at the level of p = 0.001, while in the group of school children this difference was not significant (p = 0.23).

In the 4th (final) test, no significant differences were noted between the pre-school group and the group of school children.

Level of fear/anxiety before plaster removal. The average level of fear/anxiety on the removal of the plaster in both groups is presented in Table 3. In the first test, at the commencement of treatment, a medium level of fear/anxiety on the removal of the plaster was noted among the pre-school children (x = 47.5, SD = 24.26), and a low level in children of school age (x = 20.5, SD = 9.46). In pre-school children this level was more than twice the value recorded in school children, regardless of gender. The difference obtained between the groups was statistically significant (p = 0.001).

In the 4th (final) test, a reduction in the fear/anxiety prior to removal of the pressure plaster was confirmed. The level of fear in pre-school children was relatively low (x = 30.83, SD = 23.38), while for school children it was extremely low (x = 15.83, SD = 6.19).

The difference obtained between the 1st and 4th tests in the pre-school group was significant at the level of p = 0.001, while for children of school age this difference was not significant (p = 0.23).

A comparison of the results obtained in the 4th (final) test with the 1st test showed an absence of significant differences between the tested groups of pre-school and school children (p = 0.27).
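Because only group means, standard deviations and group sizes are reported, the between-group comparison can be reproduced from summary statistics alone. Below is a sketch, assuming n = 60 per group and a pooled-variance two-sample t-test (the paper does not state which t-test variant was applied):

```python
from scipy import stats

# First-test tolerance scores, as reported: (mean, SD, n) per group.
preschool = (18.9, 10.16, 60)
school = (33.65, 13.21, 60)

t, p = stats.ttest_ind_from_stats(*preschool, *school, equal_var=True)
print(f"t = {t:.2f}, p = {p:.1e}")
# t is approximately -6.9 and p far below 0.001, consistent with the reported
# "p = 0.001" (likely a floor value rather than an exact probability).
```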
DISCUSSION

The pioneering tests presented in this article on the evaluation of differences in the self-assessment of quality of life by pre-school and school children address a hitherto existing gap in both the Polish and international subject literature [11]. They indicate the correctness of adopting a general biopsychological model of the health-related quality of life of children following heat burns. Differences were noted in the evaluation of health-related quality of life between pre-school and school children with burns to their upper extremities. These referred to the two most important parameters complained of by the children, namely: 1. evaluation of the tolerance of pressure plaster treatment; 2. evaluation of the fear/anxiety associated with the process of scar treatment (in particular, removal of the dressing).

The differences in the self-evaluation of health-related quality of life during the course of treating burn scars of the upper extremities, as presented by pre-school and school children, involved: 1. the average level of tolerance of plaster treatment methods, which was noticeably lower in pre-school children than in those of school age, regardless of gender, while the average level of fear/anxiety about removal of the plaster was much higher in pre-school children than in children of school age, regardless of gender; 2. the dynamics of the changes in the treatment process, covering an improvement in the tolerance of treatment with a pressure plaster. The two-fold improvement noted in the tolerance shown towards plaster treatment by pre-school children may be chiefly connected with the process of growing up. An important factor differentiating these children proved to be their awareness of the burn, understood as the level of knowledge about burn scars and about the necessity for their long-term treatment [1,10]. Pre-school children had no full awareness of burns, or of their consequences for the human organism. In accordance with microgenetic theory [12,13], an individual with even an incomplete awareness/consciousness is deprived of the possibility of forming a perception of a temporally changing burn scar and of the associated need for treatment of these scars. As a result, they fear the temporally unknown procedures for the treatment of these scars. Fear/anxiety therefore concerns the activities which take place around these scars. Consequently, the small child does not fully activate the nervous system, and the process of adaptation may last for weeks, months, and sometimes even years. In turn, an older child does not feel fear/anxiety in connection with the care and treatment of scars, but experiences displeasure at having the scar itself [14,15,16]. From this evolves the conclusion that although quality of life deteriorates after a burn equally for pre-school children and those of school age, the causes of this deterioration are varied.
In the course of the treatment of a scar, both pre-school and school-age children pass through various mental states in their comprehension of the significance/meaning of their treatment process. It is worth adding that although the scar is viewed during the course of dressing changes, and is initially red and, with time, visibly undergoes a process of becoming paler until it finally merges into the colour of the skin, it is the psychological reactions and the quality of life connected with the scar that lead to differences in both age groups, and not only in these age groups. This happens because every person has an individual model of the world which is connected with their identity, personality, knowledge and life experience. Of significance is the different level of awareness of symptoms, dependent on the patient's age.

It is worth remembering that awareness, within the concept of the majority of theories, is linked first and foremost with knowledge on the subject of the surrounding world, and knowledge of one's self. However, clinical practice clearly shows that consciousness does not represent simply knowledge, but involves man's relationship with the reality that surrounds him [12,13], that is, the relationship of 'I' to the world [10]. An extremely important role is played here by cognitive and emotional processes. These elements present the model of disturbances illustrated in Figure 2. This model in its essence relates to the structure of 'I' in relation to the brain, mind, and the surrounding world, as well as constituting a synthesis and discussion of the approach proposed by Pąchalska and others [10]. The central place in this model is occupied by 'I', where the relation of 'I' to the brain, mind, and the world fulfils an important role in the 'sculpting' of conscious and sub-conscious processes. The grey circles present cognitive processes, such as attention, memory or thought, as well as drives, needs and emotions. The processes shown in the model can occur both consciously and subconsciously, while the elements are presented dynamically, without marking the direction of their course, for the cognitive and emotional processes may combine in a variable and dynamic way which induces a variable level of consciousness.

This model therefore explains the differences obtained in the tests conducted in the presented study into health-related quality of life in the course of the treatment of burn scars of the upper extremities in pre-school and school children.

In summing up, it is worth emphasising that the results of the tests brought an extremely important observation for modern treatment: that care of the scar should be blended with care for the patient's psychological condition in order to improve the quality of life after the burn [14,15,16,17,18]. The child's parents or guardians should be included in the care programme [19]. The observations made by other authors [17,20] were confirmed. It was found that if we include innovative clinical thinking and combine new methods of scar treatment with an intimate knowledge of the child, and incorporate the family within this care programme, we are able to significantly improve the quality of the medical services offered.
CONCLUSIONS

The results of the tests conducted confirmed the appearance of differences in the self-evaluation of health-related quality of life during the course of scar treatment of the upper limbs, as presented by pre-school and school children. Therefore:
1. In the 1st test, the average level of tolerance for pressure plaster treatment methods was noticeably lower in pre-school children than in those of school age, regardless of gender. Equally, the average level of fear/anxiety on removal of the dressing was noticeably higher in pre-school children than in those of school age, regardless of gender.
2. Towards the end of treatment, in the 4th and final test, the differences in the level of tolerance to the pressure plaster treatment method in both of the tested groups, as well as in the anxiety experienced on removal of the plaster, diminished.
3. The changes obtained in health-related life quality improvement following burns in pre-school children may be connected with the process of growing up, and the gaining of an awareness of the need for long-term treatment of the burn scar. These changes may be explained by the model of consciousness in accordance with microgenetic theory, which places emphasis on knowledge of a disease in the process of the perception of symptoms, and therefore on the discomfort experienced by having a pressure plaster and the fear/anxiety suffered before its removal.

Figure 1. A general biopsychological model of quality of life connected with a child's state of health after heat burns. Source: own work on the basis of Pąchalska [10].
Table 1. Characteristics of the examined groups.
Table 2. Average level of tolerance to treatment with a pressure plaster.
Table 3. Average level of fear/anxiety on plaster removal.
2017-06-18T02:35:57.414Z
2014-11-26T00:00:00.000
{ "year": 2014, "sha1": "a374d922131bf2b46ccd2aeecb206693ec374dfd", "oa_license": "CCBYNC", "oa_url": "https://www.aaem.pl/pdf-72212-9439?filename=Evaluation%20of%20differences.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "a374d922131bf2b46ccd2aeecb206693ec374dfd", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
11008148
pes2o/s2orc
v3-fos-license
Predictors of Individual Response to Placebo or Tadalafil 5mg among Men with Lower Urinary Tract Symptoms Secondary to Benign Prostatic Hyperplasia: An Integrated Clinical Data Mining Analysis

Background: A significant percentage of patients with lower urinary tract symptoms (LUTS) secondary to benign prostatic hyperplasia (BPH) achieve clinically meaningful improvement when receiving placebo or tadalafil 5mg once daily. However, individual patient characteristics associated with treatment response are unknown.

Methods: This integrated clinical data mining analysis was designed to identify factors associated with a clinically meaningful response to placebo or tadalafil 5mg once daily in an individual patient with LUTS-BPH. Analyses were performed on pooled data from four randomized, placebo-controlled, double-blind clinical studies, including about 1,500 patients, from which 107 baseline characteristics and 8 response criteria were selected. The split set evaluation method (1,000 repeats) was used to estimate prediction accuracy, with the database randomly split into training and test subsets. Logistic Regression (LR), Decision Tree (DT), Support Vector Machine (SVM) and Random Forest (RF) models were then generated on the training subset and used to predict response in the test subset. Prediction models were generated for placebo and for tadalafil 5mg once daily. Receiver Operating Curve (ROC) analysis was used to select optimal prediction models lying on the ROC surface.

Findings: International Prostate Symptom Score (IPSS) baseline group (mild/moderate vs. severe) for active treatment and placebo achieved the highest combined sensitivity and specificity of 70% and ~50% for all analyses, respectively. This was below the sensitivity and specificity threshold of 80% that would enable reliable allocation of an individual patient to either the responder or non-responder group.

Conclusions: This extensive clinical data mining study in LUTS-BPH did not identify baseline clinical or demographic characteristics that were sufficiently predictive of an individual patient's response to placebo or once daily tadalafil 5mg. However, the study reaffirms the efficacy of tadalafil 5mg once daily in the treatment of LUTS-BPH in the majority of patients, and the importance of evaluating individual patient need in selecting the most appropriate treatment.

Introduction

Lower urinary tract symptoms (LUTS) secondary to benign prostatic hyperplasia (BPH) are a common problem, affecting more than 50% of men aged 50 years and older [1]. Medical treatment has focused mainly on the use of α-blocking agents and 5-α reductase inhibitors, either alone or in combination, and aims to alleviate symptoms as well as alter the course of disease progression and prevent complications [2]. Treatment options for LUTS-BPH have since increased with regulatory approval of tadalafil 5mg once daily, a long-acting phosphodiesterase type 5 (PDE-5) inhibitor, initially in the US in 2011 and subsequently in the EU and other major territories in 2012 [3]. Treatment of LUTS-BPH, either alone or with coexisting erectile dysfunction (ED), with PDE-5 inhibitors, and notably tadalafil 5mg, has recently been added to EU-wide treatment guidelines for non-neurogenic LUTS [4]. The efficacy of once daily tadalafil 5mg in LUTS-BPH has been demonstrated in four randomized controlled trials (RCTs) [5; 6; 7; 8].
At a lower dose of 2.5mg per day, tadalafil did not consistently alleviate symptoms of LUTS-BPH, while higher doses of 10 and 20mg per day provided only minimal additional improvement over the 5mg once daily dose [5]. Assessment of treatment response (primary endpoint) was based primarily on the International Prostate Symptom Score (IPSS), a validated, self-administered, 1-month recall questionnaire that has good reliability for recall of obstructive and urinary problems and their global impact on quality of life (QoL). The IPSS is the most widely used instrument to assess the severity of BPH-related LUTS symptoms and gauge response to treatment [9; 10]. An integrated analysis of the four RCTs confirmed that tadalafil 5mg achieved significantly greater improvements in total IPSS score, IPSS voiding subscore, IPSS storage subscore and IPSS QoL Index score versus placebo [11]. A separate analysis of IPSS storage and voiding subscores showed both were significantly improved in the active treatment arms compared with placebo (p<0.001), and that both storage and voiding subscores made a nearly linear contribution to total IPSS in a 4:6 ratio that was maintained from baseline to endpoint [12]. A further post-hoc integrated analysis of the data from the four RCTs showed that approximately two-thirds of tadalafil-treated patients achieved a clinically meaningful improvement (CMI) in LUTS-BPH symptoms, as defined by a total IPSS improvement of ≥3 points or ≥25% from randomization to endpoint at Week 12 [14]. Moreover, tadalafil 5mg once daily demonstrated increasing benefit over placebo as the efficacy threshold was raised from ≥25% to a demanding ≥50% and ≥75% improvement in IPSS [14]. Being able to identify which individual patient is most likely to respond well to treatment with placebo or tadalafil, rather than just knowing its average benefit in a subgroup of patients, would be clinically useful and consistent with the growing trend towards more patient-tailored treatment [15]. Treatment directed at patients most likely to achieve CMI would help address the problem that, for too many patients with LUTS-BPH, medical therapy achieves only a fair-to-good improvement in symptoms [16]. In this integrated clinical data mining analysis, we set out to identify the factors associated with response to placebo or tadalafil 5mg once daily in an individual patient with LUTS-BPH. Implicit in a study of this nature was the need to carefully estimate the true prediction performance of a factor for unknown patients.

Study design

This clinical data mining analysis was based on the Knowledge Discovery in Databases (KDD) process and was set up to be consistent with the underlying principles of data mining [17]. Applied data mining algorithms were considered suitable only if a graphical presentation could be obtained that could be followed by practicing physicians. We therefore focused on models that were easily visualised or those expected to yield good predictive outcomes. Our aim was to produce an output that could be displayed on paper and used by clinicians, and so we decided at the outset to adopt the simplest model first. This can be seen in the inclusion of single decision rules (SDRs). These models consider just one clinical variable at a time to predict one response variable, without any additions, and they perform well. Rigorous care was taken to evaluate the prediction error for unknown data. Every effort was made to control for potential data mining biases (i.e.
those induced by applying too flexible data mining algorithms or those stemming from the desire to achieve 100% accurate predictions). To this end we adhered to a pre-specified statistical analysis plan (SAP), which did not allow for removal of data points. We set out our experience first, wrote down our approach, and kept to it without deviation. We did not intend to optimize prediction performance further than what had been pre-specified. To do so would only bias results towards models that are adapted and optimized for a specific combination of training algorithm and evaluation method, and which are thereby unlikely to capture the clinical information that is predictive in clinical practice. More extensive methodological details not covered here are provided in Supporting Information.

Data sources and pre-processing

Data for this clinical data mining analysis were pooled from four randomized, placebo-controlled clinical studies (NCT00384930, NCT00827242, NCT00855582, NCT00970632), all of which had a broadly similar design and enrolled patients with LUTS-BPH (Fig 1) [6; 7; 8; 16]. Common inclusion criteria for all four studies were age ≥45 years, LUTS-BPH duration of >6 months, total IPSS ≥13, and maximum urinary flow rate (Qmax) ≥4 to ≤15ml/s prior to the placebo lead-in period. Patients were excluded if PSA was >10ng/ml (or, for PSA 4-10ng/ml, prostate malignancy had to be excluded), if post-void residual (PVR) urine volume was ≥300ml, or if they had used finasteride or dutasteride within 3 or 6 months (12 months in one study), respectively. Following screening and, if needed, a washout period for LUTS-BPH or ED medications, patients entered a 4-week placebo lead-in period. On completion, patients were randomized to study treatment, including tadalafil 5mg once daily, for 12 weeks. Minor differences between the studies included the following: one enrolled patients with BPH and concomitant ED [7]; one was a dose-finding study in which tadalafil was administered at doses of 2.5mg, 5mg, 10mg, 20mg once daily [16]; one included a tadalafil 2.5mg treatment arm [7]; and one included an additional tamsulosin 0.4mg treatment arm [8].
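As a quick illustration, the common inclusion/exclusion criteria above translate directly into a screening filter. The following pandas sketch is hedged: the column names and the three example patients are hypothetical, not taken from the trial databases.

```python
# Hypothetical screening filter mirroring the common inclusion/exclusion
# criteria listed above; column names are illustrative only.
import pandas as pd

patients = pd.DataFrame({
    "age": [52, 47, 63],
    "luts_bph_months": [18, 24, 30],
    "ipss_total": [15, 20, 12],
    "qmax_ml_s": [9.0, 13.5, 16.0],
    "psa_ng_ml": [2.1, 11.2, 3.0],
    "pvr_ml": [80, 120, 310],
})

eligible = patients[
    (patients["age"] >= 45)
    & (patients["luts_bph_months"] > 6)
    & (patients["ipss_total"] >= 13)
    & (patients["qmax_ml_s"].between(4, 15))   # Qmax >=4 to <=15 ml/s
    & (patients["psa_ng_ml"] <= 10)            # 4-10 ng/ml also required malignancy work-up
    & (patients["pvr_ml"] < 300)               # excluded if PVR >= 300 ml
]
print(eligible.index.tolist())  # -> [0]: only the first example patient qualifies
```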
For the purposes of this clinical data mining analysis, the study population (N = 1,499) consisted solely of subjects in the intent-to-treat (ITT) population who had been allocated to tadalafil 5mg once daily or placebo, irrespective of an IPSS baseline assessment (Table 1). Data from the tadalafil 2.5mg, 10mg and 20mg once daily treatment groups did not form part of the data mining analysis, as these doses are not approved for the treatment of LUTS-BPH. IPSS, IPSS QoL, and BPH Impact Index (BII) were assessed in each of the four studies at baseline (after the 4-week placebo lead-in period, following randomization) and after 12 weeks of treatment (primary endpoint). Patient Global Impression of Improvement (PGI-I) was evaluated at baseline and endpoint in three of the four studies so as to assess the impression of change in urinary symptoms [6; 7; 8]. Overall, 107 baseline characteristics were included in the clinical data mining analysis (Table 1). Baseline characteristics were categorized as key or supportive and selected on the basis of clinical input from study authors that was derived from knowledge of the published literature and clinical experience. All IPSS, IPSS QoL and BII baseline scores and their subscores were key characteristics, in addition to age (<65 or ≥65 years), previous LUTS therapy and a history of ED (Table 1). Key characteristics were expected to be predictive of a response to treatment.

Two primary and 6 secondary definitions of response were used (Table 2). The primary responder definitions were considered of equal importance, and both were based on Minimal Clinically Important Differences ('overall' or 'severity' MCID), a concept validated using an anchor-based approach [19]. MCID is a threshold that represents a CMI in patients' health-related QoL as perceived by the patient [24]. 'Overall MCID' was defined as an improvement in IPSS total score of ≥3 for all patients (overall response), and 'severity MCID' was defined as an improvement in IPSS total score of ≥2 for patients with mild-to-moderate LUTS and of ≥6 for those with severe LUTS [14; 19]. Secondary definitions of response were ranked in order of decreasing validation, although to the best of our knowledge they have not been subject to formal validation.

Implementation

Bias stemming from the desire to achieve 100% prediction accuracy was controlled by following the pre-specified SAP as described earlier, which was approved by all study authors and peer reviewed by Lilly data mining experts prior to programming. A non-clinical benchmark data mining dataset was used for program development. Results from the clinical dataset were produced after program peer review, which was carried out by an independent statistician. All modifications of the analysis after this run were reported as post-hoc. LR and DT models were selected as our data mining models, as both can be presented visually and translated into easy decision rules or scores for practical use in medical applications [25; 26; 27] (S1 Technical Appendix). To avoid bias from an overly complex prediction model when a simple one would suffice [17], we compared all models against SDRs. These were implemented using the DT algorithm, which was allowed to generate a single decision. In addition, SVM [28] (S2 Technical Appendix) and RF classifiers [29] were applied to obtain estimates for best prediction accuracy (S3 Technical Appendix). The split set evaluation method was used to estimate prediction accuracy on unknown data. To this end, the database was randomly split into training (60% of the database) and test (40% of the database) subsets (Fig 2). Then LR, DT, SVM, RF and SDR models were generated on the training subset and used to predict the response of patients in the held-out test subset. Prediction models were generated for the tadalafil 5mg once daily and placebo groups. Prediction accuracy was measured by sensitivity (true positive rate) and specificity (true negative rate), for which 95% confidence intervals were calculated. Sensitivity and specificity were calculated as follows:

sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)

In these equations, TP and TN denote the true positive and true negative predictions, and FP and FN denote the false positive and false negative predictions on the test split.

Table 2. Definition of treatment response on the IPSS, BII and PGI-I after 12 weeks of treatment with tadalafil or placebo, as used in the clinical data mining analysis.
Primary objectives: IPSS: reduction of ≥3 points in overall IPSS score [19; 20]; improvement of ≥2 points in patients with IPSS baseline score <20 and of ≥6 points in patients with baseline score ≥20 [19].
Secondary objectives: BII: total score of <9; reduction of >1 point [19]. PGI-I: any improvement from baseline [23].
BII, BPH Impact Index; IPSS, International Prostate Symptom Score; PGI-I, Patient Global Impression of Improvement; QoL, quality of life.
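To make the responder definitions and the accuracy measures concrete, here is a minimal Python sketch; the IPSS values are hypothetical, and the helper names are ours, not the study's SAP code.

```python
# The two primary responder definitions and the accuracy measures, as plain
# Python. IPSS inputs are hypothetical.
def overall_mcid(ipss_baseline: int, ipss_week12: int) -> bool:
    """'Overall MCID': improvement of >=3 points in total IPSS."""
    return (ipss_baseline - ipss_week12) >= 3

def severity_mcid(ipss_baseline: int, ipss_week12: int) -> bool:
    """'Severity MCID': >=2 points if baseline <20, >=6 points if baseline >=20."""
    threshold = 2 if ipss_baseline < 20 else 6
    return (ipss_baseline - ipss_week12) >= threshold

def sensitivity_specificity(y_true, y_pred):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    tp = sum(t and p for t, p in zip(y_true, y_pred))
    tn = sum((not t) and (not p) for t, p in zip(y_true, y_pred))
    fn = sum(t and (not p) for t, p in zip(y_true, y_pred))
    fp = sum((not t) and p for t, p in zip(y_true, y_pred))
    return tp / (tp + fn), tn / (tn + fp)

# Baseline IPSS 22 (severe) improving to 15: a 7-point gain meets both definitions
print(overall_mcid(22, 15), severity_mcid(22, 15))  # True True
```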
Receiver Operating Curve (ROC) analysis was used to identify optimal prediction models lying on the ROC surface [30] (Fig 2). For ROC curve interpretation, we adopted a systematic approach in which models on the ROC surface were first documented by their respective sensitivity and specificity, after which the model on the ROC surface that gave equal weight to false positive and false negative errors was discussed in detail. For the primary objectives, the resulting sensitivity and specificity were then compared to the Q1-Q3 range of 1,000 repeated runs of the 60:40 split set evaluation to ensure consistency (non-random data) (S4 Technical Appendix). Additionally, these results were compared with results obtained from 1,000 repeated runs with a randomly permuted response variable (random data). Finally, sensitivity and specificity findings were compared against an 80% cut-off, representing a performance threshold suitable for routine clinical use. Post-hoc sensitivity analyses were conducted to determine whether or not excluding a minimised combination of characteristics affected primarily by missing data would allow the generation of improved models (S5 Technical Appendix). Again, emphasis was placed on those models being optimal when false positive and false negative errors were of equal importance (i.e. a sensitivity and specificity threshold of >80%).

Overall findings

Analyses were based on pooled data from four randomized, placebo-controlled trials that primarily compared the effect of 12 weeks of treatment with tadalafil 5mg once daily versus placebo on symptomatic LUTS improvement in men with LUTS-BPH. Baseline characteristics of patients in the two treatment groups were well balanced (Table 1). There was negligible heterogeneity across the four studies. The complete ITT population was used in all our models. However, depending on the algorithm, there were exclusions due to missing response data or incomplete records. LR, SVM and RF implementations could not be used with incomplete patient records, whereas DTs were able to handle missing predictor, but not missing response variable, information by using 'surrogate splits', for which we allowed 5. Post-hoc sensitivity analysis was used to explore the influence of missing data on the primary result. The set of predictors was reduced such that a sufficient number of complete records were available to the logistic regression, SVM and RF training algorithms. In the end, all patients included in the ITT population were available for inclusion in the data mining algorithms, and no patient was excluded for reasons other than technical ones. Based on these data, the output from our clinical data mining analysis did not find sufficiently good predictors of treatment response to placebo or tadalafil. None of the 107 preselected baseline characteristics achieved a combined sensitivity and specificity of >80% that would enable reliable allocation of an individual patient to either the tadalafil responder or non-responder group. As the detailed results presented below demonstrate, IPSS baseline (mild/moderate vs. severe group) for both placebo and tadalafil 5mg once daily was found several times on the ROC surface and generated the highest combined sensitivity and specificity of 70% and ~50%, respectively, for all analyses.
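The repeated 60:40 split set evaluation with a permuted-label control, as described above, can be sketched with scikit-learn. Everything below is illustrative: X and y are simulated stand-ins for the baseline characteristics and a binary responder flag, and logistic regression stands in for the full LR/DT/SVM/RF panel.

```python
# Illustrative re-creation of the repeated 60:40 split set evaluation and the
# permuted-label control; X and y are simulated, not the pooled trial data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1499, 5))        # 1,499 patients, 5 illustrative predictors
y = rng.integers(0, 2, size=1499)     # hypothetical binary responder flag

def sensitivity_q1_q3(X, y, n_repeats=100):   # the study used 1,000 repeats
    """Q1-Q3 range of test-split sensitivities over repeated 60:40 splits."""
    sens = []
    for seed in range(n_repeats):
        X_tr, X_te, y_tr, y_te = train_test_split(
            X, y, train_size=0.6, random_state=seed)
        pred = LogisticRegression(max_iter=1000).fit(X_tr, y_tr).predict(X_te)
        tp = np.sum((pred == 1) & (y_te == 1))
        fn = np.sum((pred == 0) & (y_te == 1))
        sens.append(tp / (tp + fn))
    return np.percentile(sens, [25, 75])

q1, q3 = sensitivity_q1_q3(X, y)                      # non-random data
q1p, q3p = sensitivity_q1_q3(X, rng.permutation(y))   # randomly permuted response
print(f"sensitivity Q1-Q3: {q1:.2f}-{q3:.2f}; permuted: {q1p:.2f}-{q3p:.2f}")
```

Non-overlap between the two Q1-Q3 ranges is what the authors used to argue an effect was not due to chance.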
Significance of outliers

Outliers were assessed in this clinical data mining study but were not removed, for the reasons described earlier. The assessment of outliers led to relatively few observations. It is worth noting that 3 baseline characteristics had skewed distributions. These were maximum urinary flow rate (Qmax), body mass index (BMI), and frequency of alcohol intake, all of which had >23 outliers in the upper range of their respective scales. Full outlier results are given in the accompanying Supporting Information (S6 Technical Appendix).

Primary Objectives

In our ROC curve analyses, models on the ROC surface represented an optimal trade-off between prediction errors (false positive vs. false negative predictions). Here we describe results from the model in which we observed an equal trade-off between both errors, as determined by ROC curve analysis. Only SDR models were obtained for the pre-specified analyses predicting 'severity MCID' and 'overall MCID' response. A reduction of ≥3 points in overall IPSS score, or an improvement of ≥2 points in patients with IPSS baseline score <20 and of ≥6 points in patients with baseline score ≥20, were the primary objectives. Prediction of 'severity MCID' response in the tadalafil 5mg once daily group produced SDR models on the ROC surface for IPSS severity group (mild/moderate vs. severe) and IPSS voiding subscore only (Table 3). The model with equal importance for FP and FN error was based on IPSS severity group. These results (using this model) were supported by repeat evaluations, which lay within the Q1-Q3 ranges for sensitivity and specificity of 68-72% and 45-50%, respectively. Q1-Q3 ranges for random data were 34-66% for sensitivity, which did not overlap with the runs on non-random data, and 34-66% for specificity. For subjects in the mild/moderate group, this model predicted a positive 'severity MCID' response. 'Severity MCID' response in the placebo group was predicted by six SDR models lying on the ROC surface that included bioavailable testosterone, ED etiology, IPSS severity, cluster of lipid-lowering medications, antidepressants, and use of 5-α-reductase inhibitors (Table 3). Again, IPSS severity achieved the combination of best sensitivity and specificity when positive and negative prediction errors were of equal importance. The Q1-Q3 range for all evaluations was 71-74% for sensitivity and 39-44% for specificity, while random data yielded sensitivities of 32-65% and specificities of 36-68%. Again, there was no overlap with evaluations on non-random data, increasing confidence that the effect was not simply due to chance. This model also predicted a positive 'severity MCID' response for subjects in the mild/moderate group. SDR models predicting 'overall MCID' response in the tadalafil 5mg once daily group were based on ethnicity, IPSS severity, and IPSS voiding subscores (Table 3). Here, the IPSS voiding subscore SDR model achieved optimal predictions when false positive errors were assumed to have the same importance as false negative errors. Q1-Q3 ranges were 77-96% for sensitivity and 13-29% for specificity. For random data, these were 8-89% for sensitivity and 11-91% for specificity. 'Overall MCID' for the placebo group was best predicted by SDR models that included cluster of anti-diabetic drugs, IPSS severity, alcohol usage, and IPSS voiding subscore (Table 3). Giving equal importance to false positives and false negatives, IPSS voiding scores obtained the best predictions. Subjects with an IPSS voiding subscore >5.5 were predicted to have a higher likelihood of 'overall MCID' response. Q1-Q3 ranges for this model in all evaluations were 93-95% for sensitivities and 19-23% for specificities. The corresponding results for random data were 10-88% for sensitivities and 13-90% for specificities. The IPSS severity categories (mild/moderate vs. severe), based on a cut-off of 20, were part of the ROC surface regardless of MCID definition and regardless of treatment group (i.e. tadalafil 5mg once daily or placebo). IPSS voiding subscore was found on the ROC surface for 'overall MCID' prediction.
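Since SDRs were implemented as decision trees restricted to a single decision, one way to reproduce that setup is a depth-1 tree. The sketch below is hypothetical: the data are simulated so that milder patients respond more often, purely to show how a split near the 20-point IPSS cut-off would surface on real data.

```python
# A single decision rule (SDR) as a depth-1 decision tree, per the
# Implementation section. Data are simulated for illustration only.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(1)
ipss_baseline = rng.integers(13, 36, size=(500, 1))       # trials enrolled IPSS >= 13
# Simulated outcome: mild/moderate (<20) patients respond more often here
response = (rng.random(500) <
            np.where(ipss_baseline[:, 0] < 20, 0.7, 0.5)).astype(int)

sdr = DecisionTreeClassifier(max_depth=1).fit(ipss_baseline, response)
print(export_text(sdr, feature_names=["ipss_baseline"]))
# On real data, a split near 20 would reproduce the mild/moderate-vs-severe rule.
```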
Secondary Objectives

Estimates of sensitivities and specificities for each of the secondary objectives for the two treatment groups are presented in Tables 4 and 5. The SDR models achieving optimal prediction performance when false positive predictions are given the same importance as false negative predictions are marked with an asterisk (*), and these are the results on which we have focused. A reduction of ≥1 point on the IPSS QoL question was the first secondary objective. SDR models found on the ROC surfaces included number of anti-hypertensive medications for the tadalafil 5mg once daily group, and ED etiology (mixed or psychogenic) for the placebo group, to predict improvements. A reduction in the IPSS total score of ≥25% from baseline to 12 weeks was the next secondary objective, and SDR models on the ROC surface included presence of hypertension during treatment for the tadalafil group and PGI-S (<5) at baseline for the placebo group. Achieving an IPSS total score of <12 points at 12 weeks was the third secondary objective. An IPSS score <12 was predicted using IPSS total score for the tadalafil 5mg once daily and placebo groups. The cut-off for response was selected as <16 for both tadalafil 5mg once daily and placebo by SDR models on the ROC surface giving equal importance to false positive and false negative predictions. A reduction to <9 points on the BII total score after 12 weeks of treatment was the fourth secondary objective. IPSS severity (mild/moderate) was used to predict BII <9 after 12 weeks of treatment for the placebo group, while the BII total score (<6.5) was used by the SDR predicting response/improvement in tadalafil-treated patients. A reduction of >0.5 point on the BII scale was the fifth secondary objective. BII total score at baseline was used to predict any improvements in BII by the SDR models. The cut-offs employed were ≥1.5 and ≥2.5 for response in the tadalafil 5mg once daily and placebo groups, respectively. The final secondary objective was any improvement on the PGI scale. SDR models lying on the ROC surface that gave equal importance to false positives and false negatives in predicting improvements were % bioavailable testosterone (≥35%) for the tadalafil 5mg once daily group and sex hormone binding globulin (SHBG) (<42nmol/l) for the placebo group, respectively.

Post-hoc Sensitivity Analysis

All pre-specified analyses returned only SDR models. LR, SVM, RF and DT approaches did not yield models because missing values, including parameters that were either not measured or intended for collection, resulted in an insufficient number of complete patient records. Testosterone measurements were the key driver, responsible for 79% of incomplete records, while missing PSA assessments accounted for 70% of records, followed by frequency of alcohol intake and SHBG assessments (both missing in >30% of cases).
Finally, PGI assessment (PGI-I was assessed in only 3 of the 4 studies), previous overactive bladder therapy, ED characteristics and assessment of Qmax were missing for 20% to 30% of patients. Table 6 details sensitivities and specificities on held-out test data from non-SDR models lying on the ROC surface when testosterone, alcohol intake, Qmax, SHBG, albumin, PGI-S and PSA were excluded. For 13 of these models, pre-selection via a t-test filter improved prediction performance (S7 Technical Appendix). In these cases, the pre-selected variables are given in the last column of the table. Only 4 of the models were RF; not a single SVM was observed. Of the better performing models, sensitivity and specificity were best with respect to a BII total score of <9. DTs for the tadalafil 5mg once daily group achieved a sensitivity of 77% (95% CI: 0.72, 0.82) and a specificity of 62% (95% CI: 0.35, 0.85).

Discussion

Identifying predictors of response to drug therapy can be beneficial, especially where significant improvements in patient health-related QoL are sought, such as in LUTS-BPH, where symptom relief is the primary goal of treatment for the majority of men. It also has benefits in an era where patients are encouraged to take an active role in treatment decisions alongside their physician. The objective of this clinical data mining study was to identify prediction models and associated patient baseline characteristics that could be used in clinical practice to predict treatment response to tadalafil 5mg once daily among patients with a diagnosis of LUTS-BPH. To the best of our knowledge, this is the first clinical data mining analysis to use mathematical modelling in studies of patients with LUTS-BPH. To meet this objective, we adopted a rigorous data mining approach involving commonly used models and evaluated their discriminative ability on held-out data using eight different measures of treatment response and 107 possible predictors. These were chosen from a large patient population enrolled in a series of almost identical, placebo-controlled, randomized studies of the same duration of randomized treatment and with similar inclusion/exclusion criteria. Results were backed up by repeated evaluations and comparison to non-informative data to control for bias. As our results have demonstrated, we did not obtain any sensitivities or specificities above an 80% threshold for the specified baseline characteristics. In other words, at this threshold there would be a 20% risk of an incorrect prediction, which we would argue is an acceptable basis on which to predict treatment response in a non-malignant condition in clinical practice. Thus, using our data from four clinical trials and our modelling methods, no single predictive rule emerged from which a treatment algorithm could be developed to clinically guide the use of tadalafil 5mg once daily in patients with LUTS-BPH. Similarly, we found no characteristics that determined response to LUTS-BPH treatment when placebo is used. These findings applied to both primary and secondary objectives. Across the 107 baseline characteristics, there was evidence that, with respect to 'severity MCID', LUTS severity at baseline as measured by IPSS score (mild-moderate ≤20 vs. severe >20) had sensitivity and specificity levels that approached 70% and 50%, respectively. While this level of prediction is marginally better than random guessing, it is still too low for clinical use.
However, IPSS continues to underpin assessments with respect to baseline symptom severity and the monitoring of symptom progression in cases of "watchful waiting" [34]. This may be due to the fact that, during its validation, care was taken to generate a predictive questionnaire [10; 21]. Several analyses of pooled data from the four clinical trials of tadalafil versus placebo that were used in this clinical data mining study have shown that tadalafil significantly improves symptoms of LUTS-BPH, including small but significant improvements in Qmax [35], with concomitant improvements in QoL [11; 36]. Subsequent analyses revealed improvement in both IPSS storage and voiding subscores [12], and that improvements in LUTS occurred irrespective of the presence of co-existing ED [37]. Thus, tadalafil has therapeutic benefit beyond its effects on ED in men with comorbid LUTS-BPH. These findings have been confirmed in a prospective, naturalistic observational study (TadaLutsEd), which closely mirrors routine clinical practice. In this non-selective study, 86% of men aged 50 years and older with LUTS-BPH saw an improvement in urinary symptoms following 6 weeks of treatment with tadalafil 5mg once daily [38]. A subgroup analysis of the effects of tadalafil in various patient subgroups concluded that tadalafil improves LUTS-BPH symptoms, as measured by the IPSS, across all clinical subgroups, including LUTS severity (IPSS ≤20/>20) and previous use of α-blocking agents [13]. However, while that analysis looked at the various subgroups from a population perspective and, as such, evaluates improvement on average, our work crucially looks at it from the perspective of the physician and the individual patient (i.e. predicting the improvement on an individual basis). Both analyses are consistent, in that efficacy occurred across all subgroups in the pooled analysis of data from the four clinical trials, while no reliable predictor of response was found in our analysis of the same trials on an individual patient basis. Given that tadalafil provides early symptomatic relief [6] across a wide range of men with LUTS-BPH, including those with ED and other significant comorbidities, it is perhaps not surprising that we were unable to identify individual predictors of response to placebo or tadalafil 5mg once daily despite rigorous data mining. Many examples exist in the literature of predictors of response (or failure to respond) to drug therapy, including drugs for LUTS-BPH. For example, large prostate volume and more severe symptoms at baseline have been identified as predictive factors for failure to respond to first-line medical therapy for LUTS-BPH [39]. Severity of symptoms is a strong influence on the extent to which patients judge treatment to give clinically meaningful improvement [19]: greater severity requires a proportionately greater improvement in symptom relief for patients to perceive the same degree of improvement as those with less severe disease [16]. A systematic review and meta-analysis of the use of PDE-5 inhibitors in LUTS-BPH suggested that younger men with lower BMI and severe urinary symptoms were the best candidates for PDE-5 inhibitor therapy [40], a finding we were unable to confirm in our analysis when examining patients treated with tadalafil or receiving placebo. We did, however, identify some potential candidates for predicting treatment response.
In addition to IPSS-related characteristics, we found that bioavailable testosterone, ED etiology, cluster of lipid-lowering medications, antidepressants and previous use of 5-α-reductase inhibitors may have potential as predictors of treatment response, especially in relation to 'severity MCID' response. Although substantial further work is needed to test these observations, there is some independent evidence to suggest that some, if not all, may be viable candidates. A recent study on the effects of tadalafil 5mg in men with hypogonadism and LUTS-BPH showed that, while tadalafil was effective in men both with and without hypogonadism, the improvement in IPSS storage subscore and IPSS QoL was appreciably greater in men without hypogonadism than in those with low testosterone levels [41]. There is also evidence to suggest that depression, anxiety and somatization may influence the clinical manifestation of LUTS-BPH, and that anxious patients respond less well to treatment [42]. Conceivably, treatment with antidepressants could play a role not only in alleviating symptoms of depression and anxiety but also in increasing the likelihood of response to specific LUTS-BPH therapy, something for which there is now published evidence [43]. In this study we chose to use established models for prediction, such as LRs, DTs, SVMs and RFs, rather than newer and more complex models. Surprisingly, none of them showed robustness with regard to handling missing data. This was unexpected, especially for DTs and RFs. Current data mining research is focused on developing models that achieve ever better predictions (on complete datasets), while simultaneously ignoring the problem of missing information, which could be informative but could also completely compromise the method. In our modelling study, even DTs, which have an integrated mechanism for dealing with missing data via surrogate splits, often failed to achieve better performance than models that made only a single decision. Only nine DTs were found on the ROC surface and, of these, six required pre-selection of variables via a t-test filter. This clearly highlights the importance of this issue in clinical data mining research. Despite its strengths, which include a pre-specified program of statistical analyses, this study has several limitations. Firstly, there was no subsequent independent study to validate our results. It is also possible that we may not have collected the "true factor" for predicting response, even though we examined 107 baseline characteristics. Better methods could have been employed to fine-tune model parameters, especially for the SVM. For example, a triple split set evaluation, consisting of a training split for model generation, a validation split for model selection, and a test split for hold-out evaluation, could have been used to fine-tune the selection of better generalising models. Evaluation of the training-test set bias did not, however, indicate a need for such additional complexity. An alternative pre-filtering step could have been used, adding clustering information as supportive predictor information, adding de-noising (whitening) data pre-processing steps, or using statistical bootstrapping. With respect to the SVM, we employed the radial basis kernel, but we could have used further kernels, such as random walk kernels, optimal matching kernels, or other kernel types or kernel machines for training this algorithm on our data.
There were also limitations inherent in the trial inclusion and exclusion criteria; for example, patients with a post-void residual urine volume >300ml were excluded, and prostate volume was not directly assessed, although PSA can be used as a surrogate for prostate size. In conclusion, none of the approaches presented here led to a prediction model with sufficient accuracy for the development of a tailoring algorithm for tadalafil 5mg once daily or placebo in LUTS-BPH. Thus, the ideal patient profile for which tadalafil should be prescribed with respect to baseline demographics, medical history, IPSS, International Index of Erectile Function (IIEF) score and Qmax remains as yet unknown. Although the response to treatment in an individual patient cannot be reliably predicted from the characteristics and methods we have evaluated so far, this does not mean that patients with LUTS-BPH are unlikely to respond on average to treatment with placebo or tadalafil 5mg once daily. Among the approximately two-thirds of men with LUTS-BPH who achieved CMI following treatment with tadalafil 5mg, over half achieved CMI after one week of therapy and over 70% within 4 weeks [44]. Although this study did not identify any pre-existing patient characteristics that might predict a treatment response, tadalafil 5mg once daily has been shown to effectively impact LUTS-BPH across a range of patient subgroups. Therefore, the decision to treat an individual case of LUTS-BPH with tadalafil 5mg once daily continues to rest on medical assessment of the patient, consideration of contra-indications and the presence of co-existing conditions, with the patient's expectations and preferences leading to mutual patient-physician agreement. This approach is entirely compatible with the current concept of shared decision making, in which the patient's voice should be heard as an integral part of the treatment decision [45], especially for a condition in which part of the symptomatic improvement is a strong placebo response [16].
2016-05-12T22:15:10.714Z
2015-08-18T00:00:00.000
{ "year": 2015, "sha1": "e7cf76783c16b6dedd858b3696a26f3532a78804", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0135484&type=printable", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "e7cf76783c16b6dedd858b3696a26f3532a78804", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
245133786
pes2o/s2orc
v3-fos-license
Lack of Association Between Helicobacter pylori Infection and the Risk of Thyroid Nodule Types: A Multicenter Case-Control Study in China

The prevalence of Helicobacter pylori infection is high worldwide, and much research has focused on unraveling the relationship between H. pylori infection and extragastric diseases. Although H. pylori infection has been associated with thyroid diseases, including thyroid nodule (TN), the proposed relationship has mainly rested on potential physiological mechanisms and has not been validated by large population epidemiological investigations. We therefore designed a case-control study comprising participants who received regular health examinations between 2017 and 2019. The cases and controls were diagnosed via ultrasound, while TN types were classified according to the guidelines of the American College of Radiology Thyroid Imaging Reporting and Data System (ACR TI-RADS). Moreover, H. pylori infection was determined by the C14 urea breath test, while its relationship with TN type risk and severity was analyzed using binary and ordinal logistic regression analyses. A total of 43,411 participants, including 13,036 TN patients and 30,375 controls, were finally recruited in the study. The crude odds ratio (OR) was 1.07 in Model 1 (95% CI = 1.03-1.14) without adjustment, compared to the H. pylori non-infection group. However, it was negative in Model 2 (OR = 1.02, 95% CI = 0.97-1.06) after adjustment for gender, age, body mass index (BMI), and blood pressure, and in Model 3 (OR = 1.01, 95% CI = 0.97-1.06) after additional adjustment for total cholesterol, triglyceride, low-density lipoprotein, and high-density lipoprotein on the basis of Model 2. Control variables, including gender, age, BMI, and diastolic pressure, were significantly correlated with the risk of TN types. Additionally, ordinal logistic regression results revealed that H. pylori infection was positively correlated with malignant differentiation of TN in Model 1 (OR = 1.06, 95% CI = 1.02-1.11), while Model 2 and Model 3 showed negative results (Model 2: OR = 1.01, 95% CI = 0.96-1.06; Model 3: OR = 1.01, 95% CI = 0.96-1.05). In conclusion, H. pylori infection was not significantly associated with either TN type risk or the severity of its malignant differentiation. These findings provide relevant insights for correcting possible misconceptions regarding TN type pathogenesis and will help guide optimization of therapeutic strategies for thyroid diseases.
INTRODUCTION

Helicobacter pylori, a Gram-negative bacterium that infects the human stomach in about half of the population worldwide, is difficult to eliminate by the human immune system. Previous studies have shown that H. pylori infection rates are on a decline but still high in developing countries (Maluf et al., 2020). H. pylori infection has been linked to the pathogenesis of numerous diseases of the upper digestive tract, including gastric cancer, gastric ulcer, and gastritis. Moreover, this infection has also been associated with many extragastric diseases, including hematological, allergic, neurological, ophthalmic, metabolic, and dermatologic diseases (Pero et al., 2019; Gravina et al., 2020). Based on these pieces of evidence, it is necessary to explore the relationship between H. pylori infection and other extragastric diseases.

Thyroid nodule (TN), which has a prevalence rate of 20%-70%, especially in women and the elderly, is a common clinical disorder (Schiaffino et al., 2020). Notably, recent advancements in ultrasonic resolution have also improved the detection rate of TN types (Yu et al., 2020). Previous studies have shown that although TN types are mostly benign, approximately 5%-15% of them may develop into malignancies (Chambara and Ying, 2019). Despite the reliability of the current diagnostics and treatment therapies for TN types, the pathogenesis remains unclear, and several risk factors, including diet, environment, genetics, and abnormal inflammation, have been documented.

Previous studies on the association between H. pylori infection and thyroid diseases have mostly focused on thyroid autoimmunity, and a correlation between H. pylori infection and thyroid autoimmune diseases, such as Graves' disease and Hashimoto thyroiditis, has been reported (Benvenga and Guarneri, 2016; Kohling et al., 2017; Figura et al., 2019). Shen et al. (2013) found that H. pylori infection was positively associated with the risk of TN types. Currently, the biological mechanisms that may explain this association mainly include molecular mimicry and dysbiosis (Zhang et al., 2018; Cuan-Baltazar and Soto-Vega, 2020; Docimo et al., 2020). Molecular mimicry theory suggests that there are at least 14 H. pylori proteins whose antigen epitopes are similar to local amino acid sequences of endogenous thyroid proteins (Benvenga and Guarneri, 2016).
This structural similarity can trigger an immune cross-reaction and chronic thyroid inflammation, which may explain the reactive hyperplasia induced by H. pylori infection (Shi et al., 2013). T helper (Th) cells also play an indirect role in this mechanism, which may co-induce the formation of TN types (Shi et al., 2013; Cuan-Baltazar and Soto-Vega, 2020). In addition, the intestinal flora affects endocrine signals through the brain-gut axis, and the intestinal microbial diversity of TN patients is significantly higher than that of the healthy population (Zhang et al., 2019). It has been found that Lactobacillus, an important member of the intestinal flora, participates in protecting the thyroid gland from oxidative damage, while H. pylori infection has an inhibitory effect on the survival of this bacterium, which may be related to the pathogenesis of TN types (Iino et al., 2018; Zhang et al., 2019). Although these findings have indicated relationships and potential physiological mechanisms, there is still a need for validation using large population epidemiological investigations. Here, we designed a case-control study to verify the relationship between H. pylori infection and the risk of TN types. Our results provide valuable insights into the etiology of TN types and are expected to guide future design of management strategies for this common disorder.

Study Population

This observational study recruited community residents and workers from different organizations and companies who underwent a health examination at the Health Management Center of the First Affiliated Hospital of Anhui Medical University, Hefei, the capital city of Anhui Province, China, between January 2017 and March 2019. The hospital has six centers, located across different regions of the province. The study was approved by the Ethics Committee of the First Affiliated Hospital of Anhui Medical University (number: Quick-PJ 2021-10-28). Classification of TN types and controls was performed via ultrasound diagnosis. The inclusion criteria in this research were mainly based on the type of physical examination items selected by participants: individuals who volunteered for routine physical examinations, including the carbon-14 (C14) urea breath test (for H. pylori infection diagnosis), TN type screening (ultrasonographic diagnosis), and measurement of height, weight, blood pressure (BP), and blood lipids. We excluded individuals who had previously undergone thyroidectomy or who had taken drugs that affect thyroid function, including antithyroid drugs, lithium salts, and amiodarone. In addition, individuals who had undergone therapy for H. pylori eradication via antibiotics, proton pump inhibitors, or bismuth, or who had taken antibiotics within the previous 14 days, were also excluded. Finally, a total of 44,682 participants were recruited, of whom 1,271 individuals were excluded based on the exclusion criteria.

Data Collection

All participant information was obtained from the physical examination database of the Health Management Center, while H. pylori infection was diagnosed using the C14 urea breath test. Participants fasted for more than 6 h prior to the C14 breath test. In the C14 breath test, a positive result is defined as a value of more than 100 dpm/mmol, while a value in the range 0-100 dpm/mmol implies a negative result.
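A trivial sketch of the diagnostic rule just stated (values above 100 dpm/mmol read as positive); the function name is ours, not part of the study's protocol.

```python
# The C14 urea breath test threshold described above; illustrative helper only.
def h_pylori_positive(dpm_per_mmol: float) -> bool:
    """>100 dpm/mmol is read as H. pylori positive; 0-100 as negative."""
    return dpm_per_mmol > 100

print(h_pylori_positive(145.0), h_pylori_positive(60.0))  # True False
```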
TN type screening was performed using ultrasonographic diagnostic techniques by ultrasonologists with at least 5 years of independent work experience. The TN types were then divided into six grades and classified by the American College of Radiology Thyroid Imaging Reporting and Data System (ACR TI-RADS) score. These ACR TI-RADS classification and diagnosis results were used to stratify the patients into case groups comprising TI-RADS1/2, TI-RADS3, and TI-RADS4/5/6, as the number of patients with TI-RADS1, TI-RADS5, and TI-RADS6 was negligible. The TI-RADS4 group also included TI-RADS4a, TI-RADS4b, and TI-RADS4c, which were not subdivided into groups of different TN types. Considering the influence of potential confounding factors, we also collected other related information, including gender, age, body mass index (BMI), BP, total cholesterol (TCH), triglyceride (TG), low-density lipoprotein (LDL), and high-density lipoprotein (HDL). These were selected according to previous research findings and expert opinions (Shen et al., 2013; Li et al., 2019; Li et al., 2020). In addition, trained professional nurses systematically collected information on participants' height, weight, and BP using standardized height and weight detectors and intelligent BP meters. BMI was calculated as weight divided by height squared (kg/m²). All other data were obtained from laboratory tests on blood samples collected after an overnight fast of at least 8 h.

Statistical Analysis

Statistical analyses were performed using SPSS software version 22.0. All quantitative variables were presented as means ± standard deviations, while qualitative variables were expressed as quantities and percentages. Rates among different groups were compared using the chi-square test, while differences between quantitative variables were determined using Student's t-test or the Mann-Whitney U test. The relationship between H. pylori infection and the risk of TN types was determined using binary logistic regression. We also calculated tolerance and the variance inflation factor for model diagnosis in order to exclude multicollinearity among the different independent variables. Multicollinearity was considered present if the tolerance was less than 0.1 or the variance inflation factor was more than 10. In addition, we performed ordinal logistic regression to identify a potential association between H. pylori infection and the severity of TN types. Differences with P < 0.05 were considered statistically significant.

Participant Characteristics

A total of 43,411 participants, of whom 13,036 and 30,375 were TN patients and controls, respectively, were included in this study (Table 1). Among them, 17,697 subjects (40.8%) were positive for H. pylori infection, including 5,461 TN cases and 12,236 controls. Notably, the H. pylori positivity rate was significantly higher in the TN group than in the control group (P = 0.002). Similarly, the average age and the proportion of female subjects were significantly higher in the TN type group relative to the controls (each P < 0.001). In addition, all other indicators, namely BMI, systolic pressure, diastolic pressure, TCH, TG, LDL, and HDL, were significantly higher in the TN type group than in the controls (all P < 0.001).

Prevalence of Thyroid Nodule Types Between H. pylori-Infected and Non-Infected Groups

We compared the prevalence of TN types between subjects infected with H. pylori and non-infected counterparts across different subgroups, including gender, age, and BMI.
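For concreteness, here is a minimal sketch of the chi-square comparison applied to one such subgroup, using hypothetical counts rather than the study's actual 2x2 tables; the subgroup results themselves follow.

```python
# Chi-square comparison of TN prevalence between H. pylori (+) and (-)
# subjects within one subgroup; the counts are hypothetical.
from scipy.stats import chi2_contingency

#              TN cases  non-cases
table = [[320, 680],    # H. pylori (+)
         [280, 720]]    # H. pylori (-)
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
```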
Results from the female subgroup indicated a significantly higher prevalence in the infected group than in the non-infected group (P < 0.001), which differed slightly from the male subgroup (P = 0.159). On the other hand, the prevalence of TN types increased across both the age and BMI subgroups, although no significant differences were observed between the H. pylori-infected and non-infected groups. Notably, significant differences in the prevalence of TN types were only found in the 50-59 years age range (P = 0.030) (Figure 1) and in the 18.5-23 kg/m² BMI range (P = 0.001) (Figure 2).

FIGURE 1 | The prevalence of TN types increased with age in both the H. pylori (+) and H. pylori (-) groups. Among the five age ranges, only in the 50-59 years subgroup was the prevalence of TN types in the H. pylori (+) group statistically lower than that in the H. pylori (-) group; in the other age subgroups, no significant difference was found between the H. pylori (+) and H. pylori (-) groups.

FIGURE 2 | The prevalence of TN types increased slowly with increasing BMI. Among the four BMI ranges, only in the 18.5-23 kg/m² subgroup was the prevalence of TN types in the H. pylori (+) group statistically higher than that in the H. pylori (-) group; no significant difference was found between the H. pylori (+) and H. pylori (-) groups in the other BMI subgroups.

Correlation Between H. pylori Infection and the Risk of Thyroid Nodule Types

We adopted three binary logistic regression models to calculate odds ratios (ORs) and depict the correlation between H. pylori infection and the risk of TN types (Table 2). Results from Model 1, which employed the univariate logistic regression algorithm, revealed that the crude odds for TN types were 7% higher in the H. pylori infection group than in the non-infection group (OR = 1.07, 95% CI = 1.03-1.14). However, results from Model 2, adjusted for sex, age, BMI, systolic pressure, and diastolic pressure, were negative (OR = 1.02, 95% CI = 0.97-1.06). Model 3 likewise revealed stable, negative results (OR = 1.01, 95% CI = 0.97-1.06) after adjustment for additional control variables, including TCH, TG, LDL-C, and HDL-C, on the basis of Model 2. In Model 2, control variables including gender, age, BMI, and diastolic pressure were significantly correlated with the risk of TN types.

In the binary logistic analysis, control variables with a P value <0.001, including gender, age, and BMI, were analyzed by stratification in order to control for potential confounding bias (Table 3). The three models revealed no statistically significant correlation between H. pylori infection and the risk of TN types in the male subgroup (Model 1: OR = 1.04, 95% CI = 0.98-1.08; Model 2: OR = 1.01, 95% CI = 0.95-1.08; Model 3: OR = 1.01, 95% CI = 0.95-1.07). However, in the female subgroup, Model 1 revealed a positive correlation between H. pylori infection and the risk of TN types (OR = 1.12, 95% CI = 1.05-1.20), whereas both Model 2 and Model 3 revealed a negative correlation (Model 2: OR = 1.02, 95% CI = 0.96-1.10; Model 3: OR = 1.02, 95% CI = 0.95-1.09). In the age subgroups, the ORs of all three models showed a downward trend with increasing age, but the results still showed that H. pylori infection was not a risk factor for TN types. With regard to BMI, changes in ORs between the different strata were stable, although the 18.5-23 kg/m² subgroup showed a relatively high OR, accompanied by a negative overall correlation.
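The three-model binary logistic analysis above, together with the tolerance/VIF multicollinearity check from the Statistical Analysis section, can be sketched with statsmodels instead of SPSS. The data frame below is simulated and deliberately reduced in scope; column names are illustrative stand-ins for the study variables.

```python
# Binary logistic regression with ORs, 95% CIs, and a VIF/tolerance check,
# using statsmodels on simulated data; variable names are illustrative.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(2)
n = 2000
df = pd.DataFrame({
    "hp_infection": rng.integers(0, 2, n),
    "age": rng.normal(50, 12, n),
    "bmi": rng.normal(24, 3, n),
    "tn": rng.integers(0, 2, n),   # hypothetical thyroid nodule flag
})

X = sm.add_constant(df[["hp_infection", "age", "bmi"]])
fit = sm.Logit(df["tn"], X).fit(disp=0)

# Odds ratios with 95% CI: exponentiate coefficients and their interval bounds
or_table = np.exp(pd.concat([fit.params, fit.conf_int()], axis=1))
or_table.columns = ["OR", "2.5%", "97.5%"]
print(or_table)

# Multicollinearity: VIF > 10 (equivalently tolerance = 1/VIF < 0.1) is flagged
for i, col in enumerate(X.columns[1:], start=1):
    print(col, variance_inflation_factor(X.values, i))
```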
We performed ordinal logistic regression to analyze the potential relationship between H. pylori infection and the malignant tendency of TN types (Table 4). Briefly, we divided patients with different TN types into three grades according to the ultrasound diagnosis results, yielding four ordered grades once normal controls were included. Results from Model 1 revealed a positive relationship (OR = 1.06, 95% CI = 1.02-1.11), whereas both Model 2 and Model 3 yielded null results after inclusion of the control variables (Model 2: OR = 1.01, 95% CI = 0.96-1.06; Model 3: OR = 1.01, 95% CI = 0.96-1.05). The positive correlation observed in Model 1 might be attributable to the influence of confounding factors.

DISCUSSION

Although previous reports have shown that H. pylori infection is involved in various thyroid diseases, very few of these studies analyzed large sample sizes. Results of the present study, a population-based case-control study of community residents and workers from different organizations and companies, indicated that H. pylori infection was not significantly associated with the risk of TN types, in contrast with previous reports (Shen et al., 2013). We validated this lack of significant association through ordinal logistic regression analyses. The findings of our study, as well as those from the published literature, indicate that the theoretical mechanisms underlying the possible association between H. pylori infection and the risk of TN types are not rigorously established.

Molecular mimicry is considered a possible mechanism underlying the relationship between H. pylori infection and the pathophysiology of thyroid diseases. In fact, previous evidence has suggested that infection-induced chronic inflammation could be a crucial cause of TN types, which may also be related to the structural similarity between H. pylori epitope antigens and thyroid autoantigens (Yu et al., 2019; Liu et al., 2020). It has been reported that 14 proteins of the H. pylori antigen epitope are similar to thyroid endogenous proteins, including segments of the human thyrotropin receptor, thyroid autoantigen, and sodium iodide symporter (Benvenga and Guarneri, 2016). Previous studies have also revealed structural similarities between H. pylori epitopes and H-K-ATPase in the thyroid gland. Furthermore, immune responses induced by H. pylori infection have been shown to indirectly cause Th1 activation and apoptosis and to promote secretion of pro-inflammatory cytokines, including tumor necrosis factor-alpha (TNF-a) and interferon-gamma (IFN-g), thereby causing thyroid tissue injury and inflammation (Cuan-Baltazar and Soto-Vega, 2020). However, molecular mimicry has only been used to explain the autoimmune thyroid pathophysiology caused by H. pylori infection; thus, it may not explain the full induction of TN type development. Another possible theoretical mechanism is dysbiosis. Although the composition of the gut microbiome of TN patients differs from that of healthy controls (Zhang et al., 2019; Docimo et al., 2020), there is no definite evidence affirming that H. pylori infection directly induces such differences in the gut microbiome, or that the dysbiosis clearly causes TN types.
Results of the present study further indicated that control variables, including gender, age, and BMI, were significantly correlated with the risk of TN types, consistent with findings from previous studies (Kwong et al., 2015; Zheng, 2015; Jasim et al., 2020). Collectively, this implies that the results of this study are reliable and reflect the association between H. pylori infection and the risk of TN types. Based on our results, female sex, increasing age, and higher BMI were all risk factors for TN types, while potential interactions across different confounding factors may also affect the risk of TN types. Therefore, it is possible that the influence of these confounders, or selection bias in the study population, contributed to the previously reported false-positive associations between H. pylori infection and TN types.

Our results also suggested that higher diastolic pressure might be a risk factor for TN types. This is similar to the findings of Li et al. (2020), who reported that higher systolic pressure was positively correlated with increased risk of TN types in a female cross-sectional study. A possible mechanism is that H. pylori infection often induces an increase in serum fibrinogen, which interferes with the release of nitric oxide from the vascular endothelium. This tends to inhibit normal relaxation of blood vessels, and vasoconstriction is the main factor leading to increased diastolic BP (Migneco et al., 2003). However, the potential biological mechanisms underlying the observed association between diastolic pressure and the risk of TN types require further exploration using animal experiments or cohort studies.

This study had some limitations. Firstly, we also compiled the study population's family history, past diseases, and lifestyle habits, which may be potential confounding factors; however, this information was incomplete due to the low response rate of the supplementary questionnaires and thus was not included in the final regression models. In addition, considering the feasibility of the study implementation, other potential confounding factors, such as radiation exposure to the head and neck, were not collected, and these factors may also have affected the results. Secondly, there may still be potential selection bias in the study population. Patients at high risk for malignant TN types, diagnosed with TI-RADS5 or TI-RADS6, were few in this research. This might be because residents and workers with higher TI-RADS grades may be more inclined to seek hospital treatment directly, as opposed to receiving a physical examination at a health management center, even though residents and workers in the surveyed areas had a high prevalence of regular health screenings. Moreover, Anhui is one of the most densely populated provinces in eastern China; the included participants were mainly from urban areas of Anhui, the majority from the capital city, Hefei. These factors may also affect the representativeness of the study population. Finally, this study employed a case-control design; thus, it was difficult to analyze causality between potential risk factors and TN types. Further, better-designed studies are therefore needed to validate these results.

In summary, we found no significant correlation between H. pylori infection and
either TN type risk or the degree of its malignant differentiation, indicating that H. pylori infection neither promotes nor induces TN types. Therapies for eliminating H. pylori are not recommended for TN patients as an independent measure to reduce the risk of malignant differentiation. Further explorations, using large prospective studies, are needed to fully elucidate the association between H. pylori infection and other thyroid disorders.

DATA AVAILABILITY STATEMENT

The original contributions presented in the study are included in the article/supplementary material. Further inquiries can be directed to the corresponding author.

ETHICS STATEMENT

The studies involving human participants were reviewed and approved by the Ethics Committee of the First Affiliated Hospital of Anhui Medical University. Written informed consent for participation was not required for this study in accordance with the national legislation and the institutional requirements.

AUTHOR CONTRIBUTIONS

HW: research design. X-SW: article drafting and data analysis. GJ: data collation and analysis. Y-HL, T-TY, and Y-WZ: data collation. X-HX, KL, and Y-TL: data collection. M-WC and H-QH: revisions of the article. All authors contributed to the article and approved the submitted version.

FUNDING

This study was supported by the National Key R&D Program of China (2020YFC2006500, 2020YFC2006502).
Association of low-grade inflammation caused by gut microbiota disturbances with osteoarthritis: A systematic review

Background: Many studies have been published on the relationship between the gut microbiome and knee osteoarthritis; however, the evidence for the association of the gut microbiota with knee osteoarthritis has not been comprehensively evaluated. Objective: This review aimed to assess existing results and provide scientific evidence for the association of low-grade inflammation caused by gut microbiota disturbances with knee osteoarthritis. Methods: This study conducted an extensive review of the current literature using four databases, PubMed, EMBASE, Cochrane Library, and Web of Science, up to 31 December 2021. Risk of bias was determined using ROBINS and SYRCLE, and quality of evidence was assessed using the GRADE and CAMADARES criteria. Twelve articles were included. Results: Studies have shown that a high-fat diet leads to a disturbance of the gut microbiota, mainly manifested by an increase in the abundance of Firmicutes and Proteobacteria, a decrease in Bacteroidetes, and an increase in the Firmicutes/Bacteroidetes ratio. Exercise can reverse the pattern of gain or loss caused by a high-fat diet. These changes are associated with elevated levels of serum lipopolysaccharide (LPS) and its binding protein, as well as various inflammatory factors, leading to osteoarthritis (OA). Conclusion: This systematic review shows a correlation between low-grade inflammation caused by gut microbiota disturbances and the radiological severity and dysfunction of knee osteoarthritis. However, only a very small number of studies could be included in the review. Thus, further studies with large sample sizes are warranted to elucidate the association of low-grade inflammation caused by gut microbiota disturbances with osteoarthritis, and to explore possible mechanisms for ameliorating osteoarthritis by modulating the gut microbiota.

Introduction

Osteoarthritis (OA) is the most common musculoskeletal disease and one of the leading causes of disability (1). Epidemiological surveys show that more than 320 million people worldwide suffer from OA, and the prevalence is higher in women than in men. Traditionally, mechanical and genetic factors have been considered important causes of OA (2, 3). However, emerging evidence suggests that low-grade inflammation plays an important role in the development of OA (4), and this inflammatory state is closely related to the gastrointestinal microbiota (5). The gastrointestinal microbiota refers to the sum of all microbiota present in the gut, together with their genetic material and metabolites (6, 7). The gut microbiota plays an important role in maintaining the body's homeostasis, underpinning human physiology, immune system development, digestion, fat storage, regulation of angiogenesis, behavior, development, and detoxification responses. The human gut microbiota is mainly composed of Firmicutes, Bacteroidetes, Actinobacteria, Proteobacteria, and Verrucomicrobia. Among the more than 70 species of intestinal commensal flora, Bacteroidetes and Firmicutes account for more than 98% of the total (8, 9). Studies have shown that a variety of diseases are associated with specific bacterial sequences and with alterations and disturbances in the composition of the microbiota (10, 11). At the same time, the gut microbiota plays a key role in the development and function of the immune system, as well as in allergic and inflammatory responses (12-15).
Alterations in the microbiome activate the innate immune system, leading to increased pro-inflammatory cytokines, and this local and systemic low-grade inflammation contributes to the development and progression of OA (16, 17). At present, there are more and more studies on the correlation between low-grade inflammation caused by intestinal flora disturbance and OA. It is difficult to draw conclusions about the consistency of the association because of different study designs and assessment methods, so it remains unclear whether low-grade inflammation due to disturbances in the gut microbiota has a differential effect on OA. Given the high prevalence of OA and its significant socioeconomic burden, it is important to explore the impact of low-grade inflammation caused by gut microbiota disturbances on OA.

Search strategy

We comprehensively searched for articles published before 31 December 2021 using four electronic medical databases (PubMed, EMBASE, Cochrane Library, and Web of Science).

Selection criteria

Inclusion criteria: (1) clinical and basic research of any level of evidence; (2) English-language articles published in peer-reviewed journals; (3) studies on the association of low-grade inflammation caused by gut microbial imbalances with OA, OA pathogenesis, or related symptoms. Exclusion criteria: (1) studies with missing data; (2) studies with duplication or poor scientific method; (3) abstracts, case reports, conference reports, reviews, editorials, and expert opinions.

Literature screening and data extraction

Two investigators (WX and HX) independently searched and selected relevant articles according to the inclusion and exclusion criteria, read the full texts, and extracted data from the finally included literature. Any disagreements were resolved by an experienced systematic reviewer (BJJ), and differences in data extraction were resolved by consensus. After extraction, the data were considered heterogeneous in study design, measures, and methods of assessment; therefore, a descriptive analysis approach was preferred to a meta-analysis. See Figure 1 for details of the selection process.

Risk of bias assessment

ROBINS was used to assess the risk of bias in non-randomized clinical studies (18), and RoB 2.0 (19) was used to assess the risk of bias in randomized clinical studies. Risk of bias in preclinical studies was assessed using SYRCLE (20). WX and HX conducted the evaluations independently, and any disagreements were resolved by consensus.

Study quality assessment

The quality of the clinical studies (n = 6) was assessed using the GRADE method (21), and each study was classified as 'low', 'moderate', or 'high'. All studies were ranked 'moderate' or 'high'. The quality of the preclinical studies (n = 6) was assessed using the Collaborative Approach to Meta-Analysis and Review of Animal Data from Experimental Studies (CAMADARES) checklist (Supplementary material) (22, 23). Each study was scored on a scale from 0 to 10 points, and the overall quality of the included studies was moderate (mean CAMADARES score 4.17). Most studies used 16S ribosomal RNA (rRNA) gene sequencing to examine the gut microbiota and enzyme-linked immunosorbent assay (ELISA) to measure inflammatory markers. Meanwhile, most studies assessed the radiographic or symptomatic severity of OA using the Western Ontario and McMaster Universities (WOMAC) score, the Visual Analog Scale (VAS) score, scores for articular cartilage structure (ACS), the Osteoarthritis Research Society International (OARSI) score, synovitis scores, and osteophyte size.
Overall, various studies have suggested a relationship between inflammation caused by intestinal flora disturbance and OA.

Effects of diet, exercise or probiotics on gut microbiota

A high-fat diet leads to gut microbiota disturbances and is a common model of low-grade inflammation (35). Firmicutes, Bacteroidetes, and Proteobacteria are the three major phyla of the gut microbiota (28). A high-fat diet causes disturbance of the gut microbiota, increases endotoxin-producing bacteria, and decreases bacteria protecting the intestinal barrier, thereby enhancing bone destruction in OA in mice. This is mainly manifested by an increase in the abundance of Firmicutes and Proteobacteria, a decrease in Bacteroidetes, and an increase in the Firmicutes/Bacteroidetes ratio (28). Two studies suggest that probiotic supplementation reduces intestinal damage and inflammation and has great potential in the treatment of osteoarthritis (27, 31).

The influence of intestinal flora disturbance on OA

Intestinal microbial disturbances increase intestinal permeability and cause low-grade inflammation throughout the body, thereby aggravating OA. By transplanting human microorganisms into mice, it was found that the abundance of Fusobacterium and Enterococcus faecalis in the transplanted mice increased while the abundance of Ruminococcus decreased, the average systemic concentration of inflammatory markers increased, and the increased intestinal permeability was associated with more severe OA (26). At the same time, the serum estrogen level in OA rats was significantly decreased, which correlated with a significant increase in LPS. In Lactobacillus rhamnosus-treated OA rats, the expression levels of monocyte chemoattractant protein-1 (MCP-1) and its receptor chemokine C-C-motif receptor 2 (CCR2), interleukin-1β (IL-1β), and matrix metallopeptidase 3 (MMP3) were decreased, while the expression levels of γ-aminobutyric acid (GABA), peroxisome proliferator-activated receptor γ (PPARγ), tissue inhibitor of metalloproteinases 1 (TIMP1), tissue inhibitor of metalloproteinases 3 (TIMP3), SRY-related high-mobility-group-box gene 9 (SOX9), type II collagen fiber α1 gene (COL2A1), and interleukin-10 (IL-10) were increased (27).

The effect of inflammation on OA

Inflammation is a key link in the occurrence and development of OA: inflammation in plasma and in the local soft tissue of the joint can both cause OA. Studies have shown that stimulation of Toll-like receptor (TLR) signaling can exacerbate invasive OA in mice (29). At the same time, serum high-sensitivity C-reactive protein (hs-CRP) levels correlated with the WOMAC and VAS scores of bones and joints (31). Research has shown that LPS and lipopolysaccharide-binding protein (LBP) were significantly associated with activated macrophages and osteophyte severity in the joints of knee osteoarthritis patients. However, not all studies have shown a correlation between inflammatory markers and osteoarthritis: no statistically significant association was found between soluble Toll-like receptor 4 (sTLR4) or IL-6 and the radiographic progression of OA (32).

Discussion

Our systematic review suggests a link between low-grade inflammation caused by the gut microbiota and osteoarthritis, but further research is needed.
Low-grade inflammation leads to OA through the production of inflammatory mediators, involving innate immune activation, a macrophage-dominated inflammatory response, Toll-like receptor (TLR) activation, and complement activation, among which TLR signaling plays an important role in the pathogenesis of OA (4, 36-38). Locally released injury molecules activate TLRs, which trigger the secretion of pro-inflammatory substances and local inflammation in the joints (4, 38). TLR expression has been found to be increased in areas of cartilage damage in OA patients (39). Upregulation of various TLR signaling components is seen in OA-associated chondrocytes, most notably LBP and cluster of differentiation 14 (CD14), which are accessory proteins of multiple TLRs and interact with multiple signaling molecules including LPS (37, 38). Studies have shown that gut bacterial products such as LPS can enter the systemic circulation and affect many organs, including the joints, by causing systemic low-grade inflammation (30, 40). LPS is an endotoxin associated with the outer membrane of various Gram-negative pathogens (41) and a classic innate immune system activator that activates host immune cells by binding to Toll-like proteins. Meanwhile, a correlation study between LPS and OA showed that human serum LPS levels are associated with osteophyte severity in OA, and synovial fluid LPS is associated with osteophyte severity, joint space narrowing, and total pain/function severity scores (30). Similar to LPS, LBP has also been shown to be associated with increased KOA severity in humans (30). LBP is mainly produced by hepatocytes and is a well-known acute-phase reactant (42). LBP is activated by inflammatory mediators such as IL-6 and, directly or indirectly, by LPS itself (43-45). In humans, LBP triggers a dynamic endotoxin cascade by binding LPS and transferring it to CD14, which in turn transfers LPS to the Toll-like receptor 4-myeloid differentiation protein-2 (TLR4-MD-2) receptor on immune cells; LBP thereby concentrates LPS on the cell membrane of immune cells to induce an inflammatory response (46). LBP binds pro-inflammatory components of both Gram-positive and Gram-negative bacteria (47), making it a more general marker of bacterial exposure than LPS, which derives only from Gram-negative bacteria (45). Meanwhile, other studies have shown that LBP is necessary for the inflammatory cascade triggered by saturated fatty acids and metabolic endotoxemia (48, 49). A high-fat diet, an unhealthy dietary pattern that leads to obesity, alters microbial community structure and reduces microbial diversity, resulting in an increase in pro-inflammatory microbiota and thereby increasing intestinal permeability and circulating levels of LPS. In the high-fat diet model, TLR signaling plays a key role in low-grade inflammatory pathways (4, 50); toll-like receptor 4 (TLR4) (37, 51, 52), LPS and LBP (31), and interleukin-6 (IL-6) (53-55) have also been implicated in the inflammatory mechanisms of OA. Exercise diversifies the gut microbiota and reduces the Firmicutes/Bacteroidetes ratio (56); this view was validated in our systematic review (28). At the same time, exercise produces high levels of endocannabinoids in arthritis patients, which mediate the gut microbiota to produce anti-inflammatory substances that reduce pain (57). Gender variance is one of the factors affecting the prevalence of OA: a meta-analysis reported that the global incidence and prevalence of OA in women are 1.69 and 1.39 times those in men, respectively (58).
Meanwhile, a study found that polymorphisms in growth differentiation factor-5, estrogen-specific receptor-alpha, and calmodulin-1 increase the disruption of cartilage and reduce mRNA and protein synthesis, which increases the risk of KOA in women (59). Moreover, a prevalence study on osteoporosis, hypovitaminosis D, and OA found higher rates of vitamin D insufficiency and deficiency in women than in men (60), and there is a correlation between vitamin D deficiency and OA (61).

Limitations

First, in the analysis of microbial sequencing, the analytical methods differed across studies, involving various 16S regions (V3-V5) and cut-off points for clustering OTUs, which may affect the results. Second, gut microbial community analysis by 16S rRNA sequencing was not used in all studies, which may affect the consistency of the results. Third, most of the included studies are animal studies; there are fewer extensive studies in humans, and fewer studies on the complexity of the gut microbiota and its association with OA. Finally, most studies have only observed changes in the gut microbiota and inflammatory factors, and the underlying mechanisms have not been further explored.

Conclusions

In conclusion, our systematic review provides evidence for the development of OA due to low-grade inflammation caused by intestinal flora disturbance. Further studies are needed to explore the mechanisms involved.
On correlated measurement errors in the Schwartz-Smith two-factor model

The Schwartz-Smith two-factor model is commonly used for pricing of derivatives in commodity markets. For estimating and forecasting the term structures of futures prices, the logarithm of the commodity spot price is represented as the sum of short- and long-term factors, which are the unobservable state variables. The futures prices, derived as functions of the spot price, lead to a simultaneous set of measurement equations, which is used for joint estimation of the unobservable state variables and the model parameters through a filtering procedure. We propose a modified model where the error terms in the measurement equations are assumed to be serially correlated. In addition, for comparative analysis, the modelling of the logarithmic returns of futures prices is also considered. Out-of-sample prediction performances of the two proposed models are illustrated using European Union Allowance (EUA) futures prices from January 2017 to April 2021. Historically, this period corresponds to the second half of Phase III and the beginning of Phase IV of the European Union Emission Trading System (EU-ETS).

Introduction

Stochastic processes have been commonly used for pricing of commodity derivatives for almost 50 years. The risk-neutral pricing theory for commodity derivatives was first developed in [7] and has become known as the Black-Scholes-Merton framework, where the commodity spot price is represented as a geometric Brownian motion (GBM); for further details, see [8] and [21]. The principles of the Black-Scholes-Merton framework laid the foundation for asset pricing theory. Since then, many models have been developed by considering a number of factors as stochastic processes, reflecting the specifics of the commodity market. The mean-reverting process, or the Ornstein-Uhlenbeck (O-U) process, is often used for pricing of commodity derivatives. For example, in the two-factor oil contingent claims pricing model in [15], a mean-reverting factor and a GBM were employed for modelling of the convenience yield and the correlated oil spot price, respectively. In the Schwartz-Smith two-factor model [23], the spot price of a commodity is the sum of a short-term and a long-term factor, incorporating short-term deviations and the long-term equilibrium price level, respectively. The short-term factor is assumed to tend towards zero, as it reflects short-term variations in prices from temporary changes in demand, supply, and other current market conditions, which will be corrected as the market responds over time. In addition, it is assumed that the dynamics of the long-term factor follows a Brownian motion with drift, which reflects expected permanent changes in the equilibrium price level that can be explained by advances in production technology or regulatory changes. The spot price of the commodity is then used to price futures contracts of different maturities jointly, under the risk-neutral probability measure. Studies in [5] and [12] develop models under the Schwartz-Smith framework assuming both latent factors follow an O-U process, with an additional constraint that remedies the parameter identification problem. In this article, we present a model that incorporates dependence between futures contracts with different maturities.
The novelty of our approach includes the introduction of correlations between measurement errors of different futures contracts, as well as allowing for serial correlation in each marginal measurement error. The correlations of the measurement errors, along with the other unknown model parameters, are jointly estimated with the state variables using the Kalman filter. For illustration, we use the daily prices of European Union Allowance (EUA) futures contracts from January 2017 to April 2021, which were obtained through the Macquarie University access to Refinitiv Datascope. The European Union Emission Trading System (EU-ETS) was launched in 2005 with the aim of reducing greenhouse gas emissions from a variety of sectors, such as agriculture, aviation, energy, and manufacturing industries across registered European nations. The implementation of the system obliges those sectors to surrender one unit of EUA in order to emit one tonne of CO2 or equivalent gases. The history of the EUA market is relatively short compared with other classic commodity markets such as crude oil, metals, and gas. We choose the selected period to study the recent dynamics of the EUA futures market; the selected time period covers the second half of Phase III and the beginning of Phase IV of the EU-ETS. The EU-ETS initiation and Phase II data were used in earlier studies, see [4, 25]. Also, the study by [13] used intra-phase and inter-phase futures data and accommodated the specifics of each contract type through continuous-time diffusion models with jumps.

The remaining sections are organised as follows. Section 2 reviews previous studies on modifications of the Schwartz-Smith two-factor model used for pricing of commodity derivatives, and studies on different approaches to pricing of EUA derivatives. Section 3 presents the main model, which deals with both serial correlations and inter-correlations in measurement errors of the logarithms of futures prices or their logarithmic returns. In Section 4, the results of a simulation study are summarised, where we validate our approach to estimation of the parameters and state variables in the case when both inter-correlations and serial correlations between measurement errors of different contracts are present. In Section 5, we present the results of the calibration of the two proposed models relative to the extended Schwartz-Smith model using historical daily EUA futures prices. Section 6 concludes with an overall discussion of the results of this study. For reference, the detailed setup of the Kalman filtering procedure in the Schwartz-Smith two-factor modelling framework is presented in Appendix A.

Previous work in this framework has modified the Kalman filter procedure to incorporate heteroscedasticity of prices and to estimate a time-varying risk premium. For pricing of agricultural commodities, the performance of the Schwartz-Smith two-factor model has been studied in [24], using a Fourier series as a seasonal component; an attempt at estimating covariances of measurement errors was also made there, using a parametrised function of the time to maturity, but no substantial improvement in the model was observed. The study in [2] extended the two-factor modelling framework by incorporating explanatory variables and a regression structure into the drift terms of the latent factors.
The three-factor model was studied in [14], where the authors allowed a deterministic seasonal component in the volatility of the latent factors and used a function of inverse inventory as the third state variable in the model. A step function was used in [22] as a seasonal component for the calibration of commodity spot and futures prices in a general multi-factor model, and a multi-factor model of commodity futures with stochastic seasonality has been developed in [19]. Under the same setup as in [23] and [24], instead of optimising the sample likelihood function, the study in [16] proposed a different estimation method, a so-called two-step least-squares estimation method, which involves minimising the sum of squared residuals from the state equation. For pricing of EUA futures, the non-compliance event in terms of the total normalised emission was considered, along with the level of penalty, in [9]. They used the digital nature of the terminal allowance price as the basis for modelling of the spot price process, and hence for pricing of European options on EUA futures. The study in [26] developed a bivariate model in state-space form for parameter estimation through the Kalman filter, using December-maturity futures contracts from 2005 to 2012. In a recent study [3], the term structure of EUA futures prices was evaluated and the performances of a single-factor GBM model by [1] and the original Schwartz-Smith two-factor model were compared.

Main model

In this section, we introduce the modifications for modelling of the logarithmic prices and logarithmic returns of futures prices, incorporating serial correlation and inter-correlation in measurement errors of different contracts, within the Schwartz-Smith two-factor modelling framework. The risk-neutral dynamics of the short-term and long-term factors, denoted $\chi_t$ and $\xi_t$ at time $t$, are expressed as the following stochastic differential equations:

$d\chi_t = (-\kappa \chi_t - \lambda_\chi)\,dt + \sigma_\chi\,dW_t^{\chi*}$,
$d\xi_t = (\mu_\xi - \gamma \xi_t - \lambda_\xi)\,dt + \sigma_\xi\,dW_t^{\xi*}$,

where $\kappa, \gamma > 0$ are the speeds of mean reversion for $\chi_t$ and $\xi_t$, respectively, $\sigma_\chi, \sigma_\xi > 0$ are the instantaneous volatilities of the two latent variables, and $\lambda_\chi$ and $\lambda_\xi$ are the risk-premium adjustments for $\chi_t$ and $\xi_t$ that appear after transforming the model from the real probability measure to the risk-neutral one (note that the risk-neutral process is used for deriving the futures price). $W_t^{\chi*}$ and $W_t^{\xi*}$ are correlated standard Brownian motions with $dW_t^{\chi*}\,dW_t^{\xi*} = \rho_{\chi\xi}\,dt$, where $\rho_{\chi\xi}$ is the correlation coefficient of the two stochastic processes. By setting up the pricing model as a linear state-space model, the two latent variables are expressed in the state equation, and the relationship between the state variables and futures prices is expressed in the measurement equation. We then implement the Kalman filter to estimate the values of the latent variables and the marginal likelihood function, which are used for the estimation of the model parameters. The reader is referred to [5] for a detailed setup under the assumption of measurement errors being independent for each contract.

Correlations in measurement errors

Consider the following linear state-space form for the Schwartz-Smith model:

$x_t = c + G x_{t-1} + w_t$,
$y_t = d_t + F_t x_t + v_t$,
$v_t = \phi v_{t-1} + \epsilon_t$,

for $t = 1, 2, \ldots, n$ and for $N$ contracts with different maturities $T_1 < T_2 < \cdots < T_N$, where $x_t = (\chi_t, \xi_t)'$ is the state vector and $y_t$ collects the logarithms of the $N$ futures prices. Here, $\phi$ is a diagonal matrix that consists of the autoregressive (AR) coefficients for each marginal measurement error of the different contracts, and $\Delta t$ is the time difference in years between $t-1$ and $t$. We may generalise the AR process in the measurement errors $v_t$ to order $p$ and introduce additional matrices $\phi_m$ for $m = 1, \ldots, p$ in (4).
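To make the state-space form concrete, the following Python/NumPy sketch simulates log futures prices with serially correlated, inter-correlated measurement errors. It is an independent illustration, not the authors' released MATLAB code: the deterministic term $d_t$ is set to zero, the loadings in $F$ assume the extended-model form $(e^{-\kappa\tau_j}, e^{-\gamma\tau_j})$ with fixed times to maturity, the transition matrix uses the exact O-U discretisation, and a one-parameter equicorrelation matrix stands in for $R$; all numerical values are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
n, N, dt = 1000, 5, 1 / 252                     # observations, contracts, year step
kappa, gamma = 1.5, 0.1                         # mean-reversion speeds (chi, xi)
sig_chi, sig_xi, rho = 0.3, 0.15, 0.3           # volatilities, factor correlation
phi = np.diag([0.6, 0.5, 0.5, 0.4, 0.4])        # AR(1) coefficients per contract
s = 0.02 * np.ones(N)                           # innovation standard deviations
a = 0.9                                         # assumed common-factor loading
R = (1 - a**2) * np.eye(N) + a**2 * np.ones((N, N))  # error correlation matrix
Se = np.diag(s) @ R @ np.diag(s)                # innovation covariance of eps_t

# Exact O-U discretisation for the transition matrix G and state noise W.
G = np.diag([np.exp(-kappa * dt), np.exp(-gamma * dt)])
q11 = sig_chi**2 * (1 - np.exp(-2 * kappa * dt)) / (2 * kappa)
q22 = sig_xi**2 * (1 - np.exp(-2 * gamma * dt)) / (2 * gamma)
q12 = rho * sig_chi * sig_xi * (1 - np.exp(-(kappa + gamma) * dt)) / (kappa + gamma)
W = np.array([[q11, q12], [q12, q22]])

tau = np.arange(1, N + 1) / 12                  # times to maturity in years
F = np.column_stack([np.exp(-kappa * tau), np.exp(-gamma * tau)])

x, v = np.zeros(2), np.zeros(N)
Y = np.empty((n, N))
Lw, Le = np.linalg.cholesky(W), np.linalg.cholesky(Se)
for t in range(n):
    x = G @ x + Lw @ rng.standard_normal(2)     # state equation (c = 0 here)
    v = phi @ v + Le @ rng.standard_normal(N)   # serially correlated errors
    Y[t] = F @ x + v                            # log futures prices (d_t = 0)
```

The equicorrelation construction keeps `R` positive definite for any loading |a| < 1, which is what makes the Cholesky factorisation of the innovation covariance well defined.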
For the state and measurement errors, denoted $w_t$ and $v_t$, we assume that they are independent of each other, with $w_t \sim N(0, W)$ and the AR innovations $\epsilon_t \sim N(0, \Sigma_\epsilon)$. We denote by $s_{jj}^2$ the variances of the measurement errors for contract $j$, and by $s_{jk}$ the covariances of the measurement errors between contracts $j$ and $k$, where $j, k = 1, \ldots, N$ and $j \neq k$. For estimation of the covariances of the measurement errors, we follow the estimation method in [18], where we estimate the correlation coefficients of the measurement errors between different contracts and convert them back to covariances. We use the estimation approach introduced in [17] for the correlation coefficients, which is often used in credit risk modelling. Let $z_j$ be the normalised prices of contract $j$, so that the vector of prices consists of $(z_1, \ldots, z_N)$ for the $N$ contracts. Each $z_j$ is assumed to load on a common factor, so that the correlation matrix $R$ has off-diagonal entries determined by the factor loadings. In this setup, the covariance matrix of the measurement errors takes the form $V = DRD$, where $D$ is a diagonal matrix that consists of the volatilities of the measurement errors; the estimation therefore involves estimating both $D$ and $R$. The modified covariance matrix $V$ is then applied in the Kalman filter and in the estimation procedure.

Modelling of the logarithmic returns

In this section, we develop the measurement equations for the logarithmic returns on futures prices. Since the logarithmic returns are differences of the logarithmic prices at times $t$ and $t-1$, we set up a linear state-space model in the following way. For $t = 1, 2, \ldots, n-1$, the state and measurement equations are written with the constant vectors, transition matrices, and measurement errors from the original model setup shown in (5) and (7), with $I$ being the identity matrix. Note that $V_r$ is the covariance matrix of the measurement errors of the logarithmic returns, instead of the logarithmic prices. If $v_t^r$ follows an AR process, then we can also set up our measurement errors as in Section 3.1. We can proceed with the standard Kalman filter and maximise the likelihood function using the new notation accordingly.

Parameter estimation

The unknown parameter set $\psi = (\kappa, \sigma_\chi, \lambda_\chi, \gamma, \mu_\xi, \sigma_\xi, \lambda_\xi, \rho_{\chi\xi}, V, \phi)$ is estimated by optimising the log-likelihood function of $y$. The joint density of $(y_1, y_2, \ldots, y_n)$ in (14) can be re-expressed as a product of one-step-ahead conditional densities, and the resulting log-likelihood (15) is maximised with respect to $\psi$ jointly. To obtain the log-likelihood, the latent state vectors and their covariance matrices at each time $t$ need to be estimated through the Kalman filter. [5] detected a parameter identification problem within the log-likelihood function in the Kalman filter, and hence the constraint $\kappa \geq \gamma$ is imposed in the optimisation procedure.

Simulation study

In this section, we perform a simulation study to validate the new approach described in Section 3.1. We focus on validating the serial correlation and inter-correlation assumptions on the measurement errors under the Schwartz-Smith model framework, to see how our approach performs at estimating the parameters and state variables. The steps of this simulation study are as follows. (1) Choose true parameter values; then obtain the state and measurement variables $x_t$ and $y_t$ by simulating the error terms $w_t$ and $v_t$, where $v_t$ is assumed to follow an AR(1) process. (2) Choose appropriate initial values, and determine appropriate feasible bounds for each parameter. (3) Conduct the optimisation procedure through the Kalman filter, which involves jointly estimating the state variables and the parameter estimates. Obtain the parameter estimates and the estimated values of the state variables.
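Step (3) evaluates the Gaussian log-likelihood through the Kalman filter. A minimal sketch follows, compatible with the simulation above; it absorbs the AR(1) measurement errors by state augmentation, so the augmented model has no separate measurement noise. The diffuse identity prior and time-invariant $F$ are simplifying assumptions, and in practice this function would be wrapped by a numerical optimiser (e.g. scipy.optimize.minimize) to maximise over $\psi$ subject to $\kappa \geq \gamma$.

```python
import numpy as np

def kf_loglik(Y, G, W, F, phi, Se):
    """Log-likelihood of the state-space model with AR(1) measurement
    errors: the error vector v_t is appended to the state, so the
    augmented measurement y_t = [F I] (x_t, v_t)' is noise-free."""
    n, N = Y.shape
    k = G.shape[0]
    Ga = np.block([[G, np.zeros((k, N))], [np.zeros((N, k)), phi]])
    Wa = np.block([[W, np.zeros((k, N))], [np.zeros((N, k)), Se]])
    Fa = np.hstack([F, np.eye(N)])
    m = np.zeros(k + N)
    P = np.eye(k + N)                      # crude diffuse prior
    ll = 0.0
    for t in range(n):
        m = Ga @ m                         # prediction step
        P = Ga @ P @ Ga.T + Wa
        e = Y[t] - Fa @ m                  # innovation
        S = Fa @ P @ Fa.T                  # innovation covariance
        Sinv = np.linalg.inv(S)
        ll += -0.5 * (N * np.log(2 * np.pi)
                      + np.linalg.slogdet(S)[1] + e @ Sinv @ e)
        K = P @ Fa.T @ Sinv                # Kalman gain; update step
        m = m + K @ e
        P = P - K @ Fa @ P
    return ll
```

Running `kf_loglik(Y, G, W, F, phi, Se)` on the simulated `Y` recovers the likelihood surface whose maximiser step (3) searches for.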
We simulate for n = 200, 500, 1,000, 2,000, 5,000, with five futures contracts maturing in 1, 2, 3, 4, and 5 months. The parameter estimates are presented in Tables 1-3, with standard errors computed using a Monte Carlo approach to assess the accuracy of the parameter estimates. The MATLAB code for the simulation study is available at https://github.com/Junee1992/EUA_Futures_Pricing/tree/main/Serial-Correlation. Overall, the parameter estimates are quite close to their true values, except for κ, λ_χ, and λ_ξ; the estimates of κ, γ, and ρ_χξ tend to fluctuate for n ≤ 1,000 but stabilise as the sample size increases. The estimated AR coefficients, volatilities, and correlation coefficients are close to the true values, indicating that the model is able to capture dependencies between measurement errors of contracts with different maturities as well as their serial correlations. The estimation errors of the first eight parameters are summarised in Figure 1. The simulated and estimated state variables are shown in Figure 2, along with the mean absolute errors (MAE) calculated for the two latent variables in each panel, showing that estimation of the state variables improves as n increases.

Figure 3 illustrates the historical daily futures price dynamics through the trading phases of the EU-ETS (Phases I-IV). The graph clearly illustrates the necessity of developing sufficiently versatile models to accommodate the intricacies of EUA futures price dynamics across the different phases, including Phase IV. In addition, Figure 4 shows the term structure of EUA futures prices maturing in December each year. A main difference between commonly traded commodities and EUA is that EUA futures curves tend to increase smoothly as the maturity of the futures contracts increases. In the two works [23] and [5], measurement errors are assumed to be independent. However, based on the belief that the price movement of each contract must be correlated with the other contracts available during the same period within the same commodity market, we assume that the measurement errors have a full covariance matrix. After investigating the statistics of the measurement errors for EUA futures prices, we found that the measurement errors of contracts with different maturities are highly correlated with each other, with the entries of the Model 1 measurement-error correlation matrix close to 1. We also observe that each series of measurement errors follows an AR(1) process, with all AR coefficients highly significant (p-value < 0.0001). Hence, the price data of EUA futures contracts are suitable for testing the model developed in this study.

For comparative analysis, we use two different models that estimate the inter-correlation between measurement errors in different settings: Model 1, modelling of logarithmic returns; and Model 2, modelling of logarithmic returns with serially correlated measurement errors.

Table 3: Estimates of standard deviations and factors of correlation coefficients of the measurement errors.

For goodness-of-fit assessment, we present the performance of each model using the root mean-squared error (RMSE). The results are summarised in Table 4 for futures contracts C_j, j = 1, 2, …, 7. In both models, the logarithmic returns are converted back to the logarithms of prices. In terms of RMSE alone, Model 1 performs better than Model 2; however, by the model selection criteria (Akaike and Bayesian information criteria), Model 2 is preferable to Model 1.
Next, we provide the results of out-of-sample predictions using 30- and 50-day out-of-sample windows in Tables 5 and 6. In each setting, we considered four different scenarios, where we repeat parameter estimation and out-of-sample prediction every 1 day, 5 days, 10 days, and 30 or 50 days. We use the deseasonalised price data from January 30, 2017, to February 18, 2021, n = 1,040 business days. From Table 5, we found that, in general, Model 2 performed better than Model 1, although the differences in RMSE were minimal. We used the Diebold-Mariano test to detect significant differences in out-of-sample prediction accuracy. In Table 6, we found that Model 1 performed better for out-of-sample predictions at horizons h = 1 and 10 days, whereas Model 2 obtained the lower RMSE for h = 5 and 50. The Diebold-Mariano test again showed no significant difference between the two models in the 50-day out-of-sample predictions. This pattern persisted in other subsets of the data from Phases III and IV.

Conclusion

In this article, we have presented a modified Schwartz-Smith two-factor model, which can be used for modelling of logarithmic returns of futures contracts, incorporating serial correlation and inter-correlation of the measurement errors. We compared two different models that use the logarithms of futures prices and the logarithmic returns, applying them to EUA futures price data from Phase III and Phase IV of the EU-ETS. The simulation study illustrated that our approach was able to jointly estimate both parameters and state variables when serial correlations and inter-correlations were present in the measurement errors; the parameter and state-variable estimates converged as the sample size increased. The maximum likelihood method was used for the estimation of the coefficients of the Gaussian AR processes used for modelling of serial correlations in the measurement errors. Finally, we discussed the results of the comparative analysis of the two models in the context of EUA futures data. The goodness-of-fit and out-of-sample performances in predicting futures prices were discussed for each model. Overall, the models for logarithmic returns showed good performance in terms of RMSE for out-of-sample predictions. The full Model 2, for logarithmic returns with serial correlations, performed better than its reduced-form counterpart Model 1 for calibration of the data in terms of AIC and BIC, and for prediction over the 30-day out-of-sample window. The empirical results of the study emphasise the necessity of considering both serial correlations and inter-correlations of measurement errors for modelling of futures prices. For parameter estimation in Model 2, we used a two-step approach: first, we obtained the parameter estimates as in Model 1; then we proceeded with estimating the parameters of the AR processes used for modelling of the serial correlations. Using real data and incorporating serial correlations in the measurement errors, we showed that the proposed models for logarithmic returns capture the price movements more reliably. Once we obtain the log-likelihood function, we maximise it to obtain the relevant parameter estimates. Depending on the model assumptions, the elements in c, G, W, d_t, F_t, and V will differ.
Multi-target DoA Estimation with an Audio-visual Fusion Mechanism

Most of the prior studies in the spatial Direction of Arrival (DoA) domain focus on a single modality. However, humans use auditory and visual senses to detect the presence of sound sources. With this motivation, we propose to use neural networks with audio and visual signals for multi-speaker localization. The use of heterogeneous sensors can provide complementary information to overcome uni-modal challenges, such as noise, reverberation, illumination variations, and occlusions. We attempt to address these issues by introducing an adaptive weighting mechanism for audio-visual fusion. We also propose a novel video simulation method that generates visual features from noisy target 3D annotations that are synchronized with acoustic features. Experimental results confirm that audio-visual fusion consistently improves the performance of speaker DoA estimation, while the adaptive weighting mechanism shows clear benefits.

INTRODUCTION

In human-robot interaction, a robot relies on its Sound Source Localization (SSL) mechanism to direct its attention. Traditionally, SSL approaches only use audio signals and treat localization as a signal processing problem [1, 2, 3]. However, those approaches are adversely affected by acoustically challenging conditions, such as noisy and reverberant scenarios [4]. To address that, several Neural Network (NN)-based approaches were explored [5, 6, 4, 7], assuming a sufficient amount of data is available. Specifically, location-related Short-Time Fourier Transform (STFT) cues are mapped to sound DoA information in [5, 6], while Generalized Cross-Correlation with Phase Transform (GCC-PHAT) cues are used in [4, 7]. Despite the progress, many research problems remain. One of them is multi-speaker localization in real multi-party human-robot interaction scenarios under acoustically challenging conditions [4]. Considering that seeing and hearing are the two most essential human cognitive abilities, studies have observed that audio and video convey complementary information and may help to overcome uni-modal limitations under degraded conditions for scene analysis [8, 9, 10]. There is a very broad literature on audio-visual approaches for speaker localization over the past decades [11, 12, 13]. It was not until recently that deep learning-based approaches have attracted more attention, thanks to increasing computational power and rapid developments in NN techniques. Nevertheless, most of these methods aim at locating sound sources in visual scenes [14, 15, 16, 17]. Specifically, an attention mechanism is incorporated into the individual sound and vision networks to model the audio-visual image correspondence [14]. A visual saliency network is employed in [15], together with an audio representation network, to form an SSL module for producing an audio-visual saliency map. An attention network is proposed in [16] to learn the visual regions of a sounding event. By fusing audio and visual features using an LSTM and bilinear pooling, audio-assisted visual feature extraction is described in [17]. All of these studies use audio as a supplementary modality for visual localization and require the sound sources to be both audible and visible. Unlike the prior studies, we aim to perform audio-visual speaker localization in the spatial DoA domain, where targets can appear either inside (visible) or outside (invisible) the camera's Field-of-View (FoV).
We propose two neural network architectures and make the following contributions in this paper: (1) we propose a novel video simulation method to deal with the lack of video data; (2) for the first time, we design a deep learning network for audio-visual multi-speaker DoA estimation; and (3) we adopt an adaptive weighting mechanism in a simple feedforward network to estimate multi-modal reliability under different conditions.

PROPOSED METHOD

Given a sequence of frame-synchronized audio and video signals captured by a microphone array and a calibrated camera, we aim to estimate the DoA information θ ∈ [−180°, 180°) for each sound source at each frame. Next, we describe the way we characterize audio and video signals, the video simulation method, and the proposed neural networks.

Audio features

The GCC-PHAT is widely used to calculate the time difference of arrival (TDOA) between any two microphones in a microphone array [4, 7]. We adopt it as the audio feature [1] due to its robustness in noisy and reverberant environments [18] and its fewer tunable parameters compared with other counterparts, e.g. the STFT [5]. Let S_l and S_p be the Fourier transforms of the audio sequences at the l-th and p-th channels of the microphone array, respectively. We compute the GCC-PHAT features with different delay lags τ as

g_{lp}(τ) = Σ_k R[ (S_l(k) S_p*(k)) / |S_l(k) S_p*(k)| · e^{j 2πkτ/N} ],   (1)

where * denotes the complex conjugate operation, R denotes the real part of a complex number, and N denotes the FFT length. Here, the delay lag τ between the two arriving signals is reflected in the steering vector e^{j 2πkτ/N} in Eq. 1.

Visual features and simulation

With the advent of deep learning, accurate face detection at low computational cost has become widely available [19]. Let us define the output of the face detector as a bounding box b = (u, v, w, h), with central point (u, v) and width and height (w, h) in the image. The visual feature is encoded as the exponential part of a Gaussian distribution in each of the u and v directions, with the standard deviations specified by the detection width and height, achieving its maximum at the central point:

f_u(x) = exp(−(x − u)² / (2w²)),   (2)
f_v(x) = exp(−(x − v)² / (2h²)),   (3)

where x indicates the potential image positions along the corresponding axis.

Audio-visual parallel data are not abundantly available. However, it is possible to obtain the camera's extrinsic and intrinsic calibration parameters ζ_e and ζ_i and the 3D location p = (x, y, z) of a sound source. We propose a novel method to synthesize visual features in synchrony with the audio features via Eq. 3. The overall pipeline of visual feature generation is illustrated in Fig. 2(a), and the process is formulated next. We first add tri-variate Gaussian-distributed spatial noise to the target 3D location p to account for possible face detection error, and transfer the resulting point to the camera coordinates given the extrinsic parameters:

p̃ = Φ(p + ε; ζ_e),   (4)

with noise covariance matrix Σ_p = diag(σ_x², σ_y², σ_z²), assuming that the additive noises on (x, y, z) are independent, where Φ is the transformation using the pin-hole camera model [20]. Then, we geometrically create the 3D face bounding box whose plane is perpendicular to the camera's optical axis (z_c in Fig. 2(b)) and project it to the image plane:

χ = Ψ(p̃ + v; ζ_i),   (5)

where Ψ is the 3D-to-image projection and v is the translation vector, which equals (−W/2, −H/2, 0) for the top-left point χ_tl and (W/2, H/2, 0) for the bottom-right point χ_br, respectively. W and H are the assumed width and height of a real human face. Finally, the simulated face detection bounding box b is computed as b = cat(χ_tl, χ_br − χ_tl), where cat denotes a concatenation operation forming a column vector.
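The GCC-PHAT feature of Eq. 1 can be computed efficiently in the frequency domain via the inverse FFT. Below is a minimal NumPy sketch, not the authors' implementation; the small constant added to the magnitude for numerical stability is an assumption, and the inverse FFT carries the 1/N scaling implicitly.

```python
import numpy as np

def gcc_phat(x_l, x_p, n_lags=25, nfft=None):
    """GCC-PHAT between two microphone channels: the cross-spectrum is
    whitened by its magnitude (PHAT weighting) and evaluated at integer
    delay lags via the inverse FFT."""
    nfft = nfft or int(2 ** np.ceil(np.log2(len(x_l) + len(x_p))))
    Sl = np.fft.rfft(x_l, nfft)
    Sp = np.fft.rfft(x_p, nfft)
    cross = Sl * np.conj(Sp)
    cross /= np.abs(cross) + 1e-12        # PHAT normalisation
    cc = np.fft.irfft(cross, nfft)        # real part of the inverse transform
    # gather lags tau in [-n_lags, n_lags]; negative lags wrap to the end
    return np.concatenate([cc[-n_lags:], cc[:n_lags + 1]])

# 51 coefficients per microphone pair for tau in [-25, 25], as in the paper
feat = gcc_phat(np.random.randn(8192), np.random.randn(8192))
assert feat.shape == (51,)
```

Stacking this 51-dimensional vector over the 6 microphone pairs of a 4-channel array yields the 306-dimensional audio feature used later in the experiments.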
Neural network architecture

We propose two NN architectures for audio-visual speaker DoA estimation based on the Multilayer Perceptron (MLP), namely MLP Audio-Visual Concatenation (MLP-AVC) and MLP Audio-Visual Adaptive Weighting (MLP-AVAW), which specify different ways of audio-visual feature fusion and classifier design, as illustrated in Fig. 3.

MLP-AVC consists of three hidden layers, denoted as MLP3 in Fig. 3(a) by a dotted blue box; each is a fully-connected layer with ReLU activation [21] and batch normalization [22]. It takes the flattened and concatenated GCC-PHAT and visual features as an input vector. The network is trained to predict the probability of DoA labels, as in [4], using a sigmoid output layer. MLP-AVC adopts an early fusion strategy by concatenating audio and visual features. We hypothesize that such early fusion does not learn to pay selective attention to uni-modal features, which is crucial in the face of missing or noisy data.

MLP-AVAW introduces an adaptive weighting mechanism, which uses a tiny NN with two fully-connected layers (colored in purple in Fig. 3(b)) to learn three adaptive weights for the audio GCC-PHAT feature and the video image horizontal and vertical features, respectively. A softmax activation function is applied for weight normalization. We call this the 'adaptive weighting' mechanism because the weights are adapted according to the live input during inference. Finally, the weighted multi-modal features are concatenated for MLP3 to compute the DoA.

Dataset and performance metrics

The existing audio-visual datasets, such as AV16.3 [23], CAV3D [12], and AVASM [24], are either of limited size or do not provide the spatial ground truth. We therefore simulate synchronized visual features for an SSL dataset of loudspeaker recordings. We choose the recently released SSLR dataset [4], which is recorded in a physical setup with one or two concurrent speakers and provides adequate target 3D annotations. It consists of 4-channel audio recordings at a 48 kHz sampling rate, organized into three subsets, namely train (loudspeaker), test-human, and test-loudspeaker. We evaluate the performance of the DoA estimates using the same metrics as [4], i.e. Mean Absolute Error (MAE) and Accuracy (ACC), where MAE is defined as the mean absolute error between the actual and the estimated DoA, while ACC allows a tolerance of 5° in the classification prediction.

For the test-human subset, we apply the RetinaFace detector [19] to obtain the face bounding boxes. For the train and test-loudspeaker subsets, the visual features are simulated with the method proposed in Sec. 2.2, with noise covariance matrix Σ_p = diag(0.2, 0.2, 0.2). Fig. 4(a) illustrates the ground truth camera (magenta) and target 3D locations for the train (blue), test-loudspeaker (green), and test-human (red) subsets over all frames. Targets in the gray region are inside the camera's FoV and therefore visible to the camera. We only generate face bounding boxes for visible targets, as visualized in Fig. 4(b-c) and formulated in Eq. 4-5 with the simulated bounding box b. Fig. 4 shows that the face bounding boxes spread well across the FoV with a balanced distribution. We do not generate bounding boxes for speakers that are outside the FoV. As a result, the visual features for invisible speakers become missing data (the Gaussian encoding of Eq. 2-3 is absent) in the audio-visual dataset. The statistics of the simulated visual features are summarized in Tab. 1.
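Before turning to the detailed statistics in Tab. 1, the adaptive weighting mechanism of MLP-AVAW described above can be sketched in PyTorch as follows. This is an independent illustration, not the authors' code: the 64-unit weight network, the 512-unit hidden layers, the layer ordering, and the 360-class DoA output grid are all assumptions; only the input dimensions (306 audio coefficients, 51 per visual direction), the softmax-normalized three-way weighting, the three-hidden-layer MLP3, and the sigmoid output follow the description in the text.

```python
import torch
import torch.nn as nn

class MLPAVAW(nn.Module):
    """Sketch of MLP-AVAW: a tiny network predicts three softmax weights
    that scale the audio feature and the two visual feature maps before
    concatenation and the MLP3 classifier."""
    def __init__(self, d_audio=306, d_vis=51, hidden=512, n_doa=360):
        super().__init__()
        d_in = d_audio + 2 * d_vis
        self.weight_net = nn.Sequential(          # two FC layers + softmax
            nn.Linear(d_in, 64), nn.ReLU(),
            nn.Linear(64, 3), nn.Softmax(dim=-1),
        )
        self.mlp3 = nn.Sequential(                 # three hidden FC layers
            nn.Linear(d_in, hidden), nn.ReLU(), nn.BatchNorm1d(hidden),
            nn.Linear(hidden, hidden), nn.ReLU(), nn.BatchNorm1d(hidden),
            nn.Linear(hidden, hidden), nn.ReLU(), nn.BatchNorm1d(hidden),
            nn.Linear(hidden, n_doa), nn.Sigmoid(),  # per-DoA probabilities
        )

    def forward(self, audio, vis_u, vis_v):
        w = self.weight_net(torch.cat([audio, vis_u, vis_v], dim=-1))
        fused = torch.cat([w[:, 0:1] * audio,
                           w[:, 1:2] * vis_u,
                           w[:, 2:3] * vis_v], dim=-1)
        return self.mlp3(fused)

model = MLPAVAW()
out = model(torch.randn(4, 306), torch.randn(4, 51), torch.randn(4, 51))
```

Trained with an MSE loss against multi-hot DoA targets, as stated in the parameter settings below, the sigmoid output allows several directions to be active at once, which is what makes the multi-speaker case tractable.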
In Tab. 1, DR represents the percentage of video frames having targets inside the FoV; a low DR means a high percentage of missing visual features. We also report in Tab. 1 the DoA MAE and ACC of the simulated visual features alone, indicating that the simulated data is difficult enough to represent real scenarios.

Parameter settings

The GCC-PHAT is computed for every 170 ms segment with delay lags τ ∈ [−25, 25], resulting in 51 coefficients for each microphone pair, as in [4]. With 6 microphone pairs, each contributing 51 GCC-PHAT coefficients, we obtain 306 GCC-PHAT coefficients. For the visual features, the human face width and height are assumed to be W = 0.14 m and H = 0.18 m, respectively, as in [12]. We adjust the size of the horizontal and vertical visual feature encodings to 51 to match that of the GCC-PHAT coefficients. We use the Adam optimizer [25]. All models are trained for 10 epochs with a batch size of 256 samples and a learning rate of 0.001. Since multi-speaker localization is not a single-label classification problem, we use the Mean Square Error (MSE) instead of cross-entropy as the loss function.

Results

Tab. 2 provides the experimental results on the SSLR test set. Results are reported separately for the different subsets and speaker numbers (assumed to be known). The best result in each column is in bold font. We compare the results of MLP-AVC and MLP-AVAW with two audio baseline methods: the traditional Steered Response Power PHAse Transform (SRP-PHAT) method [2] and the state-of-the-art MLP-GCC method [4]. As speakers are not always visible, we do not provide a video-only baseline to avoid an unfair comparison. Furthermore, Tab. 1 suggests that it is challenging to expect visual features alone to outperform audio DoA estimation.

Tab. 2 shows that both proposed networks benefit from the early fusion of audio-visual features. In particular, MLP-AVC reduces the MAE from 4.63° (MLP-GCC) to 4.42°, which confirms the benefit of audio-visual fusion. For the test-human subset, speakers are mostly inside the camera's FoV (the red points are located in the gray region in Fig. 4(a)), and the DR of the RetinaFace detector [19] reaches 100%, which is much higher than the DR in test-loudspeaker (9.2%). Thus, the MAE improvement in test-human (from 4.75° to 1.84° and from 5.98° to 3.89°) is more pronounced than in test-loudspeaker (from 4.06° to 3.87° and from 8.10° to 7.80°). Further improvements are introduced by the adaptive weighting mechanism in MLP-AVAW, which achieves the best results in most cases, with an overall MAE of 4.22° and ACC of 92.0%.

Next, we further evaluate the noise robustness of the proposed networks. For audio, we apply additive white Gaussian noise with SNRs varying from −10 dB to 20 dB to the original SSLR audio signals. For video, we randomly swap up to 70% of the face detections with those of other frames to generate false positives and false negatives. Tab. 3 lists the overall MAE and ACC of MLP-AVAW in comparison with those under the clean audio condition. We also provide the MLP-GCC results in the first two columns, indicating the audio-only performance without face detection swapping. From the results, we can see that fusing visual features always brings benefits. Additionally, audio is more important than video: as the SNR degrades, both MAE and ACC worsen markedly, whereas as the Face Detection Swap Percentage (FDSP) increases, the performance degradation is noticeable but less pronounced. Even at FDSP = 70%, the proposed network still outperforms MLP-GCC.
CONCLUSIONS This paper presented two neural network architectures for multi-speaker DoA estimation using audio-visual signals. The comprehensive evaluation results confirm the benefits of audio-visual fusion and of the adaptive weighting mechanism. Besides, we proposed a technique to synthesize visual features from geometric information about the sound sources to deal with the lack of annotated audio-visual data. Future work will include exploring network models that can generalize with limited training data.
2021-05-14T01:15:58.278Z
2021-05-13T00:00:00.000
{ "year": 2021, "sha1": "30ee22878e01b04340489843497fe44a4c531fcc", "oa_license": null, "oa_url": "http://arxiv.org/pdf/2105.06107", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "30ee22878e01b04340489843497fe44a4c531fcc", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science", "Engineering" ] }
196685859
pes2o/s2orc
v3-fos-license
131I-labeled polyethylenimine-entrapped gold nanoparticles for targeted tumor SPECT/CT imaging and radionuclide therapy Purpose: Polyethylenimine (PEI) has been widely used as a versatile template to develop multifunctional nanosystems for disease diagnosis and treatment. In this study, we manufactured iodine-131 (131I)-labeled PEI-entrapped gold nanoparticles (Au PENPs) as a novel nanoprobe for single-photon emission computed tomography/computed tomography (SPECT/CT) imaging and radionuclide therapy. Materials and methods: PEI was PEGylated and sequentially conjugated with Buthus martensii Karsch chlorotoxin (BmK CT, a tumor-specific ligand which can selectively bind to MMP2), 3-(4′-hydroxyphenyl)propionic acid-OSu (HPAO), and fluorescein isothiocyanate to form the multifunctional PEI template for entrapment of Au NPs. Then, the PEI surface was radiolabeled with 131I via HPAO to produce the novel nanoprobe (BmK CT-Au PENPs-131I). Results: The synthesized multifunctional Au PENPs before and after 131I radiolabeling were well-characterized as follows: structure, X-ray attenuation coefficient, colloid stability, cytocompatibility, and radiochemical stability in vitro. Furthermore, BmK CT-Au PENPs-131I were suitable for targeted SPECT/CT imaging and radionuclide therapy of tumor cells in vitro and in a xenograft tumor model in vivo. Conclusion: The developed multifunctional Au PENPs are a promising theranostic platform for targeted imaging and treatment of different MMP2-overexpressing tumors. Introduction Nanomedicine holds great promise for diagnosis and treatment of various diseases, particularly cancer. 1,2 Glioma is the most common intracranial tumor and has the highest mortality rate. 3 Due to the invasive nature of glioma cells, difficulties in accurate delineation of tumor margin and unsatisfactory treatment result in increased mortality. 4,5 For high grade gliomas, the 5-year survival rate is less than 5%. 6 Thus, it is urgent to develop novel diagnostic and therapeutic options. Rapid development of nanomedicines has conferred the advantages of different imaging modalities and therapy techniques against this malignant disease. 7 Nuclear medicine is a powerful technology that uses radionuclides for diagnosis and treatment of many diseases. [8][9][10] Single-photon emission computed tomography (SPECT), one of the most important radionuclide-based imaging techniques, has shown great value in tumor imaging. [11][12][13][14] Meanwhile, a number of therapeutic radionuclides have been widely used for tumor treatment, including but not limited to iodine-131 ( 131 I), rhenium-188 ( 188 Re), yttrium-90 ( 90 Y), lutetium-177 ( 177 Lu), and radium-223 ( 223 Ra). [15][16][17][18][19][20][21] Among these therapeutic radionuclides, 131 I has been routinely used in radionuclide therapy and imaging of thyroid diseases, such as thyroid cancer, because of its high affinity for the thyroid and relatively long half-life (8.01 days). Beta minus decay provides therapeutic effects, while gamma emissions are used for SPECT imaging. 17,22,23 Therefore, 131 I-labeled molecular probes have been developed for theranostic applications in treatment of various kinds of cancer. [24][25][26][27] Several studies have suggested that 131 I-labeled gliomatargeting ligands such as chlorotoxin, and chlorotoxin-like peptides such as Buthus martensii Karsch chlorotoxin (BmK CT), are potential candidates for targeted SPECT imaging and radionuclide therapy of glioma. 
[28][29][30][31] To overcome the main obstacle of blood-brain barrier (BBB), some interventional therapy strategies have been attempted, which have greatly promoted the development of glioma treatment. 32 Computed tomography (CT) is a powerful, noninvasive diagnostic technique that frequently requires additional CT contrast agents for high resolution imaging to allow for more accurate diagnoses. However, commonly-used iodine-based CT contrast agents have short half-lives and poor specificity. 28,33 Recent studies evaluating gold nanoparticles (Au NPs) have shown that various Au-based CT contrast agents are emerging due to high atomic number, tunable surface chemical modification chemistry, and biocompatibility after appropriate surface modifications. [34][35][36][37] Polyethylenimine (PEI) has the advantages of high-density amines and good water solubility, and it has been widely used as a template to produce multifunctional CT imaging agents. [38][39][40] PEI-entrapped Au NPs (Au PENPs) can be easily PEGylated and functionalized with targeting molecules, resulting in prolonged blood circulation time, low toxicity, and designed targeting ability for imaging applications. In addition, PEI has been identified as an excellent vehicle to encapsulate drugs or genes for treatment of different cancers, suggesting that PEI is an excellent template for development of theranostic nanosystems. [41][42][43] Our previous work has demonstrated that PEGylated PEI was able to load Au NPs and doxorubicin for tumor-targeted CT imaging and chemotherapy. 44 Furthermore, these Au NPs could be readily labeled with radionuclides for nuclear medicine applications. For instance, we have shown that PEI could be utilized to entrap Au NPs, then labeled with 99m Tc for SPECT/CT imaging of tumors. 45,46 However, few studies have evaluated the use of PEI as a vehicle to load therapeutic radionuclides for tumor treatment. The previous successes and properties of 131 I suggest that PEI may be further utilized as a versatile platform to develop multifunctional nanoprobes for tumor theranostic applications. In this work, we reported the development of 131 I-labeled Au PENPs modified with the glioma-targeting peptide BmK CT for targeted SPECT/CT imaging and radionuclide therapy of glioma. First, PEI was sequentially modified with BmK CT via a PEG linker. Carboxyl-terminated methoxy PEG (mPEG-COOH), 3-(4′-hydroxyphenyl)propionic acid-OSu (HPAO), and fluorescein isothiocyanate (FI) were used to form the multifunctional PEI template. The template was used to entrap Au NPs via sodium borohydride reduction chemistry. Then, the remaining terminal amines were acetylated by acetic oxide (Ac 2 O), and the product was radiolabeled with 131 I via HPAO, resulting in {(Au 0 ) 200 -PEI.NHAc-mPEG-(PEG-BmK CT)-131 I-HPAO-FI} PENPs (BmK CT-Au PENPs-131 I). The multifunctional Au PENPs before and after 131 I labeling were wellcharacterized, including structure, X-ray attenuation coefficient, colloidal stability under different pH and temperature conditions, cytocompatibility at an Au concentration up to 200 μM, and radiochemical stability in vitro. Furthermore, the prepared BmK CT-Au PENPs-131 I could be utilized for targeted SPECT/CT imaging and radionuclide therapy of glioma cells in vitro and in a xenograft tumor model in vivo. The developed multifunctional Au PENPs may provide a promising theranostic platform for targeted imaging and radionuclide therapy of glioma. 
Synthesis of BmK CT-Au PENPs-131 I BmK CT-modified PEI was synthesized according to our previous work. 29 Briefly, mPEG-COOH (300 mg) dissolved in DMSO was activated by EDC (175.2 mg), then added dropwise into a DMSO solution containing PEI.NH 2 (100.0 mg) with vigorous stirring for 3 days at room temperature to obtain PEI.NH 2 -mPEG. MAL-PEG-SVA (300.0 mg) dissolved in DMSO was mixed with the reaction solution for another 3 days to form PEI.NH 2 -mPEG-(PEG-MAL). BmK CT (114.8 mg) was reacted with the MAL groups of PEI overnight to produce PEI.NH 2 -mPEG-(PEG-BmK CT). Unreacted MAL groups on the PEI surface were blocked using excess 1-butanethiol to prevent reaction with HPAO in the following step. After that, HPAO (31.6 mg) and FI (7.8 mg) were sequentially added to the reaction mixture with stirring overnight to obtain PEI.NH 2 -mPEG-(PEG-BmK CT)-HPAO-FI. Functionalized PEI was used as a template for entrapment of Au NPs using sodium borohydride reduction chemistry with a PEI/Au salt molar ratio of 1:200. Briefly, an aqueous HAuCl 4 solution (0.01 M, 80 mL) was mixed with the PEI.NH 2 -mPEG-(PEG-BmK CT)-HPAO-FI (768.5 mg, 20 mL) solution by stirring for 0.5 hours. Then, cold NaBH 4 solution (10.0 mg/mL, 9.1 mL) was added rapidly and the mixture was stirred for 2 hours to form {(Au 0 ) 200 -PEI.NH 2 -mPEG-(PEG-BmK CT)-HPAO-FI} NPs. After acetylation of the remaining NH 2 groups on the surface of PEI by reacting with TEA (1,957.1 μL) and Ac 2 O (1,107.6 μL) for 24 hours, the mixture was purified using a dialysis membrane (MWCO = 14,000) against PBS (three times, 2 L) and water (six times, 2 L) over 3 days to remove excess reactants and byproducts. The final {(Au 0 ) 200 -PEI.NHAc-mPEG-(PEG-BmK CT)-HPAO-FI} NPs (BmK CT-Au PENPs) were obtained by lyophilization. For comparison, Au PENPs without BmK CT modification were also prepared under similar conditions. The intermediate products were collected, purified, and characterized to calculate the average number of conjugated moieties (HPAO, BmK CT, mPEG, and FI) per PEI. Finally, 131 I radiolabeling of BmK CT-Au PENPs was achieved using the chloramine-T method. Briefly, a PBS solution of BmK CT-Au PENPs (200 μg, 200 μL) was mixed with chloramine-T (200 μg) and Na 131 I solution (20 mCi, 200 μL). After incubation for 30 minutes at 37°C under continuous stirring, the reaction mixture was eluted through PD-10 desalting columns with PBS as the mobile phase, and 1 mL of liquid was collected in each tube. After ten tubes were collected, the radioactivity of each tube was measured. The radiochemical yield was calculated as (A0 − A)/A0, where A0 is the total activity of 131 I in the reaction and A is the activity remaining on the PD-10 desalting column after purification; a worked example of this calculation is sketched below. BmK CT-Au PENPs-131 I were collected, and Au PENPs-131 I without BmK CT were also prepared for comparison using the same method. Radiochemical purity and radiostability were assessed in vitro according to our previous work. 17
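For concreteness, the following is a minimal Python sketch of the radiochemical yield calculation from the PD-10 elution described above. The per-fraction activity readings are invented placeholders for illustration, not measured data; only the formula and the 20 mCi loading follow the protocol.

```python
# Radiochemical yield from a PD-10 purification: yield = (A0 - A) / A0,
# where A0 is the total 131I activity loaded onto the column and A is the
# activity left on the column after eluting the ten 1 mL fractions.
fractions_mCi = [0.1, 0.3, 2.5, 5.6, 4.1, 1.5, 0.5, 0.2, 0.1, 0.1]  # placeholders
A0 = 20.0                    # total activity loaded (mCi), as in the protocol
eluted = sum(fractions_mCi)  # activity recovered in the eluted fractions
A = A0 - eluted              # activity retained on the column
radiochemical_yield = (A0 - A) / A0
print(f"Radiochemical yield: {radiochemical_yield:.1%}")  # 75.0% with these numbers
```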
Characterization techniques 1 H NMR spectra of samples dissolved in D 2 O were obtained using a Bruker AV400 nuclear magnetic resonance spectrometer (Bruker AXS Advanced X-ray Solutions GmbH, Karlsruhe, Germany). UV-Vis spectra were collected using a Lambda 25 UV-Vis spectrophotometer (PerkinElmer, Inc., Waltham, MA, USA). Dynamic light scattering (DLS) and zeta potential were measured using a Malvern Zetasizer Nano ZS model ZEN 3600 (Malvern Instruments, Malvern, UK) with a standard 633 nm laser. The Au content of the prepared Au NPs was evaluated using a Leeman Prodigy inductively coupled plasma optical emission spectrometer (Teledyne Leeman Labs, Hudson, NH, USA). Transmission electron microscopy (TEM) samples were prepared by dropping an aqueous particle suspension (1 mg/mL) onto a carbon-coated copper grid, followed by air-drying prior to analysis. TEM imaging was performed using a JEOL 2010F analytical electron microscope (JEOL, Tokyo, Japan) at an operating voltage of 200 kV. The X-ray attenuation properties of the formed Au NPs were compared with those of Omnipaque (iohexol 300; GE Healthcare, Chicago, IL, USA) at different Au or iodine concentrations (6.25-100 μM). CT images were acquired using a GE Discovery STE PET/CT system (GE Healthcare) with the following settings: 100 kV, 220 mA, and a slice thickness of 1.25 mm. SPECT imaging was performed using a GE Infinia SPECT scanner equipped with an Xeleris workstation and High-Energy General-Purpose collimators (GE Healthcare). Cell culture and construction of glioma-bearing nude mouse model C6 cells were cultured in DMEM containing 10% FBS in a humidified incubator with 5% CO 2 at 37°C. We established a subcutaneous glioma model in nude mice for in vivo experiments. Briefly, 2×10 6 C6 cells were subcutaneously injected into the right flank of each mouse. The mice were then fed regularly for 3 weeks until tumor volumes reached 0.8-1.0 cm 3 . Cytotoxicity assay The CCK-8 assay was used to assess the cytotoxicity of BmK CT-Au PENPs before and after 131 I labeling in C6 cells. In brief, C6 cells in the logarithmic growth phase were seeded onto a 96-well plate (1×10 4 cells per well) and incubated overnight. The cells were treated with Au PENPs or BmK CT-Au PENPs at different final Au concentrations (0, 12.5, 25, 50, 100, and 200 µM, respectively). After 24 hours' incubation, the C6 cell viability of each group was analyzed using the CCK-8 method according to standard procedures. Cytotoxicity of BmK CT-Au PENPs-131 I in C6 cells was also evaluated at different radioactivity concentrations (0, 12.5, 25, 50, 100, and 200 µCi/mL, respectively). After 24 hours' incubation, the viability of C6 cells was determined. In vitro targeting assay Flow cytometry and confocal microscopy were used to assess the targeting efficiency of BmK CT-Au PENPs to tumor cells in vitro. For flow cytometry analysis, C6 cells in the logarithmic growth phase were seeded onto a 6-well plate (2×10 5 cells per well) and incubated overnight. The cells were treated with BmK CT-Au PENPs or Au PENPs at final Au concentrations of 0.5 μM and 5 μM, respectively. PBS was used as the control. After 4 hours' incubation, the cells were trypsinized, centrifuged, and resuspended in PBS. The mean fluorescence intensity of approximately 10,000 cells in each group was analyzed using a BD Accuri™ C6 flow cytometer in the FL1 fluorescence channel. For confocal microscopy imaging, C6 cells in the logarithmic growth phase (5×10 4 ) were seeded onto 35 mm glass-bottom dishes and incubated overnight. The cells were treated with BmK CT-Au PENPs or Au PENPs at a final Au concentration of 5 μM. PBS was used as the control. After culturing for 4 hours, the cells were rinsed with PBS, fixed with 4% paraformaldehyde, and the nucleic acids were stained with DAPI according to standard procedures. FL1 fluorescence of the stained cells was measured at 488 nm by confocal microscopy (LSM 700, Carl Zeiss Meditec AG, Jena, Germany).
SPECT and CT imaging in vitro The feasibility of BmK CT-Au PENPs for CT imaging of tumor cells was assessed in vitro. First, C6 cells in the logarithmic growth phase were seeded onto a 6-well plate (2×10 5 cells per well) and incubated for 24 hours. The cells were treated with BmK CT-Au PENPs or Au PENPs at different Au concentrations (20, 40, 60, 80, and 100 μM, respectively). After 4 hours' incubation, the cells were trypsinized, centrifuged, and rinsed with PBS in 1.5 mL microcentrifuge tubes, then imaged using a CT system (GE Inc., USA). SPECT and CT imaging in vivo All animal experiments in this study were approved by the ethics committee of Shanghai General Hospital and conformed to the National Institutes of Health Guidelines. Before in vivo imaging, the glioma-bearing nude mice were randomly divided into two groups (five mice per group) and anesthetized with pentobarbital sodium (40 mg/kg). The mice were intravenously injected with PBS solutions containing BmK CT-Au PENPs or Au PENPs ([Au] = 100 µM, 100 µL) to evaluate CT imaging performance in vivo. CT images were obtained at 0, 0.5, 2, 4, 6, 8, and 16 hours post-injection. For SPECT imaging, glioma-bearing nude mice were fed and given water containing 1% potassium iodide for 3 days to block thyroid uptake of 131 I. Then, the mice were anesthetized and randomly divided into two groups (five mice per group). We intravenously injected a PBS solution of BmK CT-Au PENPs-131 I or Au PENPs-131 I at the same dose (500 μCi, 100 μL) into the mice and performed SPECT imaging at 0.5, 2, 4, 6, 8, and 16 hours post-injection using an Infinia SPECT scanner. In vivo antitumor efficacy The in vivo therapeutic efficacy of BmK CT-Au PENPs-131 I was further assessed in a subcutaneous tumor model. To reduce thyroid uptake of 131 I, the tumor-bearing nude mice were fed and given water containing 1% potassium iodide for 3 days. After being randomly divided into five groups (five mice per group), the mice in each group were intravenously injected with 100 μL PBS solutions of BmK CT-Au PENPs-131 I (250 μCi), Au PENPs-131 I (250 μCi), BmK CT-Au PENPs (0.1 M Au), Au PENPs (0.1 M Au), or saline. Treatment was administered every 3 days for a total of seven treatments. During treatment, body weight and tumor size were recorded before each injection. After the 21-day treatment period, one mouse from each group was sacrificed to obtain the major organs (heart, liver, spleen, lung, and kidney) and the subcutaneously implanted tumors. The harvested major organs and tumors were stained with H&E according to the standard procedure. To further evaluate apoptosis in the treated glioma-bearing mice, the tumors were stained via the TUNEL method with an apoptosis detection kit (Hoffmann-La Roche Ltd., Basel, Switzerland). The stained specimens were imaged using an AMEX 1200 inverted phase contrast microscope. Statistical analysis Experimental data in this study were analyzed by one-way ANOVA, and the final data were marked with (*) for p<0.05, (**) for p<0.01, and (***) for p<0.001. A p-value <0.05 was considered statistically significant. Results and discussion Synthesis and characterization of the BmK CT-Au PENPs-131 I PEGylation has been identified as an effective strategy to improve the biocompatibility and pharmacokinetic properties of NPs. In our previous work, PEGylated PEI was successfully used as a template to entrap Au NPs for CT imaging or to encapsulate drugs for chemotherapy of tumors in vivo. 41,44
In this study, PEGylated PEI was sequentially modified with the BmK CT peptide via a PEG linker, HPAO, and FI, and then utilized to entrap Au NPs. The remaining PEI surface amines were acetylated, and the product was radiolabeled with 131 I via HPAO. The Au PENPs modified with BmK CT were used as a multifunctional nanoprobe for tumor-targeted SPECT/CT and radionuclide therapy ( Figure S1). The average number of mPEG, PEG, HPAO, BmK CT, and FI moieties attached to each PEI was estimated using NMR integration according to our previous work, 11,29 and the results were recorded in Table 1. Then, PEI.NH 2 -mPEG-(PEG-BmK CT)-HPAO-FI and PEI.NH 2 -mPEG-(PEG-Bu)-HPAO-FI were respectively employed as templates to entrap Au NPs for the synthesis of BmK CT-Au PENPs and Au PENPs with an Au salt/PEI molar ratio of 200:1, as previously described. 11 The synthesized Au NPs were analyzed with different techniques. Inductively coupled plasma optical emission spectrometry was performed to calculate the Au content, and the data indicated complete reduction of Au(III) to Au(0) in the BmK CT-Au PENPs and Au PENPs, with the average numbers of Au atoms per PEI close to the selected Au salt/PEI molar ratio. Successful capture of Au NPs within PEI was confirmed by UV-Vis spectroscopy. In agreement with the results reported in the literature, 11 a noticeable surface plasmon resonance peak at approximately 540 nm was clearly observed due to the particle-induced light scattering effect (Figure S1). Furthermore, UV-Vis spectroscopy was used to assess the stability of the Au NPs in vitro in the given pH (5.0-8.0) and temperature (4-50°C) ranges. As shown in Figure S2A and B, no obvious changes in absorption characteristics were observed for BmK CT-Au PENPs, indicating acceptable stability under different temperature and pH conditions. The BmK CT-Au PENPs' particle size was measured using DLS and TEM. BmK CT-Au PENPs had a hydrodynamic diameter of 147.0 ± 9.1 nm as determined by DLS and an Au core size of 4.4 ± 0.7 nm as determined by TEM (Figure 2), smaller than the hydrodynamic size determined using DLS. This was likely due to the fact that TEM only measures the Au core of a single particle, rather than numerous Au NPs in aggregates, as measured by DLS. Notably, the low polydispersity index (0.3 ± 0.06) determined using DLS and the relatively narrow size distribution determined using TEM suggested favorable size uniformity of the BmK CT-Au PENPs. TEM also showed that BmK CT-Au PENPs were nearly spherical in shape, and high-resolution TEM images further showed Au crystal lattices, suggesting high crystallinity of the formed BmK CT-Au PENPs (Figure 2C). This was confirmed through evaluation of selected area electron diffraction patterns, where the featured (111), (200), (220), and (311) rings were used to confirm the face-centered-cubic crystal structure (Figure 2D). Energy dispersive spectroscopy of BmK CT-Au PENPs samples indicated the existence and distribution of elemental Au (Figure 2E). Finally, the surface potentials of the BmK CT-Au PENPs and Au PENPs were estimated at 6.06 ± 0.16 mV and 14.5 ± 0.15 mV, respectively, demonstrating successful acetylation of the remaining surface amine groups of PEI. The prepared Au NPs were readily labeled with 131 I using the chloramine-T method due to the presence of HPAO on the surface of PEI. The radiolabeling yields of BmK CT-Au PENPs-131 I and Au PENPs-131 I were 77.0 ± 4.97% and 72.3 ± 3.62% (n=3), respectively.
After purification using a PD-10 column, the radiochemical purities were greater than 99%, and they remained above 90% after exposure to PBS at room temperature and FBS at 37°C for 24 hours (Figure S2C and D), indicating excellent radiostability, which allowed for further in vitro and in vivo analyses. Cytotoxicity assay The CCK-8 assay was used to assess the potential cytotoxicity of BmK CT-functionalized Au PENPs before and after 131 I radiolabeling ( Figure S3). (Notes to Table 1: PEI.NH 2 was first PEGylated using mPEG-COOH and conjugated with MAL-PEG-SVA to form PEI.NH 2 -mPEG-(PEG-MAL). The PEI.NH 2 -mPEG-(PEG-MAL) was then divided in half; one half was modified with BmK CT, and the other was reacted with 1-butanethiol. The mean numbers of mPEG and PEG per PEI are therefore reported accordingly.) Viability of C6 cells treated with BmK CT-Au PENPs or Au PENPs remained high at Au concentrations up to 200 μM, indicating good cytocompatibility. BmK CT-Au PENPs-131 I exerted a stronger inhibitory effect compared with that of Au PENPs-131 I at the same radioactivity concentrations, demonstrating that the BmK CT modification enhanced cellular uptake of 131 I-labeled Au PENPs into C6 cells in the given concentration range. Targeting specificity of the BmK CT-Au PENPs to tumor cells BmK CT is a tumor-specific ligand that can selectively bind to MMP2, which is overexpressed in various tumors. Based on this property, BmK CT-Au PENPs were expected to specifically target C6 cells. Flow cytometry (Figure S4) and confocal microscopy (Figure S5) were used to assess the targeting specificity of BmK CT-Au PENPs. The flow cytometry assay showed that the fluorescence intensity in C6 cells treated with BmK CT-Au PENPs for 4 hours was significantly higher than that in cells treated with untargeted Au PENPs at the same concentration (p<0.05). In contrast, the fluorescence intensity in C6 cells treated with Au PENPs was similar to that in the PBS control. These results demonstrated that the targeting ligand BmK CT increased uptake of Au PENPs into C6 cells. Similarly, enhanced cellular uptake of BmK CT-Au PENPs was visualized using confocal microscopy. After treatment with BmK CT-Au PENPs for 4 hours, C6 cells displayed prominent fluorescence signals, while cells treated with Au PENPs at the same concentration exhibited fluorescence signals similar to those of the PBS control, further confirming that uptake of BmK CT-Au PENPs in C6 cells was enhanced by modification with BmK CT. Thus, the synthesized BmK CT-Au PENPs displayed high targeting specificity to C6 cells based on flow cytometry and confocal microscopy analyses. SPECT and CT imaging in vitro The SPECT and CT performance of BmK CT-Au PENPs was first evaluated in vitro (Figure 3). Due to their high X-ray attenuation, Au NPs have been explored as contrast agents for CT imaging. In this study, the formed BmK CT-Au PENPs were compared with Omnipaque (a small-molecule CT contrast agent used clinically) to investigate their X-ray attenuation performance. As shown in Figure 3A-C, both Au NPs and Omnipaque gave brighter CT images and higher HU values as their concentrations increased, while a sharper trend was clearly seen for Au NPs as a result of larger HU values than Omnipaque at the same Au or I concentrations, revealing stronger X-ray attenuation by Au NPs than by iodine-based contrast agents. Similarly, for CT imaging in vitro, the brightness of the CT images increased with increasing Au concentrations in the cells treated with BmK CT-Au PENPs and Au PENPs (Figure 3D and E).
Quantitative analysis showed that a higher CT value was obtained in the C6 cells treated with BmK CT-Au PENPs than in those treated with Au PENPs at Au concentrations of 20, 40, 60, 80, and 100 μM (Figure 3F). At an Au concentration of 100 μM, cells treated with BmK CT-Au PENPs showed a 2.2 times higher CT value than those treated with Au PENPs, indicating that the BmK CT modification enhanced cellular uptake of Au PENPs into C6 cells. SPECT images of C6 cells treated with BmK CT-Au PENPs-131 I were clearly brighter than those treated with Au PENPs-131 I at the same radioactivity concentrations (Figure 3G and H). Further quantitative analysis demonstrated that the radioactive signal intensity in the BmK CT-Au PENPs-131 I group was significantly higher than that in the Au PENPs-131 I group, especially at the radioactivity concentration of 400 µCi/mL (Figure 3I). These data indicated that BmK CT-Au PENPs-131 I allowed for excellent SPECT imaging of gliomas in vitro. SPECT and CT imaging in vivo 131 I has been widely used in clinical radionuclide therapy. However, 131 I-labeled substances have some shortcomings, such as low image resolution and in vivo dehalogenation. Therefore, before the in vivo experiments, all nude mice were fed potassium iodide to saturate the thyroid and reduce the unwanted thyroid uptake of 131 I. The SPECT and CT imaging suitability of BmK CT-Au PENPs-131 I was further evaluated in vivo in C6 tumor-bearing nude mice. As shown in Figure 4A, no obvious tumor SPECT signal was observed in mice at 2 hours after injection with BmK CT-Au PENPs-131 I or Au PENPs-131 I. However, tumor uptake in mice treated with BmK CT-Au PENPs-131 I gradually increased with time, reached a peak at 8 hours post-injection, and could still be clearly visualized at 16 hours post-injection. In contrast, only a slight accumulation of radioactivity, attributable to the EPR effect, was observed in tumors treated with Au PENPs-131 I at 8 hours post-injection, and no distinct tumor uptake was observed at the other time points (Figure 4B); this was confirmed by the SPECT imaging of ex vivo tumors at 8 hours post-injection, which showed higher relative SPECT signal intensity in the mice treated with BmK CT-Au PENPs-131 I (Figure 4C). Quantitative analysis (Figure 4D) showed that the relative SPECT signal intensities in the two groups peaked at 8 hours post-injection and decreased by 16 hours post-injection, correlating with the imaging results. The relative SPECT signal intensity of the BmK CT-Au PENPs-131 I group was higher than that of the Au PENPs-131 I group at the same time points. For instance, the mice treated with BmK CT-Au PENPs-131 I had 2.08 and 2.11 times higher tumor signal intensities than those treated with Au PENPs-131 I at 6 and 8 hours post-injection, respectively. In addition, biodistribution analysis was performed to assess differences in tumor SPECT signal intensity between the BmK CT-Au PENPs-131 I and Au PENPs-131 I groups at 8 hours post-injection. As shown in Figure S6, 131 I-labeled Au NPs were mainly absorbed by the liver, stomach, and intestines, with relatively low radioactivity in other organs. In contrast, much higher tumor uptake of BmK CT-Au PENPs-131 I was observed compared with that of Au PENPs-131 I, further demonstrating the BmK CT-dependent enhancement of tumor uptake. Due to the targeting ability of BmK CT, similar results were observed in CT imaging of BmK CT-Au PENPs in tumor-bearing nude mice.
As shown in Figure 5A and B, the anatomic structure of the implanted tumors in mice could be seen in the CT images of both the BmK CT-Au PENPs and Au PENPs groups before injection. Peak tumor CT values were observed at 8 hours post-injection, followed by a gradual decrease, in mice treated with BmK CT-Au PENPs and Au PENPs. Quantitative results showed higher tumor CT values in the BmK CT-Au PENPs group throughout the study period (Figure 5C). In particular, the HU value of tumors in the mice treated with BmK CT-Au PENPs was 1.67 times higher than that in mice treated with Au PENPs at 8 hours post-injection. According to the SPECT and CT data, the prepared BmK CT-Au PENPs possessed targeting specificity to gliomas in vivo and could be used as a nanoprobe for SPECT/CT imaging. Radionuclide therapy of gliomas in vivo The targeting ability of BmK CT and the properties of 131 I enabled BmK CT-Au PENPs-131 I to be used for tumor-targeted radionuclide therapy, which was evaluated in tumor-bearing nude mice in this study. No significant differences were observed among the control groups, which included Au PENPs-131 I, BmK CT-Au PENPs, Au PENPs, and saline (p>0.05), while BmK CT-Au PENPs-131 I treatment significantly inhibited tumor growth (Figure 6A). After seven treatments across 3 weeks, tumor volumes of mice in the control groups had increased markedly. Survival rates of tumor-bearing mice agreed with the in vivo antitumor results (Figure 6B). These results showed that mice treated with BmK CT-Au PENPs-131 I had significantly longer survival times than mice in the control groups. Body weights among all groups were not significantly different across the 21-day treatment period (Figure S7). To further evaluate the therapeutic effects and safety of BmK CT-Au PENPs-131 I in vivo, H&E and TUNEL staining were performed. H&E staining results showed that necrotic regions were only observed in tumors treated with BmK CT-Au PENPs-131 I or Au PENPs-131 I (Figure 6C), and the necrotic area in the BmK CT-Au PENPs-131 I group was much larger than that in the Au PENPs-131 I group. A similar trend was observed in the TUNEL assay results (Figure 6D). Positive staining of apoptotic cells was only observed in the tumor sections treated with Au PENPs-131 I or BmK CT-Au PENPs-131 I, and the area of apoptotic cells was much greater in the BmK CT-Au PENPs-131 I group. Therefore, the results of H&E and TUNEL staining confirmed that the BmK CT modification conferred glioma-targeting specificity on the 131 I-labeled Au PENPs and enhanced the therapeutic effects on tumor cells. We further assessed the potential toxicity of the multifunctional Au NPs before and after 131 I labeling toward major organs using H&E staining (Figure 7). No obvious organ damage or abnormalities were observed, indicating good organ compatibility of the multifunctional Au NPs before and after 131 I labeling. Conclusion In this work, we developed 131 I-labeled Au PENPs for tumor-targeted SPECT/CT imaging and radiotherapy. PEGylated PEI was sequentially linked with BmK CT and HPAO to be used as a template for the entrapment of Au NPs. BmK CT-Au PENPs showed favorable water solubility and stability, X-ray attenuation properties, and cytocompatibility at the prepared Au concentrations.
After 131 I radiolabeling through the HPAO on the PEI surface, BmK CT-Au PENPs-131 I exhibited relatively high radiochemical purity and radiostability in vitro, and were used as a multifunctional nanoprobe for targeted SPECT/CT imaging and radionuclide therapy of tumor cells in vitro and in a tumor-bearing mouse model in vivo, with acceptable organ compatibility. The synthesized multifunctional Au PENPs may hold great promise in SPECT/CT imaging and radiotherapy of different MMP2-overexpressing tumors.
2019-07-16T23:01:14.836Z
2019-06-01T00:00:00.000
{ "year": 2019, "sha1": "c73b2b64235fdabccd47d45d5c241a7a16f650f4", "oa_license": "CCBYNC", "oa_url": "https://www.dovepress.com/getfile.php?fileID=50474", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "c73b2b64235fdabccd47d45d5c241a7a16f650f4", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Chemistry", "Medicine" ] }
21501120
pes2o/s2orc
v3-fos-license
School-based health centers: A four year experience, with a focus on reducing student exclusion rates We describe a four year collaborative experience with an on-site, community school-based health center that is staffed by the Vallejo City Unified School District and supervised by the pediatric faculty of the Touro University College of Osteopathic Medicine, with particular attention to first grade student exclusion rates. Patient demographics (including payer source), first grade enrollment statistics, and first grade exclusion rates were analyzed using school district enrollment and exclusion data, billing data, and Child Health Disability Program data. An ethnically diverse patient population is described, with the payer source in 99% of patients being the State of California Child Health Disability Program or no insurance source. Ninety-one percent of office visits were for well child care and immunizations. First grade student exclusion rates for failure to meet the state-mandated physical examination requirement fell 74% over the first four years of the school-based health center's operation. In summary, our school-based health center serves a patient population that is primarily uninsured. Reduction in first grade student exclusion rates enhances student education and reduces the loss of attendance-based state matching funds. Additionally, our school-based health center has been well accepted by the local community. Introduction The concept of school-based health centers began in the early 1970s in Dallas, Texas and St. Paul, Minnesota, and these centers, now numbering approximately 1,700 across the nation, are found in elementary schools, middle schools, and high schools [1]. However, the concept did not take hold in California until 1987, when a school-based health center was established at San Fernando High School under a Robert Wood Johnson grant. California now has 153 programs state-wide that are providing primary in-school care. Forty-two (27%) are in elementary schools, 14 (10%) in middle schools, 58 (38%) in high schools, 16 (10%) are on mixed-grade campuses, and 23 (15%) are linked, but not on campus, or are in mobile vans [2]. Though they were once controversial, the centers are now viewed as meeting the needs of a population of students that might otherwise go without healthcare, as many children suffer from unrecognized health problems due to lack of access to healthcare. These school-based centers can provide more easily accessible care [3] because they deliver care in a convenient fashion in a familiar and friendly environment that students visit each school day. School-based health centers meet the needs of pediatric patients without a primary care home, reducing the use of the emergency department for minor conditions [4,5], which unnecessarily taxes our healthcare system and the families themselves. One of the goals of school-based health centers is to reduce these types of visits and redirect them to the appropriate ambulatory level of care. In our school-based health center, reducing missed school days is a secondary goal that benefits the student and the working parent and avoids the loss of the school district's attendance-based state funding of approximately $32 per student per day. Program history Efforts to develop the first school-based health center in Solano County began in our local community school district in Vallejo in 2002, with the development of a Parents' Health Advisory Group to assess health care accessibility.
This parent group, meeting with the Vallejo City Unified School District (VCUSD) pediatric nurse practitioner staff and an outreach consultant, reported that traditional medical services were either unaffordable or located too far from home, school, and work. Low-income families reported using the emergency department rather than establishing a medical home. At that time (2001-2002 school year), rates of documented first grade physicals at the eighteen elementary schools in the VCUSD averaged 61%, with a range of 43% to 91%. These first grade physical examinations are required by the State of California, and if not documented, result in the student's exclusion from school for up to five days. Students may return to school after the five day exclusion, with or without a documented physical examination. However, this exclusion has a negative impact on the child and family, and reduces the school district's state funding. Thus, one of the targeted outcomes for our school-based health center was an increase in the percentage of first grade children receiving a timely physical examination. Armed with this information, we approached funding sources for grants. Support was received from several sources, and fiscal planning was begun. Pennycook Elementary School was selected as the site, representing a diverse, low-income population with a need. A kitchen attached to the assembly hall at Pennycook was renovated into a small, one-examination-room medical facility, and in 2004 it began to offer services to the children of Vallejo. Our school-based health center ("the Center") is operated by the VCUSD, with medical supervision provided by the Touro University College of Osteopathic Medicine. The Center was opened in August of 2004 at Pennycook Elementary School in Vallejo, California. The data sources that we analyzed included: 1) billing data, including Child Health and Disability Program (CHDP) billing data; 2) school district attendance data; 3) school district first grade physical exclusion data; and 4) appointment scheduling information. The Center is open two days a week during the school year, staffed by a certified pediatric nurse practitioner, and supported by one bilingual medical assistant. Services provided at the Center include physical examinations, immunizations, treatment of minor illnesses and injuries, laboratory tests, and referrals for dental, optometric, and specialty medical services. The Center serves children between one and eighteen years of age, with the majority being elementary school age. It is open to all children in the community. No patients are refused treatment based on financial considerations. Program results Our patient population ethnicity profile over the three and a half years of operation reveals: 1) 28% Asian; 2) 21% African American; 3) 27% Hispanic; 4) 13% Caucasian; and 5) 11% other ethnicity. Billing data reveal that 99% of the patients seen in our Center have no medical insurance and are either financially covered by the CHDP (income criteria less than 200% of the federal poverty level) or have no financial coverage at all. Our patient encounter data show that 91% of patient encounters were for well child care and immunizations, and 9% were for problem-oriented care. Of course, many of the well child visits also involve care for newly-diagnosed or known chronic medical problems.
California law requires that immunizations be up to date when entering kindergarten, and that upon entry into the first grade the student document a physical examination performed within the previous eighteen months. Students who do not meet these requirements may be excluded from school for up to five days. The number of first grade students excluded from school attendance due to lack of a physical examination has decreased dramatically in the years since the Center opened (Table 1). Our school district database shows that there were 402 first graders excluded in 2004, falling to only 104 in 2007, representing a 74% decrease in first grade student exclusions. School enrollment figures remained relatively static during this four year period (Table 1). Although this decrease may in part be due to a variety of societal factors (fluctuating insurance coverage, increasing parental awareness of pediatric health screening), it is clear that accessibility to the Center played an integral part in the decline of first grade student exclusions. Conclusion Our experience documents a marked decrease (74%) in first grade exclusion rates due to lack of a state-mandated physical examination. These improved rates result in increased school attendance and directly benefit the school district financially. Additionally, these improved rates have also served to protect the school-based health center from budgetary constraints during times of school district financial difficulties. Collaboration with community school districts in terms of school-based health center formation and supervision falls within the community service mission of colleges of osteopathic medicine. This collaboration serves the community and promotes community awareness of osteopathic medicine and its teaching institutions. School-based health centers are successful because they fill a need. They are located in a convenient, non-traditional setting, where students go on a regular basis. School-based health centers have been shown to provide care in a timely fashion [2], have proven to help children stay in school and improve academic outcomes [6], increase the use of well child services [7], improve immunization rates [7], and reduce the use of expensive emergency room visits [5,7]. National statistics concerning school-based health centers demonstrate that 97% of the patient encounters are for preventive well child care and immunization [8]. This is similar to our experience, with 91% of our patient care visits being for well child and preventive care. Studies have shown that children without health insurance are four times more likely to go without needed dental care in comparison to children with insurance [9]. Indeed, severe dental decay has also been a great concern in our community, and the children seen in our Center often need dental care and referral. School-based health centers have been shown to improve asthma care and reduce hospitalizations for childhood asthma [10]. This is particularly pertinent in our community, since our Center is located in Solano County, which has the highest incidence of childhood asthma of all the counties in California [11]. In summary, our Center fills a void that benefits the children, their families, and the community that we serve, and augments the safety net system for uninsured and underinsured children.
Our experience has documented: 1) medical care provision to a student population that is primarily uninsured; 2) an increased rate of meeting state-mandated first grade physical examination requirements; 3) reduced rates of first grade student exclusion, which improves attendance and enhances student promotion rates [10]; and 4) reduced school district funding losses from state matching funds. Additionally, our Center has been well-accepted by the local community. Indeed, our efforts have been so well-accepted that we have obtained new grant funding to open a new, larger school-based health center in a second underserved area of Vallejo in the spring of 2009.
2016-05-12T22:15:10.714Z
2009-03-10T00:00:00.000
{ "year": 2009, "sha1": "2e451b83e694394546436c1da4892d6c15361bff", "oa_license": "CCBY", "oa_url": "https://om-pc.biomedcentral.com/track/pdf/10.1186/1750-4732-3-3", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "227b421fec6652e30b9924ccbd56c6e27b278af4", "s2fieldsofstudy": [ "Medicine", "Education" ], "extfieldsofstudy": [ "Medicine" ] }
266410801
pes2o/s2orc
v3-fos-license
An Integrated Two-Factor Authentication Scheme for Smart Communications and Control Systems Fast and reliable authentication is a crucial requirement of communications networks and poses various research challenges in an Internet of Things (IoT) environment. In IoT-based applications, where fast, user-friendly access and high security are required simultaneously, biometric identification of the user, such as the face, iris, or fingerprint, is broadly employed as an authentication approach. Moreover, so-called multi-factor authentication, which combines user identification with other identification information, including token information and device identity, is used to enhance the authentication security level. This paper proposes a novel two-factor authentication scheme for intelligent communication and control systems by utilizing the watermarking technique to incorporate the mobile device authentication component into the user's facial recognition image. Our proposed scheme offers user-friendliness while improving user security and privacy and reducing the authentication information exchange procedures, providing a secure and lightweight scheme for real applications. The proposed scheme's security advantages are validated using the widely accepted Burrows–Abadi–Needham (BAN) logic and experimentally assessed using the Automated Validation of Internet Security Protocols and Applications (AVISPA) simulator tool. Finally, our experimental results show that the proposed authentication scheme is an innovative solution for a smart-home control system, such as a smart door-lock operation. Introduction The number of communications devices connecting to the Internet has rapidly increased, facilitating various practical applications in commercial, industrial, and personal scenarios. While the massive deployment of IoT devices changes how people run their businesses and daily lives, unauthorized access from IoT devices to IoT systems imposes serious security threats. Smart-home management applications via smartphones have recently become popular because of their convenience. However, since the data between a user device (e.g., a smartphone) and the smart home's gateway is often transmitted over insecure wireless communications links, various cyberattacks, namely user impersonation, device ID cloning, and message modification, can arise in the smart-home environment [2], [13]. Secure authentication is one of the most critical security functions and should be developed to the highest security level. Conventional authentication schemes were designed to exploit only one authentication factor: either a personal belonging (e.g., IC cards, tokens, and keys) or information about the user's identity (e.g., a PIN or a password). Modern multi-factor authentication schemes have been engineered by exploiting biometric information such as fingerprints, retina scans, and facial/voice recognition, together with other information, to strengthen the system's security level. However, security, convenience, performance, and application deployment remain the foremost concerns of academics and developers.
The multi-factor authentication scheme is viewed as a supplement to end-user identification. Adding an authentication element improves the level of security, but it can also be inconvenient for the user or increase the workload associated with processing sensitive data. Not only do smart device application services have limited processing capability, but they are also frequently and strongly tied to human behavior. Therefore, developing a user-friendly, secure, and effective authentication procedure is vital. To prioritize the convenience of smart door-opening applications in smart homes, we offer a transparent mechanism for two-factor authentication that combines facial recognition and mobile device identification. Accordingly, the second authentication element is generated automatically without requiring additional action from the user or device. Furthermore, a watermarking approach is applied to the user's image transferred between the user's device and the gateway to simplify the authentication procedures and protect the user's privacy from attacks on the wireless transmission in an open environment. Our proposal has been assessed for security using BAN logic, validated using the formal model of the Automated Validation of Internet Security Protocols and Applications (AVISPA), and compared with prior research to demonstrate its merits and applicability. This study aims to enhance the security level of intelligent communication and control systems by combining the authentication component of mobile devices with the user's facial image recognition through the watermarking technique. The main contributions are summarized as follows: • A multi-factor authentication scheme based on facial recognition and mobile device ID is proposed to enhance user convenience and secure smart-home access. • The watermarking technique is utilized to ease the procedure by embedding the session key in the user image while simultaneously protecting the privacy of the user image. • The proposed scheme is verified by BAN logic analysis and the AVISPA testing toolkit and evaluated in a real application setting to validate its feasible implementation and security impact. The paper is structured as follows: the next section summarizes the work related to recent proposals. Section III describes our proposed scheme and its novel processes. Section IV includes an evaluation of the proposed scheme and its security analysis, as well as a comparison of its functionality with that of previous schemes. Section V presents a prototype evaluation to demonstrate the practicality of the concept. Section VI concludes with a discussion of the advantages and disadvantages of the scheme and directions for future development.
Related Work Multi-factor authentication is of great interest to researchers because it improves access security, especially for communication applications in an open environment. A common approach is that, besides biometric user authentication, a second authentication factor is supplemented by something the user has or knows. This second factor can be either explicit or transparent to the user. The authors in [17] combined facial recognition and Radio-Frequency Identification (RFID) to increase the authentication accuracy for smart-home access service users. The system's authentication performance has high accuracy, and the access time meets the requirements of smart-home door opening (under 10 s). However, this study requires an additional RFID identification device and was only evaluated experimentally; the convenience is therefore reduced, and the logical security evaluation of the scheme is incomplete. Taking the same device-addition approach, the authors in [20] proposed using a card reader instead of RFID. The proposed scheme was security-analyzed through BAN logic and proven resistant to side attacks, but it still requires an additional user validation action. In [7], the authors develop a door-lock system based on facial recognition with two-factor authentication using OpenCV. The design of this project is based on human face recognition and a One-Time Password (OTP) solution using the Twilio service. Despite achieving high security, the system requires a communication solution from a third party. In order to provide the most convenient access for users and integrate authentication components that meet the requirements of lightweight authentication procedures, various authors have proposed two-factor authentication schemes for smart homes, as summarized below. A scheme called TFA (Transparent Two-Factor Authentication) is proposed in [22] to avoid tedious interaction and improve the user experience by integrating two authentication components into one user action. Specifically, the voiceprint method is used as the second authentication factor. However, with the strong development of AI in facial recognition, face recognition solutions have become accurate and effective in smart-home access practice [7], [12], [21]. Facial recognition provides an efficient user experience in applications close to everyday human interaction. In addition, these proposals emphasize user-friendliness and empirical modeling. However, logical analyses to verify the proposals are not provided.
Considering the smart-home application as an IoT application, [10] offers a two-factor authentication solution for smart homes using an elliptic curve cryptography (ECC) system to resist security attacks, including impersonation attacks and session key disclosure, while ensuring secure user authentication. Fuzzy extraction is suggested to improve the security of the two-factor authentication scheme while ensuring efficient performance, because the scheme uses only hash functions and XOR operations, which incur a low computational cost. However, the user still needs to verify their identity through a password, which increases the user's burden and is not safe from password-guessing attacks. In addition, the scheme is only evaluated for security through BAN logic and the formal model ProVerif; no automated verification assessments such as AVISPA are provided. Along with the solution of using ECC encryption and random number matching in session communication, the author in [14] proposed a scheme that balances performance and security close to the real environment. However, resistance to some insider attacks and session locking is not guaranteed. To deal with the flaws of the above proposals, [18] proposed an authentication scheme for remote access to a smart home, which uses one-way hash functions, bitwise XOR operations, and symmetric encryption/decryption. The security of the proposal is proven through the Real-Or-Random (ROR) model, and security verification is performed using the AVISPA tool. However, this solution uses both biometrics and a password and is not completely transparent, which inconveniences the user. Sensor networks serving the medical field have higher security and authentication needs than conventional networks because they involve sensitive user data. The authors in [19] presented a two-factor authentication mechanism based on a secret key shared between the gateway and the sensor. The authors in [15] propose a resilient ECC-based three-factor mutual authentication protocol with a key establishment technique. However, increasing the key length or using public-key techniques burdens the processing and slows the authentication time. During media transmission and storage, data can be altered for illegal use by attackers. Watermarking effectively protects vulnerable data in a digital environment against tampering with intellectual property rights and enhances security [11]. The authors in [9] proposed a watermarking algorithm based on a lossy compression algorithm to ensure authentication and forgery detection. A cryptography-based bit-pair matching watermarking mechanism in the spatial domain was suggested in [4], where symmetric key cryptography was used for watermark encryption to protect information from intruders on the communication channel. The watermarking mechanism can improve security while minimizing the growth of security traffic. To avoid exposing the embedded bits of an image to attackers, the authors in [3] proposed a block-based image watermarking algorithm. The algorithm generates two different keys using Diffie-Hellman key exchange to find the positions of the cover image at which the watermark bits are to be embedded. The above proposals show that applying the watermarking technique to user image data transmission reduces the amount of information to be transmitted, simplifies the required procedures, and can be used to transmit authentication information. However, previous studies have not provided solutions that protect the integrity of the image data while using it for session key transmission. A minimal sketch of such spatial-domain embedding is given below.
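To illustrate the kind of spatial-domain watermarking discussed above, the following is a minimal NumPy sketch that embeds a bit string into the least significant bits of a grayscale image and extracts it again. This is a generic LSB scheme for illustration only, not the specific bit-pair matching or block-based algorithms of [3], [4], nor the exact embedding used in the proposed scheme.

```python
import numpy as np

def embed_lsb(cover: np.ndarray, bits: str) -> np.ndarray:
    """Embed a bit string into the LSBs of the first len(bits) pixels."""
    flat = cover.astype(np.uint8).flatten().copy()
    if len(bits) > flat.size:
        raise ValueError("watermark longer than cover image")
    for i, b in enumerate(bits):
        flat[i] = (flat[i] & 0xFE) | int(b)   # clear the LSB, then set it
    return flat.reshape(cover.shape)

def extract_lsb(stego: np.ndarray, n_bits: int) -> str:
    """Recover the first n_bits LSBs as a bit string."""
    flat = stego.flatten()
    return "".join(str(int(p) & 1) for p in flat[:n_bits])

# Example: hide a 48-bit session key in a random stand-in 'image'
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
key_bits = format(0x001A2B3C4D5E, "048b")   # e.g., a MAC-derived key
stego = embed_lsb(img, key_bits)
assert extract_lsb(stego, 48) == key_bits   # key recovered intact
```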
Based on a review of the aforementioned studies and to the best of our knowledge, a two-factor authentication scheme that supports user convenience, a lightweight protocol, and an increased security level has not been fully addressed. The watermarking technique was described in our previous study [8] as a method for embedding a random key generated from the device address into the user image. However, that scheme did not encrypt the user's outgoing messages or perform performance-test assessments. To improve the security and privacy of the user's image, this proposal encrypts the entire outgoing message with a session key validated by both the user's device and the gateway, and complements the BAN logic security analysis with formal verification and performance testing.

This paper presents an authentication solution that effectively integrates two identity factors into an explicit scheme with the support of standard security algorithms, such as hash functions, bitwise XOR, and symmetric cryptography. The proposed scheme can enhance security by using randomly generated session keys and offers a user-friendly, lightweight procedure based on face recognition over a single authentication message. The security of the proposed scheme has been examined with BAN logic analysis and the AVISPA tool. We also built an experimental testbed and showed that the response time of the proposed security procedure meets the user requirements for smart-home applications.

System Model and the Proposed Authentication Scheme

System Model and the Two-factor Authentication Diagram

Figure 1 shows the proposed two-factor authentication system model, consisting of a user's mobile device (smartphone) and a gateway for illustration. The system deploys the two-factor authentication scheme that combines the user's face, captured by the smartphone's camera, and the hardware identifier of the user's mobile device. Facial-recognition authentication ensures the validity of the user's access to the system when it matches the database available at the gateway. This user-friendly approach allows users to access the system easily, but at the risk of counterfeiting, cloning, or privacy violations. To avoid such risks, a mobile-device identifier is dynamically employed for each connection session and is used as the session key to protect the user image information transmitted between the user and the gateway. The encrypted user image and the authentication information are embedded in a single encrypted message exchanged between the participants using watermarking techniques. As a result, the number of authentication exchanges and the data packet size are significantly reduced, and a lightweight authentication protocol is obtained.

Figure 2 shows that the two-factor authentication diagram includes the setup and authentication phases.

Setup Phase

In the setup phase, the gateway collects the Medium Access Control (MAC) address of the user's mobile device and the user's face images to store in its databases (key database and image database). The MAC address is a 48-bit value stored on the user's smartphone and at the gateway and is used as the initialization key K_0, represented by a vector K_0[1 × 48]. The user and the gateway mutually store the following information:
• The initialization key K_0[1 × 48].
• A random number representing the network size through the maximum number of devices, N(u).
• A pre-agreed binary matrix A_p[48 × 48] (preloaded matrix).
• A left bit-shift algorithm defined and used in the user device and the gateway (a minimal sketch of this key schedule is given below).
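To make these primitives concrete, here is a minimal, hypothetical sketch of the 48-bit key schedule described above; the rotation direction and the row-selection rule are assumptions, since the paper publishes no reference code:

```python
# Hypothetical sketch of the session-key schedule: K_temp = K_0 left-shifted
# (here: rotated) by i_0 bits, then XORed with one row of the preloaded
# 48x48 binary matrix A_p. Names mirror the paper's notation.

def rotate_left_48(key: int, shift: int) -> int:
    """Rotate a 48-bit key left by `shift` bits."""
    shift %= 48
    mask = (1 << 48) - 1
    return ((key << shift) | (key >> (48 - shift))) & mask

def next_session_key(k_0: int, i_0: int, a_p_rows: list[int]) -> int:
    """K_si = rotate(K_0, i_0) XOR (row of A_p); the row choice is an assumption."""
    k_temp = rotate_left_48(k_0, i_0)
    return k_temp ^ a_p_rows[i_0 % 48]

# First session: K_0 is seeded with the 48-bit MAC address.
mac_id = 0xA1B2C3D4E5F6                                     # illustrative MAC
a_p_rows = [(0x5A5A5A5A5A5A * (r + 1)) & ((1 << 48) - 1)    # stand-in A_p rows
            for r in range(48)]
k_s1 = next_session_key(mac_id, i_0=17, a_p_rows=a_p_rows)
print(f"K_s1 = {k_s1:012x}")
```

Because the key changes every session and depends on the secret matrix A_p, a blind guess of K_si succeeds with probability 2^-48, the figure quoted in the security analysis later in the paper.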
The image database stored at the gateway is used to verify the match of the user's facial-recognition parameters. This study uses the standard Support Vector Machine (SVM) algorithm for face recognition [6].

Authentication Phase

In this phase, several actions are required on both the mobile and gateway sides, based on the parameters preset in the previous phase. When a user wants to connect with the gateway, the user performs the access actions below.

• Taking the user's face image, encrypting the image, and embedding it in a message sent to the gateway:
- The user's face image is captured by the smartphone's camera and transformed into an image matrix of size I[512 × 512]. The image matrix is then encrypted with the session key, which is generated from the second authentication factor to preserve the confidentiality of the user image data, becoming the encrypted image I′[512 × 512].
- The mobile device creates the authentication message M_s, which contains the session key, a hash, and the encrypted image, and sends it to the gateway at the given time.
• The gateway extracts the session key and the user image from the received message M_s and compares the received parameters with the parameters predetermined in the setup phase to decide whether the authentication is successful. If it is, the gateway sends the success message M_succ to the mobile device; otherwise, the gateway sends the error message M_err to the mobile device to retry the authentication procedure.

The Operation of the Authentication Scheme

The proposed authentication scheme performs the following algorithms during the authentication phase:
• Algorithm 1: The mobile user creates the session key and sends the encrypted message M_s to the gateway.
• Algorithm 2: The gateway generates its session key and the embedded key matrix.
• Algorithm 3: The gateway verifies the authentication parameters.
• Algorithm 4: The mobile device processes the notification messages from the gateway.
Table 1 lists the notations used in the paper.

The Mobile User Generates the Session Key, Creates and Sends the Message M_s to the Gateway

The mobile device performs the computation steps described in Algorithm 1 to generate the session key. The calculation steps are summarized below (an illustrative sketch of the message assembly follows this list):
• The mobile device uses the initialization key K_0, the matrix A_p, and the random number N(u) to generate the session key and create the message M_s forwarded to the gateway; the session key K_si is stored for the next authentication session. Besides, K_si is used as the symmetric key that encrypts the user's captured image, producing an encrypted image resistant to identity detection or forgery attacks (Steps 3-4).
• The A_k matrix containing the session key is embedded in the encrypted image matrix, together with the integrity-preserving parameter H_k for the A_k matrix and the sending-time parameter T_i, to prevent modification or replay attacks (Steps 6-7).
• The mobile device and the gateway agree on a timeout period against man-in-the-middle attacks, which is the maximum time for the mobile device to receive a response on the authentication status.
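The following sketch ties these steps together. The symmetric cipher and the watermark-embedding rule (plain LSB insertion here) are illustrative assumptions only; the paper does not specify either:

```python
# Illustrative assembly of M_s = A_k * I' || H_k || T_i. xor_encrypt stands in
# for the (unspecified) symmetric cipher, and the 48x48 bit matrix A_k is
# hidden in the least-significant bits of a block of pixels.
import hashlib
import time

import numpy as np

def xor_encrypt(image: np.ndarray, k_si: int) -> np.ndarray:
    """Toy symmetric encryption: XOR each pixel with a keystream seeded by K_si."""
    rng = np.random.default_rng(k_si)
    stream = rng.integers(0, 256, size=image.shape, dtype=np.uint8)
    return image ^ stream

def embed_key_matrix(enc_img: np.ndarray, a_k: np.ndarray) -> np.ndarray:
    """Watermark: write the bits of A_k into the LSBs of a 48x48 pixel block."""
    out = enc_img.copy()
    out[:48, :48] = (out[:48, :48] & 0xFE) | (a_k & 1)
    return out

def build_message(image: np.ndarray, a_k: np.ndarray, k_si: int):
    t_i = int(time.time())                                   # sending time
    watermarked = embed_key_matrix(xor_encrypt(image, k_si), a_k)
    h_k = hashlib.sha256(a_k.tobytes() + str(t_i).encode()).hexdigest()
    return watermarked, h_k, t_i                             # components of M_s

img = np.zeros((512, 512), dtype=np.uint8)                   # stand-in face image
a_k = np.random.randint(0, 2, (48, 48), dtype=np.uint8)      # stand-in key matrix
m_s = build_message(img, a_k, k_si=0xA1B2C3D4E5F6)
```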
The Gateway Generates its Session Key and the Embedded Key Matrix

The gateway maintains two databases (a key database and an image database) for performing the authentication:
• Database of user's face images: when a user registers for the service with the gateway, the user takes some face images, and the gateway stores them as biometric information.
• The preloaded matrix A_p and the MAC address of each mobile device are known by both the mobile user and the gateway.

The session key generation in the gateway is carried out using Algorithm 2, as described below.

Algorithm 2: The gateway generates its session key and the embedded key matrix.
In the i-th session: i_0 = (i_0 + 1) mod 48.
Output: the embedded key matrix containing K_si(g).
Start
1: Left-shift i_0 bits to create the temporal key K_temp
2: Create the session key K_si(g) from the temporal key K_temp and the matrix A_p
End

The Gateway Verifies Authentication Parameters

After receiving the message M_s from the user's mobile device, the gateway performs the watermark-decoding procedure to extract the encoded user-image matrix I′[512 × 512], the embedded key matrix A_k, the hash value H_k, and the message delivery time T_i. Note that H_k and T_i are sent as plain text.

The gateway compares the message receipt time to determine whether the message is valid or expired (step 3 of Algorithm 3). The gateway then uses the session key K_si(g), obtained after performing Algorithm 2, to match the key K_si in the matrix A_k sent from the mobile device. If they match, the key K_si is used to restore the captured user image, I_u. The gateway authenticates the user image using the SVM algorithm to confirm the face-recognition parameters. The gateway then sends either M_err or M_succ to the mobile user in the case of authentication failure or success, respectively.

The Mobile User Processes the Response Message Received from the Gateway

After the mobile user receives the authentication error message M_err from the gateway, it performs Algorithm 1 again for re-authentication. In the case of receiving M_succ, the mobile device knows that the session key is secure; the key can then be used as the initialization key for the following authentication session. Algorithm 4 describes how the mobile user processes the response message received from the gateway.

Security Analysis

In this section, we evaluate the security strength of the proposed authentication scheme through the following analyses:
• Security protection against widespread security attacks.
• Security evaluation using the BAN logic model.
• Security evaluation using the simulation tool AVISPA.

Security Protection Against Security Attacks

The proposed authentication scheme can protect the communications between mobile users and the gateway against the common security attacks below.

a. Security attacks on mobile users
• Impersonation attack: Impersonation attacks aim to fake device parameters. However, such an attack is not possible in this scheme because the session key K_si changes dynamically in every session; the probability of finding the key K_si in the matrix A_k is 1/2^48.
• User-credentials attack: Because the proposed scheme provides two-factor authentication simultaneously, it reduces the possibility of attacks on user images. Furthermore, the user's identity is not revealed because the user's image is encrypted during communication from the mobile device to the gateway.
• Attack on session keys: In this scheme, the session key is secretly protected by the matrix A_p and hidden in the captured user image. Furthermore, the session key is integrity-protected by the one-way hash function and the session time limit, so the secrecy of the session key K_si is guaranteed.
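Before turning to the communication-link attacks, the gateway-side checks that defeat the attacks above (timeout, hash integrity, key match, face match, in the order of Algorithm 3) can be summarized in a short, hypothetical sketch; the helper names are placeholders, not the authors' code:

```python
# Hedged sketch of the Algorithm 3 decision chain. extract_key and decode are
# toy placeholders; recognise_face stands in for the SVM face matcher.
import hashlib
import time

def extract_key(a_k: bytes) -> bytes:
    """Placeholder: pull the embedded 48-bit K_si (6 bytes) out of A_k."""
    return a_k[:6]

def decode(k_si: bytes, enc_img: bytes) -> bytes:
    """Placeholder symmetric decryption of the watermarked image."""
    return bytes(b ^ k_si[i % len(k_si)] for i, b in enumerate(enc_img))

def verify(m_s, k_si_gateway: bytes, t_0: float, recognise_face) -> str:
    a_k, enc_img, h_k, t_i = m_s
    if time.time() - t_i > t_0:                       # expired: replay / delay
        return "M_err"
    if hashlib.sha256(a_k + str(t_i).encode()).hexdigest() != h_k:
        return "M_err"                                # A_k was tampered with
    if extract_key(a_k) != k_si_gateway:              # blind guess: p = 2**-48
        return "M_err"
    user_img = decode(k_si_gateway, enc_img)          # recover captured image
    return "M_succ" if recognise_face(user_img) else "M_err"
```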
b. Security attacks on communication links
• Replay attack: This attack replays spoofed packets transmitted from the mobile device to the gateway. However, the entire outgoing message is encrypted, and each packet has a unique identifier derived from the hash value of the key matrix A_k and the sending time T_i, so replay attacks are defeated.
• Eavesdropping attack: Eavesdropping attacks illegally collect information from packets transmitted by mobile devices over the air. The proposed scheme changes the session key in every session, chained to its previous safe state; hence, the secrecy of the session key K_si is ensured, and the authentication information is securely encrypted against eavesdropping attacks.
• Man-in-the-middle attack: The proposed scheme is protected by the device identifier, the session key, and the hash, so this attack is defeated.

BAN Logic Analysis

BAN logic was developed by Burrows, Abadi, and Needham [5] and includes a set of rules to design, develop, and validate security schemes. We have applied BAN logic to test the correctness of the security protocol and to determine the trustworthiness of the agreement among the participants in the proposed authentication scheme. The following notations are used in BAN logic:
• A |≡ X: A believes the statement X.
• A ◁ X: A sees X, i.e., A has received a message containing X.
• A |∼ X: A once said X, i.e., A |≡ X held when A sent it.
• A |⇒ X: A has authority or jurisdiction over X.
• #(X): X is a fresh message.
• A ↔K B: K is the shared secret key between A and B.
• {X}_K: X is encrypted with the key K.
• <X>_Y: formula X is combined with formula Y.
• (X)_K: X is hashed with the key K.
• (X, Y): X or Y is one part of the formula (X, Y).

The logical postulates of BAN logic are described using the rules below (a compact rendering follows this list):
• Rule 1 (Message Meaning Rule, MMR): P believes Q once said X if P sees a message X encrypted with K and P believes K is a shared secret between P and Q. Rule 1 is satisfied by the proposed scheme because the key K_si is secretly shared between the mobile device and the gateway via the A_p matrix and the bit-shifting method. When the gateway receives the encrypted M_s, it believes that K_si is a good and secret key associated with the identity generated from the mobile device's MAC ID. Here, P is the gateway and Q is the mobile device.
• Rule 2 (Nonce Verification Rule, NVR): P believes Q believes X if P believes Q once said X and P believes X is fresh. Rule 2 is satisfied because the current belief in the i-th session key is confirmed by the (i−1)-th session having been successfully authenticated.
• Rule 3 (Jurisdiction Rule, JR): P believes X if P believes that Q believes X and P believes Q has jurisdiction over X: from P |≡ Q |≡ X and P |≡ Q |⇒ X, infer P |≡ X. Assuming the index I_o is established in the initial secure phase, the gateway can fully infer belief in the session key generated by the mobile device.
• Rule 4 (Freshness Rule, FR): The entire formula is believed to be fresh if a part of it is believed to be fresh: from P |≡ #(X), infer P |≡ #(X, Y). Likewise, P believes the combined formula (X, Y) if P believes X and P also believes Y: from P |≡ X and P |≡ Y, infer P |≡ (X, Y). Authentication sessions are sequenced with respect to each other in order, and the successful response message is the basis for generating the next session key, so Rule 4 of BAN is satisfied in the proposed scheme.
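For reference, the four postulates can be written compactly in standard BAN notation; these are textbook forms, not formulas reproduced from the paper:

```latex
% Standard BAN-logic postulates (textbook forms)
\[
\text{MMR: } \frac{P \mid\equiv P \stackrel{K}{\leftrightarrow} Q
                   \quad P \triangleleft \{X\}_K}
                  {P \mid\equiv Q \mid\sim X}
\qquad
\text{NVR: } \frac{P \mid\equiv \#(X) \quad P \mid\equiv Q \mid\sim X}
                  {P \mid\equiv Q \mid\equiv X}
\]
\[
\text{JR: } \frac{P \mid\equiv Q \mid\Rightarrow X \quad P \mid\equiv Q \mid\equiv X}
                 {P \mid\equiv X}
\qquad
\text{FR: } \frac{P \mid\equiv \#(X)}{P \mid\equiv \#(X, Y)}
\]
```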
Security Assessment Using the AVISPA Tool

AVISPA is a powerful tool for the Automated Validation of Internet Security Protocols and Applications [16]. The tool uses modules and an expressive formal language to specify protocols and security properties with state-of-the-art automatic analysis techniques. AVISPA comprises state-of-the-art back-ends such as CL-AtSe and OFMC, which perform various automatic analyses to detect vulnerabilities in a security scheme. It uses the formal High-Level Protocol Specification Language (HLPSL) to encode a specified security algorithm, and a translator known as HLPSL2IF converts the HLPSL code into the Intermediate Format (IF) before producing the results. We used HLPSL to test Algorithms 1 and 2 and obtained the results shown in Figure 3. Figure 3 shows the output format generated by AVISPA's OFMC and CL-AtSe back-ends. SUMMARY generally shows whether the tested security scheme is safe or unsafe; in our case, it reports a safe condition.

Authentication Testbed

Based on the proposed algorithms, we implemented the testbed model shown in Figure 4 for a study case of the smart door-lock application. The mobile devices in the testbed were a Samsung A50, a Samsung Note10, and an Oppo Reno, all running Android 9.0. The gateway was implemented on a Raspberry Pi 3 with OS version 4.19, and we used the Mosquitto library to install the MQTT broker service. The authentication software performs the session-key generation on the user-device side, captures the user image via the camera, encrypts the image, and sends the authentication message M_s to the gateway. On the gateway side, the gateway recognizes the user's face by applying the SVM algorithm to the image database at ageitgey/face-recognition [1] and uses the algorithm proposed above to authenticate the session key. We ran 100 trials to collect run-time values of the session-key generation, facial-image encryption, and facial-image decryption processes on the tested phones. Figure 5 shows that the highest session-key generation time is 1.4 seconds on the Reno phone, and the lowest is 0.6 seconds on the Note10. Table 2 summarizes the features of our proposed scheme compared with the multi-factor authentication schemes of other authors. Besides the apparent advantages of friendliness, ease of use, and lightness, the scheme provides a higher level of security thanks to random session-key generation. On the gateway side, the run time on the Raspberry Pi 3 for decoding the encrypted message M_s is approximately 1.1 seconds, as shown in Figure 6. These results show that the average time of the whole authentication process is approximately 3.0 seconds, which proves that the proposed authentication scheme can effectively provide smart door-lock services for smart homes in real applications.
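As an illustration of how the testbed's mobile side might hand M_s to the Mosquitto broker, here is a hypothetical client sketch (paho-mqtt 1.x-style API; the broker address and topic name are made up, and the paper does not publish its transport code):

```python
# Hypothetical publisher for the authentication message M_s over MQTT.
import paho.mqtt.client as mqtt

BROKER = "192.168.1.10"            # illustrative address of the Raspberry Pi gateway
TOPIC_AUTH = "smarthome/auth/ms"   # illustrative topic

def send_ms(payload: bytes) -> None:
    client = mqtt.Client()                       # paho-mqtt 1.x-style constructor
    client.connect(BROKER, 1883, keepalive=60)
    client.publish(TOPIC_AUTH, payload, qos=1)   # QoS 1: at-least-once delivery
    client.disconnect()
```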
Conclusion

This paper proposes a two-factor authentication scheme for communication and smart control systems. The proposed scheme provides a user-friendly, secure, lightweight approach through face recognition, dynamic session-key generation, and watermarking techniques. Furthermore, the random dynamic key embedded in the captured user image reduces the key-distribution procedure and ensures the privacy of the user image. Our proposed scheme has been analyzed to illustrate its security strength under standard attacks and to ensure that the BAN logic rules are met. Furthermore, the solution is also verified through the AVISPA tool to prove the proposal's safety. Besides, to determine the solution's effectiveness in an actual application setting, the proposed scheme was deployed for a smart door-lock application, a typical application in smart-home systems. The experimental results show that the authentication execution time is acceptable for the application, which does not strictly require real time. In our near-future work, several advanced AI-based recognition solutions will be integrated into the scheme to reduce the authentication time and extend it to new real-time scenarios.

Figure 1: The model of the two-factor authentication system.

Algorithm 4: The mobile user processes the response message received from the gateway.
Input: the received message (M_err or M_succ)
Output: the authentication is confirmed
Start
1: Receive the returned acknowledgment message
2: if Acknowledgement = M_err then
3: Perform Algorithm 1 again for re-authentication
4: else
5: M_succ = decode(K_si, M_succ(s)); go to step 7
6: end if
7: Confirm K_si is safe
End

Figure 5: Run times of session-key generation on the tested phones.

Figure 6: Run times of facial-image encryption processes.
Table 1: List of notations (e.g., I_o — the round number, indexing the I_o-th row of the A_p matrix).

Algorithm 1: The mobile user generates the session key and sends the encrypted message M_s to the gateway.
Input: the key K_0, the matrix A_p, the random number i_0, and the user's face image I[512 × 512]
In the 1st session: K_0[1 × 48] = MAC-ID; i_0 = random(N(u)) mod 48
In the i-th session: K_0[1 × 48] = K_s(i−1)[1 × 48]; i_0 = (i_0 + 1) mod 48
Output: the encrypted message M_s = A_k[48 × 48] * I′[512 × 512] ∥ H_k ∥ T_i
Start
1: Left-shift i_0 bits to create the temporal key K_temp
2: Create the session key K_si from K_temp and the matrix A_p: the temporal key is XORed with the row whose index corresponds to I_o of the A_p matrix to generate a random session key K_si
3: Embed the session key K_si into the preloaded random matrix A_p to create the key matrix A_k[48 × 48]
4: Use K_si to encode the matrix I[512 × 512] into the matrix I′[512 × 512]: I′ ← encode(K_si, I)
6: Create the hash value of the embedded key matrix A_k to guarantee the integrity of A_k and the session key K_si: H_k ← h(A_k[48 × 48], T_i)
7: Create the encrypted message M_s including A_k and H_k (from steps 4 and 6): M_s ← A_k[48 × 48] ∥ I′[512 × 512] ∥ H_k ∥ T_i
8: Transmit M_s to the gateway and wait a timeout T_timeout for the acknowledgment: T_t ← T_timeout; T_j − T_i ≤ T_t
9: if Acknowledgement = M_err then perform Algorithm 1 again, else go to the next step
13: end if
14: Store K_si for the next authentication session
End

Algorithm 3: The gateway performs the verification of the authentication parameters.
Input: the encrypted message M_s
Output: verification of the authentication scheme
Start
1: Receive the message M_s: M_s = A_k[48 × 48] ∥ I′[512 × 512] ∥ H_k ∥ T_i
2: Detach the components of M_s into A_k[48 × 48], I′[512 × 512], H_k, and T_i
3: Verify the timeout value against the received time T_j
4: if T_j − T_i > T_0 then send the error message M_err, else check the hash H_k
10: if H_k ≠ H_k(g) then send the error message M_err to the mobile user
12: else authenticate the session key K_si
16: if K_si ≠ K_si(g) then send the error message M_err to the mobile user, else use the session key to recover the captured user image: I[512 × 512] = decode(K_si, I′[512 × 512])
22: Use the SVM algorithm to authenticate the user's face
23: if the user's face is not recognized then send the error message M_err to the mobile user
25: else send the successful message M_succ to the mobile user
End

Table 2: The comparison features.
2023-12-21T16:12:08.197Z
2023-12-20T00:00:00.000
{ "year": 2023, "sha1": "7bdc7117bafffef15aa8149039349a84306f072d", "oa_license": "CCBYNCSA", "oa_url": "https://mendel-journal.org/index.php/mendel/article/download/250/212", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "5cd450160a8ecbe934167f352904fd7d9b8d22c1", "s2fieldsofstudy": [ "Computer Science", "Engineering" ], "extfieldsofstudy": [] }
264069601
pes2o/s2orc
v3-fos-license
Roles of negative pressure wound therapy for scar revision

The purpose of this study is to review the research progress of negative pressure wound therapy (NPWT) for scar revision and to discuss the prospects of its further study and application. The domestic and foreign literature on NPWT for scar revision was reviewed, and the mechanisms and applications were summarized. NPWT improves microcirculation and lymphatic flow and stimulates the growth of granulation tissue, in addition to draining secretions and necrotic tissue. As a significant clinical therapy in scar revision, NPWT reduces tension, fixes grafts, and improves the wound bed. In the field of scar revision, NPWT has been increasingly used as an innovative and constantly improving technology.

Introduction

Scar hyperplasia and contracture deformity bring serious harm to patients' physical and mental health. They damage appearance and impair function, along with discomfort and unpleasant feelings such as itchiness, redness, pain, depression, and fear. Patients who have chronic ulcers and refractory wounds in scar tissue are more likely to develop scar cancer.

Currently, scar-revision methods include scar excision and skin grafting, composite skin grafting over a human acellular dermal matrix scaffold, and expanded flaps. With the development of negative pressure wound therapy (NPWT), the effect of scar revision has been significantly improved. The use of NPWT in burn surgery has attracted much attention (Chai and Shen, 2015), and its concept of promoting wound repair has taken root. However, little attention has been paid to its function in scar revision.

In recent years, NPWT has been recommended as a significant treatment for scar revision (Cai et al., 2017). The technique is simple and has a good prognosis. Since published studies on scar revision rarely present the effect of NPWT alone, this study also refers to wounds similar to those of scar revision.

NPWT materials

In NPWT, the materials used include a foam sponge, a drainage tube, and a semipermeable membrane. Two main types of foam are available: polyurethane foam and a denser polyvinyl alcohol foam. Suction under negative pressure creates a clean, one-way sealed environment around the wound. The characteristics of foam sponges have been widely reported (Agarwal et al., 2019). The often-neglected semipermeable membrane is also related to effectiveness and comfort (Dooley et al., 2012). It is a one-way, breathable, transparent film whose main component is polyurethane. Oxygen can enter through the dressing, and water vapor and carbon dioxide leave the wound; consequently, little infiltration occurs around the wound.

Several technical advances in NPWT devices and dressings have been made. The feasibility of home NPWT and single-use NPWT has been verified (Mushin et al., 2017; Lim et al., 2021; Wilkinson et al., 2021). Other dressings may be used in combination with NPWT. Gauze is placed over the wound to prevent granulation from growing into the negative-pressure material. Silver-ion dressings may increase the antibacterial effect of NPWT. In addition, a foam dressing can be fixed around the negative-pressure material to avoid tension blisters. Notably, some devices have only been validated in vitro.
Drain secretions and remove necrotic tissue

NPWT has been recognized for its remarkable effect on enhancing wound drainage and removing bacterial products (Agarwal et al., 2019). It is carried out in a closed environment to prevent cross-infection (Huang et al., 2021). In addition, a vascularized bed with a low degree of bacterial colonization increases the likelihood that a skin graft will succeed (Kantak et al., 2016). The NPWT group had considerably higher CD34 and CD68 levels than the traditional group (Yang et al., 2021). NPWT could thus play an important role in the inflammatory response of wound healing by effectively draining secretions and removing necrotic tissue.

Promote circulation and lymphatic return, reduce hematoma and edema

The mechanical traction of negative pressure increases both the pressure difference between the inside and outside of the capillaries and the endothelial intercellular spaces of the lymphatic capillaries. As a result, blood supply and lymphatic return flow increase, and tissue edema is reduced (Cagney et al., 2020).

Vascular endothelial growth factor and angiopoietin-2 (Ang-2) levels in the wound are much higher under NPWT, which may be caused by the closed and hypoxic environment (Ma et al., 2016; Yang et al., 2021; Zhu et al., 2021). Negative pressure provides effective and continuous power to the local circulation of the wound, reducing seroma and hematoma compared with traditional dressings (Nagata et al., 2018; Mangelsdorff et al., 2019; Zwanenburg et al., 2020; Bueno-Lledo et al., 2021).

Stimulate granulation tissue growth and improve vascular bed conditions

Studies found that graft loss was associated with improper placement of skin grafts on an ill-prepared wound bed (Hsiao et al., 2017). NPWT improves wound healing through local immune modulation, hypoxia-mediated signaling, and mechanoreceptors (Glass et al., 2014). Higher levels of cellular fibronectin (cFN) and transforming growth factor-β1 (TGF-β1) were expressed in the NPWT group compared with the traditional group, which stimulated granulation tissue formation (Yang et al., 2017).

Continuous negative pressure reduces the time needed to heal before skin grafting by drawing interstitial fluid from the wound, promoting the growth of capillaries and granulation tissue, and improving blood circulation (Sun et al., 2021).

Clinical application of NPWT in scar revision

NPWT maintains negative pressure in the wound by attaching suction devices to specialized wound dressings. The use of NPWT in burn surgery has received a lot of interest, and its mechanism and efficacy in preventing infection, increasing blood supply, and reducing edema have all been demonstrated (Chai and Shen, 2015).

Reduce incision tension

Surgical site infections (SSIs) are usually accompanied by dehiscence of surgical wounds (Strickler et al., 2021). The negative-pressure material and the semipermeable membrane can transfer the incision tension to the surrounding skin. With less lateral force, the incision better resists the mechanical stresses that delay closure and predispose wounds to dehiscence and infection. Smaller sponges, such as 3-cm-wide ones, should be considered to reduce incision tension and dehiscence (Googe et al., 2020).
Reconstruction of contracture scars often requires scar modification and flap transplantation. The tension of the incision increases after suturing the flap, which might cause marginal skin necrosis. NPWT reduces incision tension and improves blood flow in the flap, especially at the edge and tip of the wound. At the same time, the two sides of the incision are matched neatly, improving the healing quality. In 219 incision cases, Chai and Shen (2015) used NPWT to reduce tension following scar resection, with no complications. Cai et al. (2017) combined NPWT and scar excision to treat 25 burned children with hypertrophic scars; all incisions healed well without redness, effusion, or rupture.

According to a multinational, observer-blinded randomized controlled trial (RCT) involving 507 patients from 31 centers, NPWT effectively treats subcutaneous wound-healing impairment following surgery (Seidel et al., 2020). The majority of wounds in the NPWT group were sutured, whereas in the traditional group there was a higher rate of wounds healed by secondary intention.

Prevent infection

SSIs are among the most common postoperative complications (Bhangu et al., 2018). The use of NPWT on surgical incisions resulted in a significantly lower SSI risk at 30 days and 3 months postoperatively and reduced hospitalization costs (O'Leary et al., 2017; Javed et al., 2019; Hasselmann et al., 2020; Bueno-Lledo et al., 2021). The above wounds are similar to wounds after scar resection, with high incision tension, characterized by easy rupture and infection. The SSI-prevention effect was more pronounced in the NPWT group when the incidence of SSIs was ≥20% in the traditional group (Meyer et al., 2021).

Notably, not all surgical incisions benefit from NPWT (Gabriel et al., 2019). Tuuli et al. (2020) observed no significant difference in SSI risk reduction between the NPWT group and the conventional dressing group in an RCT of 1,608 obese women undergoing cesarean delivery. A meta-analysis involving 792 patients from five RCTs also reported conflicting results, concluding that the current evidence does not support the efficacy of routine NPWT to prevent SSIs (Kuper et al., 2020).

Many factors, such as procedure type, wound classification, negative-pressure device, and parameters, contribute to the above differences. NPWT for different wounds should be adopted cautiously, and the available incision-management plans must assess and address each case. Furthermore, several RCTs are in progress, and we await their results.

Fix grafts and promote survival

Skin grafts are widely used to repair large skin defects following scar release and resection. However, the traditional way of dressing the wound has problems, namely uneven pressure, improper tension, and insufficient drainage, especially on uneven or mobile surfaces such as the neck and joints. NPWT is an effective way to fix skin grafts following scar resection, providing proper pressure, stable tension, and adequate drainage of the graft area. It has also demonstrated significant advantages in reducing wound infection, healing time, and hospital stay (Li et al., 2017).
NPWT produces promising results (Nakamura et al., 2018; Nakamura et al., 2021; Pedrazzi et al., 2021). Improved blood circulation promotes the survival of the skin graft. Furthermore, NPWT can treat potentially infected wounds and reduce the duration of antibiotic therapy. Thus, NPWT significantly increased graft survival and reduced the incidence of reoperation due to skin-graft failure (Yin et al., 2018; Sun et al., 2021). It has been successfully applied to keep grafts immobilized, especially on exudative, irregular, and muscle-exposed wounds and at special anatomical sites, with no serious wound-related adverse effects observed. In some cases, such as muscle-exposed wounds, it is highly recommended that gauze isolate the graft from the foam sponge (Nakamura et al., 2021); this prevents difficulty in detaching the NPWT sponge when it is removed.

NPWT reduces surgery time by saving fixation steps after skin transplantation. Furthermore, NPWT can be applied to secure grafts without any sutures or staples (Inatomi et al., 2019), so the pain, staple retention, and complications associated with these procedures are avoided.

Simao (2020) developed a simple dressing for applying negative pressure after skin grafting. It is made of three layers: petrolatum gauze soaked in ointment, a gauze pad, and a waterproof transparent film. All the air is aspirated using a 20 cc syringe after fixing the dressing on the graft. In this way, effective fixation and pressure can last up to 5-7 days. However, the lack of drainage, display, and adjustability of negative-pressure parameters is a downside.

A U-shaped foam fashioned by researchers was applied over the suture line, with its opening at the root of the vessel after flap reconstruction. Consequently, the vascular pedicle could be kept from compression, and the condition and temperature of the flap could be monitored (Chen et al., 2021). This innovative modification eliminates the concern that NPWT affects blood flow in the vascular pedicle.

Preparation before skin transplantation to improve the success rate of surgery

The wound surface following scar release or resection is often uneven and needs to be covered with flaps or skin grafts. NPWT before transplantation can improve the survival rate, especially when the condition of the wound bed is not ideal. At the same time, compression by the negative-pressure dressing may also tighten the wound edge and reduce the extent of the wound, ultimately reducing the area of skin grafting (Huang et al., 2021).

There is little evidence on NPWT for repairing wound beds in scar-reconstructive wounds; however, other wounds of similar types have been reported. In the treatment of necrotizing fasciitis and chronic venous leg ulcers (CVLUs), NPWT can be used for wound-bed preparation (Ren et al., 2020; Zhang B. R. et al., 2021). Common complications were effectively reduced by applying NPWT before and after skin grafting in electrical burns and diabetic foot wounds (Smuđ-Orehovec et al., 2018; Gomez-Ortega et al., 2021).

Patients using NPWT might experience fewer SSIs during primary closure of surgical wounds (Norman et al., 2020). By using an emergency delay method and NPWT, Ishii et al. (2020) were able to successfully salvage a severely congested propeller perforator flap. Interestingly, the flap had been transferred back to the donor site for some time and was then retransferred to the defect on day 19, after the wound bed had been prepared using NPWT. After flap necrosis in the primary operation, Gigliotti et al.
(2021) prepared the wound bed for the second operation with debridement, antibiotics, and NPWT. The retransplanted flaps described above showed 100% viability.

Reduce dressing changes and pain

The wound is kept relatively clean and moist under NPWT. Dressing replacement is reduced, which decreases the pain experienced by patients and the workload of medical staff. The economic burden and the length of hospital stay are also cut down (Hsiao et al., 2017; Yin et al., 2018). Children, in particular, have a low pathophysiological pain tolerance; NPWT reduces pain, which helps children comply more readily (Huang et al., 2021). NPWT is a reliable, simple procedure with excellent clinical utility and feasibility.

NPWT significantly reduces donor-site pain (Kantak et al., 2017). This may be related to the good fixation of the negative-pressure dressing and the reduced shear force compared with traditional dressings. In the meantime, NPWT promotes re-epithelialization, accelerates healing, and reduces scar formation. In addition, the moist wound environment is the first choice for healing at the donor site. NPWT was found to significantly lower the occurrence of complications at flap donor sites (Mangelsdorff et al., 2019).

Reduce secondary scars

NPWT can promote wound healing after scar resection and reduce secondary scars. This advantage distinguishes it from traditional dressings. Preclinical studies have shown that NPWT increases wound strength and reduces scar width (Zwanenburg et al., 2021). After scar removal and NPWT application, the appearance, function, and comfort of the children all clearly improved (Cai et al., 2017). Furthermore, the scar area was significantly reduced, by 36% to 100%, 6 months after surgery. NPWT uses a simple and effective device that improves the appearance and histochemical properties of incision scars. Its effective fixation and compression can reduce collagen deposition and scar formation (Nagata et al., 2018).

NPWT improved the smoothness of the scar formed after skin grafting and the satisfaction of patients and researchers with the scar (Mo et al., 2021; Zwanenburg et al., 2021). Unlike conventional fixation techniques, NPWT applies negative pressure between the graft and the recipient bed, removing dead space and attracting the entire graft with uniform pressure. The likely reason is that NPWT provides more uniform pressure and prevents shear force, resulting in a uniform thickness of scar tissue, so the surface of the scar is more regular and flatter.

Complications

NPWT may be more likely to cause skin blisters than standard dressings (Kuo et al., 2021; Norman et al., 2022). These recover on their own in approximately one week. Appropriately changing the shape of the NPWT dressing can reduce the formation of tension blisters at the edge of the dressing (Zhang C. et al., 2021), because this avoids gaps between the dressing and the skin when the semipermeable membrane is attached.

Importantly, inappropriate use of NPWT might result in severe complications such as skin necrosis, bleeding, and allergic reactions (Agarwal et al., 2019; Ji et al., 2021). Medical staff should observe the effect of NPWT; once these phenomena occur, NPWT must be stopped.

Parameters of NPWT in scar revision

The optimal negative pressure in NPWT creates a favorable environment for wound healing (Horch et al., 2020). It is generally believed that 125 mmHg provides the most conducive environment for granulation tissue growth and blood supply. Zhu et al.
(2021) also showed that an environment with a pressure of 125 mmHg in NPWT could accelerate bone regeneration. A single pressure setting throughout may not be the best choice for all wounds.

In recent years, the setting of low negative pressure has attracted much attention. A systematic review suggested that high negative pressure may make NPWT ineffective for graft survival (Shimada et al., 2022). A lower negative pressure, such as 75 mmHg, is ideal for initial engraftment because it promotes strong adherence between the skin graft and the wound bed (Maruccia et al., 2017).

Other factors that are easily ignored when setting the negative-pressure parameters include age and constitution. Adult devices and NPWT parameters have been adapted for pediatric use (de Jesus et al., 2018). Extra care is needed to protect the delicate tissues of pediatric or weak patients; the negative pressure should be reduced appropriately, to no more than 75 mmHg.

Compared with the continuous mode, the intermittent mode significantly promotes wound healing, but it also increases the pain experienced. The cyclic mode varies the pressure continuously within a certain range of negative pressure; its curative effect is comparable to that of the intermittent mode, but the pain is significantly reduced.

It should be noted that NPWT dressings are challenging to apply to areas without sufficient healthy skin (Jiang et al., 2021). For irregular wounds, it is difficult to maintain the appropriate negative pressure. In addition, NPWT may leak air due to patients' movement and perspiration, affecting the treatment effect.

New monitoring equipment has mainly been studied in vitro, demonstrating its application potential. A noninvasive system was designed for adjusting the NPWT parameters (Wilkinson et al., 2021). Bioreactors, which evaluate the effect of NPWT on skin anatomy and physiology, also help in parameter adjustment (Notorgiacomo et al., 2022).

Summary and prospects

There are few articles summarizing the research on NPWT for scar revision. Although separate literature is scarce, NPWT is often used as an important supplementary method in traditional scar-revision research. The role of NPWT in other similar wounds may also be beneficial for patients undergoing scar revision.

There was no significant increase in wound-related adverse events with NPWT compared with conventional care, and complications can be prevented by appropriate measures. In recent years, the cost of NPWT has been reduced, which relieves the economic burden on patients, and it is worthy of clinical promotion. In addition, more studies are needed to elucidate the mechanism of NPWT in scar revision.

Future research should examine fixation time and observation time to find better parameter options and thus provide a basis for guidelines.

Publisher's note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
2023-10-14T15:18:14.701Z
2023-10-12T00:00:00.000
{ "year": 2023, "sha1": "58f654af5eadd0b5462fabd6d912b1bfea24b347", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fphys.2023.1194051/pdf?isPublishedV2=False", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "573a764ee6c2363d9fdec4886fd15ce51591ac4a", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
12989261
pes2o/s2orc
v3-fos-license
Antiseptic properties of two calix[4]arene derivatives on the human coronavirus 229E

Facing the lack of specific antiviral treatments, it is necessary to develop new means of prevention. In the case of the Coronaviridae, this family is now recognized as including potent human pathogens causing upper and lower respiratory tract infections as well as nosocomial ones. With the purpose of developing new antiseptic molecules, the antiseptic virucidal activity of two calix[4]arene derivatives, tetra-para-sulfonato-calix[4]arene (C[4]S) and 1,3-bis(bithiazolyl)-tetra-para-sulfonato-calix[4]arene (C[4]S-BTZ), was evaluated against the human coronavirus 229E (HCoV 229E). Comparing these results with some obtained previously with chlorhexidine and hexamidine: (i) the two calixarenes did not show any cytotoxicity, contrary to chlorhexidine and hexamidine; (ii) C[4]S showed, as did hexamidine, very weak activity against HCoV 229E; and (iii) C[4]S-BTZ showed stronger activity than chlorhexidine, i.e., 2.7 and 1.4 log10 reduction in viral titer after 5 min of contact with 10⁻³ mol L⁻¹ solutions of C[4]S-BTZ and chlorhexidine, respectively. Thus, C[4]S-BTZ appears to be a promising virucidal (antiseptic) molecule.

The lack of specific antiviral treatments persists, considering the large variety of viruses already circulating in the human population and potential emerging ones. The Coronaviridae family illustrates this problem: no specific treatment is available to fight coronavirus infections, while they are known to be responsible for upper and lower respiratory tract infections as well as nosocomial ones. Thus, efficient means of prevention, such as an adapted antisepsis-disinfection (ATS-D), should be developed to prevent the environmental spread of such infections.

Human coronaviruses (HCoV) were historically known to be responsible for about 20% of common colds and other upper respiratory tract infections (Larson et al., 1980). Until the SARS outbreak of 2002-2003, only two HCoV were known, the 229E strain and the OC43 strain. This serious outbreak, due to a newly discovered HCoV, the SARS-CoV (Ksiazek et al., 2003; Peiris et al., 2003), reinforced interest in the Coronaviridae family. Indeed, coronaviruses have since been involved in more serious respiratory diseases, i.e.,
bronchitis, bronchiolitis, or pneumonia, especially in young children and neonates (Gagneur et al., 2002; Gerna et al., 2006), elderly people (Falsey et al., 2002), and immunosuppressed patients (Gerna et al., 2007; Pene et al., 2003). Furthermore, they have been shown to survive for at least several hours under different environmental conditions (Ijaz et al., 1985; Lai et al., 2005; Rabenau et al., 2005a; Sizun et al., 2000). Finally, their adaptive properties and their ability to cross the species barrier imply a significant possibility of the emergence of new coronaviruses (Laude et al., 1998; Li et al., 2005; Vijgen et al., 2005). Thus, these specificities (i.e., pathogenicity, potential environmental resistance, and evolutionary ability) make the Coronaviridae family a pertinent model for studying ATS-D activity.

New antiviral molecules are urgently needed. To this end, some macrocyclic compounds belonging to the calixarene family (de Fátima et al., 2009; Rodik et al., 2009) have already been shown to be of interest as anti-HIV and anti-HSV agents (Coveney and Costello, 2005; Harris, 1995, 2002; Hwang et al., 1994; Kral et al., 2005; Motornaya et al., 2006). In this field, our team has described the antiviral properties of various derivatives, such as the 1,3-bis(bithiazolyl)-tetra-para-sulfonato-calix[4]arene (C[4]S-BTZ).

To evaluate these properties, a protocol described elsewhere (Geller et al., 2009) was implemented. It responds to the general requirements of the only existing European standard (NF EN 14476 + A1) for evaluating ATS-D antiviral activity in human medicine (AFNOR, 2007). According to this standard, a product should induce a 4 log10 reduction in viral titer to qualify as having ATS-D antiviral activity. For comparison, American standards recommend a 3 log10 reduction as the efficiency criterion (ASTM, 1996, 1997).

The general principle of our protocol is: (i) to incubate viruses with the test product, at room temperature, for a defined contact time; (ii) to neutralize the product activity; and (iii) to estimate the loss in viral titer. The neutralization process allows: (i) stopping the potential antiviral activity of the product; (ii) removing its eventual cytotoxicity; and (iii) preventing interference, due to the test itself, with viral infectivity. It was achieved with a gel-filtration method, using Sephadex™ G-25 columns, developed and validated previously (Geller et al., 2009). These assays required appropriate controls, especially to check the non-retention of viruses by the Sephadex™ columns, the absence of interference with viral infectivity, the efficiency of neutralization, and the absence of cytotoxicity (Supp. data 1). As recommended by the European standard NF EN 14476 + A1, controls are validated if the difference between viral titers, with and without treatment, is less than 0.5 log10 (AFNOR, 2007).

The molecular masses of C[4]S and C[4]S-BTZ are 1069.80 g mol⁻¹ and 1365.22 g mol⁻¹, respectively; thus, they were liable to be retained by the Sephadex™ G-25 columns. To assess the non-cytotoxicity of the filtrates, cytotoxicity assays and spectrophotometric measurements were conducted. Two concentrations of both molecules were tested, i.e., 10⁻⁴ and 10⁻³ mol L⁻¹. MTT (methylthiazole tetrazolium) and NR (neutral red) assays were first performed to evaluate the cytotoxicity of C[4]S and C[4]S-BTZ on L-132 cells.
For both molecules, the IC50 (50% inhibitory concentration) and CC50 (50% cytotoxic concentration) were higher than 10⁻⁴ mol L⁻¹, even after 168 h, the time required for the HCoV 229E cytopathogenic effect to appear. The same assays were then performed with the filtrates obtained after filtration of both molecules on Sephadex™ G-25 columns, and cytotoxicity was also higher than 10⁻⁴ mol L⁻¹ even after 168 h of incubation. Spectrophotometric analyses, coupled with regression analyses, allowed the specific parameters of each molecule to be determined (Supp. data 2). Retention rates by the Sephadex™ G-25 columns were then estimated after evaluating the residual concentration in the filtrates. The retention rates of C[4]S solutions were 98.9% and 88.5% for solutions at 10⁻³ mol L⁻¹ and 10⁻⁴ mol L⁻¹, respectively; the lower retention rate of the 10⁻⁴ mol L⁻¹ solution was due to calculation limitations, and the retention rate of the 10⁻³ mol L⁻¹ solution was considered significant. In the case of C[4]S-BTZ, the retention rates were 94.8% and 99.4% for solutions of 10⁻³ and 10⁻⁴ mol L⁻¹, respectively (Supp. data 2).

ATS-D antiviral assays were then conducted, each experiment being performed in triplicate. To validate these tests, the controls mentioned above were run at the same time, and the results for both molecules are shown in Table 1. C[4]S was tested at a concentration of 10⁻³ mol L⁻¹ and contact times of 30 min and 60 min. Because of its very weak activity against HCoV 229E, i.e., 0.5 and 0.6 log10 reduction for contact times of 30 min and 60 min, respectively, no further experiments were performed with C[4]S (Fig. 2). Hexamidine (HXM), in the manner of C[4]S, showed very weak activity against HCoV 229E, i.e., 0.6 and 0.9 log10 reduction after contact times of 30 min and 60 min, respectively. Chlorhexidine (CHX) showed better activity, since it induced 0.8, 0.5, 1.4, and 2.1 log10 reductions at 10⁻⁴ mol L⁻¹ for contact times of 5, 15, 30, and 60 min, respectively, and 1.4, 2.1, 2.4, and 3 log10 reductions at 10⁻³ mol L⁻¹ for the same contact times (Fig. 2).

When comparing the C[4]S-BTZ and CHX activities, the first important point is that, even though they showed a certain anti-HCoV 229E activity, they did not reach the threshold fixed by the European and American standards, except for CHX at 10⁻³ mol L⁻¹ and 60 min of contact time. However, this contact time cannot be considered really representative of ATS-D use in field conditions. Thus, a really attractive characteristic of C[4]S-BTZ was its fast action at 10⁻³ mol L⁻¹, effective as soon as 5 min, compared with CHX, which appeared concentration- and time-dependent; furthermore, this activity persisted up to 60 min of contact time.

Several items should nevertheless be taken into consideration when analyzing these results. First, the different molecules were tested alone, i.e., without any additive such as alcohol and without any interfering substances. In this way, their intrinsic anti-coronavirus activity could be estimated, but this is not really representative of field conditions, since viruses are normally found embedded in organic materials, shielding them from the action of ATS-D. These results are consistent with previous studies, which showed that CHX did not have ATS-D anti-coronavirus activity unless it was associated with cetrimide and 70% (v/v) ethanol (Sattar et al., 1989). It would be of interest to associate the fast and persistent action of C[4]S-BTZ with that of alcoholic solutions.
Indeed, even though ethanol shows good ATS-D activity, in particular against coronaviruses (Rabenau et al., 2005b; Sattar et al., 1989), its volatile nature makes its action transient, which could potentially be improved by the activity of C[4]S-BTZ. Furthermore, the absence of cytotoxicity makes C[4]S-BTZ even more promising, considering the toxicity risks involved with currently used ATS-D products (skin reactions, allergy, or occupational diseases).
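As a worked illustration of the two quantitative steps used above — the retention rate estimated from the residual filtrate concentration (Beer-Lambert law) and the log10 reduction in viral titer — here is a minimal sketch; the absorbance, molar absorptivity, and titer values are placeholders, not the measured parameters of C[4]S or C[4]S-BTZ:

```python
# Minimal sketch of retention-rate and log10-reduction calculations.
import math

def residual_concentration(absorbance: float, epsilon: float, path_cm: float = 1.0) -> float:
    """Beer-Lambert: c = A / (epsilon * l), in mol/L."""
    return absorbance / (epsilon * path_cm)

def retention_rate(c_initial: float, c_filtrate: float) -> float:
    """Percentage of product retained by the Sephadex column."""
    return 100.0 * (1.0 - c_filtrate / c_initial)

def log10_reduction(titer_control: float, titer_treated: float) -> float:
    """Reduction factor; NF EN 14476 + A1 requires >= 4 log10."""
    return math.log10(titer_control / titer_treated)

c0 = 1e-3                                            # mol/L before filtration
cf = residual_concentration(0.052, epsilon=1.0e3)    # hypothetical filtrate reading
print(f"retention = {retention_rate(c0, cf):.1f} %")                 # -> 94.8 %
print(f"reduction = {log10_reduction(10**6.2, 10**3.5):.1f} log10")  # -> 2.7 log10
```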
2018-04-03T00:36:20.379Z
2010-09-18T00:00:00.000
{ "year": 2010, "sha1": "c0218fae778dc065c6f0454314a7c4a42a921c8c", "oa_license": null, "oa_url": "https://doi.org/10.1016/j.antiviral.2010.09.009", "oa_status": "BRONZE", "pdf_src": "PubMedCentral", "pdf_hash": "c2c74aa952a76adf4c988e629ff4cd43842d1fb0", "s2fieldsofstudy": [ "Chemistry", "Medicine" ], "extfieldsofstudy": [ "Chemistry", "Medicine" ] }
253019138
pes2o/s2orc
v3-fos-license
Early Assessment of Atherosclerotic Lesions and Vulnerable Plaques in vivo by Targeting Apoptotic Macrophages with AV Nanobubbles

Background: The early detection of atherosclerotic lesions is particularly important for the risk prediction of acute cardiovascular events. Macrophage apoptosis is significantly associated with the degree of atherosclerosis (AS) lesions and contributes in particular to plaque vulnerability. In this research, we sought to explore the feasibility of home-made AV nanobubbles (NBAV) for the visualization of apoptotic macrophages and the assessment of AS lesions by contrast-enhanced ultrasound (CEUS) imaging.

Methods: NBAV were prepared by the "optimized thin-film hydration" and "biotin-avidin-biotin" methods. The characterization and echogenicity of NBAV were then measured and analyzed in vitro. The targeting ability of NBAV toward ox-LDL-induced apoptotic macrophages was observed by laser scanning confocal microscopy. ApoE−/− mice fed a high-fat diet were observed by high-frequency ultrasound, microanatomy, and oil red O staining. In vivo CEUS imaging of AS plaques was performed with NBAV and NBCtrl injected in turn through the tail vein of ApoE−/− mice. After CEUS imaging, the plaques were confirmed and analyzed by histopathological and immunological assessment.

Results: The prepared NBAV had a nanoscale size distribution with a low PDI and a negative zeta potential. Moreover, NBAV showed excellent stability and exhibited a significantly stronger echogenic signal than saline in vitro. In addition, we found that NBAV could target apoptotic macrophages induced by ox-LDL. Compared with NBCtrl, CEUS imaging with NBAV showed strong and sustained echo enhancement in the plaque area of the aortic arch in vivo. Further research showed that NBAV-sensitive plaques presented more significant pathological changes, with several vulnerable-plaque features and abundant TUNEL-positive area.

Conclusion: NBAV is a sensitive indicator for evaluating apoptotic macrophages, indicating a promising CEUS molecular probe for the identification of AS lesions and vulnerable plaques.

Introduction

Numerous studies have demonstrated that acute cardiovascular events are directly associated with the rupture of vulnerable atherosclerosis (AS) plaques [1-3]. Therefore, it is important to predict the severity of AS lesions, and particularly the vulnerability of plaques, at early stages, which may have a positive impact on the prevention of acute events. At present, the attention of the scientific community has focused on the development of new tools with higher sensitivity and specificity for this issue [3-6].

Apoptosis is a key process in the pathogenesis of AS diseases [7-9] and a decisive factor in the progression from stable plaque lesions to vulnerable plaque lesions [10]. Several cell types that co-inhabit the atheroma can undergo apoptosis, including macrophages, smooth muscle cells, and endothelial cells [11-13]. As the main innate immune cells, macrophages are the first inflammatory cells to invade AS lesions and can be recruited in large numbers during the development of atherosclerotic plaques [14-16]. Intraplaque macrophages undergo apoptosis during all stages of AS lesions [17]. Extensive macrophage apoptosis promotes thinning of the fibrous cap and the development of the necrotic core [18]. Therefore, we reasoned that noninvasive detection of apoptosis could be used to identify the severity of AS lesions and the instability or vulnerability of atherosclerotic plaques.
Currently, molecular imaging can show specific biological pathways or cellular processes for a better understanding of the molecular events responsible for plaque destabilization [19,20]. Ultrasound molecular imaging technology presents great potential for characterizing and imaging the occurrence and development of diseases at the molecular level [21-23]. In particular, the development of a wide range of imaging contrast agents functionalized with targeting ligands, such as antibodies, peptides, or aptamers [24,25], could be promising for probing the molecular biomarkers of atherosclerotic processes, promoting the translational potential of novel technologies. During apoptotic cell death, phosphatidylserine (PS) is exposed on the surface of the cell membrane [9]. Annexin V can selectively bind the externalized PS with high affinity (Kd in the range of 0.1-2 nM) [26]. AV has been reported as an imaging agent for vulnerable atherosclerotic plaques in MRI, SPECT, and other molecular imaging modes [27,28], but not in ultrasound imaging. In this study, we used AV nanobubbles (NBAV) for the assessment of apoptosis in atherosclerotic lesions and analyzed their ability to identify vulnerable plaques in an experimental mouse model.

Preparation of NBAV

Nanobubbles (NBs) were prepared with the optimized thin-film hydration method as previously reported [25]. To obtain bubbles of uniform size, the synthesized NBs were filtered through a fixed-aperture (nuclear pore membrane) filter. NBAV were then obtained by coupling Bio-AV molecules to the surface of the NBs with the "biotin-avidin-biotin" method. Briefly, 2.5 μL of Bio-AV (0.5 mg/mL) was first conjugated to 7.5 μL of streptavidin (0.5 mg/mL, Invitrogen) by incubation with slight oscillation at 4 °C for 20 min. Excess streptavidin allows the conjugated streptavidin-Bio-AV, with free binding sites, to bind the NBs. Subsequently, the mixture was incubated with 500 μL of diluted NBs (8.0 ± 0.8 × 10⁸ bubbles/mL) at room temperature for 30 min. Finally, the resulting solutions were placed on ice to induce stratification, and the targeted NBAV suspensions were acquired by isolating the upper-middle layer. Theoretically, there would be two hundred AV molecules on the surface of each NBAV. NBAV were sterilized by ⁶⁰Co irradiation for 15 min and then stored at 4 °C for the following experiments.

Characterization of NBAV

The morphology of NBAV was observed by scanning electron microscopy (SEM, S-4800, HITACHI, Japan), and the images were captured by software (HITACHI S-4800, PC-SEM). During NBAV preparation, the lipid dye DiI (5 μM) was dissolved in the lipid solution for better observation; all other procedures remained the same. The obtained NBAV were examined under a fluorescence microscope (Olympus CKX53, Japan), and images were captured (camera: Olympus CCD DP74). The characterization of NBAV, such as size distribution, zeta potential, polydispersity index, and stability, was measured as described below. Diluted NBAV (4.5 ± 1.0 × 10⁸ bubbles/mL) solutions were kept at 4 °C, and the size distribution was analyzed after 0, 12, 24, 36, 48, 60, and 72 h by a NanoPlus-3 zeta/nanoparticle analyzer (Micromeritics Instrument Corp., USA). Furthermore, 1 mL of NBAV stock solution was diluted with a 9 mL mixture of PBS (0.01 M) and fetal bovine serum (10%, FBS, HyClone, USA), and the size was then measured after 0, 20, 40, 60, 80, 100, and 120 min at 37 °C. The zeta potential of NBAV was also measured with this analyzer.
All the experiments were repeated three times.

Echogenicity of NB AV in vitro: To characterize the echogenicity of NB AV in tissue, a custom-designed cylindrical tank made of agar-based material, which produces a reference echo signal mimicking tissue, was used. This tank (3 cm in diameter, 3 cm in height, 0.5 cm in thickness) was fabricated from an agarose gel (1% agarose, 99% H 2 O). 2 mL of NB AV suspension was injected into the tank and then measured using a VisualSonics Vevo 2100 Ultrasound System (FUJIFILM, Toronto, Canada). An MS250 linear array ultrasound probe was placed perpendicular to the gel tank (mechanical index, MI <0.1) for the echogenicity measurements. Equal volumes of saline were included as control. Quantitative analysis of the echo intensity was performed with ImageJ software.

Targeted Binding of NB AV to Apoptotic Macrophages in vitro: RAW264.7 cells were cultured in Dulbecco's modified Eagle medium/high glucose (DMEM/high glucose) containing 10% heat-inactivated fetal bovine serum and 1% penicillin/streptomycin in a 37 °C incubator with 5% CO 2. To establish an apoptotic model of macrophages in vitro, RAW264.7 cells were seeded at 2.5×10 5 cells/mL in 6-well plates (2 mL per well) and cultured routinely for 24 h, after which the medium was replaced with the same amount of serum-free medium for another 12 h. Next, these cells were treated with ox-LDL at different concentrations (50 µg/mL, 75 µg/mL, 100 µg/mL) or with control PBS buffer solution for 12 h. After that, the cells were randomly divided into two parts. One part of the cells was collected and stained with an apoptosis detection kit according to the manufacturer's instructions (KeyGEN Biological Technology Co., Nanjing, China). The apoptosis ratio was measured by flow cytometry and analyzed quantitatively using FlowJo software. The other part of the cells was used for an immunofluorescence assay. Briefly, 100 µL of DiI-labeled NB AV was added to the cells for 0.5 h and then washed twice with PBS. The cell nuclei were labeled with DAPI. Fluorescence imaging was performed with a confocal laser scanning microscope (SP8, Leica, Germany). Quantification was performed on five random view fields of the fluorescence images, counting the number of macrophages targeted by NB AV per 10 cells in each field.

Animal Model of AS Plaque: All animal experiments were performed under protocols approved by the Animal Care and Use Committee of Fourth Military Medical University and comply with the NIH Guide for the Care and Use of Laboratory Animals (8th edition, 2011). Twenty male ApoE −/− mice aged 6-8 weeks (24.82 ± 0.26 g) were housed in separate cages at 20 ± 2 °C with 45% relative humidity and a daily 12/12 h light/dark cycle. After one week on a normal diet, the mice were switched to a high-fat diet (MD12015, containing 21% milk fat and 0.15% cholesterol, Jiangsu Medicience Ltd.) for at least 8 weeks to induce the formation of atherosclerotic lesions. C57 mice fed a similar diet were used as the control group. For observation of AS plaques in ApoE −/− mice, high-frequency ultrasound imaging was performed. In brief, the ApoE −/− mice were anesthetized with isoflurane and then imaged with a Vevo 2100 high-frequency ultrasound system (VisualSonics, Canada) with an 18-38 MHz high-frequency linear array probe (MS 400).
The evaluation of plaques was performed every two weeks to observe the thickness and echo intensity of the arterial intima-media, the presence of plaques, and the echo and shape of plaques (especially in the aorta and carotid artery). For assessment of lipid deposition, the mice (n = 3) were anesthetized with an intraperitoneal injection of 100 μL of 4% pentobarbital sodium and then killed by cervical dislocation. The heart and aorta, including part of the carotid artery, were exposed and cleaned of surrounding adipose tissue. Images of the aortic arch bifurcation were captured by a digital camera equipped with a stereomicroscope. Subsequently, the aorta down to the iliac bifurcation was isolated and stained with Oil Red O according to the routine protocol. The lesion area and the size of the lipid core were measured with ImageJ software.

Contrast-Enhanced Ultrasound (CEUS) Imaging of NB AV: ApoE −/− mice fed a high-fat diet (n = 5) for more than 8 weeks were kept in a supine position and anesthetized with isoflurane (2% induction, 1.2% maintenance) before ultrasound imaging. The anesthetized mice were then placed on temperature-controlled heating pads to maintain normal body temperature, and depilated from the neck to the lower abdomen. CEUS imaging was performed with the Vevo 2100 high-frequency ultrasound apparatus with an MS250 scanning probe (13-24 MHz). First, the optimal imaging view of the atherosclerotic plaque was selected in two-dimensional mode. Then, the imaging modality was switched to contrast-enhanced mode, and the imaging parameters were adjusted to maximize visualization of the contrast signal. The atherosclerotic mice (n = 10) received a 150 µL NB AV injection via the tail vein in random order. Real-time imaging was performed from 10 s to 10 min after injection. During CEUS imaging, the heart rate of the mice was maintained at 360-420 beats/min and the respiratory curve was kept stable. The contrast intensity was recorded synchronously and all images of the arterial plaque in the areas of interest (AOI) were stored without delay. Twenty-four hours after the injection of NB AV, the same mice received an equal volume of control NBs (NB Ctrl) for observation of the arterial plaque in the AOI again. All other parameters were held constant. Contrast intensities of the AOI in all images were analyzed at defined time points using a software program (Vevo Lab). Quantitative analyses of the mean grayscale intensities were performed with ImageJ. Finally, AS plaques were grouped according to the imaging performance of NB AV and NB Ctrl.

Analysis of Apoptosis ex vivo: After CEUS imaging, the mice were processed for analysis of the apoptotic area in atherosclerotic plaques. The ApoE −/− mice were randomly sacrificed. The arteries in the AOI were then sliced and stained with the TdT-mediated dUTP-biotin nick end-labeling (TUNEL) assay (Roche Diagnostics Corp.) for immunofluorescence examination. C57 mice fed a high-fat diet were the control group. TUNEL-positive cells were viewed with a confocal laser scanning microscope (Carl Zeiss, Germany). The immunofluorescence intensity was analyzed with ImageJ software.

Histopathological and Immunological Staining: To verify the results of CEUS imaging and further analyze the plaque characteristics, histopathological and immunological assays of the AOI plaques were performed. After CEUS, the mice were anesthetized with an injection of 100 μL of 4% pentobarbital sodium and then perfused with PBS.
According to the ultrasound localization and CEUS imaging results, arterial pathological specimens were prepared from the AS plaque areas with different imaging performance. The aorta and carotid arteries in the AOI were dissected and post-fixed in 4% paraformaldehyde for 24 h. For the histopathological assay, the samples were embedded in paraffin and sectioned at 8 μm. Histological changes were analyzed by H&E staining. Masson's trichrome was used for collagen staining. The area of lipid deposition in lesions was stained with Oil Red O. All slides were scanned with a Pannoramic scanner (P250, 3D HISTECH, Hungary). The percentage of lesion area in each histopathological image was calculated with the computer-assisted morphometric analysis system ImageJ. For the immunofluorescence assay, the aorta and carotid arteries in the AOI were extracted for frozen sectioning to detect the content of macrophages and smooth muscle cells. The sections were incubated with anti-α-SMA antibody or anti-CD68 antibody (1:200, Abcam) for 24 h at 4 °C, washed, and then incubated with the corresponding secondary antibodies for 30 min at 37 °C. DAPI was used to counterstain the nuclei. Fluorescence imaging was performed with a confocal laser scanning microscope. The entire process was conducted in the dark. Quantitative analyses of the mean fluorescence intensities were performed with software (Image-Pro Plus). In all of the above experiments, plaques of the aorta and carotid arteries outside the AOI were used as the control group. Arteries of C57 mice were set as the negative control. All other operating procedures were held constant.

Statistical Analysis: All quantitative data were expressed as mean ± SD (standard deviation) from at least three independent experiments unless otherwise stated. Statistical significance was determined with Student's t-test or one-way ANOVA using GraphPad Prism 6.0 software (GraphPad, San Diego, CA). Data with uneven variances were corrected by the Welch test. The level of statistical significance was set at P < 0.05.

Characterization of NB AV: As shown in Figure 1A, the lipid membranes after oscillatory shedding were thin and translucent. The NB AV suspension was visibly creamy white and stratified after standing (Figure 1B). The NB AV were monodisperse spherical particles with a diameter of about 500 nm by SEM (Figure 1C). NB AV labeled with DiI presented scattered red fluorescent spots under the fluorescence microscope (Figure 1D). The particle size analyzer showed that NB AV had a size distribution of 519.9 ± 9.4 nm with a polydispersity index (PDI) of 0.142 ± 0.038 and a zeta potential of −22.04 ± 2.1 mV (Figure 1E and F). At 4 °C, the particle size of NB AV did not change significantly within 48 h, and then gradually increased at 60 h (627.47 ± 20.81 nm) (Figure 1G). After 120 min of storage at 37 °C, the particle size stabilized within ~650 nm. Therefore, NB AV prepared within 48 h were selected for subsequent experiments (Figure 1H). As illustrated in Figure 1I, the experimental setup is a custom-designed agarose mold, used as a tissue-mimicking model in vitro. The NB AV exhibited a significant echo signal (Figure 1J) with an intensity of 120.10 ± 2.13 (gray scale), which was higher than that of saline (8.66 ± 0.25, P < 0.01, Figure 1K and L), indicating that NB AV can be used for subsequent in vivo ultrasound molecular imaging studies. Flow cytometry showed that the apoptosis rate of ox-LDL-treated RAW264.7 cells was significantly higher than that of the control group (P < 0.05, Figure 2C).
The apoptosis rate of RAW264.7 cells induced by 100 µg/mL ox-LDL was slightly higher than that of the 75 µg/mL group, with no significant difference between them (P > 0.05, Figure 2C). Therefore, ox-LDL was used at a concentration of 75 µg/mL in the following experiments. To explore the targeting capability of NB AV toward apoptotic cells in vitro, RAW264.7 cells were induced with 75 µg/mL ox-LDL for 12 h. As shown in Figure 2B, under the confocal laser microscope a large number of DiI-labeled NB AV accumulated on the cell surface, whereas nothing was observed in the control group (P < 0.01, Figure 2D). These results indicated that NB AV could recognize apoptotic macrophages in vitro.

Atherosclerotic Plaque Model: Two-dimensional ultrasonography was used to display the AS plaques. After 4-6 weeks of high-fat feeding, the arteries of ApoE −/− mice showed an echo-enhanced intima-media membrane, and atherosclerotic plaques appeared after 8-10 weeks. Figure 3A-D shows representative images of normal vessels and of obvious plaques in longitudinal and transverse views of major arteries, such as the brachiocephalic artery, aortic arch, carotid artery and its branches. Compared with the control arteries, the atherosclerotic arteries presented a thickened intima-media or plaques of varying echogenicity and size in the lumen (Figure 3A-D). Among them, the atherosclerotic plaques of the brachiocephalic artery (Figure 3B) and aortic arch (Figure 3C and D) were the most easily displayed. In the gross view under the anatomical microscope, a large amount of milky lipid was observed on the arterial wall of ApoE −/− mice, especially at the lesser curvature of the aortic arch and the origin of the carotid artery (Figure 3E), but little in control mice. The difference was statistically significant (48.23 ± 1.58% vs 4.764 ± 0.43%, P < 0.001, Figure 3F). In addition, Oil Red O staining of arteries ex vivo showed that ApoE −/− mice fed a high-fat diet had extensive red-stained, lipid-positive areas in the aorta and its branches (Figure 3G), and the positive area was significantly larger than that in the control group (56.67 ± 1.62% vs 9.17 ± 0.61%, P < 0.001, Figure 3H).

CEUS Imaging in vivo: As shown in Figure 4A, the transverse view of the aortic arch was used to observe the echo enhancement and residence time in plaques (areas of interest, AOI) after NB AV and NB Ctrl injection. As shown in Figure 4B, within 10 s after injection of NB Ctrl and NB AV, obvious echo filling could be seen in the vascular lumen. The echo signal for NB AV in the plaque area at 10 s after injection was close to that for NB Ctrl (130.67 vs 126.00, t = 3.8829, P = 0.018, Figure 4C). From 1 to 5 min after injection, the echo of NB Ctrl in the vessel lumen and plaque gradually decreased, but a strong echo of NB AV still persisted in the plaque area. Ten minutes after injection, the echo of NB Ctrl in the lumen and plaque area had disappeared, whereas the echo signal of NB AV in the plaque area persisted and was far higher than that of NB Ctrl (72.67 ± 10.2 vs 22.00 ± 5.2, t = 18.17, P < 0.001, Figure 4D). According to the CEUS imaging results, plaques with different CEUS performance for NB AV and NB Ctrl were assigned to the group of NB AV-sensitive plaques, and those with similar performance were assigned to the group of NB AV-insensitive plaques.
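As a minimal sketch of the kind of quantification used above (mean grayscale intensity in a plaque ROI, compared between NB AV and NB Ctrl at a fixed time point, with the Welch correction mentioned in the statistics section), the following Python snippet illustrates the workflow. The frame arrays, ROI coordinates and intensity levels are hypothetical stand-ins, not the study's data, and scipy stands in for the Vevo Lab/ImageJ/Prism tools actually used:

```python
# Sketch: mean grayscale intensity inside a plaque ROI, compared between
# NB_AV and NB_Ctrl groups with a Welch t-test (hypothetical data).
import numpy as np
from scipy import stats

def roi_mean_intensity(frame: np.ndarray, roi: tuple) -> float:
    """Mean grayscale value inside a rectangular ROI (row0, row1, col0, col1)."""
    r0, r1, c0, c1 = roi
    return float(frame[r0:r1, c0:c1].mean())

rng = np.random.default_rng(0)
roi = (40, 80, 60, 120)  # hypothetical plaque region in the image

# Hypothetical 10-min post-injection frames for several mice per group.
nb_av_frames   = [rng.normal(72, 10, size=(128, 128)) for _ in range(5)]
nb_ctrl_frames = [rng.normal(22, 5,  size=(128, 128)) for _ in range(5)]

av_means   = [roi_mean_intensity(f, roi) for f in nb_av_frames]
ctrl_means = [roi_mean_intensity(f, roi) for f in nb_ctrl_frames]

# equal_var=False gives Welch's t-test, guarding against unequal variances.
t, p = stats.ttest_ind(av_means, ctrl_means, equal_var=False)
print(f"NB_AV {np.mean(av_means):.1f} vs NB_Ctrl {np.mean(ctrl_means):.1f}: "
      f"t={t:.2f}, p={p:.4f}")
```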
Histopathological Results: Normal vascular endothelium is tightly and neatly arranged, but the atherosclerotic vessels presented significant morphological changes, seen as a large amount of vacuolar adipose tissue with disordered structure on H&E staining, particularly in the NB AV-sensitive plaques (Figure 5A). Oil Red O staining showed an increased atherosclerotic burden, with abundant red-stained lipid distribution in the atherosclerotic arteries (Figure 5B). Compared with the NB AV-insensitive plaques, the NB AV-sensitive plaques had larger Oil Red O lipid-positive areas (Figure 5D, P < 0.05). Compared with control plaques, the atherosclerotic vessels showed less blue-stained collagen fiber, most markedly in the NB AV-sensitive plaques (Figure 5C and E, P < 0.01). Figure 6A and B show the immunofluorescence results for CD68 and α-SMA, specific markers of macrophages and smooth muscle cells, respectively. As expected, we found reduced α-SMA expression and more CD68-positive cells near the vascular lumen within the NB AV-sensitive plaques. By comparison, the NB AV-insensitive plaques did not show these features. Further quantitative analysis revealed that the percentage of CD68-positive cells in NB AV-sensitive plaques was significantly higher than that in NB AV-insensitive plaques (14.73 ± 2.34% vs 7.30 ± 0.81%, P < 0.05, Figure 6D), while the percentage of α-SMA-positive cells showed the opposite pattern (4.80 ± 0.40% vs 11.50 ± 1.50%, P < 0.05, Figure 6C). In addition, abundant TUNEL-positive cells were observed in NB AV-sensitive plaques, with a significant difference from control mice (P < 0.05, Figure 6E and F).

Discussion: At present, the mechanisms underlying the transition of stable atherosclerotic plaques into clinically significant lesions are not fully clear; they involve a complex interplay of several biological processes, including inflammation, matrix remodeling, angiogenesis and apoptosis. 29,30 In plaques, several biological processes occur simultaneously, such as lipid accumulation, cell proliferation, apoptosis, extracellular matrix degradation, and repair. The balance among these processes is essential for plaque progression and clinical outcome. According to a recent clinical study, only apoptosis (particularly of macrophages and SMCs) was significantly associated with a younger physical plaque age. 7 As many studies have shown, apoptosis is a prominent feature of advanced atherosclerotic plaques. 8,31,32 These findings support the importance of apoptosis in plaque progression, unravelling its potential use for diagnostic and therapeutic strategies in patients with fast-progressing plaques. As we know, several kinds of cells undergo apoptosis during plaque formation, including smooth muscle cells and endothelial cells. However, macrophages are implicated in all stages of atherosclerotic lesion development and play a critical role in the initiation and development of AS. 33,34 As mentioned before, macrophage apoptosis is a key cellular event in the development of early atherosclerotic lesions into vulnerable plaques, determines the vulnerability of plaques to a certain extent, 35 and provides a more specific potential target for ultrasound molecular imaging of the pathological processes in plaques. Detection of macrophage apoptosis may help identify the severity of AS lesions. 7,9,36 AV-PS binding serves as an indicator of the early stages of apoptosis. 25,37
In this research, we prepared nanoscale bubbles carrying the apoptosis molecular probe AV by the optimized film hydration and "biotin-avidin" methods, as previously reported. 25 The NB AV have a diameter of 519.9 ± 9.4 nm with a very low PDI value and excellent stability at 4 °C and 37 °C. In addition, we found that NB AV could target apoptotic macrophages induced by ox-LDL and exhibited a high echogenic signal intensity in vitro. These results confirmed that NB AV may potentially be used as a targeted ultrasound contrast agent for in vivo applications. Studies have shown that neovascularization within plaques is fragile and permeable, probably due to the lack of mural cells and poorly formed endothelial cell junctions. [38][39][40][41] Normal vascular physiology results in tight (<2 nm) endothelial junctions, which restrict nanoscale particle distribution, whereas a dysfunctional endothelium leads to large gaps that allow macromolecules and nanoparticles to extravasate from the bloodstream at local sites and remain retained locally owing to impaired lymphatic drainage. 39,42 Our home-fabricated NB AV are in an appropriate size range to penetrate the vascular wall and accumulate in vulnerable lesions through the enhanced permeability and leakage of the inflamed endothelium. The leakage of NB AV into atherosclerotic lesions through endothelial gaps can be considered a non-specific targeting process (passive targeting), whereas the binding of the ligand AV to PS externalized on the surface of apoptotic cells is an active effect. Taken together, these factors may explain the markedly better sensitivity and specificity of NB AV compared with common NB Ctrl in the diagnosis of vulnerable plaques. As expected, we found that NB AV could indeed serve as a potential indicator for evaluating AS lesions and vulnerable plaques by CEUS molecular imaging in vivo. Many studies have reported different imaging modalities for the identification and visualization of AS lesions or plaque vulnerability, 19,43 such as micro-CT imaging, 44 intravascular ultrasound (IVUS) imaging, 45 optical coherence tomography (OCT) imaging, 46 contrast-enhanced MRI, 47 as well as CEUS. [48][49][50] However, these studies mainly focused on monitoring the outcomes of plaque formation, such as assessment of plaque neovascularization, macrophage distribution, vascular plaque burden and atherosclerotic plaque composition. In contrast, our research emphasized the feasibility of NB AV for monitoring early plaque vulnerability, to predict AS severity as early as possible. In our study, plaques with a strong echo signal on CEUS imaging were defined as NB AV-sensitive plaques because of the presence of numerous apoptotic macrophages in these plaques and the active targeting of these apoptotic cells by NB AV. It should be emphasized that NB AV-sensitive plaques do not imply that the AS lesion has already developed to a very severe degree. In fact, it still takes time for vulnerable plaques to become plaques prone to rupture or to rapid thrombus formation leading to acute cardiovascular events. Moreover, we quantitatively analyzed the lipid and collagen content of the plaques by Oil Red O staining and Masson staining. Poor clearance of apoptotic macrophages may lead to the accumulation of cell debris in the lipid-rich core of atherosclerotic plaques. Therefore, lipid-associated Oil Red O staining can reflect the number of apoptotic macrophages and the vulnerability of AS plaques.
In addition, CD68 and α-SMA, specific markers of macrophages and smooth muscle cells, were also identified as core parameters for the assessment of plaque vulnerability. We found that NB AV-sensitive plaques presented several instability features, including a large necrotic core, decreased collagen content, more macrophage infiltration, and a thin fibrous cap. These results further confirmed our CEUS imaging findings, indicating that NB AV as a targeted ultrasound molecular imaging probe is highly feasible for diagnosing plaque vulnerability. Of course, prospective outcome studies in a larger group of animals are needed to establish the value of this imaging method. With intensive study of macrophage apoptosis and vulnerable AS plaques, the development of nanoscale ultrasound contrast agents for the targeted diagnosis of AS lesions and vulnerable plaques can provide a reference for preclinical ultrasound imaging strategies for early screening, early diagnosis and early treatment of AS lesions. In this study, the ApoE −/− atherosclerotic plaque model was established by high-fat, high-cholesterol feeding alone and then verified by high-frequency ultrasound imaging and histology. High-frequency ultrasound showed that 6-8-week-old ApoE −/− mice fed a high-fat, high-cholesterol diet had enhanced intimal echo after 4-6 weeks, slightly stronger plaque echoes protruding from the intimal surface after 8-10 weeks, and plaque echoes widely distributed in various sections of the aorta and major branches (innominate artery and common carotid artery) that could be detected by ultrasound. In view of the advantages and disadvantages of the different AS plaque modeling methods, our study adopted a simple high-fat, high-cholesterol diet to establish the AS plaque model, which greatly reduced the accidental death of mice caused by mechanical injury such as arterial ligation.

Conclusion: In summary, this study provides an ultrasound molecular imaging tool for the assessment of atherosclerotic lesions and the identification of vulnerable plaques in vivo by targeting apoptotic macrophages with NB AV in AS mouse models, suggesting that NB AV can be used as a molecular probe for the identification of vulnerable AS plaques.
2022-10-21T05:07:19.753Z
2022-10-01T00:00:00.000
{ "year": 2022, "sha1": "7831ea9666da950f504b31f94364d8920e6e4c5a", "oa_license": "CCBYNC", "oa_url": "https://www.dovepress.com/getfile.php?fileID=84813", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "7831ea9666da950f504b31f94364d8920e6e4c5a", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [] }
236421036
pes2o/s2orc
v3-fos-license
Influence of Plasma Nitriding with a Nitrogen Rich Gas Composition on the Reciprocating Sliding Wear of a DIN 18MnCrSiMo6-4 Steel

In this study, the sliding wear of a DIN 18MnCrSiMo6-4 continuous cooling bainitic steel plasma nitrided with a nitrogen-rich gas composition was investigated. To evaluate the influence of processing time and temperature on the mechanical and microstructural characteristics of the nitrided layer, the samples were nitrided at 400, 450, 500 and 550 °C for 3, 6 and 9 h. The produced nitrided layers were characterized with respect to microstructure, phase composition, microhardness and surface roughness. The samples were tested by ball-on-flat reciprocating dry sliding for friction coefficient and wear analysis. The tests were stopped after a given damage criterion involving the rapid growth of the friction coefficient and wear. The correlation of the different treatment parameters and the resulting case depths and surface hardness with the sliding distance at the time of microcrack formation or delamination of the surface layer was evaluated statistically by analysis of variance (ANOVA). The samples plasma nitrided at 550 °C showed better wear performance in the ball-on-flat tests than the other groups investigated, since these samples have a thicker compound layer and a deeper diffusion zone than the other conditions investigated. In general, wear begins slowly because it starts in the hardest region, the compound layer.

Introduction: Continuous cooling bainitic steel is increasingly used in industrial processes owing to its outstanding combination of yield strength and toughness [1][2][3], the possibility of shortening the process chain, and reduced energy consumption [4][5][6][7]. Although its surface properties are acceptable for many purposes [8][9][10][11], the surface hardness and wear resistance are insufficient for some highly loaded automotive components, such as gears. Therefore, the improvement of surface properties is essential. Several surface treatments have been considered for advanced high-strength steels for automotive applications [12][13][14]. The plasma nitriding process can be used to develop outstanding surface properties at relatively low treatment temperatures 15,16, thus representing a reliable alternative to conventional thermal and thermochemical treatments. Since long nitriding times can have a detrimental effect on the mechanical properties of the steel, the treatment temperature should be carefully selected to avoid overheating the steel itself and to preserve the bainitic microstructure, as well as to avoid softening of the plasma nitrided layer due to excessive aging of the nitrides 15,17. The microstructure of the plasma nitrided layer must be well controlled to achieve the desired increase in surface hardness and provide significant improvements in wear resistance 18,19. This study is a sequel to previous work on the plasma nitriding of continuous cooling bainitic steel 15. The previous investigations showed excellent results with respect to case depth, surface hardness and compressive residual stresses in the diffusion zone. Despite the variety of investigations concerning the wear resistance of plasma nitrided steels [20][21][22][23], work on the influence of plasma nitriding on the reciprocating dry sliding wear of continuous cooling bainitic steels is relatively scarce.
The main objective of this work is to improve the surface properties of DIN 18MnCrSiMo6-4 continuous cooling bainitic steel by plasma nitriding, aiming to increase surface hardness and optimize its wear resistance in reciprocating sliding. In order to study the influence of plasma nitriding with a nitrogen-rich gas composition on the reciprocating sliding wear of a DIN 18MnCrSiMo6-4 continuous cooling bainitic steel, a systematic variation of the temperature and time parameters has been carried out. The major contribution of this work is the attempt, through statistical methods, to establish correlations between the performance parameters (wear, layer thickness, hardness) and the temperature and time employed to produce the modified surfaces.

Materials and Methods: The steel to be plasma nitrided was a DIN 18MnCrSiMo6-4 steel (0.19% C, 1.16% Si, 1.35% Mn, 1.14% Cr, 0.26% Mo and 0.06% Ni), which is a continuous cooling bainitic steel. The material's microstructure is composed of pro-eutectoid ferrite and granular bainite after hot rolling 7,15. In the case of continuous cooling bainitic steels, no annealing (tempering) treatment is applied, as the microstructure is obtained directly after hot rolling and controlled cooling. Prior to the surface treatments, the samples' flat surfaces were ground with silicon carbide grinding paper in a sequence of increasing grit sizes (#100, #220, #320, #400, #600, #1200) and then polished with 3 μm diamond paste, in order to obtain low roughness and almost no plastic deformation. The surface treatment was carried out in a plasma nitriding furnace equipped with a DC power supply, developed by the Technology Center and Metallurgy Department of the Federal University of Rio Grande do Sul. The samples were degreased and cleaned with acetone in an ultrasonic bath before being placed in the plasma nitriding furnace. Sputtering was performed for 15 min using pure hydrogen at a flow of 140 sccm, and in the heating step, until the treatment temperature was reached, a gas mixture of 150 sccm of argon and 140 sccm of hydrogen (H 2) was used. For the plasma nitriding process, a gas mixture of 76 vol.% nitrogen (N 2) and 24 vol.% H 2 was used. The treatment parameters are shown in Table 1. This work studies the influence of processing time (3, 6 and 9 h) and temperature (400, 450, 500 and 550 °C) on the mechanical and microstructural characteristics of the plasma nitrided layer. The temperature range was based on common plasma nitriding temperatures for low-alloyed steels, from 400 to 550 °C, to evaluate the maximum temperature that could be used to accelerate layer growth without causing a decrease in core hardness 15. The nitrogen-rich gas composition (76 vol.% N 2) is commonly used in gas nitriding, as it provides higher layer depths. Therefore, in this work we first chose to employ this fixed composition and to vary temperature and time. The current density was calculated by dividing the measured current by the total area covered by the plasma sheath. The current density cannot be directly controlled, as no auxiliary heating system was used; it depends on the desired treatment temperature, on the gas mixture being used, and on the heat exchange in the furnace. For the microstructural analysis of the plasma nitrided layers, samples were carefully cut perpendicular to the upper nitrided face in a precision diamond-blade cutting machine.
After hot mounting in bakelite, the cross-sections were ground with silicon carbide grinding papers in a sequence of increasing grit sizes (#100, #220, #320, #400, #600, #1200) and polished with a diamond paste of 3 μm particle size. A Nital 3% solution was used to etch the samples, revealing the microstructure of the nitrided layers. The cross-sectional microstructural images were obtained with an Olympus metallurgical microscope (BX51M model). The compound layer thickness was measured using the ImageJ software. X-ray phase analysis was carried out using an XRD M - Research Edition diffractometer (GE Seifert Charon model) equipped with a Meteor 1D fast line position-sensitive detector. Phase analysis was performed on the sample surface in the Bragg-Brentano geometry (θ-2θ) with Cr-Kα (λ = 2.2897 Å) radiation. Diffraction lines were recorded in the 2θ range from 55° to 80°, with a step size of 0.01° and a scan time of 200 s per step. The phases present in the non-nitrided and plasma nitrided material were determined by comparative analysis against the standards contained in the crystallographic information files (CIFs) from the Crystallography Open Database 24 and the Inorganic Crystal Structures Database 25, using the Profex-BGMN software 26. Vickers microhardness tests were applied to the plasma nitrided samples to determine the surface hardness and the Vickers microhardness profiles of the cross-sections. An Insize microhardness tester (ISH-TDV 1000 model) was used. Five microhardness profiles per sample were constructed, using a load of 100 gf with a dwell time of 10 s 27. The case depth (compound layer + diffusion zone) was conventionally determined as the distance from the surface at which the hardness exceeds the core hardness by 50 HV. The determination of the case depth was carried out based on the microhardness profiles, following the recommendations established in DIN 50 190 28. The roughness parameters (R a and R z) were measured with a Mitutoyo contact profilometer (SJ-210 model) equipped with an 8 µm tip-radius probe, based on ISO 4287 29. Due to the topographic characteristics of the samples, measurements were made using the aperiodic roughness profile configuration 30. In this case, a cutoff value of 0.8 mm and a measurement length of 4.0 mm were set on the profilometer. Wear tests were conducted using a CETR UMT (Universal Materials Tester) tribometer with a reciprocating ball-on-flat configuration, according to the standard ASTM G133 31. Tests were performed in a room at 50 to 55% relative humidity and a temperature of 23 °C. The treated surfaces were placed in contact with a 4.76 mm zirconia ball (1150 ± 28 HV 0.1). A ball with a hardness close to that of the nitrided surface was chosen in order to simulate contact conditions between bodies of similar hardness, such as the contact between gear flanks, to be tested in future work. A load of 6 N was applied (maximum contact pressure of nearly 1351 MPa), and the tests were conducted until microcrack formation or delamination of the surface layer. The applied load was selected with the aim of analyzing the wear resistance of the whole case (compound layer + diffusion zone), and not the specific wear resistance of the compound layer. The maximum contact pressure p_o of 1351 MPa was estimated for the wear tests through the equation p_o = 3W / (2πa^2), where W is the normal load and a is the radius of the contact area 32.
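As a cross-check, the reported peak pressure can be reproduced from Hertzian contact theory. The sketch below assumes representative elastic constants for the zirconia ball and the nitrided steel (E ≈ 210 GPa, ν ≈ 0.3 for both), which are not stated in the text, so the result is only an estimate:

```python
# Sketch: Hertzian ball-on-flat contact, estimating the contact radius a
# and the peak pressure p_o = 3W / (2*pi*a^2).
import math

W = 6.0            # normal load, N (from the test setup)
R = 4.76e-3 / 2    # ball radius, m (4.76 mm zirconia ball)

# Assumed elastic constants (not given in the paper):
E_ball, nu_ball = 210e9, 0.30   # zirconia, approximate
E_flat, nu_flat = 210e9, 0.30   # nitrided steel, approximate

# Effective contact modulus: 1/E* = (1-nu1^2)/E1 + (1-nu2^2)/E2
E_star = 1.0 / ((1 - nu_ball**2) / E_ball + (1 - nu_flat**2) / E_flat)

a = (3 * W * R / (4 * E_star)) ** (1 / 3)   # Hertz contact radius, m
p_o = 3 * W / (2 * math.pi * a**2)          # peak contact pressure, Pa

print(f"a ≈ {a*1e6:.0f} µm, p_o ≈ {p_o/1e6:.0f} MPa")
```

With these assumed constants the estimate lands near 1.4 GPa (contact radius ≈ 45 µm), of the same order as the 1351 MPa quoted above; slightly different elastic constants would close the gap.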
Accordingly, a high contact pressure was chosen. Sliding proceeded in reciprocating mode over a 4 mm track at a frequency of 4 Hz. The surface topography and track profiles were acquired with a Bruker interferometer (Contour Elite model). The cross-sectional area was measured, using the ImageJ software, in three regions of each worn track. Finally, to identify the wear mechanisms, specific analyses of the balls and wear tracks were carried out in a Zeiss scanning electron microscope (EVO MA 15 model). Interaction plots of case depth, surface hardness, and sliding distance at the time of microcrack formation or delamination of the surface layer against nitriding temperature and time were generated. The data were then submitted to an analysis of variance, in order to evaluate the statistical significance of temperature and treatment time on wear, followed by the Tukey's-b post hoc test 33, using the Minitab 16 statistical package. Three repetitions and a confidence level of 95% (significance level α = 0.05) were used.

Microstructure, phase analysis and microhardness: As a representative result of treatments using the gas mixture of 76 vol.% N 2 and 24 vol.% H 2, cross-sectional microstructures of samples plasma nitrided at 550 °C and 400 °C are shown in Figure 1a, b, respectively. The white band on top marks the compound layer, as seen in Figure 1a. For the samples plasma nitrided at the lowest temperature, Figure 1b, the case depth is predominantly composed of the diffusion zone with a very thin compound layer (see Table 2). For samples plasma nitrided at temperatures above 400 °C, a well-defined compound layer appears in the metallographs. All nitriding conditions in the present work led, besides the diffusion zone, to the formation of a compound layer. Table 2 shows the average case depth of the plasma nitrided samples for the different treatment temperatures and times. In general, higher temperatures and longer treatment times favored the growth of the case depth. Previous work 15 showed that all plasma nitrided samples exhibited diffraction peaks indicating the formation of a compound layer. Since the penetration depth of the radiation decreases with the 2θ angle 34, the peaks of the ε-Fe 2-3 (C)N phase, contained in the compound layer, are more easily identified at smaller 2θ angles. Therefore, Figure 2a-c presents the diffractograms for the (2θ) angular interval from 55° to 80°. The α phase peaks of the nitrided samples are slightly shifted toward lower 2θ angles with respect to the non-nitrided samples. This information comes from the diffusion zone, as the compound layer is composed of (carbo)nitrides only 15. In general, the intensity of the peaks related to the γ'-Fe 4 N and ε-Fe 2-3 (C)N nitride phases increases with temperature and treatment time, since the compound layer thickness (Table 2) follows the increase of these parameters. The surface hardness decreases, especially for the longer treatments of 6 and 9 h (Table 2). The core hardness is also reduced in the samples treated at 550 °C, which is not observed for the other treatment temperatures 15. A higher plasma nitriding temperature intensifies the defects formed in the compound layer, such as pores and cracks, and therefore the measured hardness value decreases.
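A minimal sketch of the case-depth criterion described above (the depth at which the hardness falls to core hardness + 50 HV, following DIN 50 190) is given below; the profile values are illustrative placeholders, not measured data:

```python
# Sketch: read the case depth off a microhardness profile by linear
# interpolation at the threshold (core hardness + 50 HV, DIN 50 190).
import numpy as np

depth_um = np.array([25, 50, 100, 150, 200, 250, 300, 350])     # hypothetical depths
hv       = np.array([1100, 950, 720, 560, 470, 420, 385, 370])  # hypothetical HV0.1

core_hv   = 330.0           # approximate core hardness of the steel
threshold = core_hv + 50.0  # DIN 50 190 criterion

def case_depth(depth, hardness, thr):
    """First depth where the profile crosses the threshold, interpolated linearly."""
    for i in range(len(depth) - 1):
        if hardness[i] >= thr >= hardness[i + 1]:
            frac = (hardness[i] - thr) / (hardness[i] - hardness[i + 1])
            return depth[i] + frac * (depth[i + 1] - depth[i])
    return None  # profile never drops to the threshold

print(f"case depth ≈ {case_depth(depth_um, hv, threshold):.0f} µm")
```

For this synthetic profile the interpolation gives roughly 320 µm, of the same order as the case depths of up to 300 µm reported in Table 2.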
Our previous work shows that at high temperatures there is also a competition between the hardening effect of nitriding and the hardness decrease due to overheating of the steel matrix, so that the maximum hardening potential is not reached. Figure 3a-c shows the microhardness profiles for the different plasma nitriding conditions. From the microhardness profiles, the case depth was estimated. As expected, the nitriding depth follows the behavior of a diffusion-controlled process 19,35. Plasma nitriding under the conditions implemented in this work led to hardness levels from 1029 to 1295 HV 0.1 and case depths up to 300 µm, see Table 2. Plasma nitriding leads to an increase in surface hardness (Table 2), but it also creates a less ductile region more prone to brittle fracture 36. The mechanical properties of the diffusion zone influence the fracture properties of the compound layer, since the diffusion zone provides support for the surface compound layer 15,16,36. Previous work 15 showed that higher nitriding temperature and time promoted an increase in the fracture toughness of the compound layer. This also supports the proposition that a harder substrate results in increased fracture toughness through improved mechanical support (load-bearing capacity) of the compound layer by the diffusion zone.

Roughness: To investigate the influence of temperature and nitriding time on the surface topography, the roughness values (parameters R a and R z) measured before and after plasma nitriding are shown in Figure 4a, b. In the cases shown in Figure 4a, b, the exposure time to ion bombardment changes the final roughness considerably, confirming the results reported in 37,38. It can be seen in Figure 4a, b that after the plasma nitriding treatment the roughness (R a and R z parameters) increases compared with the polished surface prior to nitriding, and that increased temperature and nitriding time caused an increase in both roughness parameters. From a statistical point of view, the ANOVA reveals that the roughness was influenced by the nitriding temperature (P-value = 0.00 < α = 0.05) and that there was a synergistic effect between temperature and nitriding time (P-value = 0.02 < α = 0.05) on the increase in roughness, but it was not possible to determine the effect of treatment time alone (P-value = 0.25 > α = 0.05), due to the standard deviation. Tukey's-b post hoc test shows that the greatest roughness was found in the samples plasma nitrided for 9 h at 550 °C, Figure 4a, b. Three-dimensional topographic measurements revealed that the plasma nitrided surface morphology is modified by the nitriding parameters, confirming the roughness values presented above. Figure 5a, b shows representative 3D surface images of samples plasma nitrided with the gas mixture of 76 vol.% N 2 and 24 vol.% H 2. In general, there is an intense change in topography after nitriding. This generalized increase in roughness after nitriding can be related to the ion bombardment during plasma treatment and to the formation mechanism of the nitrided layer [37][38][39]. In addition, this increase in topographic roughness may have contributed to the increased wear of the zirconia ball.
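The two-factor analysis used throughout the Results (temperature × time with interaction, followed by post hoc grouping) can be sketched as follows. The response values are synthetic placeholders, statsmodels stands in for Minitab 16, and the Tukey HSD test is used as a stand-in for the Tukey's-b test applied in the paper:

```python
# Sketch: two-way ANOVA (temperature x time, with interaction) and a
# Tukey HSD post hoc test on synthetic data, analogous to the Minitab analysis.
import itertools
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(1)
rows = []
for temp, time in itertools.product([400, 450, 500, 550], [3, 6, 9]):
    for _ in range(3):  # three repetitions, as in the experiments
        # Synthetic response (e.g. roughness or case depth) rising with both factors.
        rows.append({"temp": temp, "time": time,
                     "resp": 0.1 * temp + 10 * time + rng.normal(0, 8)})
df = pd.DataFrame(rows)

model = ols("resp ~ C(temp) * C(time)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))   # main effects + interaction P-values

# Post hoc comparison of temperature levels (alpha = 0.05, as in the paper).
print(pairwise_tukeyhsd(df["resp"], df["temp"], alpha=0.05))
```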
Reciprocating dry sliding wear tests: In this work, the friction coefficient (COF) obtained in the wear tests is analyzed in two regimes: at the beginning of the wear tests, Figure 6a-d, and at the time of microcrack formation or delamination of the surface layer, Figure 7a-d. From the point of view of the COF, the samples plasma nitrided at 400, 450 and 500 °C behave similarly, which relates to their lower roughness and, more importantly, to the characteristics of the compound layer, as shown in the images of Figure 6a-d. For the group of samples plasma nitrided at 550 °C, the COF is very low from the beginning of the tests, regardless of the treatment time. Regardless of the nitriding temperature, at the beginning of the reciprocating sliding wear test the behavior of the samples nitrided for 9 h is similar at all temperatures. This is justified by the fact that the hardness of the compound layer is similar among the samples plasma nitrided for 9 h, Table 2. Even though the roughness is lower in the non-nitrided samples and in the samples nitrided for 3 h, the COF is higher, which indicates that adhesion is the dominant effect in this tribological system. The average COF of the non-nitrided samples was 0.56 ± 0.05 (assuming that the steady state begins at 30 m), Figure 7a-d. The non-nitrided samples stabilize at friction values around 0.56 ± 0.05 in a test of up to 30 m of sliding distance, which did not occur for the plasma nitrided samples (with the exception of the samples plasma nitrided at 550 °C for 6 and 9 h). The results show that the nitriding treatments were responsible for the reduction of the COF, Figure 6a-d; this can be attributed to the increase in hardness (Table 2) and the reduced contact area between ball and surface, which results in smaller deformations of the contacting surfaces and depends on the elastic deformations. The low initial COF is due to the ceramic character of the compound layer 40,41. The increase in COF over the course of the test is due to third-body debris from the ball and from the compound layer as it wears. As shown in Figure 7a-c, the sudden growth of the COF is related to the wear of the surface layer, which exposes the diffusion zone. When the diffusion zone is reached, the friction rises and may even reach or exceed the friction of the non-nitrided samples. Local damage results in the propagation of microcracks 42,43 or in delamination of the surface layer 43. Figure 8a shows an SEM image of the worn section of the ball tested at a load of 6 N, and Figure 8b, c shows an EDS analysis of the worn section of the ball. As shown in Figure 8a, b, there is the possibility of material detachment (in a region already weakened by microcracks) or of a delamination mechanism. Both mechanisms are characteristic of dry sliding wear tests. The samples plasma nitrided at 550 °C for 3 h showed an abrupt increase in COF at approximately 53 m of sliding distance. As shown in Figure 9a, b and in the track profile with pile-up, the high plastic deformation of the substrate contributed to delamination of the compound layer, exposing the diffusion zone and raising the COF. Therefore, a greater case depth (Table 2) tends to prevent pile-up formation and the sudden delamination of the surface layer.
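The stopping criterion used in these tests (an abrupt rise of the COF, taken as the onset of microcracking or delamination) can be detected automatically from the COF trace. The sketch below uses a synthetic signal with a simulated jump near 53 m; the moving-average window and the 0.15 jump threshold are illustrative choices, not values from the paper:

```python
# Sketch: flag the sliding distance at which the COF rises abruptly
# (the damage criterion used to stop the tests); the signal is synthetic.
import numpy as np

def abrupt_rise_distance(dist, cof, window=50, jump=0.15):
    """First distance where the moving-average COF jumps by more than
    `jump` relative to the value one window earlier, else None."""
    smooth = np.convolve(cof, np.ones(window) / window, mode="same")
    # Start past the edge-padded region of the moving average.
    for i in range(2 * window, len(smooth)):
        if smooth[i] - smooth[i - window] > jump:
            return dist[i]
    return None

rng = np.random.default_rng(7)
dist = np.linspace(0, 60, 3000)                        # metres of sliding
cof = 0.25 + 0.002 * dist + rng.normal(0, 0.01, dist.size)
cof[dist > 53] += 0.3                                  # simulated delamination at ~53 m

print(f"abrupt COF rise detected at ≈ {abrupt_rise_distance(dist, cof):.0f} m")
```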
The samples plasma nitrided at 550 °C for 6 and 9 h did not show an abrupt COF growth, Figure 7d, associated with microcrack formation or delamination of the surface layer, as confirmed by SEM/EDS, Figure 10a-d. The test was, however, interrupted before complete spalling of the plasma nitrided surfaces (Table 2). In general, the higher nitriding temperatures promoted an increase in the compound layer thickness 44,45. This increase, associated with the ε-Fe 2-3 (C)N phase, which according to Doan et al. 46 has a lower friction coefficient, Figure 2a-c, provided the load-bearing capacity of the compound layer, preventing the onset of microcrack formation or delamination, as seen in Figure 11a, b. Another factor that must act strongly in this case is the gentler hardness gradient between the surface and the core of the samples plasma nitrided at 550 °C, see Figure 3a-c. Another observation, from the EDS analyses in Figure 10c, d and the cross-section profiles, was the deposition of particles from the zirconia ball on the worn tracks of the samples plasma nitrided at 550 °C for 6 and 9 h. In general, all plasma nitrided samples showed zirconia deposition on the worn tracks. As only slight wear occurred in the samples plasma nitrided at 550 °C for 6 and 9 h, the SEM/EDS analysis, Figure 10a-d, detected only third-body particles generated by the wear of the zirconia ball. Figure 9c, d, Figure 10c, d and Figure 11c, d also show the EDS analyses carried out on the worn tracks after reciprocating sliding wear to identify the chemical elements present on the surface of the plasma nitrided samples, in addition to the zirconia transferred from the balls to the tracks. The peaks associated with the core steel lose intensity in samples with greater compound layer thickness, that is, in samples plasma nitrided at the higher temperatures (500 and 550 °C). Studies by Dutrey et al. 47 show that plasma nitriding with a nitrogen-rich gas composition produced an intergranular fracture mode in the diffusion zone of a low-alloy steel (AISI 4140), related to nitride precipitation at the grain boundaries. Although the gas mixture of 76 vol.% N 2 can be used to improve the surface properties of the DIN 18MnCrSiMo6-4 steel, the nitrogen-rich gas composition led to precipitation of nitrides, carbonitrides or carbides at the grain boundaries in the plasma nitriding of M2 high-speed steel 44. It is known that grain boundary precipitation increases with time and temperature 44,45. The presence of precipitation at the grain boundaries results in embrittlement of the diffusion zone 44,48, and this can lead to delamination of the surface layer, Figure 9a, b and Figure 11a, b, or even to spalling. The pile-up 42 of the tracks was analyzed from the cross-sections, according to Figure 12. As inferred from the cross-section profiles (Figure 12) and the case depths (Table 2), the tracks did not reach the substrate in the plasma nitrided surfaces. The friction and wear results can be associated with the operating wear mechanism found in the plasma nitrided samples. Observing a cross-sectional profile view of the tracks revealed by optical interferometry, one notes the absence of pile-up for the samples plasma nitrided at 550 and 500 °C for 6 and 9 h and at 450 °C for 9 h. For the samples plasma nitrided at 550 and 500 °C for 3 h, at 450 °C for 3 and 6 h and at 400 °C for 3, 6 and 9 h, pile-up was verified, indicating the predominance of plastic deformation.
As mentioned before, samples presenting greater case depths exhibited lower plastic deformation, since the diffusion zone provided the load-bearing capacity for the compound layer. In order to evaluate the statistical significance of treatment temperature and time on the case depth, surface hardness and sliding distance at the time of microcrack formation or delamination of the surface layer, the collected data were analyzed using the Minitab 16 software. The interaction plots from the analysis of variance are shown in Figure 13a-c. An interaction plot demonstrates how the relationship between a categorical factor and a continuous response depends on the value of a second categorical factor; the interaction is stronger when the lines in the plots are non-parallel 49. From a statistical point of view (Table 3), the case depth was influenced by the temperature (P-value = 0.00 < α = 0.05) and time (P-value = 0.00 < α = 0.05) parameters, but there was no synergistic effect between temperature and treatment time (P-value = 0.68 > α = 0.05), Figure 13a, because there is a competition between sputter yield and nitrogen uptake. Tukey's-b post hoc test shows that the case depth differs significantly between 3 and 6 h, with a statistical tie between 6 and 9 h. As for the case depth, the temperature that gave the best response was 550 °C, as confirmed by the Tukey's-b post hoc test. As can be seen in Figure 13b, the surface hardness of the samples remained relatively close across conditions. Both factors studied, temperature and time, showed statistical significance (P-value = 0.01 < α = 0.05), as presented in Table 3. Tukey's-b post hoc testing showed that, although the temperature is significant in increasing the surface hardness, the differences obtained by varying the time were not greater than three times the standard deviation. Tukey's-b post hoc testing among the different temperatures showed a significant increase between 400 and 500 °C. Regarding the hardness profile, the samples plasma nitrided at 550 °C did not differ significantly from those nitrided at 500 °C, but in terms of surface hardness the best temperature was 500 °C. The time should be determined according to the required case depth. However, as discussed above for Figure 13a, low case depths (compound layer + diffusion zone) favored pile-up formation. Therefore, as there was a statistical tie in the wear of the samples plasma nitrided for 6 and 9 h at 550 °C, Figure 13c, the use of 6 h is recommended on grounds of process cost. Figure 13c shows that the sliding distance at the time of microcrack formation or delamination of the surface layer had no synergistic effect (P-value = 0.16 > α = 0.05), the main effects being more important (Table 3). However, the input parameters temperature (P-value = 0.00 < α = 0.05) and time (P-value = 0.01 < α = 0.05) had significant effects on the sliding distance until the abrupt increase in COF, Figure 7a-d. According to the Tukey's-b post hoc analysis, the nitriding time did not show differences greater than three times the standard deviation in the sliding distance. However, short nitriding times generated smaller case depths and favored pile-up formation, which in some cases contributed to delamination of the surface layer.
The analysis of the temperature variation showed that the ideal temperature for increasing the sliding distance before the abrupt increase in COF, and hence the wear resistance, is 550 °C. The best performance in the reciprocating dry sliding wear tests was found for the samples plasma nitrided at 550 °C for 6 h.

Final discussion: The compilation of case depth, surface hardness and sliding distance at the time of microcrack formation or delamination of the surface layer of the plasma nitrided samples is given in Table 2. Time and temperature influence not only the compound layer thickness, but also its composition. The X-ray analysis, Figure 2a-c, showed the formation of a biphasic compound layer, ε-Fe 2-3 (C)N and γ'-Fe 4 N, for all treatment conditions. It would normally be expected that a compound layer containing predominantly ε-Fe 2-3 (C)N has higher hardness than a monophasic compound layer containing γ'-Fe 4 N. For nitriding temperatures up to 500 °C, the hardness increases with treatment temperature and time. The observed behavior seems to be governed by the increase of the compound layer thickness. At 550 °C, on the other hand, the surface hardness is lower than at 500 °C, which happened due to overheating of the substrate, as the core hardness decreased at this temperature 15,17. Concerning the sliding distance at the time of microcrack formation or delamination of the surface layer: as the nitriding temperature and time increased up to 500 °C, the compound layer thickness also increased, and the same happened to the sliding distance and the worn volume, as expected. For the samples plasma nitrided at 550 °C for 6 and 9 h, no delamination of the surface layer was observed, even at a sliding distance of 58 m. In fact, marked wear of the zirconia balls took place under these testing conditions. The plasma nitrided surfaces presented a lower COF against the zirconia ball than the non-nitrided steel. When the surface layer starts to delaminate, the friction coefficient quickly exceeds the substrate values. The statistical analysis showed, with 95% statistical confidence, that the best setup to resist dry sliding reciprocating wear in the tested configuration was plasma nitriding at 550 °C for 6 h. However, Dalcin et al. 15 showed that the core hardness of samples nitrided at 550 °C was impaired. Therefore, for practical applications in components such as gears, the use of 500 °C is recommended, since temperatures of 400 and 450 °C would require excessively long treatment times to produce an appropriate case depth. In this work it was found that the gas mixture of 76 vol.% N 2 can be used to improve the surface properties of the DIN 18MnCrSiMo6-4 steel, but this nitrogen-rich gas composition can cause embrittlement of the diffusion zone, mainly in high-carbon steels 44,45. The embrittlement is associated with the formation of precipitates at the grain boundaries 44,48, but for low-carbon steels the embrittlement does not seem to be as intense. In order to avoid embrittlement of the diffusion zone, a new study using a gas mixture with a lower nitrogen content would be indicated. This issue will be addressed in future work.

Conclusions: • Plasma nitriding of DIN 18MnCrSiMo6-4 continuous cooling bainitic steel is viable, since the steel demonstrated a good response to the process, with excellent results regarding layer depth and surface hardness.
All plasma nitriding treatments were able to form a compound layer and to generate a significant increase in surface hardness, raising it from 329.8 HV 0.1 in the material core to values above 1000 HV 0.1;

• The plasma nitriding treatments were responsible for the reduction of the friction coefficient. The low initial friction occurs due to the ceramic character of the compound layer. The increase in COF over the course of the test is due to third-body debris from the ball and from the compound layer as it wears. The sudden growth of the COF is related to the wear of the surface layer, which exposes the diffusion zone. When the diffusion zone is reached, the friction rises and may even reach or exceed the friction of the non-nitrided samples;

• Regarding the wear mechanisms, only in samples with greater case depth (compound layer + diffusion zone) does the diffusion zone effectively support the compound layer. This makes the wear slower, precisely because wear begins in the hardest region, the compound layer, before reaching less hard regions. For this reason, the transition is smoother and it takes longer before microcracks or delamination appear. In the case of the samples plasma nitrided at 550 °C for 6 and 9 h, the wear test would have to be extended for a longer time until the damage mentioned above appears;

• Since the samples plasma nitrided at 550 °C have a thicker compound layer and a deeper diffusion zone than the other conditions investigated, these samples showed better wear performance in ball-on-flat reciprocating dry sliding than the other groups investigated. The nitrided layer formed in the treatment at 550 °C for 6 h presents the best performance for the tested conditions, with 95% statistical confidence.

Acknowledgments This
2021-07-27T00:05:04.853Z
2021-05-31T00:00:00.000
{ "year": 2021, "sha1": "efbfa04c4c675dc85a3f058c5d3903b93baf5849", "oa_license": "CCBY", "oa_url": "http://www.scielo.br/j/mr/a/TMQ4rBQ9f7Z83pR9zPhTg7h/?format=pdf&lang=en", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "5d11df4359f452bea1edd57285510848444da71d", "s2fieldsofstudy": [ "Materials Science" ], "extfieldsofstudy": [ "Materials Science" ] }
267108777
pes2o/s2orc
v3-fos-license
Watershed Management for Sustainable Livelihoods: Integrating Local Pond Ecosystems in Malda, West Bengal : This research article explores the intricate relationship between watershed management, local pond ecosystems, and sustainable livelihoods in the Malda district of West Bengal. The study investigates the potential of integrating local pond ecosystems into watershed management strategies as a means to enhance economic prosperity, ecological resilience, and community well-being. Through a multidisciplinary approach, combining ecological assessments, socioeconomic surveys, and participatory engagement, the research aims to provide insights into the synergies that exist between watershed management practices and the sustainable livelihoods of the local villagers. The research methodology involves field studies encompassing hydrological analysis, biodiversity assessments, and community consultations, aimed at understanding the dynamics of local pond ecosystems and their interconnectedness with the broader watershed. Special attention is given to the diverse ecosystem services provided by these ponds, such as water provisioning, nutrient cycling, and supporting aquatic biodiversity, and how they contribute to the overall resilience of the watershed. The findings of this study are expected to contribute valuable information for policymakers, environmental practitioners, and local communities in Malda district, offering a comprehensive understanding of how the integration of local pond ecosystems into watershed management plans can foster sustainable livelihoods. The article discusses potential strategies for optimizing the co-management of these resources, balancing the ecological health of the ponds with the socioeconomic needs of the local population. Ultimately, this research aims to provide a blueprint for sustainable watershed management practices that prioritize the conservation and utilization of local pond ecosystems, fostering a harmonious balance between ecological integrity and the livelihood aspirations of the communities in Malda, West Bengal.

Introduction: The Malda district in West Bengal, India, grapples with multifaceted challenges rooted in the delicate balance between natural resource management and the sustenance of local livelihoods. Among these challenges, watershed degradation and its impact on the well-being of communities stand out prominently. Local pond ecosystems, once integral to the socioecological fabric of the region, have faced neglect and degradation, contributing to a decline in both environmental quality and the economic prosperity of the inhabitants. This research endeavors to address these concerns through a focused exploration of watershed management strategies that integrate and prioritize the restoration of local pond ecosystems.

Problem Statement: Malda's communities heavily rely on agriculture, with the local economy intricately linked to the health of the watershed. However, unsustainable land-use practices, deforestation, and inadequate water management have led to increased soil erosion, decreased water quality, and diminished agricultural productivity. These challenges exacerbate existing vulnerabilities and threaten the livelihoods of the local population.
Prior Work: Existing literature has acknowledged the importance of watershed management for sustainable development. However, the specific focus on the integration of local pond ecosystems within watershed management strategies in the context of Malda is underexplored. Previous research tends to emphasize broader watershed issues or isolated pond management, neglecting the potential synergies that could arise from their integrated management.

Rationale Behind the Research: The rationale for this research stems from the recognition that addressing the challenges in Malda requires a holistic approach that considers both the watershed and its integral components, particularly local pond ecosystems. By bridging the gap between ecological health and human well-being, this study aims to contribute nuanced insights into sustainable watershed management practices that can serve as a model for similar regions facing comparable challenges.

Genesis of the Research: The genesis of this research lies in the direct observation of the deteriorating conditions in Malda, coupled with the acknowledgment of the unique role that local pond ecosystems have played historically in supporting livelihoods. Collaborative efforts with local communities, government bodies, and environmental organizations have highlighted the need for targeted interventions that consider both ecological restoration and community empowerment.

Individual Efforts: The research amalgamates the expertise of environmental scientists, hydrologists, social scientists, and community engagement specialists. A multidisciplinary approach is deemed essential to holistically address the complex interplay between environmental and socioeconomic factors. Individual efforts coalesce into a comprehensive research framework, combining field surveys, ecological assessments, socioeconomic analyses, and participatory methodologies.

Expected Outcomes: Anticipated outcomes include the development of sustainable watershed management strategies that integrate local pond ecosystems, fostering improved water quality, increased agricultural productivity, and enhanced community resilience. Furthermore, the research aims to provide actionable recommendations for policymakers, environmental practitioners, and local communities, fostering a model that prioritizes the symbiotic relationship between watershed health and sustainable livelihoods in Malda, West Bengal.

Materials & Methods:
Study Area: The research focuses on the Malda district in West Bengal, India. The study area encompasses a representative sample of watersheds within the district (Figures 01 to 05), selected based on ecological diversity, land-use patterns, and community characteristics. Special attention is given to areas with significant local pond ecosystems that are crucial to the livelihoods of the inhabitants.

Data Collection:
Ecological Assessments:
• A comprehensive analysis of the watershed's ecological parameters, including soil composition, vegetation cover, and water quality, is conducted.
• Hydrological data is collected through field measurements, including river discharge rates, groundwater levels, and precipitation.

Figures 1 to 5: Study location and sample collection sites.

Biodiversity Surveys:
• Faunal and floral biodiversity assessments are carried out to understand the impact of watershed management on local ecosystems (a worked example of one such diversity summary is sketched below).
• Emphasis is placed on identifying indicator species in and around the pond ecosystems.
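Biodiversity assessments of this kind are typically summarized with a diversity index. The article does not state which index was used, so the following is only a minimal Python sketch, using the common Shannon index on invented species counts for a single pond survey plot:

```python
import math

def shannon_index(counts):
    """Shannon diversity H' = -sum(p_i * ln p_i) over species proportions."""
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)

# Hypothetical species counts from one pond survey plot (not from the article).
pond_counts = [34, 12, 7, 5, 2]
print(f"Shannon H' = {shannon_index(pond_counts):.2f}")
```

Higher H' values indicate a community that is both richer in species and more even, which is one way the surveys' indicator-species observations could be tracked over time.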
Socioeconomic Surveys:
• Household surveys are conducted to gather information on livelihood patterns, agricultural practices, and the dependence on local pond ecosystems.
• Questionnaires and interviews are utilized to assess the perceptions and preferences of the local community regarding watershed management practices.

Remote Sensing and GIS Analysis:
• Remote sensing data is employed to analyze land-use changes over time, identifying areas prone to degradation and soil erosion.
• GIS mapping is utilized to visualize and interpret spatial relationships between watershed components and human activities.

Community Engagement:
Participatory Rural Appraisal (PRA):
• PRA techniques, such as focus group discussions and community mapping, are employed to actively involve local communities in identifying challenges and potential solutions.
• Traditional ecological knowledge from community members is documented.

Stakeholder Workshops:
• Workshops are organized with local stakeholders, including farmers, community leaders, and government officials, to discuss findings, elicit feedback, and co-design intervention strategies.

Intervention Design:
Identification of Key Intervention Points:
• Based on the data collected, critical points for intervention within the watershed are identified, with a focus on restoring and enhancing local pond ecosystems.

Ecological Restoration Strategies:
• Tailored ecological restoration plans are developed, incorporating measures such as afforestation, soil conservation, and water management to improve the health of local pond ecosystems.

Data Analysis:
Quantitative Analysis: Statistical analyses are performed on ecological and socioeconomic data using appropriate software to identify patterns, correlations, and trends (see the sketch following this section).
Qualitative Analysis: Qualitative data, including narratives from community members and insights from participatory methods, is analyzed thematically to understand local perspectives and values.

Ethical Considerations:
Informed Consent: Informed consent is obtained from all participants involved in surveys, interviews, and workshops.
Community Sensitization: Local communities are sensitized to the research objectives, and efforts are made to ensure that the research respects and benefits the community.

The integration of these diverse data collection methods and engagement strategies aims to provide a comprehensive understanding of the watershed dynamics in Malda and lay the foundation for sustainable watershed management practices that incorporate the restoration and sustainable use of local pond ecosystems. The research underscores the importance of biodiversity conservation within pond ecosystems. The presence of indicator species suggests the ecological health of these systems, and conservation efforts are essential to maintaining these balances.

Community Empowerment: Socioeconomic insights and community engagement activities underscore the critical role of communities in sustainable watershed management. Empowering local residents through capacity building and participatory decision-making processes emerges as a key strategy.

Policy Implications: The study discusses the implications of its findings for policy formulation. Recommendations for policy interventions that incentivize sustainable land-use practices and community-based conservation efforts are presented.
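The quantitative analysis step above is described only at a high level. As a hedged illustration (all variable names and numbers below are invented, not taken from the study's data), here is a minimal Python sketch of one such pattern-finding test, a Spearman rank correlation between a pond water-quality measure and household income:

```python
import pandas as pd
from scipy.stats import spearmanr

# Hypothetical survey table: one row per sampled pond/household cluster.
df = pd.DataFrame({
    "water_quality_index": [72, 65, 80, 58, 69, 75],
    "household_income_inr": [9500, 8200, 11800, 7400, 9100, 10600],
})

rho, p = spearmanr(df["water_quality_index"], df["household_income_inr"])
print(f"Spearman rho = {rho:.2f}, P = {p:.3f}")
```

A significant positive rho would support the article's premise that healthier pond ecosystems and household livelihoods move together; the study's actual software and tests are not specified.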
Challenges and Opportunities: Challenges faced during the integration process are discussed, including potential conflicts between conservation goals and local needs. Opportunities for synergies between conservation and development are explored to strike a balance.

Future Research Directions: The article concludes with suggestions for future research, including longitudinal studies to monitor the effectiveness of implemented interventions, further exploration of specific ecological indicators, and ongoing community engagement for adaptive management.

Figure 11: River discharge (Jul-Sep) across watersheds.

Conclusion: In conclusion, this research underscores the imperative of integrating local pond ecosystems into watershed management strategies as a holistic approach to fostering sustainable livelihoods in the Malda district of West Bengal. The findings presented in this study shed light on the interconnectedness of ecological health, community well-being, and watershed dynamics, advocating for a paradigm shift in current practices.

Key Contributions:
Holistic Approach to Watershed Management: By recognizing the integral role of local pond ecosystems, this study advocates for a holistic approach to watershed management. The integration of these ecosystems into planning and implementation processes is essential for achieving long-term sustainability.
Ecological Resilience and Livelihood Security: The research provides empirical evidence linking the restoration of local pond ecosystems to improved ecological resilience and enhanced livelihood security. The symbiotic relationship between healthy ecosystems and thriving communities underscores the need for conservation efforts.
Community Engagement and Empowerment: The participatory methodologies employed in this study ensured the active involvement of local communities in the decision-making process. Community engagement emerges as a cornerstone for successful and sustainable watershed management initiatives.
Policy Recommendations: The study offers practical policy recommendations aimed at incentivizing sustainable land-use practices, supporting community-based conservation efforts, and fostering an enabling environment for integrated watershed management. These recommendations have the potential to guide policymakers in formulating context-specific strategies.

Challenges and Considerations:
Balancing Conservation Goals and Local Needs: The integration of conservation goals with the diverse needs of local communities poses challenges. Striking a balance between ecological conservation and meeting the socio-economic needs of the residents requires careful planning and adaptive management.
Long-Term Monitoring and Adaptive Management: Recognizing the dynamic nature of ecosystems, long-term monitoring and adaptive management are essential. The research emphasizes the importance of continued engagement with local communities and stakeholders to assess the effectiveness of implemented interventions over time.

Future Directions:
Longitudinal Studies: Future research should focus on longitudinal studies to assess the sustained impact of interventions. Understanding the long-term ecological and socioeconomic changes is crucial for adaptive management and continuous improvement.
Refinement of Ecological Indicators: Further exploration of specific ecological indicators is recommended to refine and strengthen monitoring efforts. Identifying key indicators will contribute to a more nuanced understanding of ecosystem health.
Continued Community Engagement: Ongoing community engagement is crucial for the success of integrated watershed management initiatives. Ensuring that local knowledge and perspectives are incorporated into decision-making processes will enhance the sustainability and acceptability of interventions.

In essence, this research contributes to the evolving discourse on sustainable watershed management by emphasizing the interconnectedness of local pond ecosystems and the livelihoods of the residents in Malda, West Bengal. Through a multidisciplinary and participatory approach, the study provides a foundation for future initiatives that prioritize both environmental conservation and community well-being in watershed management strategies.

Acknowledgement: The successful completion of this research endeavor was made possible through the collaborative efforts and support of various individuals and organizations, without whom this study would not have been feasible. We extend our deepest gratitude to the residents of the Malda district in West Bengal, whose unwavering cooperation and insights significantly enriched the research. Their active participation in surveys, interviews, and community workshops formed the bedrock of this study, and their commitment to sustainable development serves as an inspiration. Our heartfelt thanks go to the local community leaders, farmers, and stakeholders who generously shared their knowledge, experiences, and aspirations. Their willingness to engage in meaningful dialogue contributed immensely to the depth and richness of the research findings. We would like to express our appreciation to the governmental bodies and non-governmental organizations that collaborated with us throughout the research process. Their guidance, logistical support, and shared expertise played a crucial role in shaping the study and ensuring its relevance to local contexts. The success of this research owes much to the dedication and hard work of our research team. The interdisciplinary nature of this study required the expertise of environmental scientists, hydrologists, social scientists, and community engagement specialists, each bringing unique insights to the table. The collaboration and support from all these individuals and organizations have been instrumental in making this research on watershed management and local pond ecosystems in Malda, West Bengal, a reality. We extend our heartfelt thanks to everyone who has contributed to the success of this project.

The discussion advocates for the integration of local pond ecosystems within broader watershed management plans. By recognizing ponds as integral components, the research highlights the potential for enhanced ecological resilience and sustainable livelihoods.
Ecological Restoration Strategies: Based on the ecological assessments, the discussion emphasizes the need for targeted restoration strategies. Afforestation, soil conservation, and water management interventions are proposed to improve the health of pond ecosystems and the overall watershed.
Socioeconomic Insights: Socioeconomic surveys elucidated the dependence of local communities on pond ecosystems. Livelihood patterns, agricultural practices, and income sources were documented to understand the socioecological context.
Figures 06 to 08: Key species indicators for ecosystem health in Malda.

Ecological Health of Pond Ecosystems: Comprehensive ecological assessments revealed the current health status of local pond ecosystems. Factors such as water quality, nutrient levels, and biodiversity were analyzed to gauge the overall ecological integrity.
Watershed Dynamics: Hydrological data showed variations in river discharge rates, groundwater levels, and precipitation patterns across the studied watersheds. This information contributes to understanding the broader watershed dynamics.
Biodiversity Patterns: Biodiversity surveys identified key species indicative of ecosystem health. Changes in faunal and floral diversity were noted, providing insights into the impact of watershed management on local ecosystems.
Land-Use Changes: Remote sensing and GIS analyses revealed significant land-use changes over time. Deforestation, urbanization, and shifts in agricultural practices were identified, highlighting areas vulnerable to degradation.
Community Perceptions: Participatory rural appraisal techniques captured local perspectives on pond ecosystems and their significance. Community workshops further provided insights into the perceptions, concerns, and aspirations of the residents.
2024-01-24T17:55:15.736Z
2024-01-13T00:00:00.000
{ "year": 2024, "sha1": "2231dc40fbf23c0536265ae048d9965bc2511b56", "oa_license": "CCBYSA", "oa_url": "https://www.ijfmr.com/papers/2024/1/12017.pdf", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "afd0277c7d7c213b9f9a89515e058748e41f71f3", "s2fieldsofstudy": [ "Environmental Science", "Geography", "Economics" ], "extfieldsofstudy": [] }
58286529
pes2o/s2orc
v3-fos-license
Complete heart block in pregnancy: A pregnant woman presented with increasing exertional dyspnea and was found to have complete heart block with a junctional escape rhythm. The complete heart block did not resolve with exercise testing, suggesting infranodal disease. A presumptive diagnosis of mild viral myocarditis was made, the patient having been exposed to her toddler with a viral exanthem days before. After giving steroids to preemptively accelerate fetal lung maturity and several days of close observation, the AV block resolved. She was discharged in stable condition without need for temporary or permanent pacing and later delivered a healthy infant.

INTRODUCTION: Maternal complete heart block in pregnancy is a rare entity. If a reversible cause of complete heart block is not found, then a permanent pacemaker or use of transvenous pacing during labor may be indicated if the patient is symptomatic.

CASE: A 32-year-old woman with no past medical history presented in the 26th week of pregnancy with increasing exertional dyspnea and lightheadedness without syncope. She reported having a rash on her palms and soles about ten days prior to onset of the above symptoms, following a diagnosis of hand, foot, and mouth disease in her toddler, who had a similar viral exanthem including oral ulcerations. Upon evaluation the patient appeared comfortable and in no distress; vital signs showed bradycardia in the 40s with normal blood pressure. Physical exam was remarkable for bradycardia with otherwise regular heart tones, no murmurs, normal jugular venous pressure, and a gravid uterus. An electrocardiogram (ECG) demonstrated sinus rhythm with third-degree AV block and a junctional escape rhythm (see Figure 1); she had no prior ECG for baseline comparison. The patient was admitted for close observation but did not immediately require temporary pacing measures because of stable blood pressure. Laboratory data showed mild leukocytosis (12k, 71% PMN, 21% lymph), troponin I 0.24 ng/ml (reference normal < 0.04), CRP 6.2 mg/L, ESR 19 mm/hr, TSH 2.36, ANA < 1:80, and negative Lyme serology. Fetal heart tones were reassuring without fetal bradycardia. She received two doses of empiric betamethasone for fetal lung maturity, in case premature delivery occurred with use of atropine. Transthoracic echocardiogram showed no evidence of structural heart disease. On hospital day 2, the third-degree AV block persisted, and assessment of both chronotropic competence and the level of AV block was made by exercise treadmill testing (per modified Bruce protocol) in order to determine whether back-up pacing might be required. During exercise she maintained normal blood pressure, but the rhythm remained in complete heart block while the junctional rhythm accelerated to 109 bpm (see Figure 2). She did not experience symptoms during exercise; the test was terminated once chronotropic competence was demonstrated. After seven days of telemetry observation, she began to show improved 1:1 AV nodal conduction with only episodic, asymptomatic AV dissociation. Daily fetal monitoring remained reassuring, and she was discharged home in good condition. At the two-week follow-up, electrocardiogram showed normal sinus rhythm with 1:1 AV conduction, and the patient was completely symptom-free. Holter ECG at one-month follow-up showed normal sinus rhythm with normal heart rate variability (average 69 bpm) and no AV block.
The remainder of her pregnancy progressed without complication, and she underwent scheduled induction at 38 weeks and 6 days, delivering a healthy infant by spontaneous vaginal delivery. There were no episodes of maternal heart block during labor or the peripartum period.

DISCUSSION: Complete heart block in pregnancy is rare, and reversible causes must be readily identified. The narrow differential diagnosis includes: initial presentation of maternal congenital complete heart block, associated with maternal connective tissue disorders; hypothyroidism; and myocarditis. If the ECG shows narrow ventricular escape complexes (junctional) and the atrioventricular (AV) conduction improves with atropine or exercise, then the block is most likely within the AV node. These features are often indicative of a temporary and reversible AV block, and pacing may not be indicated. On the contrary, if the AV conduction worsens with atropine or exercise, the level of block is more likely to be at the level of the His bundle, and pacing methods may be required. In our patient, AV conduction did not improve with exercise; rather, the junctional escape rhythm became faster. An identifiable cause of heart block was not found. Her recent viral exanthem following exposure to her toddler, which preceded the onset of symptoms, could have caused a mild viral myocarditis from an adenovirus or enterovirus (not tested). The clinical history suggested that this woman developed mild viral myocarditis, leading to peri-AV nodal edema and transient complete heart block. The use of steroids for accelerated fetal lung maturity, together with time, led to restoration of normal sinus rhythm with 1:1 conduction. Temporary or permanent pacing may be indicated in the setting of symptomatic complete heart block with associated syncope, which our patient did not experience. The most comprehensive review of pacing strategies in complete heart block during pregnancy, by Hidaka et al., suggests that most cases of asymptomatic complete AV block can be safely managed during labor without temporary pacing. [1] However, the symptomatic patient near term may require insertion of a temporary pacemaker as well as epidural anesthesia and assisted delivery to minimize cardiac demand during labor. Permanent pacing is reserved for women who remain in symptomatic complete AV block in the post-partum period or who develop bradycardia-related symptoms (syncope) during early pregnancy. Pacing is not necessarily indicated when no symptoms are present. [2] Evaluation of chronotropic competence can also be achieved with atropine and is a key step to determine whether a pacemaker will be necessary. [3] Atropine was avoided in this patient due to uncertainty of its effect upon the fetus, and a similar evaluation of chronotropic competency was established with moderate exercise.

CONCLUSION: Complete heart block in pregnancy is a rare entity that is often due to a reversible underlying condition, which should be readily identified. Response to atropine and exercise helps to delineate the locus of conduction block. Our patient was thought to have mild viral myocarditis leading to peri-AV nodal edema and complete heart block that did not improve with exercise. Her symptoms fully resolved within days of presentation.

CONFLICTS OF INTEREST DISCLOSURE: The authors have no competing interests to declare.
2019-01-20T14:15:02.681Z
2017-02-27T00:00:00.000
{ "year": 2017, "sha1": "61f8c2252c690aaee747055bf14641072317b3ee", "oa_license": null, "oa_url": "http://www.sciedu.ca/journal/index.php/crim/article/download/10719/6817", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "184c04800ec35c3de1d22c31486e30d495a0a331", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
213424777
pes2o/s2orc
v3-fos-license
The Influence of Breed and Type of Extender on the Quality of Bull Semen: This study aimed to determine the influence of breed and type of extender on the frozen semen quality of superior bulls at the Lembang Artificial Insemination Center (BIB Lembang). The experimental study was conducted in a factorial Randomized Block Design (RBD) with two factors. The first factor was four breeds of bull, i.e. Ongole Cross (PO), Brahman (BR), Simmental (SM) and Limousin (LM), and the second factor was two types of extender, i.e. skim milk-egg yolk (SEY) and AndroMed® (AND), all repeated four times. The observed variables were the decrease in percentage of spermatozoa motility and intact plasma membrane (IPM). All data obtained were analyzed using a general linear model (IBM SPSS ver. 23). The results demonstrated an interaction between breed and type of extender on motility. Breeds showed significantly different motility decline but non-significantly different decline in intact plasma membrane (IPM) of semen. The type of extender did not significantly affect motility or intact plasma membrane (IPM) of the frozen semen. Regarding the effect of breed on motility, the decline in BR was lower than, and significantly different from, PO, LM and SM. The motility decline with SEY was lower than with AND; for IPM, the decline in PO was lower than in BR, LM and SM, and the decline with AND was lower than with SEY. It can be concluded that breed influences the motility of semen. The lowest motility reduction in frozen semen was in Brahman cattle using the skim milk-egg yolk extender. Keywords: breed, extender type, motility, intact plasma membrane

Introduction: Reproductive technology, such as artificial insemination, has been implemented to accelerate and improve genetic quality.
Artificial insemination is a technique to deposit semen from a bull into the female reproductive tract with special equipment. Artificial insemination is superior because it enables more effective and efficient cattle mating or breeding: superior semen can fertilize female cows more effectively through artificial insemination than through natural mating (Wahyuningsih et al., 2013), and it improves genetic quality 3 to 4 times faster (Arifiantini et al., 2005). One of the critical factors of an artificial insemination program is frozen semen. The quality of frozen semen is determined by various factors such as freezing technique, type of dilution or extender, and type and concentration of cryoprotectant (Ariantie et al., 2013). Semen diluents can maintain the quality of spermatozoa during the cooling, freezing, and thawing process (Aboagla and Terada, 2004), and the best diluent should be able to reduce the impairment rate of sperm motility (Zega et al., 2015). Commercial diluents are composed of varied ingredients and formulations, such as AndroMed® (Minitube, Germany), which is based on lecithin from soybeans (Arifiantini and Yusuf, 2010). Semen diluents have varied components which reflect different capabilities to support spermatozoa survival (Solihati et al., 2008), and each diluent exhibits specialties (Paulenz et al., 2002). Accordingly, the variation of an extender's components may affect semen quality differently across breeds. There is an interaction between cattle breed and type of extender because the physiological aspects of cattle breed may influence the semen quality.

Time and Location of Research: This research was conducted from December 22, 2017 to January 5, 2018 in the production laboratory of the Artificial Insemination Center Lembang, West Bandung regency, West Java Province.

Research Material: This research used semen from four eight-year-old bulls (Ongole Cross breed/PO, Brahman/BR, Simmental/SM and Limousin Cross breed/LM). Semen was collected twice a week. Feeding consisted of 1 kg African grass hay, 4 kg concentrate, 15 g feed mix, 7 g mineral selenium, and 50 kg elephant grass.

Preparation of Semen Diluents: The first type of extender, skim milk-egg yolk (SEY), was prepared one day before semen collection, stored in a refrigerator at 4°C and divided into two parts. Part A (500 ml) consisted of 500 g of skim milk (Tropicana Slim) dissolved in 500 ml aquadest and heated at 90-92°C for 10 minutes. After the solution had cooled to 5°C, antibiotics (Penicillin 100,000 IU and Streptomycin 100 mg in 10 ml aquadest solution) were incorporated into the solution at a 1:100 ratio. The solution was reduced to 475 ml and 25 ml egg yolk was added. Part B (500 ml) consisted of 10 g glucose, 25 ml egg yolk, 80 ml glycerol and 385 ml aquabidest. The second type of extender, the AndroMed® diluent (AND), was made before semen collection at a 1:4 ratio.

Semen Collection and Evaluation: Semen was collected twice a week in the morning using an artificial vagina for macroscopic and microscopic evaluation. Microscopic evaluation included concentration using SDM 5 and mass movement, assessed by putting a fresh drop of semen onto an object glass for microscopic observation at 200x magnification (Olympus BX 53) or screen observation. Individual motility was measured by dripping semen on top of an object glass, adding 4-5 drops of physiological NaCl, homogenizing and sealing with a cover glass for microscopic observation at 200x magnification (Olympus BX 53) on a heating table (37°C) or through a monitor screen.
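The AndroMed preparation above states a 1:4 ratio without naming its components; commercial AndroMed is conventionally reconstituted as one part concentrate to four parts water, and that assumption, along with the example volume, underlies this minimal Python sketch of the preparation arithmetic:

```python
def andromed_prep(concentrate_ml, parts_water=4):
    """Reconstitute AndroMed at an assumed 1:4 concentrate-to-water ratio."""
    water_ml = concentrate_ml * parts_water
    return water_ml, concentrate_ml + water_ml

concentrate_ml = 20.0  # hypothetical volume of concentrate, not from the study
water_ml, total_ml = andromed_prep(concentrate_ml)
print(f"{concentrate_ml} ml concentrate + {water_ml} ml water = {total_ml} ml extender")
```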
Plasma membrane integrity (IPM) was evaluated with a hypoosmotic swelling test (HOS) solution. Semen was mixed with HOS solution (1:100 ratio), homogenized and incubated for 30-45 min at 37°C. Under microscopic observation (Olympus BX 53) at 400x magnification, at least 200 spermatozoa were counted in 10 fields of view. Sperm with an intact plasma membrane are characterized by a circular (coiled) or swollen tail, whereas damaged ones have a straight tail.

Dilution and Equilibration: The examined semen was divided into two parts and each part was diluted slowly with SEY or AND diluent at a dose of 25 million spermatozoa per 0.25 ml straw (Minitube). The diluted semen was placed in a cool top cabinet for 4 h of equilibration.

Filling and Sealing: Straws (Minitube, 0.25 cc) were coded using an automatic printing machine and filled with the semen. The end of each straw was closed using an automatic filling-and-sealing machine.

Freezing: The straws were placed on a special rack and inserted into an automatic freezing machine for 9 minutes. The straws were then moved from the rack into goblets and immersed in liquid nitrogen.

Storage: The goblets of frozen semen were inserted into a canister inside a container of liquid nitrogen and left for 24 hours.

Thawing of the Frozen Semen: After 24 hours of storage, the semen was thawed at 37°C for 30 seconds, then evaluated for motility and intact plasma membrane (IPM) percentage.

Data Analysis: The data were analyzed in a factorial randomized block design followed by Duncan's test for differences across treatments and interactions (Gaspersz, 1995).

Fresh Semen Characteristics: The fresh bull semen met the quality standard. Ax et al. (2008) stated that bull sperm concentration ranges between 2 × 10^8 and 1.8 × 10^9 sperm/ml, and the average progressive motility is 60-75% (Table 1).

Semen Quality after Treatment: During thawing and frozen semen production (collection, dilution, equilibration, freezing and storage), a series of changes occurs in temperature, osmotic pressure, and ice formation and dissolution in the extracellular environment (Watson, 2000), which may damage cells, decrease sperm motility, viability and integrity of the plasma membrane, and damage spermatozoa DNA (Priyanto et al., 2015).

Effect of Breeds on Semen Quality: Statistical analysis showed a significant difference (P<0.05) between breeds and types of extender. Table 3 illustrates the decreasing percentage of semen motility across breeds. The highest motility decrease occurred in Simmental (SM) (35.00 ± 9.63), which was significantly different from PO and Limousin (26.87 ± 5.30) and Brahman (18.75 ± 12.75). The decrease was within the reasonable limit which, according to Parrish (2003), is 10-40%, and was 33.27 ± 5.57% in Pasundan cattle (Baharun et al., 2017). Mostari et al. (2004) reported that different breeds showed varied percentages of motility due to differences in energy source composition in seminal plasma (Rahmawati et al., 2015). Accordingly, the declining motility was influenced by the breed as a semen producer. The results showed that breeds did not impose a significantly different effect on the decrease of intact plasma membrane (IPM) (P>0.05). The lowest decrease, in PO (16.90 ± 1.72), was lower than in a study by Gunawan et al. (2004), i.e. 46.68%. The highest rate of decrease in IPM was observed in Simmental (18.07 ± 1.88), which was lower than the 21.25 ± 6.86 reported by Sukmawati et al. (2014).
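The factorial randomized block analysis above was run in SPSS; as a hedged stand-in, the sketch below reproduces the same 4 × 2 factorial model with a block term in Python using statsmodels. All values are randomly generated placeholders, not the study's data, and the Duncan post hoc step is omitted (statsmodels offers Tukey's HSD instead):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(0)

# Invented motility-decrease values for 4 breeds x 2 extenders x 4 blocks.
data = pd.DataFrame({
    "breed": ["PO", "BR", "SM", "LM"] * 8,
    "extender": (["SEY"] * 4 + ["AND"] * 4) * 4,
    "block": [b for b in range(1, 5) for _ in range(8)],
    "motility_drop": rng.normal(27, 8, size=32).round(1),
})

# Two-way factorial model with interaction, plus the collection block.
model = ols("motility_drop ~ C(breed) * C(extender) + C(block)", data=data).fit()
print(sm.stats.anova_lm(model, typ=2))
```

With the study's real measurements in place of the random values, the interaction row of the ANOVA table is what supports the breed-by-extender conclusions reported below.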
Many changes during frozen semen processing may damage sperm cells and decrease plasma membrane integrity (Priyanto et al., 2015; Watson, 2000). Extreme temperature and osmolarity changes affect the configuration of lipid membrane structures and lipid compositions, interfering with their permeability and function (Moce and Graham, 2006; Cooter et al., 2005; Watson, 2000). Spermatozoa of each animal species have different membrane compositions; therefore, they demonstrate different resistance to cooling and freezing (Sukmawati et al., 2014).

Effect of extender types on semen quality: Table 5 shows that the extender or diluent type had no significant effect on the decrease of motility and intact plasma membrane (IPM) of semen (P>0.05). The decreasing rate of motility in the SEY diluent was lower than in AND because spermatozoa metabolized the glucose in SEY (Gunawan et al., 2004) as an energy source to preserve motility and maintain lifespan (Widjaya, 2011). Skim milk as a buffer can effectively counter pH changes caused by residual lactic acid from energy-producing metabolism. Also, unlike soybeans, egg yolk contains 0.6% carbohydrate that could maintain spermatozoa motility (Kusumawati and Leondro, 2011). According to Gunawan et al. (2004), the rate of motility decrease in skim milk-egg yolk dilution was lower than in AND for PO cattle. This contrasts with Arifiantini and Yusuf (2004), who argued that the best motility with the AND diluent was in FH bulls, and with Baharun et al. (2017) for Pasundan cattle. The rate of IPM decrease in the AND extender was lower than in the SEY diluent, which demonstrates the effect of lecithin as an anti-cold-shock agent (Aboagla and Terada, 2004; Gunawan et al., 2004) and membrane coating (Rezki et al., 2016) that maintains IPM. However, Moussa et al. (2002) reported that lecithin from soybeans contains lower levels of high-density lipoprotein (HDL) than egg yolk, which inhibits spermatozoa respiration. Similarly, Ghareeb et al. (2017) stated that the AND diluent gave the best plasma membrane results in Brangus-Hereford cattle. Temperature changes induce stress on the membrane because the lipid phase and membrane function also change (Watson, 2000). Lecithin can retain and protect spermatozoa from cold stress (Permatasari et al., 2013), and the same may hold for the high-density lipoprotein (HDL) contribution to IPM (Anwar et al., 2015). Lecithin derived from egg yolk in the SEY diluent and from soybean in AND demonstrates a similar ability to maintain the IPM of spermatozoa.

Effect of interaction between breed and type of extender on semen quality: There is an interaction between breed and dilution or extender type on the decrease of motility (P<0.05). The combination of BR with the SEY diluent showed the lowest rate of motility decrease, while the highest rate of motility decrease occurred in the combination of SM with SEY dilution. There is also an interaction between breed and dilution type on the decrease of IPM (P<0.05). The combination of LM with the AND dilution showed the lowest rate of IPM decrease, whereas the highest rate of IPM decrease occurred in LM with the SEY diluent.

Conclusions: It can be concluded that (1) breed has a significant effect on the decrease of motility, but not on IPM; (2) type of diluent had no significant effect on the decrease of motility and IPM; (3) there was an interaction between breed and type of diluent in the decrease of motility and IPM.
The best combination was the SEY diluent for the BR bull, whereas the AND diluent for LM demonstrated the optimum (smallest) decrease in motility and IPM.
2020-03-19T20:12:25.586Z
2020-02-25T00:00:00.000
{ "year": 2020, "sha1": "044d8441ca75cb8755a1937641b2192aa4fdc051", "oa_license": "CCBYSA", "oa_url": "http://www.animalproduction.net/index.php/JAP/article/download/641/pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "28bbd23ea46d3724e75b28af86e206ccde066828", "s2fieldsofstudy": [ "Agricultural and Food Sciences" ], "extfieldsofstudy": [ "Chemistry" ] }
257028885
pes2o/s2orc
v3-fos-license
Revealing links between gut microbiome and its fungal community in Type 2 Diabetes Mellitus among Emirati subjects: A pilot study: Type 2 diabetes mellitus (T2DM) drastically affects the population of Middle East countries, with an ever-increasing number of overweight and obese individuals. The precise links between T2DM and gut microbiome composition remain elusive in these populations. Here, we performed 16S rRNA and ITS2 gene-based microbial profiling of 50 stool samples from Emirati adults with or without T2DM. The four major enterotypes initially described in westernized cohorts were retrieved in this Emirati population. T2DM and non-T2DM healthy controls had different microbiome compositions, with an enrichment in the Prevotella enterotype in non-T2DM controls, whereas T2DM individuals had a higher proportion of the dysbiotic Bacteroides 2 enterotype. No significant differences in microbial diversity were observed in T2DM individuals after controlling for confounding factors, contrasting with reports from westernized cohorts. Interestingly, fungal diversity was significantly decreased in the Bacteroides 2 enterotype. Functional profiling from 16S rRNA gene data showed marked differences between T2DM and non-T2DM controls, with an enrichment in amino acid degradation and LPS-related modules in T2DM individuals, whereas non-T2DM controls had increased abundance of carbohydrate degradation modules, in concordance with enterotype composition. These differences provide insight into gut microbiome composition in the Emirati population and its potential role in the development of diabetes mellitus.

Previous studies have reported an enrichment in T2DM of opportunistic pathogens, such as Bacteroides caccae, Clostridium hathewayi, Clostridium ramosum, Clostridium symbiosum, Eggerthella lenta and E. coli 3,[7][8][9][10]. These changes may induce disturbances in the host gut barrier, in metabolic homeostasis and low-grade inflammation, in short chain fatty acid synthesis and fat deposition, as well as in hormonal regulation involving glucagon-like peptide-1 synthesis. These factors contribute to glucose metabolism alteration, insulin resistance and dyslipidemia in patients with diabetes [11][12][13][14]. While the interaction between gut microbiome and metabolic health has been studied in several populations, exploring these interactions in Middle East countries is of particular interest considering the very high prevalence of diabetes in this region of the world 15. Researchers have mostly focused on examining the bacterial members of the gut microbiome, but very little is known about the fungal communities, which are non-negligible components of the gut. Mycobiota were first described as members of the normal gut flora in 1967 16. Fungal populations comprise less than 1% of the total gut microbiome. However, recent studies have indicated that, despite their low abundance, these fungi have relevant effects on dampening inflammatory responses in the gut, especially in inflammatory bowel diseases 17,18. Others have reported their impact on bacterial community composition [19][20][21]. Fungi may represent a key part of the microbial community with significant impact on the gut ecosystem, and possibly host health 21. However, the potential role of fungi and their interactions with the host, with other members of the gut community, and with metabolic health needs further understanding.
Research groups have demonstrated a significant impact of T2DM on gut microbial richness and relative abundance 4,22,23 and underscored a significant contribution of the gut microbiome to T2DM phenotypes such as insulin resistance and low-grade inflammation 24. However, little is known about the relationship between T2DM and the gut microbiome in the UAE population. Here, we examined bacterial and fungal microbiome composition and possible functional consequences in T2DM individuals from an Emirati population. We performed 16S rRNA gene and ITS2-based microbial profiling analysis of 50 stool samples from 25 T2DM and 25 non-T2DM individuals. We conducted phylogenetic investigation of communities by reconstruction of unobserved states (PICRUSt) functional analyses based on 16S rRNA gene abundance profiles to gain deeper insight into the potential functional impact on the host in T2DM in this Emirati population.

Materials and methods

Patient inclusion and ethical statement. The study was performed after receiving the necessary ethical approval from the University Hospital Sharjah Ethics Research Committee (UHS-HERC-021-0702). The study was performed in accordance with the relevant research guidelines and regulations of the committee. We randomly identified 25 native Emirati subjects with a diagnosis of T2DM attending the endocrinology clinic. We also identified as controls 25 otherwise healthy Emirati individuals who had HbA1c levels < 6%. All volunteers were provided with an information sheet and an explanation of the study objectives, design, and confidentiality. We obtained written informed consent. We provided all subjects with a sterile stool specimen container with integrated collection spoon and collection instructions. For a total of 50 stool specimens, 2 to 4 grams of freshly passed stool were collected in sterile containers. The specimens were stored immediately in liquid nitrogen and transferred to −80°C for storage until further analysis. Liquid (diarrheal) stools and use of antibiotics in the last 3 months were the exclusion criteria for this study.

DNA extraction. Faecal samples were subjected to DNA extraction using the QIAamp PowerFecal DNA Kit (Qiagen Ltd, GmbH, Germany) following the manufacturer's instructions (Qiagen Ltd). The extracted DNA was stored at −80°C for further analysis.

Bacterial and fungal PCR, sequencing, sequence analysis and taxonomic composition. Bacterial 16S rRNA genes were amplified using polymerase chain reaction (PCR) targeting the V4 region with dual-barcoded primers, as per the procedure described in 25. Next, amplicons were sequenced on an Illumina MiSeq using the 250-bp paired-end kit (v.2). Sequences were denoised, taxonomically classified using Greengenes (v. 13_8) as the reference database, and clustered into 97%-similarity operational taxonomic units (OTUs) with the mothur software package (v. 1.39.5) as previously described 26, following the recommended procedure (https://www.mothur.org/wiki/MiSeq_SOP; accessed August 2018). The resulting dataset had 21257 OTUs (including those occurring once with a count of 1, or singletons). An average of 18383 quality-filtered reads was generated per sample. Sequencing quality for R1 and R2 was determined using FastQC 0.11.5. The ITS2 region was sequenced on an Illumina MiSeq (v. 2 chemistry) using the dual barcoding protocol as described in 25. Primers and PCR conditions used for 16S rRNA gene and ITS2 sequencing were identical to those previously described 27.
Bacterial sequences were processed and clustered into operational taxonomic units (OTUs) with the mothur software package (v. 1.39.5) 26, following the recommended mothur SOP. Paired-end reads were merged and curated to reduce sequencing error as described in 28. The resulting dataset had 3171 OTUs (including those occurring once with a count of 1, or singletons). An average of 9581 quality-filtered reads was generated per sample. Sequencing quality for R1 and R2 was determined using FastQC 0.11.5. The fungal processing pipeline was identical to the one used for bacteria, except for the following differences: (1) paired-end reads were trimmed at the non-overlapping ends, and (2) high-quality reads were classified using UNITE (v. 7.1) as the reference database, as described before 29. A consensus taxonomy for each OTU was obtained and the OTU abundances were then aggregated into genera. The OTU table was rarefied to 10000 reads per sample to correct for differences in sequencing depth with the rarefy_even_depth function of the phyloseq R package 30, and alpha diversity indexes (Observed species, Shannon, ACE) were computed from the rarefied OTU table with the estimate_richness function of the phyloseq R package. The R package vegan was used to compute the beta-diversity matrix from the rarefied OTU table collapsed at genus level (vegdist function) and to visualize microbiome similarities with principal coordinates analysis (PCoA) (cmdscale function) 31. Enterotype classification was performed from the same genus abundance matrix used for PCoA analyses following two different approaches. First, samples were clustered using the Jensen-Shannon divergence (JSD) distance and the Partition Around Medoids (PAM) clustering algorithm as described in Arumugam et al. 32. Second, samples were clustered from genus abundance data using the Dirichlet Multinomial Mixture (DMM) method of Holmes et al. 33. The DMM approach groups samples if their taxon abundances can be modeled by the same Dirichlet-Multinomial (DM) distribution.
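The rarefaction, alpha diversity and Bray-Curtis PCoA steps above were done with phyloseq and vegan in R; as an illustrative stand-in only, here is a self-contained Python sketch of the same sequence on an invented OTU table (rarefy to even depth, compute Shannon diversity, then classical PCoA by double-centering the squared distance matrix):

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(42)

def rarefy(counts, depth=10000):
    """Subsample a count vector to a fixed depth without replacement,
    mirroring phyloseq's rarefy_even_depth."""
    pool = np.repeat(np.arange(counts.size), counts)
    keep = rng.choice(pool, size=depth, replace=False)
    return np.bincount(keep, minlength=counts.size)

def shannon(counts):
    p = counts[counts > 0] / counts.sum()
    return -(p * np.log(p)).sum()

# Hypothetical OTU table: rows = samples, columns = OTUs (not the study's data).
otu = rng.integers(0, 500, size=(6, 200))
rare = np.array([rarefy(row) for row in otu])

alpha = [shannon(row) for row in rare]
print(np.round(alpha, 2))

# Bray-Curtis dissimilarity and classical PCoA via eigendecomposition.
d = squareform(pdist(rare, metric="braycurtis"))
n = d.shape[0]
J = np.eye(n) - np.ones((n, n)) / n
B = -0.5 * J @ (d ** 2) @ J                      # double-centered matrix
vals, vecs = np.linalg.eigh(B)
order = np.argsort(vals)[::-1]
coords = vecs[:, order[:2]] * np.sqrt(np.maximum(vals[order[:2]], 0))
print(coords)  # first two principal coordinates per sample
```

The eigendecomposition step is the textbook classical-scaling equivalent of vegan's cmdscale.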
Quality control. The possibility of contamination was examined by co-sequencing DNA amplified from the samples and from four each of template-free controls and extraction kit reagents treated the same way as the samples. Two positive controls, consisting of cloned SUP05 DNA, were also added (number of copies = 2 × 10^6). Operational taxonomic units were considered putative contaminants (and were removed) if their mean abundance in controls reached or surpassed 25% of their mean abundance in samples, as described before 34.

Functional profiling from 16S rRNA gene data. Gene family abundances in the KEGG Orthology (KO) functional space were computed from the rarefied 16S rRNA gene OTU abundance matrix and GreenGenes taxonomic annotations with PICRUSt-1.1.3 35. This includes correcting OTU abundances by the 16S rRNA gene copy number of the reference GreenGenes taxa with the normalize_by_copy_number.py script, computing the KO abundance matrix from the copy-number-corrected OTU abundance matrix with the predict_metagenomes.py script, and determining OTU contributions to each KO abundance vector with the metagenome_contributions.py script. Gut Metabolic Modules (GMMs) were quantified from the PICRUSt KO abundance matrix with the GOmixer R package 36.

Statistical analysis. Linear regression analyses were used to evaluate the impact of different clinical variables (age, BMI, weight, diet and gender) and disease state on the alpha diversity distribution. The significance of diversity changes after excluding the variability explained by the age confounder was tested with the non-parametric Wilcoxon test on the residuals of linear regression analyses of alpha diversity (dependent variable) vs. age (independent variable). To evaluate beta diversity across samples, we excluded genera occurring in fewer than 10% of the samples with a count of less than three and calculated Bray-Curtis indices. Environmental fitting of clinical variables (age, BMI, weight, diet and gender) and disease state over the principal coordinates analysis ordination from the Bray-Curtis inter-sample dissimilarity matrix was computed with the envfit and cmdscale functions of the vegan R package 37. Dissimilarity in community structure by disease state was assessed with permutational multivariate analysis of variance (PERMANOVA), with non-T2DM vs. T2DM groups as the main fixed factor and using 4,999 permutations for significance testing with the adonis function of the vegan R package. To identify taxonomic and functional features associated with disease state while accounting for the confounding effect of age, generalized linear models (GLM) with negative binomial distribution were fitted with feature abundance as the dependent variable and disease state and age as independent variables with the DESeq2 38 and phyloseq 30 R packages. Functional enrichment analyses of KEGG modules were carried out to identify higher-order functional features associated with the T2DM transition from KO adjusted P-values and log2 fold changes between healthy controls and T2DM as effect sizes, using the Reporter Feature algorithm as implemented in the Piano R package 39. The null distribution was used as the significance method and P-values were adjusted for multiple comparisons with the Benjamini-Hochberg method 40. All analyses were conducted in the R environment.
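The age-adjusted diversity comparison described in the Statistical analysis section (regress alpha diversity on age, then a rank-sum test on the residuals) is straightforward to reproduce; here is a minimal Python sketch with invented data standing in for the R workflow:

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import ranksums

rng = np.random.default_rng(1)

# Hypothetical per-subject data: observed-species richness, age, and group.
age = rng.uniform(25, 65, size=50)
richness = 120 + 1.5 * age + rng.normal(0, 15, size=50)
is_t2dm = np.array([True] * 25 + [False] * 25)

# Regress diversity on age, then compare residuals across groups,
# mirroring the age-adjusted Wilcoxon test described above.
X = sm.add_constant(age)
resid = sm.OLS(richness, X).fit().resid
stat, p = ranksums(resid[is_t2dm], resid[~is_t2dm])
print(f"rank-sum statistic = {stat:.2f}, P = {p:.3f}")
```

ranksums is the Wilcoxon rank-sum (Mann-Whitney-type) test applied to the residuals; a non-significant P here corresponds to the paper's report of no diversity difference once age is removed.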
Results

Gut microbiome profile of T2DM Emirati subjects: compositional differences between non-T2DM and T2DM subjects. We evaluated the intra- and inter-individual variability of the gut microbiome among 25 T2DM and 25 non-T2DM subjects, all of Emirati origin. Their clinical characteristics are shown in S1 Table. T2DM subjects were significantly older, had higher BMI and were more sedentary than non-T2DM subjects (P value < 0.05; Table S1). Further, based on a short food frequency questionnaire (DFI-FFQ) 41, we found a higher percentage of T2DM individuals with a high-fiber diet compared to non-T2DM individuals (P value < 0.05; Table S1). All T2DM individuals were under dipeptidyl peptidase-4 inhibitor (DPP4i) and metformin treatment. S1 Table: Clinical characteristics of the study groups. Medians and quartiles 1 and 3 are shown for continuous variables. Numbers and percentages of samples are shown for categorical variables. P values are computed from the Wilcoxon rank-sum test for continuous variables and the chi-squared or exact Fisher test when the expected frequency is less than 5 in some cell. False discovery rates (FDR) were computed with the Benjamini-Hochberg method.

Linear regression analyses of individual covariates (age, diet, BMI, weight, and gender) and disease state over alpha diversity (observed species) show that age has an important effect on microbiome diversity (P value < 0.05; R2 = 0.16), with alpha diversity levels significantly increasing with age (Spearman Rho = 0.4; P value < 0.05) (Fig. 1A). When we take out the variability explained by age, no significant differences in microbial diversity were observed between non-T2DM and T2DM individuals (Fig. 1B; Wilcoxon rank-sum test on the residuals of linear regression analyses of observed species by age; P value = 0.66), with a wider variability in microbiome diversity observed in the T2DM group. Similar results were observed with other alpha diversity indexes (ACE, Shannon; Supplemental Fig. 1A-D). We further examined the gut microbiota characteristics in terms of community composition. Sample clustering based on genus-level 16S rRNA gene abundance data shows the presence of the microbial enterotypes that characterize gut microbiome composition in European, Asian and American cohorts 42. PAM clustering of samples from the JSD beta diversity matrix at k = 3 shows the presence of the Bacteroides, Ruminococcus and Prevotella enterotypes according to the abundance distribution of these prokaryotic genera (Supplemental Fig. 2A,D-F). DMM clustering with the genus abundance matrix splits the Bacteroides enterotype into two subgroups, Bacteroides_1 and Bacteroides_2 (Supplemental Fig. 2B), as previously described 43, after additional re-assignments of Prevotella samples to Ruminococcus (n = 3) and Bacteroides_1 (n = 2) and of Ruminococcus samples to the Bacteroides_1 enterotype (n = 7) (Supplemental Fig. 2C). The diversity distributions across these enterotypes are confirmed in this Emirati population, with the high-diversity profile associated with the Ruminococcus enterotype and the low-diversity profile associated with the Bacteroides 2 enterotype (Supplemental Fig. 3). Further, the T2DM and non-T2DM groups show significant differences in microbiome composition according to the different enterotyping methods. PAM clustering over the JSD beta diversity matrix shows that the non-T2DM group is enriched in the Prevotella enterotype, whereas the T2DM group is enriched in the Ruminococcus enterotype (Fig. 1C, Fisher's exact test P < 0.05). When enterotyping is carried out with the Dirichlet Multinomial Mixture method, we still observe that non-T2DM controls are enriched in the Prevotella enterotype, whereas an enrichment of the low-diversity Bacteroides 2 enterotype is observed in the T2DM group (Fig. 1D, Fisher's exact test P < 0.05). We also observed that 7 Ruminococcus samples from PAM clustering were re-assigned to the Bacteroides_1 enterotype with the DMM method (Supplemental Fig. 2C), a dysbiotic microbiome composition associated with low microbial cell density and enriched in Crohn's disease and IBD 43,44. Environmental fitting of disease and other covariates over the PCoA ordination space from the genus abundance matrix shows a significant impact of disease on microbiome composition (R2 = 0.12; P value = 0.001), together with age (R2 = 0.34, P value = 0.001) and BMI (R2 = 0.13, P value = 0.037) (Fig. 1E,F). Finally, we searched for taxonomic features significantly different between the non-T2DM and T2DM groups while accounting for the confounding variables detected in environmental fitting analyses by fitting generalized linear models of genus abundance by disease, age and BMI with negative binomial distribution from raw abundance feature counts with DESeq2 38. Six bacterial genera were significantly associated with disease state (P value < 0.05), four of them increased in the T2DM group (Phascolarctobacterium, Mogibacterium, Acidaminococcus and unclassified Victivallaceae; log2 fold change non-T2DM vs. T2DM < 0), whereas two of them were decreased in the T2DM group (Odoribacter and Lactococcus; log2 fold change non-T2DM vs. T2DM > 0) (Fig. 1G).
The association with unclassified Victivallaceae is reproduced at higher taxonomic levels (from family to phylum; Fig. 1G). None of these features resists P-value adjustment for multiple comparisons (FDR > 0.05).

Fungal composition is different between T2DM and non-T2DM subjects. Fungi comprise a small percentage of the gut microbiome 16, but reports have indicated that fungi have surprisingly strong effects on dampening inflammatory responses in the gut 17,18. Others reported fungal impact on bacterial community composition 19,20. Here, using ITS profiling, we observed no significant difference in fungal diversity between T2DM and non-T2DM controls (P-value > 0.05, Wilcoxon test, Fig. 2A). In contrast with what we observed for prokaryotic diversity, linear regression analyses of individual covariates (age, diet, BMI, weight and gender) show no significant association of any of them with fungal diversity (P value > 0.05; Supplemental Fig. 4). We found no significant association between fungal and prokaryotic diversity (rho = 0.13; P value > 0.05, Fig. 2C). However, relating fungal diversity with enterotype composition, we found significant differences in fungal diversity across DMM enterotypes (P-value < 0.05, Kruskal-Wallis test; Fig. 2B), with the Bacteroides 2 enterotype showing significantly lower levels of fungal diversity in comparison with the Bacteroides 1 and Prevotella groups (Fig. 2B). Next, we examined the fungal microbiome composition as previously performed for the bacterial composition. Environmental fitting of disease and other covariates over the PCoA ordination space from the fungal genus abundance matrix shows age (R2 = 0.42, P value = 0.001) and disease (R2 = 0.13, P value = 0.001) as the main variables with significant impact on fungal microbiome composition (Fig. 2D,E). In order to find fungal features associated with disease state while taking into account the confounding effect of age detected by environmental fitting, we followed the same approach as described above for the 16S rRNA gene data (fitting generalized linear models of fungal feature abundance by disease and age with negative binomial distribution from raw feature counts). We observed a significant association of three fungal genera with disease state (P value < 0.05), two of them (Malassezia furfur and unclassified Davidiella) increased in the T2DM group (log2 fold change non-T2DM vs. T2DM < 0) and one (unclassified Basidiomycota) decreased in the T2DM group (log2 fold change non-T2DM vs. T2DM > 0) (Fig. 2F). At higher taxonomic levels, the T2DM group seems to be characterized by an increase of Ascomycota lineages and a decrease of unclassified Basidiomycota lineages (Fig. 2D).

Functional profiling of T2DM and non-T2DM group microbiomes based on 16S rRNA gene profiles. We used the PICRUSt tool to project the functional content of the prokaryotic microbiome in the studied samples from 16S rRNA gene OTU abundance data. In agreement with the taxonomy findings, linear regression analyses of individual covariates (age, diet, BMI, weight, and gender) and disease state over functional diversity (observed KO groups) show that disease (R2 = 0.16, P value < 0.05) and age (R2 = 0.26, P value < 0.001) have a significant impact on functional diversity (Fig. 3A). Functional diversity levels significantly increase with age (Spearman Rho = 0.51; P value < 0.001).
When we excluded the variability explained by age, no significant differences in functional diversity were observed between non-T2DM and T2DM individuals. Generalized linear models with a negative binomial distribution of KO raw count data by disease state adjusted by age (4129 KOs with at least 10 counts in >20% of the samples) showed 210 KO groups significantly associated with disease state (FDR < 0.05), 32 decreased in the T2DM group (log2 fold change non-T2DM vs. T2DM group > 0) and 178 increased in the T2DM group (log2 fold change non-T2DM vs. T2DM group < 0). In order to find higher-level functional associations, we used gene set enrichment analyses of KEGG functional modules with adjusted P values from the age-adjusted GLM models and log2 fold changes of KO abundances of non-T2DM vs. T2DM as indicators of effect size. Four KEGG modules were significantly enriched in differentially abundant KOs (P value < 0.05), all of them enriched in KOs significantly increased in the T2DM group (mean module KO log2 fold change non-T2DM vs. T2DM < 0). Among these we found M00064 (ADP-L-glycero-D-manno-heptose biosynthesis), a module representing the biosynthesis of glycero-manno-heptoses found in the lipopolysaccharides (LPS) of most Gram-negative bacteria, in the capsules and O-antigens of some Gram-negatives, and in the S-layer of certain Gram-positive bacteria 45. We also observed an enrichment of M00176 (assimilatory sulfate reduction), which was previously identified as a signature of T2DM 9, and an enrichment of the pyruvate oxidation module (M00307), representing the pyruvate dehydrogenase complex, a key enzymatic complex linking glycolysis to the TCA cycle in central metabolism during aerobic respiration 46. Finally, quantification of Gut Metabolic Modules (GMM) 47 based on KO abundance data shows 14 GMMs associated with disease state (Fig. 3D; FDR < 0.05; GLM models based on a negative binomial distribution of module abundance by disease state adjusted by age). These analyses show marked differences in the functional profile of the gut microbiome of T2DM subjects and non-T2DM controls, with non-T2DM controls showing significant increases in different carbohydrate degradation modules (arabinoxylan, pectin and melibiose degradation modules; log2 fold change non-T2DM vs. T2DM > 0), whereas the T2DM group showed significant increases in several amino acid degradation modules (isoleucine, proline, valine, cysteine, glutamine and aminobutyrate; log2 fold change non-T2DM vs. T2DM < 0), also confirming the increase in the pyruvate dehydrogenase complex in the T2DM group observed in the KEGG module enrichment analyses (Fig. 3C,D).
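The module-level enrichment step can be illustrated with a simple functional-class-scoring approach, comparing the log2 fold changes of a module's member KOs against all other KOs with a Mann-Whitney U test; this is only a stand-in for the enrichment method used in the study, and the KO identifiers and module memberships below are placeholders.

# Toy module enrichment: for each KEGG module, test whether its member
# KOs have systematically shifted log2 fold changes relative to all
# other KOs. The mean member fold change gives the direction, mirroring
# the "mean module KO log2 fold change" used in the text.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(2)
log2fc = dict(zip((f"K{i:05d}" for i in range(4129)),
                  rng.normal(0.0, 0.5, 4129)))
modules = {  # hypothetical module -> member-KO mapping
    "M00064": [f"K{i:05d}" for i in range(0, 12)],
    "M00176": [f"K{i:05d}" for i in range(12, 25)],
}

for module, kos in modules.items():
    members = set(kos)
    inside = [log2fc[k] for k in kos if k in log2fc]
    outside = [v for k, v in log2fc.items() if k not in members]
    stat, p = mannwhitneyu(inside, outside, alternative="two-sided")
    print(module, np.mean(inside), p)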
Discussion

In this study, we characterized, for the first time, the prokaryotic and fungal microbiome profiles associated with T2DM and non-T2DM controls in an Emirati population, where the study population was unmatched for age, BMI, and diet. When we evaluated the impact of these covariates together with disease state on microbiome diversity and composition, we observed that age had an important effect on microbiome diversity and composition. However, when we adjusted for age, there were no significant differences in microbial diversity between non-T2DM controls and T2DM subjects. Remarkably, this contrasts with the results of previous studies in westernized populations, where several factors, such as dietary habits, lifestyle and age, impact gut microbiome composition and can be seen as confounders 48-53. One explanation can be related to dietary factors, which are known to strongly impact gut microbiome composition 54. For example, an Australian group demonstrated significant effects of nutritional counseling on gut microbiome abundance and diversity among T2DM and obese individuals 55. In our study, all T2DM individuals were subjected to rigorous dietary counselling as part of their clinical follow-up with a nutritionist. Further, dietary aspects may contribute to the enrichment of some genera. For example, it is well known that fiber intake impacts the abundance of Prevotella, a genus that aids in polysaccharide breakdown 56,57. In our study, we noticed an enrichment in Prevotella in the non-T2DM controls despite lower fiber intake based on the DFI-FFQ evaluation (Table S1). This observation is consistent with the significant increase in carbohydrate degradation modules observed in the GMM module analyses. Further, we detected an increase in amino acid degradation modules in the T2DM group, which is in line with the observed enrichment of the Bacteroides 2 enterotype and the proteolytic character of the Bacteroides group 58. Moreover, among the taxonomic features that resist age adjustment, we reported an increase of the Victivallaceae lineage, belonging to the Lentisphaerae phylum, in the T2DM group, notably identified from genus to phylum level. This lineage has been associated with gestational diabetes mellitus in children 59 and has been described to significantly increase in individuals consuming a gluten-free diet 60, again suggesting a potential association with the dietary counseling in the T2DM group. The genus Phascolarctobacterium has also been associated both positively 61-63 and negatively 64 with markers of insulin sensitivity, whereas the genus Odoribacter includes butyrate-producing bacteria and has been described as negatively associated with hypertension in obese pregnant women 65. This genus also decreases in response to prenatal metformin exposure in mouse experiments 66. The genus Acidaminococcus has also been associated with a modestly lower risk of T2DM in a Mendelian randomization study 67. However, the particularities of our study cohort in terms of ethnicity, age and nutritional counseling between groups make it difficult to extrapolate additional conclusions without further experimental evidence. Altogether, these findings underscore an important contribution of dietary counselling in driving these compositional changes 68. Another explanation for the observed difference from previous studies in westernized populations can be related to metformin administration among all T2DM subjects. We observed an increased relative abundance of Escherichia, Akkermansia muciniphila and other unclassified Enterobacteriales lineages in T2DM subjects receiving metformin treatment. However, these differences do not resist adjustment by age. The increase in Escherichia coli and A. muciniphila in T2DM has been repeatedly reported in the literature, and often associated with metformin intake 69,70. Next, we determined the presence of enterotypes that characterize microbiome composition. The Prevotella enterotype is enriched in the non-T2DM control group, whereas the Ruminococcus and Bacteroides 2 enterotypes are enriched in the T2DM group.
The compositional profile of the T2DM group was also found to be heterogeneous, with enrichment of the Ruminococcus enterotype, which is usually associated with a more diverse microbiome profile 32, and the Bacteroides 2 enterotype, which generally shows an opposite association, being characterized by low microbial diversity and microbial loads and enriched in Crohn's disease and ulcerative colitis patients 43. This is also reflected in the wider range of prokaryotic diversity observed in the T2DM group in comparison with non-T2DM controls, indicating a more heterogeneous microbiome profile in the T2DM group that could be attributed again to lifestyle habits as well as to differences in T2DM severity. The definition of discrete community types is a challenging task given the complexity of the landscape of community composition existing in the gut microbiome and the wide within- and between-individual diversity existing in the human gut, which makes it difficult to extrapolate conclusions based on discrete clusters to individuals at the boundary of different groups 71,72. Also, and more importantly, sample clustering is strongly dependent on the other samples analyzed at the same time, which makes the discretization dependent on the compositional landscape of the analyzed cohort and hampers comparisons across studies. However, multiple studies have reproduced the presence of enterotypes with similar compositional properties across large datasets from different origins 42, and the split of the Bacteroides group into two subgroups with the DMM method and the dysbiotic profile of the Bacteroides 2 group have also been reproduced in different studies and cohorts 43,44,73,74. Thereby, larger cohorts would be necessary to evaluate the strength of these community types across the Emirati population or to determine whether alternative community types could be defined. Finally, we explored the gut microbiome functional contribution. Interestingly, and in spite of the confounding effects of age, we still observed signals at the functional level that have been identified in other quantitative metagenomic studies of T2DM, suggesting a more inflammatory profile in T2DM individuals 53. For example, we noted an enrichment of the ADP-L-glycero-D-manno-heptose biosynthesis module in the T2DM group, a component of the bacterial LPS, associated with T2DM individuals and in agreement with other studies 69. This molecule corresponds to one of the most antigenic parts of the LPS, associated with the low-grade inflammation that usually takes place in obesity and T2DM 69,75. In addition, it has recently been demonstrated to be a potent pathogen-associated molecular pattern (PAMP) recognized by the ALPK1 receptor and inducing NF-κB activation and cytokine expression 76. Additionally, the formate conversion GMM significantly increased in the T2DM group (Fig. 3D), corresponding to the formate dehydrogenase complex responsible for formate oxidation, a metabolic signature of dysbiosis-induced intestinal inflammation 77. Regarding the fungal microbiome, we observed no significant differences in fungal diversity between T2DM and non-T2DM subjects. However, we detected a significant impact of disease state on fungal microbiome composition, even after adjusting for the confounding impact of age.
Remarkably, we found that the Bacteroides 2 enterotype was associated with decreased levels of fungal diversity, in addition to its known dysbiotic phenotype in terms of microbial diversity and loads in different pathologies like IBD and UC 43. This observation extends previous findings, showing that the deleterious B2 enterotype also associates with a decrease in fungal diversity. Thus, fungal diversity might be seen as an additional and novel signature of this dysbiotic microbiome composition, which would need further validation in larger cohorts with fungal metagenomic data. Furthermore, we observed a shift from Candida albicans (a known opportunist) to Candida glabrata in the T2DM patients. The presence of C. glabrata has been linked to the suppression of genes involved in mannan biosynthesis, an important component of the fungal cell wall with known protective benefits to the host 78,79. Whether this compositional shift from known commensal fungi to their virulent counterparts, and the dissimilarity in mannan biosynthesis, significantly alters the intestinal barrier is yet to be explored. In conclusion, we report a shift in gut microbiome composition and function among individuals affected by T2DM as compared to non-T2DM controls in a pilot study of Emirati people. The study population was distinctively unmatched for age, BMI, and diet, thereby providing a unique pattern and a more challenging approach. Gut microbiome peculiarities have been linked to T2DM across the globe based on variation in diet, medication and ethnicity, among other factors. Remarkably, our study revealed no significant differences in taxonomic and functional diversity between the T2DM and non-T2DM groups, in contrast to what has been reported elsewhere, but we observed significant differences in microbiome composition (enterotypes) and functional content between study groups despite the added complexity of the unmatched confounders. We recognize that our results can be influenced by the divergence in mean age, the diet intervention and the highly individualized gut microbiome composition. We attributed these differences to the dietary counselling provided to T2DM patients. Further, we showed that the B2 enterotype appears linked to reduced fungal diversity, which could be an additional and novel signature of this dysbiotic microbiome. We acknowledge potential limitations of this study, including the relatively small sample size and the lack of detailed information regarding lifestyle and of more advanced functional analyses. However, despite these limitations, this study provides meaningful insight into the links between the gut microbiome and its fungal community in T2DM subjects in native Emirati people. These aspects will be important to understand the functional role of the gut microbiome and its alterations in supporting host homeostasis against metabolic and inflammatory disorders.

Data availability

Sequencing data have been deposited in the European Bioinformatics Institute (EBI) European Nucleotide Archive (ENA) under accession number XXXX (Private access until paper acceptance). All other data generated or analyzed during this study are included in this published article (and its Supplementary Information files).
2023-02-20T14:58:32.590Z
2020-06-15T00:00:00.000
{ "year": 2020, "sha1": "9319ffddab0470fd6f6c260955c8509a6eb3a93b", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/s41598-020-66598-2.pdf", "oa_status": "GOLD", "pdf_src": "SpringerNature", "pdf_hash": "9319ffddab0470fd6f6c260955c8509a6eb3a93b", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [] }
53528732
pes2o/s2orc
v3-fos-license
Misdiagnosis of Thoracolumbar Posterior Ligamentous Complex Injuries and Use of Radiographic Parameter Correlations to Improve Detection Accuracy

Study Design: Retrospective study.

Purpose: To evaluate radiological parameters as indicators of posterior ligamentous complex (PLC) injuries in the case of limited availability of magnetic resonance imaging.

Overview of Literature: Traumatic thoracolumbar spinal fractures with PLC injuries can be misdiagnosed on X-rays or computed tomography scans. This study aimed to retrospectively assess unrecognized PLC injuries and evaluate radiographic parameters as indicators of PLC injuries requiring surgery.

Methods: In total, 314 patients with type A and type B2 fractures who underwent surgical treatment between 2001 and 2010 were included. The frequency of misdiagnosis was reassessed, and radiographic parameters were evaluated and correlated.

Results: The average age of the patients was 51.8 years. There were 225 type A3/A4 and 89 type B2 fractures; 39 of the type B2 fractures (43.8%) had been misdiagnosed as type A fractures. Type B fractures presented with a significantly higher kyphotic wedge angle and Cobb angle and a lower sagittal index (SI) than type A fractures. In addition, the normalized interspinous distance was higher in type B2 fractures. The significant mathematical indicators for PLC injuries were as follows: Cobb angle + kyphotic wedge angle > 29°; (Cobb angle)² > 170; and kyphotic wedge angle/SI > 25.

Conclusions: The results demonstrated that PLC injuries are frequently misdiagnosed. Correlations between certain radiological parameters associated with PLC injuries can be useful indicators of the presence of such injuries requiring surgery.

Introduction

A multicenter study conducted in Germany found that AO type B distraction injuries of the spine represent 20.9% of all thoracolumbar spinal injuries [1,2]. The post-traumatic spinal instability due to these AO type B distraction injuries largely depends on the injury to the posterior ligamentous complex (PLC), which comprises the supraspinous ligament, interspinous ligament, ligamentum flavum, and facet joint capsules [3,4]. A PLC injury usually requires surgical intervention to prevent progressive and persistent deformity due to the loss of spinal tensile strength [4]; therefore, correct interpretation of spinal stability is essential for appropriate treatment. The Magerl classification of thoracolumbar spinal fractures has certain limitations; therefore, to account for PLC integrity, Vaccaro et al. [5] introduced the thoracolumbar injury classification and severity score (TLICS). However, in the absence of typical radiological findings, such as an interspinous gap or a severe dislocation, interpretations of plain radiographs or computed tomography (CT) scans can lead to a high rate of misdiagnosis and the underestimation of fracture severity [4,6]. To focus on possible surgical interventions in the case of limited access to magnetic resonance imaging (MRI) and indeterminate ligamentous injuries, the AOSpine thoracolumbar spine injury classification system included a patient-specific modifier (M1) [2]. However, international spine surgeons have varied opinions with regard to the identification of PLC injuries in patients with presumed type A fractures [7].
This study aimed to retrospectively assess the frequency of unrecognized PLC injuries in our department and evaluate radiographic parameters and algorithms to improve the accuracy of PLC injury detection.

Patients and methods

We retrospectively reviewed data of 317 patients with thoracolumbar spinal fractures who underwent surgery in our department between 2001 and 2010. During that decade, MRI was not used to detect type B2 spinal injuries as extensively as it is now. Surgery was performed according to the recommendations of the German Society for Orthopaedics and Trauma (DGOU) in cases of type A3 and type A4 fractures (categorized according to the recent AOSpine thoracolumbar spine injury classification system [2]) with segmental kyphosis of more than 20°, significant spinal canal encroachment, or significant vertebral body destruction, in addition to type B and type C fractures [8]. Specifically, patients with type A3, type A4, and type B2 fractures who underwent open dorsal instrumentation were identified. (For simplicity, type A3 and type A4 fractures will be referred to as type A fractures throughout the report.) Patients with type C fractures, tumors, or infections were excluded from the study; therefore, the actual number of patients was 314.

In part 1 of the study, we reassessed the documented pre- and intraoperative classifications of the fractures in the medical records in order to identify patients who were intraoperatively reclassified from type A to type B2 during open dorsal instrumentation, and thereby determine the frequency of unrecognized PLC injuries.

In part 2 of the study, two experienced spine surgeons and one experienced radiologist used picture archiving and communication system-implemented measuring tools to classify the fractures and analyze the preoperative anteroposterior and lateral radiographs and CT scans with reconstruction images. We evaluated the established radiographic parameters [3,9]: (1) the kyphotic wedge angle, formed by lines drawn parallel to the upper and lower endplates of the fractured vertebra (Fig. 2). A previous study described the interspinous distance (ID) as a radiographic sign of a PLC injury [5]. In this study, according to the report of Hiyama et al. [10], we evaluated the normalized ID as the distance between the spinous processes of the injured segment (s) normalized to the distance of one caudal segment (c), expressed as a percentage (ID [%] = s/c × 100) (Fig. 2). Statistical analyses were conducted with IBM SPSS for Windows ver. 21.0 (IBM Corp., Armonk, NY, USA), using Student's t-test and analysis of variance. All p < 0.05 were considered statistically significant.

In part 3 of the study, we performed Pearson correlation analysis and evaluated combinations of radiographic parameters in mathematical formulas in order to increase the accuracy of PLC injury detection.

Ethical approval

All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki declaration and its later amendments or comparable ethical standards. For this type of study, formal consent is not required. This study was approved by the Ethics Committee of the Landesaerztekammer of Rhineland-Palatinate, Mainz, Germany (approval no. 837.088.07 from 03 April 2007).
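As a small computational illustration of the normalized ID defined in part 2 above, a Python sketch follows; the measurement values are hypothetical.

# Normalized interspinous distance (ID) as defined in part 2: the
# interspinous distance of the injured segment (s) normalized to the
# distance of one caudal segment (c), expressed as a percentage.
def normalized_id(s_mm: float, c_mm: float) -> float:
    """ID [%] = s / c * 100; values above ~100% suggest widening."""
    if c_mm <= 0:
        raise ValueError("caudal segment distance must be positive")
    return s_mm / c_mm * 100.0

# Hypothetical measurements in millimetres.
print(normalized_id(13.0, 12.0))  # ~108%, the mean reported for type B2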
Results

In this study, a total of 317 patients underwent surgery for thoracolumbar spinal fractures between 2001 and 2010. The classification of the injuries according to the TLICS [5] revealed 225 type A fractures, 89 type B2 fractures, and 3 type C fractures (which were excluded from the study). The average age of the 176 male (56%) and 138 female (44%) patients was 51.8 years (range, 20-88 years). A review of the patients' pre- and postoperative medical records indicated that 39 of the 89 type B2 fractures (43.8%) were initially misclassified as type A fractures and were not correctly identified until surgery. Radiological analysis revealed a significantly higher mean kyphotic wedge angle of 18° (standard deviation [SD], ±7.4°) for type B2 fractures relative to the corresponding kyphotic wedge angle of 11.1° (SD, ±6.0°) for type A fractures (Fig. 3A). In addition, the mean Cobb angle was significantly higher for type B2 fractures (16.85°±5.99°) than for type A fractures (10.38°±7.81°) (Fig. 3B). Vertebral heights were significantly reduced, to a mean SI of 0.63 (SD, ±0.24) for type B2 fractures compared with 0.73 (SD, ±0.14) for type A fractures (Fig. 3C). The normalized ID was found to be 108% for type B2 fractures and 101% for type A fractures; this difference was not significant. Pearson correlation analysis revealed a significant positive correlation between the kyphotic wedge angle and the Cobb angle, and the SI showed a significant negative correlation with the kyphotic wedge and Cobb angles, for both type A and type B2 fractures (p < 0.01). The next step was to mathematically combine the parameters to find the highest odds ratios (ORs) for the prediction of PLC injuries (Table 1). First, we added up the parameters, and the OR was found to be 6.9 for the sum of the Cobb and kyphotic wedge angles; analysis of fracture types showed that (Cobb angle + kyphotic wedge angle) > 29° held for 25% of type A fractures and 75% of type B2 fractures (Fig. 4). To give each parameter a higher weighting, we next squared the parameters, and the OR was found to be 3.5 for the square of the Cobb angle; analysis of fracture types showed (Cobb angle)² > 170 for 25% of type A fractures and 75% of type B fractures (Fig. 5). For the SI, we performed more basic calculations until the kyphotic wedge angle divided by the SI showed the highest OR of 4.5; analysis of fracture types showed kyphotic wedge angle/SI > 25 for 25% of type A fractures and 75% of type B2 fractures (Fig. 6).
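To illustrate how the combined indicators and their odds ratios can be computed, the following Python sketch applies the three reported cut-offs to a small, entirely hypothetical cohort and derives an odds ratio from the resulting 2 × 2 table.

# Combined radiographic indicators from the results above, plus an
# odds-ratio computation; scipy's fisher_exact returns the odds ratio
# of the 2x2 table (indicator positive/negative vs. type B2/type A).
from scipy.stats import fisher_exact

def plc_indicators(cobb_deg: float, wedge_deg: float, si: float) -> dict:
    """The three combined indicators reported in the text."""
    return {
        "sum_gt_29": (cobb_deg + wedge_deg) > 29,
        "cobb_sq_gt_170": cobb_deg ** 2 > 170,
        "wedge_over_si_gt_25": (wedge_deg / si) > 25,
    }

# Hypothetical cases: (Cobb angle, kyphotic wedge angle, SI, is type B2).
cohort = [(17.0, 18.0, 0.63, True), (20.0, 19.0, 0.55, True),
          (12.0, 13.0, 0.65, True), (16.0, 15.0, 0.72, False),
          (10.0, 11.0, 0.73, False), (8.0, 9.0, 0.80, False),
          (12.0, 14.0, 0.70, False)]

a = b = c = d = 0  # 2x2 table cells
for cobb, wedge, si, is_b2 in cohort:
    positive = plc_indicators(cobb, wedge, si)["sum_gt_29"]
    if positive and is_b2: a += 1
    elif positive and not is_b2: b += 1
    elif not positive and is_b2: c += 1
    else: d += 1

odds_ratio, p = fisher_exact([[a, b], [c, d]])
print(odds_ratio, p)  # OR = 6.0 for this toy table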
Discussion

The various classification systems for thoracolumbar spinal fractures are intended to facilitate surgeons' treatment decisions by providing correct assessments of spinal instability. The TLICS uses an additional modifier to emphasize the key role of PLC integrity [5,8]. However, even when the classification pathologies are identified on plain X-rays or CT scans, evaluating PLC integrity in primary discoligamentous injuries still remains difficult. Leferink et al. [6] retrospectively evaluated 160 patients with 49 type B fractures for whom plain X-rays and CT scans with two-dimensional reconstructions had been used as preoperative diagnostic tools. They found that approximately 30% of the type B fractures had been misdiagnosed preoperatively. Schnake et al. [11] observed that 41.9% of 93 type B injuries in a group of 361 patients were not recognized. These results are consistent with our misdiagnosis frequency of 43.8%. In a retrospective study involving 65 patients and 85 vertebral fractures, Schröder et al. [12] demonstrated the advantage of using postprocessing algorithms in multidetector CT: the percentage of incorrectly classified type B fractures was reduced to 12.5%. On the basis of early biomechanical and clinical studies, a loss of vertebral body height of >50%, kyphosis of the thoracolumbar junction of >20°, or an increased ID implies a PLC injury [4,10,13,14]. For example, Schnake et al. [11] observed a reduction in the anterior vertebral body height and a segmental angle of >15° in 31% and 44% of type B fractures, respectively; in 29% of the examined cases, typical PLC injury indicators were absent. Hiyama et al. [10] evaluated associations between radiological parameters in 40 thoracolumbar burst fracture cases involving potential PLC injuries that were diagnosed using MRI. The authors demonstrated that local kyphosis of >20° and an increased ID correlated with PLC injury. Our results of a kyphotic wedge angle of 18°, a Cobb angle of 17°, and a reduction in vertebral body height to 63% were significantly correlated with PLC injury, whereas there was no significant difference in the ID between type A and type B fractures. Part 3 of this study was distinct from prior investigations in that, to increase the accuracy of potential PLC injury detection, radiological indicators were correlated and combined in mathematical terms. The results demonstrated that local kyphosis and a reduced vertebral body height appear to have a high predictive value. However, we could not derive a clear, simple formula for definitively identifying PLC injuries. To date, MRI is recommended for the assessment of PLC injuries because of its high sensitivity (79%-100%) [5,10], although reports of low specificity and low interobserver reliability suggest potential overdiagnosis and overtreatment of PLC injuries [10,15-17]. Additional disadvantages of MRI are its limited availability and high cost; according to the Organization for Economic Cooperation and Development, 60% of the MRI devices sold worldwide annually are bought in the United States and Japan [18]. As a result of these limitations regarding MRI, recent studies have evaluated ultrasound as a feasible tool for detecting PLC injuries and have reported achieving ultrasound sensitivity close to that of MRI [19,20]. Our study had a few limitations. The sample size was small (n=314), the number of observers was small (n=3), and our focus was on operative cases alone, which could have biased evaluations because of the greater severity of such cases. Future follow-up investigations involving both non-operative and operative cases are required in order to evaluate parameters with the highest reproducibility and reliability, and therefore to improve precision and agreement among spine surgeons.

Conclusions

Correct diagnosis of PLC injuries is important for selecting the appropriate treatment of thoracolumbar spinal fractures, but these injuries frequently go unrecognized on plain X-rays and CT scans. In the absence of MRI, spine surgeons should assess established radiological parameters, such as a high kyphotic wedge angle, a high Cobb angle, and a low SI. In addition, mathematical correlations of these parameters can also be a helpful tool to determine the requirement for surgical treatment in patients with thoracolumbar spinal fractures.

Conflict of Interest

No potential conflict of interest relevant to this article was reported.
2018-11-01T20:38:12.800Z
2018-10-18T00:00:00.000
{ "year": 2018, "sha1": "5071a4c62ebb82c5ced91605ec08d2da11b6afb6", "oa_license": "CCBYNC", "oa_url": "https://www.asianspinejournal.org/upload/pdf/asj-2017-0333.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "5071a4c62ebb82c5ced91605ec08d2da11b6afb6", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
225584471
pes2o/s2orc
v3-fos-license
Research on Environmental Quality Evaluation System of Coordinated Development of the Beijing-Tianjin-Hebei Region

Since the coordinated development plan of the Beijing-Tianjin-Hebei region was put forward, environmental protection and pollution control have become the primary breakthrough of coordinated development. The ecological environment problem in the Beijing-Tianjin-Hebei region has been the bottleneck factor restricting integrated development. It is of great significance for environmental protection and economic development to establish a coordinated assessment mechanism for ecological environment quality in the Beijing-Tianjin-Hebei region. This study analyzed the progress and effectiveness of the assessment of ecological environment quality in the Beijing-Tianjin-Hebei region, examined the problems and challenges in this assessment, and put forward suggestions on establishing a mechanism of joint environmental prevention and control under the coordinated development of the Beijing-Tianjin-Hebei region.

Introduction

As the third economic growth pole following the Yangtze River Delta and the Pearl River Delta, the Beijing-Tianjin-Hebei region has long been driven mainly by the secondary industry, dominated by heavy industry and its sustained energy consumption. In addition, flourishing industrialization and urbanization also bring huge pressure to the ecological environment [1]. Since the central government proposed to advance the "integration of Beijing, Tianjin and Hebei" as soon as possible, the region has been working faster on its coordinated development and governance of the ecological environment. However, the three local governments, in practice, demonstrate an obviously non-institutionalized coordination, without a relatively fixed system [2] or mechanism to evaluate cross-regional eco-environment quality targets. The coordinated development of the area is the strategy of the CPC Central Committee with Xi Jinping as the core to promote regional development in a coordinated manner and create a new economic growth pole [3]. Therefore, Beijing-Tianjin-Hebei ecological environment quality assessment requires a multi-agent linkage mechanism to speed up the high-quality development of the region [4].

Environmental quality assessment in the coordinated development of Beijing-Tianjin-Hebei faces many problems and challenges. Despite the gradual appearance of the environmental quality assessment concept in the performance management of government departments [5-7], the ecological environment is still not valued enough, in terms of position and proportion, to steer local governments towards a strengthened role of environmental quality target assessment in environmental protection and to capture what ecological civilization construction is oriented towards. The evaluation system of ecological environment quality targets is meant to shift the evaluation focus from objective-oriented environmental management to specific assessment that is intuitive, viable, quantifiable, comparable and comprehensive. In this way, it can help decision makers better understand how ecological environment management turns out, align the relevant policies and reprioritize their efforts.
The ecological environment quality assessment needs to be included in the performance assessment system of governments at all levels in the Beijing-Tianjin-Hebei region. A set of top-down incentive and constraint mechanisms shall be shaped through the establishment of a scientific environmental quality assessment system that sets the effect of environmental quality target assessment as a key indicator of the ecological performance assessment across municipal governments [1]. Carrying out environmental quality assessment in the Beijing-Tianjin-Hebei region is part of exploring new forms of ecological environment management, which is of theoretical and practical significance.

2.1 Progress and Effects of Ecological Environment Quality Evaluation in Beijing

(1) Progress has been made in improving the comprehensive evaluation system for the economic and social development of each district. At present, a number of assessments for each district are available in Beijing, evaluating such sectors as economic development, air pollution control, water pollution control, energy consumption, etc. Despite their positive effects, they still have limitations. Different districts have their own strengths and weaknesses assessed by different indicators. Therefore, a more comprehensive measurement is needed to evaluate the effectiveness of ecological civilization construction in a district as a whole. A comprehensive assessment method and corresponding indicator system for the construction of ecological civilization are thus needed to reflect the consumption of resources (Figure 1), environmental damage, ecological benefits, etc. in each district across the board, and to assess the development quality and benefits of each district in a more balanced way, especially green development.

(2) Beijing has worked faster on its ecological civilization system reform to better satisfy the need for such a system. The CPC Central Committee and the State Council have proposed the establishment of eight systems, covering property rights for natural resources, land development and protection, spatial planning, total resource management and conservation, paid use and compensation of resources, environmental governance, the market system, and performance appraisal and accountability. The construction of a performance appraisal system is among them as one of the multiple pillars supporting the ecological civilization system. Beijing has issued the "Roadmap for the Reform of the Ecological Civilization System in Beijing", which sets out the specific tasks and expected results. At present, the tasks are well underway, such as the release of the Beijing Urban Master Plan, the definition of the ecological protection red line area, the determination of three binding red lines, namely construction land, water consumption and energy consumption, the practice of water environmental area compensation, the release of the Opinions on Implementation of Third-party Treatment of Environmental Pollution, and an all-round advance in trials for trading carbon emission rights.

2.2 Progress and Effects of Ecological Environment Quality Evaluation in Tianjin

(1) Air quality improvement has been included as one of the key indicators. On the basis of the existing evaluation and accountability, the Tianjin Municipal Government conducts weekly open interviews with the districts where the air quality ranks last or where the composite index and PM2.5 concentration rise, and supervises the last three districts every month to take stricter measures to work their way up.
Every month, the relevant departments at the municipal level that are behind schedule, have underperformed in special rectification, or have been ineffective in supervision and punishment shall be criticized publicly. Those that have been criticized three times in total shall be subject to accountability. The improvement of air quality and the internal ranking results within the Beijing-Tianjin-Hebei region are included as a supervision focus of the municipal government in evaluating annual performance, the results of which are taken as a key reference for the evaluation of leading bodies and cadres.

(2) Tianjin has strengthened the administrative efficiency responsibility of departments. According to the requirements of the state, the environmental protection supervision plan of Tianjin was formulated and implemented, accompanied by the supervision system. With each district as the supervision object, the responsibilities are broken down to key towns and industrial parks. In this way, localities take on the main responsibility for protecting the environment. The Tianjin Environmental Protection Bureau and Supervision Bureau set up a municipal environmental protection supervision group to supervise localities and hold those with ineffective and poor performance accountable. Efforts are made to make districts and departments more self-driven in assessment and accountability. Relevant departments responsible for the assessment and ranking of the districts shall earnestly perform their duties and conduct assessment strictly. The ranking results shall reflect varying levels and be true to performance. Each district shall ramp up efforts in evaluating townships, streets at all levels, grid administrator chiefs (grid administrators) and full-time grid supervisors, and resolutely put an end to "no one held accountable".

2.3 Progress and Effects of Ecological Environment Quality Evaluation in Hebei

(1) Hebei has built a grid-based environmental regulation system. In accordance with the principle of "localized administration, responsibility assigned to different levels, comprehensive coverage and individual responsibility", governments at all levels hold themselves accountable, with a focus on solving the blind spots in the supervision of the atmosphere, water, soil and rural environment. In this way, Hebei has comprehensively pushed forward management at the "province, city, county, township and village" levels and the three-level grid administration system of "county, township and village", shaping a grid administration system of the environment featuring the deepest and broadest coverage. At present, all cities and counties in Hebei Province have completed grid division and system establishment, with 194 primary grids (counties, cities, districts, development zones and parks), 2477 secondary grids (townships, towns and streets), and 50101 tertiary grids (administrative villages and neighborhood committees), which have been made public as required (Table 1). Hebei has clearly defined the environmental supervision responsibilities of local governments and relevant departments.
According to the requirements that local governments at all levels are responsible for the environmental quality of their administrative region and that governments at or above the county level are responsible for environmental supervision and law enforcement, the responsible persons of county (city, district, park), township (town, street) and village (residential) committees shall also be in charge of all grids, clarifying the authorities' responsibility in the grid administration of the environment. In particular, township (town) governments and sub-district offices have extended their environmental supervision responsibility. A grid administration network of the environment thus comes into being, with governments at all levels responsible for implementation, environmental protection departments responsible for coordination, relevant departments' roles clearly defined, and the engagement of all sectors.

Table 1. Grid administration system of the environment in Hebei Province.
Grid level | Scope | Number
Primary grid | Counties, cities, districts, development zones and parks | 194
Secondary grid | Townships, towns and streets | 2477
Tertiary grid | Administrative villages and neighborhood committees | 50101

(2) Hebei has strictly supervised environmental law enforcement. Hebei conducts reviews of the issues raised by the Central Environmental Protection Supervision Group. In accordance with the principle of "clearing up the responsibility of each case and holding accountable for rebound", the province has reviewed 2856 reported environmental problems in 31 batches assigned by the Central Environmental Protection Supervision Group. It defines the specific responsible departments and personnel to ensure rectification is in place and to prevent rebound. It also further strengthens the mid-term and post-supervision of key environmental issues. Once any rebound is found, it shall be banned according to law, and the relevant responsible organizations and persons shall be held accountable in accordance with the requirements of the grid administration of the environment.

(3) The local governments have improved environmental management efficiency. When carrying out performance appraisal, all regions and organizations in Hebei Province make work plans and specific measures at the beginning of the year to break down the key tasks of the government based on target management; at the end of the year, progress towards the targets is assessed, and the results are referred to in the selection, appointment, reward and punishment of cadres. At the same time, the performance appraisal helps clarify the roles of all regions and departments, reduce buck-passing and urge better performance of lower-level governments and departments. It improves the working style and efficiency of the organs.

(4) Hebei has completed key work and tasks. Governments and authorities at all levels in Hebei Province are presented with complicated work and tasks, and the indicators and scores of performance evaluation serve as navigation. By using floating scores, the scores of evaluation items are aligned with the priorities of the various tasks in that year, which can motivate and stimulate organizations and staff to complete the priorities or key tasks.

Problems of Beijing-Tianjin-Hebei Environmental Quality Evaluation

(1) The status and proportion of environmental quality objectives in environmental performance management are relatively low.
Despite the gradual appearance of the environmental quality assessment concept in the performance management of government departments, the ecological environment is still not valued enough, in terms of position and proportion, to steer local governments towards a strengthened role of environmental quality target assessment in environmental protection and to capture what ecological civilization construction is oriented towards. Performance evaluation and management still focus on performance evaluation within the environmental protection system and the environmental protection administrative departments. An evaluation and management system for administrative regions is still to be rolled out.

(2) The environmental protection departments bear the brunt of the target assessment pressure. As the environmental performance evaluation is mainly about the related target responsibility of local governments, it is, in essence, about evaluating the environmental management behavior of the environmental protection sector. Therefore, it cannot play a guiding, binding and incentive role for the other government departments involved in environmental protection. The environmental protection departments are also responsible for various related assessment work within the environmental protection system. The current management system gives environmental protection departments more responsibilities than rights, so the assessment work puts these departments under great pressure.

(3) A diversified environmental quality assessment mechanism is not yet in place. The environmental protection quality target responsibility assessment in the Beijing-Tianjin-Hebei region is dominated by the internal assessment of the environmental protection system. In the government performance evaluation system, the environmental protection functional departments are evaluated mainly based on their performance management, in the way that a government is assessed by its superior department. This mode helps improve the efficiency and save the cost of assessment, but the assessment results are prone to be inconsistent with public perception, which affects their objectivity and transparency. The public's satisfaction with environmental quality is not yet included in the assessment system, and social forces have not been involved in the assessment work.

Policy Suggestions

(1) Push forward the ecological environment quality evaluation in Beijing, Tianjin and Hebei institutionally and legally. The state should formulate relevant laws and regulations and improve policies to provide a clear orientation, based on pilot experience and analysis of problems, in order to advance the ecological environment quality evaluation in Beijing, Tianjin and Hebei. In the primary stage of rolling out the assessment, the central government can formulate laws, regulations and policies that provide guidance for governments at all levels from top to bottom, and then gradually improve the system of laws and regulations. First of all, when developing relevant laws and regulations, it is suggested to cover the performance assessment and management of ecological environment quality targets. Secondly, it is necessary to increase the proportion of environment-related indicators to highlight the importance of environmental performance in specific circumstances.
Thirdly, it is necessary to develop special laws, standards and guidelines for government environmental performance management, such as technical guidelines on the promotion of environmental performance and management, and environmental performance management methods, which can lay the institutional foundation for follow-up implementation in the various regions. Finally, government environmental performance management should be combined with the existing "accountability" system, target management and the performance evaluation system. This can offer clear guidance and remove the restrictions on the use of government environmental performance evaluation results in the appointment of cadres. It can also make the application of multi-sector results easier, and allow government performance evaluation results to be applied in a legal, full, open and transparent manner.

(2) Research and explore the methods, indicators and data collection technology of environmental quality evaluation. Environmental quality evaluation is still in its infancy in China. Giving full play to its role in environmental management requires systematically building a theoretical and technical method system of environmental assessment, and taking a deep dive into key issues such as the theory, methods, assessment framework and indicators. This can clarify basic issues such as the reason for and contents of the evaluation, the evaluator and how to evaluate, as well as the application of evaluation results. It also requires research on basic technology issues such as the collection, statistics, quality control and sharing of environmental quality assessment data and information in Beijing-Tianjin-Hebei. At the same time, an effective assessment information system is needed to report data in a timely manner and to share information resources. Such a system can save the cost of repeated collection, achieve real-time performance comparison, dynamically monitor government performance, and spot and rectify problems at any time, which will improve government executive ability.

(3) Improve the proportion of eco-environmental indicators in the government performance appraisal system. Raising the status and proportion of environmental protection in the government performance evaluation system can further push forward environmental protection and ecological civilization construction through performance management. Beijing-Tianjin-Hebei should continue to improve the comprehensive performance evaluation system for leading cadres, raise the weight and score of ecological environment quality evaluation indicators in the system, and combine qualitative and quantitative indicators while emphasizing quantitative ones. Local governments at all levels in Beijing, Tianjin and Hebei should, in light of the major environmental problems in their regions, focus on the comprehensive improvement of the environment in regions, river basins and industries, and take all tasks of environmental protection as "mandatory indicators". The government target responsibility system for environmental protection shall cover environmental quality indicators, total emission control indicators for major pollutants, environmental protection input indicators, pollution prevention and control projects, and ecological environment construction and protection, so as to realize target-oriented, quantitative and institutionalized management.
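To make the weighting idea concrete, the following is a minimal Python sketch of a weighted composite appraisal score; the indicators, weights, and scores are purely hypothetical and only illustrate how raising the weight of eco-environmental indicators changes a district's overall score.

# Hypothetical weighted composite score for a district's performance
# appraisal. Indicators, weights and scores are invented for
# illustration only.
def composite_score(scores: dict, weights: dict) -> float:
    """Weighted average of indicator scores (weights need not sum to 1)."""
    total_weight = sum(weights.values())
    return sum(scores[k] * weights[k] for k in weights) / total_weight

district = {"economic_growth": 85.0, "air_quality": 60.0,
            "water_quality": 70.0, "energy_saving": 75.0}

baseline = {"economic_growth": 0.5, "air_quality": 0.2,
            "water_quality": 0.2, "energy_saving": 0.1}
eco_weighted = {"economic_growth": 0.3, "air_quality": 0.3,
                "water_quality": 0.25, "energy_saving": 0.15}

# Raising eco-indicator weights lowers the score of a district that
# performs well economically but poorly environmentally.
print(composite_score(district, baseline))      # 76.0
print(composite_score(district, eco_weighted))  # 72.25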
Local governments at all levels shall be steered to put environmental protection high on the agenda so as to timely solve major related problems in their respective regions.

(4) Improve the communication and coordination mechanism of environmental quality target performance evaluation, featuring full connectivity among Beijing, Tianjin and Hebei. The ecological environment quality evaluation is a systematic project with complicated contents and a wide range of involvement, spanning multiple departments and sectors. It is necessary to establish a coordination mechanism between the upper and lower levels of government and among different departments. Beijing, Tianjin and Hebei shall be interconnected and share information to help solve difficulties and improve administrative efficiency and the performance of government. It is suggested that the central government set up a management organization with clear functions, organize coordination among the departments by dint of a joint conference system of environmental protection, and establish a top-down government performance management system. This organizational management system can improve environmental quality assessment in the Beijing-Tianjin-Hebei region in an all-round way. By creating an integrated assessment mechanism of multi-agent linkage, we will sustain the development of the regional ecological environment and shape a long-term green development mechanism system based on it. These efforts can constantly reduce the damage of pollutants to the ecological environment and enhance the ability of Beijing-Tianjin-Hebei to spur the development of surrounding areas. We shall strive to build a shared and harmonious joint governance mechanism to engage multiple sectors in environmental management.

(5) It is suggested to establish a diversified evaluation model for eco-environment quality assessment in Beijing, Tianjin and Hebei. The Ministry of Ecological Environment should lead the evaluation and engage other departments (such as through collaboration between the environmental protection and water conservancy departments) and even third parties (including international organizations) in the process. The evaluation-related work, such as the evaluation scope, methods and standards and the release of reports, shall be done more openly. We can establish an eco-environment quality evaluation system that engages both internal and external parties, consulting professional third-party institutions in a timely and orderly manner and exploring channels and ways for the public to get involved. Public representatives should be included as part of the assessment group. The assessment process and results shall be released in a
2020-07-09T09:12:28.976Z
2020-07-08T00:00:00.000
{ "year": 2020, "sha1": "ddaa52aeed93b1284b9d85f3021caac95b4b3a0f", "oa_license": null, "oa_url": "https://doi.org/10.1088/1755-1315/513/1/012019", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "005cea18a7a4686302773b22dbb4fbf8537e1f08", "s2fieldsofstudy": [ "Agricultural And Food Sciences", "Business" ], "extfieldsofstudy": [ "Physics", "Environmental Science" ] }
213181070
pes2o/s2orc
v3-fos-license
Developmental Timing Determines the Protective Effect of Maternal Electroacupuncture on Perinatal Nicotine Exposure-Induced Offspring Lung Phenotype

Introduction. Environmental exposure of the developing offspring to cigarette smoke or nicotine is an important predisposing factor for many chronic respiratory conditions, such as asthma, emphysema, pulmonary fibrosis, and so forth, in the exposed offspring. Studies showed that electroacupuncture (EA) applied to maternal "Zusanli" (ST36) acupoints during pregnancy and lactation protects against perinatal nicotine exposure- (PNE-) induced lung damage. However, the most effective time period, that is, prenatal vs. postnatal, to attain this effect has not been determined.

Objective. To determine the most effective developmental timing of EA's protective effect against the PNE-induced lung phenotype in the exposed offspring.

Methods. Pregnant rats were given (1) saline ("S" group); (2) nicotine ("N" group); (3) nicotine + EA, exclusively prenatally ("Pre-EA" group); (4) nicotine + EA, exclusively postnatally ("Post-EA" group); and (5) nicotine + EA, administered both prenatally and postnatally ("Pre- and Post-EA" group). Nicotine was injected once daily (1 mg/kg, 100 μl) and EA was administered to bilateral ST36 acupoints once daily during the specified time periods. At the end of the experimental periods, key hypothalamic pituitary adrenal (HPA) axis markers in pups and dams, and lung function, morphometry, and the central molecular markers of lung development in the offspring were determined.

Results. After nicotine exposure, the alveolar mean linear intercept (MLI) increased, but the mean alveolar number (MAN) decreased; the lung PPARγ level decreased, but glucocorticoid receptor (GR) and serum corticosterone (Cort) levels increased, in line with the known PNE-induced lung phenotype. In the nicotine-exposed group, the maternal hypothalamic corticotropin releasing hormone (CRH) level decreased, but pituitary adrenocorticotropic hormone (ACTH) and serum Cort levels increased. In the "Pre- and Post-EA" group, the PNE-induced alterations in lung morphometry, lung development markers, and the HPA axis were blocked. In the "Pre-EA" group, the PNE-induced changes in lung morphometry, GR, and the maternal HPA axis improved.

Conclusions. Maternal EA applied to ST36 acupoints during both pre- and postnatal periods preserves offspring lung structure and function despite perinatal exposure to nicotine. EA applied during the "prenatal period" affords only limited benefits, whereas EA applied during the "postnatal period" is ineffective, suggesting that EA's effects in modulating the PNE-induced lung phenotype are limited to specific time periods during lung development.

Introduction

Despite the well-established dangers of tobacco to human health, exposure of pregnant women to mainstream or sidestream smoke remains extremely high [1].
Although the number of smokers is decreasing among high-income women, it is increasing among low-income women [2]. Importantly, over half of smokers continue to smoke while pregnant [3]. Considerable evidence supports that nicotine is the main harmful substance in cigarettes; it rapidly crosses the placenta and accumulates in the fetus in concentrations much higher than maternal serum concentrations [4]. Prenatal exposure to nicotine not only affects the survival and birth weight of infants [5,6], but also adversely affects many developing systems including, but not limited to, the nervous, circulatory, immune, and respiratory systems [7-10]. Its effects are especially pronounced on the developing lung [11], as it predisposes the exposed offspring to many chronic respiratory conditions such as asthma, emphysema, pulmonary fibrosis, and so forth [12-15]. These effects appear to be permanent, lasting to adulthood, and some can even potentially be transmitted to future generations [16,17]. Nicotine's effects on the developing lung have been largely attributed to a disruption in epithelial-mesenchymal paracrine signaling, the central component of which is the nuclear transcription factor peroxisome proliferator-activated receptor-γ (PPARγ). PPARγ is centrally involved in alveolar and airway development [18-20] and is a key determinant of the differentiation of alveolar fibroblasts to lipofibroblasts, which are essential for alveolar development, homeostasis, and injury repair [18,21]. Lung-specific PPARγ knockout mice show enlarged alveolar sacs, increased apoptotic cells, and an enlarged lung volume, highlighting PPARγ's indispensable role in lung development [19,20]. Nicotine, by down-regulating PPARγ, drives alveolar lipofibroblasts to transdifferentiate to myofibroblasts, which are the hallmarks of all chronic lung conditions including the perinatal nicotine exposure- (PNE-) induced lung damage [22,23]. Supporting these observations, in experimental animal models, blocking lipofibroblast-to-myofibroblast transdifferentiation using PPARγ agonists blocks and/or reverses the PNE-induced lung damage in the exposed offspring [24,25]. The hypothalamic pituitary adrenal (HPA) axis, by regulating the production of glucocorticoids, also performs an essential role in lung development and maturation [26]. Glucocorticoids act on the glucocorticoid receptor (GR), expressed in the developing lung, stimulating alveolar epithelial-mesenchymal cross-talk, and increase surfactant production. However, excessive glucocorticoids, either endogenous or administered exogenously, can hinder lung development, predisposing to conditions such as childhood asthma [27] and emphysema [28]. Evidence suggests that perinatal nicotine exposure disrupts the maternal and offspring HPA axes, increasing maternal and offspring serum corticosterone (Cort) levels, which impacts offspring growth and development negatively [29-32]. Thereby, PNE-induced lung damage can, at least in part, be attributed to altered maternal and offspring HPA axes. Currently, there is no clinically safe and effective pharmacologic intervention to prevent or treat PNE-induced lung damage [25,33-39]. Interestingly, electroacupuncture (EA) is known to treat a number of respiratory conditions, such as allergic asthma and acute lung injury [40,41]. By regulating the HPA axis, EA also improves airway inflammation associated with asthma [42].
More importantly, we have recently shown experimentally that EA applied to maternal "Zusanli" (ST36) acupoints during pregnancy and lactation (from embryonic day 6 [E6] to postnatal day 21 [PND21]) protects against PNE-induced lung damage [31,32]. However, the most effective time period, that is, prenatal vs. postnatal, to attain this effect has not been determined. Since lung morphogenesis is a complex, finely orchestrated program with specific signaling pathways involved at specific stages during development, we hypothesize that EA's effect in modulating the PNE-induced lung phenotype is limited to specific time periods during lung development. Here we compare EA's protective effect against the nicotine-induced lung phenotype when it is administered exclusively "prenatally" (embryonic, pseudoglandular, canalicular, and early saccular stages of lung development), exclusively "postnatally" (late saccular and alveolar stages of lung development), or both "pre- and postnatally" (all stages of lung development). Animals. Approval was obtained from the Beijing University of Chinese Medicine Experimental Animal Ethics Committee in 2017, and all animal procedures were performed in accordance with the "Guide to the Care and Use of Experimental Animals" of the China Animal Welfare Commission. Thirty female and ten male specific pathogen-free Sprague-Dawley rats (11 weeks old) without prior mating history were obtained (SPF, Beijing, Biotechnology Co., Ltd., production license number: SCXK (Beijing) 2006-0002). Animals were housed in a constant temperature and humidity environment with a 12-hour alternating light/dark cycle and ad libitum food and water. The feeding cages and water bottles were regularly disinfected. Experimental Protocol. In line with a well-established model [31,32], saline or nicotine injections (saline: 100 μl volume once daily; nicotine: 1 mg/kg in 100 μl volume once daily) were started on E6 and continued throughout pregnancy and lactation, that is, up to PND21 (except on the day of delivery). The saline group ("S" group) was injected with saline once daily. The nicotine group ("N" group) was injected with nicotine once daily. For the prenatal EA group ("Pre-EA" group), nicotine injection was the same as in the "N" group, but these dams were also administered EA to bilateral ST36 acupoints from E6 to the day of delivery. For the postnatal EA group ("Post-EA" group), nicotine injection was the same as in the "N" group, but these animals were administered EA to bilateral ST36 acupoints from PND1 to PND21. The prenatal and postnatal EA group ("Pre- and Post-EA" group) was administered nicotine similarly to the "N" group, but these animals also received EA at bilateral ST36 acupoints from E6 to PND21 (except on the day of delivery). On PND21, pulmonary function testing was performed before sacrificing pups for lung tissue and serum collection and dams for hypothalamus, pituitary, and serum collection. Electroacupuncture Protocol. The ST36 acupoints were identified at the posterolateral side of the knee joint, about 5 mm below the head of the fibula, as detailed in "Experimental Acupuncture Science" [43]. Disposable sterile acupuncture needles (0.20 mm × 13 mm, Beijing Hanyi Medical Instruments Centre, China) were inserted to a depth of ∼0.7 cm at bilateral ST36 acupoints (connecting to the negative pole) and horizontally to a depth of ∼0.2 cm into the skin below ST36 (connecting to the positive pole).
The EA parameters were: frequency, 2/15 Hz; intensity, 1 mA; and duration, 20 minutes, administered once a day. For consistency, acupuncture was performed by the same operator between 10 a.m. and 12 noon throughout the study period. Pulmonary Function Testing. Pulmonary function testing was performed with a respiratory function instrument running Buxco FinePointe software (Buxco, USA). The pups were intraperitoneally injected with 2% pentobarbital (5.5 mg/100 g) for anesthesia, tracheotomized, cannulated, and connected to a ventilator for plethysmography. After a period of steady breaths, the lung resistance (RL), dynamic compliance (Cdyn), minute ventilation volume (MV), and peak expiratory flow (PEF) were recorded. Lung Morphology. At sacrifice, pup lungs were fully inflated with 4% paraformaldehyde (PFA) in PBS at constant pressure; after ligation, the lungs were submerged in 4% PFA for about 5 h, followed by immersion in 30% sucrose in PBS. The left lung was used for paraffin embedding and cut into 5 μm sections, which were stained with hematoxylin and eosin (H&E) for lung morphometry. Subsequently, lung tissue morphology was assessed by determining mean linear intercepts (MLI) and mean alveolar numbers (MAN) using previously described methods [44]. Radioimmunoassay for Serum Corticosterone Levels in Offspring and Mother. Serum Cort levels in the mother and offspring rats were measured by radioimmunoassay according to the manufacturer's instructions (BioSino Bio-Technology and Science Inc., Catalog#: HY-068B). Offspring Lung PPARγ mRNA Expression by Real-Time PCR. The methods for RNA extraction and real-time PCR, and the primer information for PPARγ and GAPDH, have been described previously [31]. Statistical Analysis. The data are expressed as mean ± SD. Statistical analysis was performed using SPSS statistical software (SPSS Inc., USA). One-way ANOVA with the Bonferroni post-hoc test was used for the comparison of differences between groups, and P < 0.05 was considered statistically significant. Effect of Maternal EA during Different Developmental Time-Periods on PNE-Induced Changes in Offspring Lung Morphometry. The photomicrographs of the H&E-stained sections showed that the alveolar structure in group "S" was intact and the alveolar septa relatively complete. Compared with the "S" group, the alveolar volume in the "N" group was significantly larger, as determined by the greater MLI (P < 0.01), accompanied by a lower MAN (P < 0.01) and partly ruptured and fused alveolar walls. Compared with the "N" group, the "Pre-EA" and the "Pre- and Post-EA" groups had smaller alveolar volumes (P < 0.05 and < 0.01, respectively, vs. the "N" group) and more alveoli (P < 0.01 vs. the "N" group), and the rupture and fusion of alveolar walls improved; however, the "Post-EA" group was not different from the "N" group (P > 0.05 vs. the "N" group) (Figure 2). Effect of Maternal EA during Different Developmental Time-Periods on PNE-Induced Changes in Offspring Lung PPARγ mRNA and Protein Levels. As measured by real-time PCR and ELISA, PPARγ mRNA (Figure 3(a)) and protein (Figure 3(b)) levels decreased significantly in the "N" group compared to the "S" group (P < 0.05 and < 0.01, respectively). Both of these changes were blocked in the "Pre- and Post-EA" group (P < 0.05 and < 0.01 vs. the "N" group); although the "Pre-EA" group showed an increase, it did not reach statistical significance (P > 0.05 vs. the "N" group); the "Post-EA" group was not different from the "N" group in either PPARγ mRNA or protein levels (P > 0.05 for both).
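The group comparisons reported in these results follow the one-way ANOVA with Bonferroni correction described under Statistical Analysis. Below is a minimal Python sketch of that procedure (the original analysis was run in SPSS); the group names match the study design, but the numeric values are made up for illustration and are not the study data.

```python
# Hypothetical sketch of the one-way ANOVA with Bonferroni-corrected
# pairwise comparisons described in "Statistical Analysis".
import numpy as np
from scipy import stats
from itertools import combinations

rng = np.random.default_rng(0)
groups = {
    "S": rng.normal(50, 5, 5),                 # n = 5 per group, as in the study
    "N": rng.normal(65, 5, 5),
    "Pre-EA": rng.normal(57, 5, 5),
    "Post-EA": rng.normal(64, 5, 5),
    "Pre- and Post-EA": rng.normal(52, 5, 5),
}

# Global test across all five groups.
f_stat, p_val = stats.f_oneway(*groups.values())
print(f"One-way ANOVA: F = {f_stat:.2f}, p = {p_val:.4f}")

# Bonferroni: multiply each pairwise p-value by the number of comparisons.
pairs = list(combinations(groups, 2))
for a, b in pairs:
    t, p = stats.ttest_ind(groups[a], groups[b])
    print(f"{a} vs {b}: corrected p = {min(p * len(pairs), 1.0):.4f}")
```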
Effect of Maternal EA during Different Developmental Time-Periods on PNE-Induced Changes in Offspring HPA Axis. The results showed that serum Cort (Figure 4(a)) and lung GR (Figure 4(b)) levels in the "N" group were significantly higher than in the "S" group (P < 0.05 and < 0.01, respectively), which normalized in the "Pre- and Post-EA" group (P < 0.05 and < 0.01 vs. the "N" group). Furthermore, in the "Pre-EA" group, compared with the "N" group, though the lung GR decreased significantly (P < 0.05), serum Cort was not significantly different (P > 0.05). The "Post-EA" group was not different from the "N" group in both of these parameters (lung GR and serum Cort levels) (P > 0.05). Effect of Maternal EA during Different Developmental Time-Periods on PNE-Induced Changes in Maternal HPA Axis. Discussion Exposure to mainstream or sidestream smoke during pregnancy is an important healthcare risk worldwide. It adversely affects offspring development, especially having a long-term detrimental effect on the respiratory health of the exposed offspring [45][46][47]. Considering nicotine's strong addictive effect and the extensive advertising by tobacco companies targeting teens, the problem of smoke exposure during pregnancy is unlikely to go away soon. Hence, finding novel, safe, and effective intervention strategies to mitigate the impact of perinatal tobacco exposure is of great public health significance. Electroacupuncture is a modification of acupuncture that stimulates acupoints with low-frequency pulsed electrical current. Biologically, it is a combination of acupuncture stimulation and its consequent electrophysiological effects. As a nonpharmacologic therapy, EA is easy to administer and has minimal side effects [48]. ST36 is an acupoint of the "Stomach Meridian" and has been identified as important for general improvement in health. It is effective in treating diseases of many organ systems, including the respiratory system [49,50]. It also modulates HPA axis stability [51]. In general, the effects of acupuncture are determined by factors such as the functional state of the body, stimulation parameters, acupoint selection, and the timing and duration of treatment. Regarding the timing of treatment, it has been demonstrated that acupuncture treatment 4-7 days after the onset of facial paralysis is better than its administration either within the first 1-3 days or 8-10 days after onset [52]. Similarly, for establishing more efficient bladder control of a neurogenic urinary bladder following spinal cord injury, earlier intervention is better than later [53]. These studies indicate that the efficacy of acupuncture at different disease stages differs. Mammalian lung morphogenesis is a complex, finely orchestrated program, which progresses through well-defined, sequential stages to result in a fully functional lung; for example, rat lung development proceeds through the embryonic (E11-13), pseudoglandular (E13-18.5), canalicular (E18.5-20), saccular (E20-PND4), and alveolar (PND4-21) stages. Specific growth factors and signaling mechanisms regulate each stage and drive its progression to the next stage [54].
By comparing EA's protective effects against the nicotine-induced lung phenotype administered exclusively during the "prenatal period" (embryonic, pseudoglandular, canalicular, and early saccular stages of lung development), the "postnatal period" (late saccular and alveolar stages of lung development), or both "prenatal and postnatal periods" (all stages of lung development), we found that the PNE-induced lung morphometric (MLI and MAN) and functional (Cdyn, PEF, MV, and RL) changes were effectively blocked only when EA was administered during both "prenatal and postnatal periods." This is in line with our previous findings [31,32]. However, its application exclusively during the "prenatal period" resulted in incomplete mitigation of perinatal nicotine-induced pulmonary functional changes; for example, nicotine's effects on Cdyn and RL were blocked, but not those on PEF and MV. The application of EA exclusively during the "postnatal period" had even fewer effects; that is, it only blocked PNE-induced changes in RL but not in other pulmonary functional indices. (Values in the figures are mean ± SD; n = 5 per group; **P < 0.01 vs. control; #P < 0.05, ##P < 0.01 vs. nicotine.) These data suggest a graded efficacy of EA's beneficial effects when administered during both "prenatal and postnatal periods," exclusively the "prenatal period," or exclusively the "postnatal period," with administration during both "pre- and postnatal periods" providing the maximum beneficial effect, while administration exclusively during the "postnatal period" had the least beneficial effect. PPARγ is a ligand-activated transcription factor that plays a key role in regulating lipid storage and metabolism in various organs including the lung [55][56][57]. Experimentally, in a rat model, PNE down-regulated PPARγ expression in the developing lung along with the associated nicotine-induced pulmonary structural and functional phenotype [25,58]. EA applied to maternal ST36 acupoints during "pre- and postnatal periods" completely prevented the nicotine-induced decrease in pulmonary PPARγ protein levels, in conjunction with blockage of the perinatal nicotine-induced pulmonary structural and functional changes. Interestingly, EA applied exclusively during the "prenatal period" only slightly blocked the PNE-induced decrease in pulmonary PPARγ protein levels, which, not surprisingly, was accompanied by incomplete protection against PNE-induced pulmonary effects; that is, although the lung morphology improved, it only partially blocked nicotine's effects on pulmonary function. In contrast, administration of EA exclusively during the "postnatal period" neither improved pulmonary PPARγ protein levels nor mitigated nicotine's effects on lung structure and function. To understand the mechanism of EA's effects on nicotine-induced pulmonary morbidity in the developing lung, it is important to understand nicotine's effects on the maternal and fetal HPA axes and how these are affected by EA. Glucocorticoids are key players in mediating the stress response of the HPA axis, both before and after birth [59,60]. In general, maternal and fetal/neonatal glucocorticoid levels correlate closely. High maternal glucocorticoid levels can result in high circulating levels in the fetus and infant through the placenta and breast milk, respectively [61,62].
Nicotine increases glucocorticoid synthesis in the maternal adrenals, decreases placental 11β-HSD-2 activity, and compromises the placental barrier to maternal glucocorticoids, leading to fetal overexposure to maternal glucocorticoids, which in turn affects the fetal HPA axis and growth [29,30]. In line with our previous studies, with nicotine exposure we found decreased maternal hypothalamic CRH but increased pituitary ACTH and serum Cort levels; in addition, fetal serum Cort and lung GR levels increased [31,32]. Previously, it has been shown that the negative feedback from elevated serum Cort and ACTH levels during pregnancy results in inhibited maternal hypothalamic CRH secretion, which normalizes after delivery [63]. It is likely that perinatal smoke/nicotine-induced lung injury in the exposed offspring is, at least in part, causally related to maternal glucocorticoid overexposure. In contrast, EA applied at ST36 throughout pregnancy and lactation results in increased maternal hypothalamic CRH but decreased pituitary ACTH and serum Cort levels. This effectively restores the maternal HPA axis, avoiding offspring overexposure to maternal glucocorticoids, which normalizes the offspring's serum Cort and lung GR levels, thereby preventing nicotine-induced lung injury. Our data suggest that maternal EA during pregnancy can have lasting effects on the maternal HPA axis, that is, at least until the end of lactation. Long-lasting effects after acupuncture have been demonstrated in other conditions as well [64,65]. For example, in a rat model, it has been demonstrated that inhibition of morphine withdrawal syndrome lasted 7 days after the end of treatment [64]. As another example, the beneficial effects of acupuncture anesthesia have been shown to last well into the postoperative recovery period [65]. However, these effects gradually wane, which might explain the lack of beneficial effects on pulmonary function and on PPARγ and serum Cort levels at PND21 following exclusively prenatal EA. We also found that although EA applied to ST36 acupoints during lactation modulated the maternal HPA axis, it had no apparent effect on the offspring rats. This is likely due to the relatively limited transfer of maternal glucocorticoids to the offspring via breast milk. A previous study showed that PPARγ agonists administered during lactation (PND1-PND21) could reverse nicotine-induced lung damage in rat offspring [24]. The contrasting data from that study and our present study are possibly related to the fact that in the previous study the PPARγ agonist was administered directly to rat pups, whereas in the present study, the protective effect was dependent upon transmission of protective factors via breast milk. Overall, our data support that for the optimal benefit of EA at ST36 acupoints against perinatal nicotine-induced lung damage, it needs to be administered both pre- and postnatally. Conclusion In conclusion, in an experimental rat model, maternal EA applied to ST36 acupoints during both "pre- and postnatal periods" preserves offspring lung structure and function despite perinatal exposure to nicotine. This effect is accompanied by blockage of PNE-induced changes in the HPA axes of both the mother and the offspring, thus preventing offspring exposure to the excessive maternal glucocorticoids that occur with perinatal nicotine exposure.
Maternal EA at ST36, administered exclusively during the "prenatal period," affords only limited benefit, while its administration exclusively during the "postnatal period" does not afford obvious protection. Data Availability The data used to support the findings of this study are available from the corresponding author (Bo Ji) upon request. Conflicts of Interest The authors declare that they have no conflicts of interest.
Typhoid perforation in children: an unrelenting plague in developing countries Introduction: Despite global scientific development, typhoid fever and subsequent typhoid perforation have continued to be common in developing countries. The aim of this study was to re-evaluate the pattern of presentation and management outcomes as well as the morbidity and mortality of typhoid perforation among children. Methodology: Children aged 15 years and under with a clinical diagnosis of typhoid perforation were retrospectively studied by reviewing their hospital records between January 2006 and December 2015. Demographic and clinical data were analyzed with SPSS using descriptive statistics for continuous variables and the chi-squared test with Cramér's V for categorical variables. Results: 105 children had typhoid fever; 56 (53.3%) of them were diagnosed with typhoid perforation and 49 were confirmed intra-operatively. Of the children, 55.1% (n = 27) were school-aged while the remaining were adolescents; a majority had the classical triad of persistent fever (100%), abdominal pain (100%) and abdominal swelling (93.9%). Anaemia and hypokalaemia were common. The mean duration of resuscitation was 16 hours (range 6-36 hours). Most perforations were single (n = 36, 73.5%). There were more perforations in the school-aged than in the adolescent children (p = 0.845; V = 0.298). Wound infection (34.7%) was the most frequent morbidity, but faecal fistula (10.2%) was the most troublesome to manage. Death followed severe sepsis and chest infections in four children (8.2%). Conclusion: Typhoid perforation continues to cause morbidity and mortality in children in developing countries. To stem this endemic disease, community health education and improved living conditions are required. Introduction Pierre Charles Alexandre Louis (1787-1872), working in Paris in 1829, identified the pathologic lesions of typhoid fever in the intestines, mesenteric lymph nodes and spleen [1]. Today, typhoid fever remains a major health problem, particularly in developing countries where the lack of a potable water supply, poor environmental sanitation, increasing population and urbanization, as well as poor health care delivery systems are rife [2]. The bacterial agent responsible for the spread of typhoid disease among humans is Salmonella enterica subspecies enterica serovar Typhi (S. Typhi) [3]. To date, the greatest burden of typhoid fever occurs in children [4]. Similarly, children comprise more than 50% of all cases of typhoid perforation (TP), the commonest severe complication of the disease [5]. Once TP has occurred, the overall management outcome becomes a function of several factors [6][7]. The aim of this study was to re-evaluate the pattern of presentation and management outcome as well as the morbidity and mortality of TP among children treated at the University of Calabar Teaching Hospital, Nigeria. Methodology This was a retrospective study of children aged 15 years and under with clinical diagnoses of typhoid perforation (TP) seen at the University of Calabar Teaching Hospital between January 2006 and December 2015. The Paediatric Out-Patient (POP) clinic, Children Emergency Room (CHER) and ward registers were searched to identify all cases of typhoid fever seen during this period. The names and hospital numbers of those with clinical diagnoses of TP were retrieved.
Their case notes were then obtained from the Health Records Department and reviewed to identify patients in whom the clinical diagnosis of TP was confirmed intra-operatively (i.e., by the surgical finding of anterior mesenteric perforation of the distal ileum). This finding was matched with the well-known clinical presentation of the disease as well as compatible results of laboratory and radiological investigations. Patients who had no ileal perforations or whose perforations were due to appendicitis, duodenal ulcer or trauma were excluded from the study. The age, gender, clinical presentation, time interval between onset of abdominal pain and surgical intervention, operative findings, procedure performed, length of hospital stay, overall outcome and complications were then extracted and analyzed. Institutional approval was obtained from the hospital's Research/Ethics Committee. Data analysis was carried out using the Statistical Package for Social Sciences (SPSS) for Windows (IBM Corp., NY, USA) and Computer Programs for Epidemiologic Analysis (CPEA). Descriptive statistics (percentage tables, mean, median, standard deviation and interquartile range) were used to summarize variables. The chi-squared test was used for categorical variables. Cramér's V [8], a chi-squared-based measure of effect size (small effect = 0.01, medium = 0.30, large = 0.50), was used to determine the strength of the relationship between categorical variables. Statistical significance was defined as p < 0.05. Results One hundred and five children aged 15 years and under with typhoid fever were managed during the study period, of whom 56 were clinically diagnosed as having TP, but only 49 had intraoperative confirmation of the disease. Thus, seven children were excluded because their clinical diagnoses were at variance with the intraoperative findings. The age range was 5-15 years with a median age of 10 years. There were 29 males and 20 females, giving a male:female ratio of 1.5:1. The median duration of abdominal pain before presentation was 72 hours with an interquartile range (IQR) of 48-96 hours. The majority (n = 32; 65.3%) presented 48 to 72 hours after the onset of severe abdominal pain (Table 2). Only a few (n = 3; 6.1%) reported within 24 hours of onset of abdominal pain. Forty-six (93.9%) children presented with the classic triad of TP: namely persistent fever, abdominal pain and abdominal swelling (Table 2). There were no cases with atypical presentation. The major physical findings exhibited by all patients were pallor, dehydration, elevated body temperature and generalized abdominal tenderness with guarding (Table 2). Abdominal distension and rigidity were the next most prevalent physical signs. All (n = 49; 100%) patients were anaemic, but hypokalaemia and air under the diaphragm were observed in 28 (57.1%) and 20 (40.8%) children, respectively (Table 3). The mean duration of resuscitation was 16 hours (range 6 to 36 hours). The estimated period from the time of presentation to surgery ranged from about 18 hours to 4 days with an average of 2.2 days. Surgical operations delayed beyond 72 hours were associated with increased mortality. The majority (n = 36; 73.5%) of the perforations were single. The highest number of perforations in a single patient was six (Figure 1), all limited to the terminal 60 cm of the ileum. The overall perforation rate (n = 49/105) was 46.7%. The mean size of the perforations was 1.5 ± 0.48 cm.
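Before the between-group comparisons that follow, here is a minimal Python sketch of the chi-squared test and Cramér's V effect size described in the Methodology (the original analysis used SPSS and CPEA); the contingency table below is hypothetical and is not the study data.

```python
# Illustrative chi-squared test with Cramer's V effect size
# (thresholds: 0.01 small, 0.30 medium, 0.50 large).
import numpy as np
from scipy.stats import chi2_contingency

# Rows: age group (school-aged, adolescent); columns: number of
# perforations (1, 2, >2) -- made-up counts for demonstration only.
table = np.array([[20, 5, 2],
                  [16, 4, 2]])

chi2, p, dof, _ = chi2_contingency(table)
n = table.sum()
k = min(table.shape) - 1                 # smaller table dimension minus one
cramers_v = np.sqrt(chi2 / (n * k))      # V = sqrt(chi2 / (n * k))
print(f"chi2({dof}, n = {n}) = {chi2:.2f}, p = {p:.3f}, V = {cramers_v:.3f}")
```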
There were more perforations in the school-aged children (n = 27; 55.1%) than in the adolescents (n = 22; 44.9%). A chi-squared test for independence indicated no significant association between age and number of perforations: χ²(2, n = 49) = 4.35, p = 0.845, V = 0.298 (Table 1). Similarly, more perforations occurred in males (n = 29; 59.2%) than in females (n = 20; 40.8%). However, this observation failed to establish any statistically significant association between gender and number of perforations, and the effect of gender following Cramér's V adjustment was small: χ²(2, n = 49) = 0.34, p = 0.845, V = 0.083 (Table 1). There was also no significant association of age or gender with the degree of faecoloid peritoneal collection, and the effect of these variables on this parameter was also small (Table 1). In all, 30 (61.2%) patients underwent surgery within 24 hours of presentation. The surgical procedures employed were excision of the edges of the perforations and simple closure in 36 (73.5%) children, segmental resection with primary end-to-end anastomosis in 11 (22.4%), and right hemi-colectomy with ileo-transverse or ileo-colic anastomosis in only two (4.1%) patients. The mean duration of hospital stay was 17.45 ± 8.85 days (range 8 to 45 days). Overall, 45 (91.8%) children recovered and were discharged home. The mortality rate was 8.2% (n = 4/49) and was associated with severe sepsis. Of the four (8.2%) children who died, two (4.1%) had troublesome uncontrolled faecal fistulae while the remaining two (4.1%) had severe chest infections in association with burst abdomen. Discussion Typhoid fever is a common infection that has remained a public health problem in many developing countries [9][10]. In endemic areas, the disease is said to predominantly affect school-aged children [11]. The results of our study support this, as they demonstrated that school-aged and adolescent children were those commonly diagnosed with TP. Aside from poor sanitation and limited availability of clean and potable water, patronage of food hawkers at school, who may be carriers of the disease [12], is most commonly associated with the high prevalence of the disease among school-aged children. Similar to previous studies of TP in children, both boys and girls were equally affected [5,13]. The finding that there were no cases of TP in the under-5 age group agreed with those of others [13][14]. However, Sinha et al. [15] have challenged the view that typhoid fever is a disease of school-aged and adolescent children. They maintained that the incidence and age distribution of the disease vary between developing countries and even within the same country, and that typhoid is a common and significant cause of morbidity in children between 1 and 5 years of age. Consequently, reports from other studies [16][17] have shown that children under 5 years represent a high-risk group for TP. We therefore agree with the call for a reassessment of the optimum age of typhoid immunization and the choice of vaccines [15]. The current practice is that the injectable vaccine is approved for children aged two years and above while the oral one is approved for children aged six years and above [18]. The occurrence of increasing abdominal pain in typhoid fever is thought to signify the onset of an intraabdominal complication, most likely perforation [11].
Therefore, judging by the duration of severe abdominal pain prior to presentation, the majority of the children might have perforated several hours or even days previously, at home. This agrees with the observation that of the patients who perforate, most of whom are children, 90% do so outside the hospital [19]. This state of affairs may have been exacerbated by initial experimentation by parents with other drugs and native medications, as found in other studies [17,20]. Patients were then brought to hospital only when the experimental treatments failed. Hosoglu et al. [21] reported that inadequate and improper treatment increases the risk of perforation in typhoid fever. This fact and the attendant delay in commencement of treatment may have accounted for the very high perforation rate found in this series. Our study showed that almost all patients exhibited the classical triad of TP symptoms. This finding was similar to that of Uba et al. [22], where all patients manifested the classical features of TP. Perhaps this could be attributed to late presentation of the patients, at which time the clinical features were already well established. However, a high index of suspicion is required in all instances for early and accurate diagnosis because of the possibility of confusion with other disease entities [11]. Among other risk factors for perforation reported by Hosoglu et al. was anaemia [21]. All our patients were already anaemic at the time of presentation, and blood transfusion formed part of the resuscitative measures in most instances. The pathophysiology of the anaemia is said to be multifactorial and includes bone marrow suppression, haemophagocytosis, sepsis and malnutrition [11,17]. Similar to many other reports [5,17,22,23], the majority of perforations were single in both genders. Our findings, in agreement with those of the Zaria series [5], showed that the perforation rate increased with age, though this did not reach statistical significance. Similarly, our study showed that the volume of faecoloid peritoneal fluid collection was independent of age and gender. However, it is known that faecoloid peritonitis and overwhelming septicaemia from S. Typhi and other intestinal bacteria are important prognostic factors [24][25]. Several operative modalities have been advocated in the management of children with TP [23,26]. In this study, as in others [5,17,22,23], excision of the edges of the perforation and simple closure was the predominant surgical procedure performed. This is in agreement with the position that this simple, quick and effective procedure is the best choice because these were high-risk surgical patients [17,27]. Overall, the majority of our patients survived and were discharged home. The mortality rate recorded in this series was comparable with that of an earlier report by Archibong et al. [20] from the same centre but is far lower than those from other studies [5,22]. Aside from variation in sample size, the difference in mortality rate may be due to the availability of more recent and potent antibiotics to combat the systemic effects of the disease following surgical intervention. Nevertheless, morbidity remained high, with the most frequent complication being wound infection, as in other studies [5,17,22,23]. However, the most challenging complication was entero-cutaneous fistula, as was also observed in other series [17,22]. The abdominal wound following surgery for TP is usually heavily contaminated [28].
This formed the basis for the practice of delayed wound closure by some authors [29][30]. All the wounds in this study were, however, closed primarily with acceptable outcomes because of the logistic challenges of performing delayed wound closure in our setting, in agreement with earlier studies [31][32]. Conclusion Typhoid perforation, the commonest severe complication of typhoid fever, continues to cause high morbidity and mortality in children in developing countries. To stem this disease, community public health education is required to facilitate early presentation and management of children with persistent fever. There is a need for improved sanitation and provision of safe water supplies, as well as better overall living conditions for those who are vulnerable.
MULTICHANNEL POLARIZATION LIDAR MEASUREMENTS OF AEROSOLS AND CIRRUS CLOUDS In this paper we report the use of a 6-channel polarization detector to measure the optical properties of aerosols and clouds. The polarization lidar system is designed to measure Stokes vectors and Mueller matrices from the backscattering of air, aerosols and clouds by using several polarizers set at different angles, and a retarder to measure circular polarization. The 4-component Stokes vectors of the scattering media are constructed, and a case of tropopause cirrus cloud and stratospheric aerosols is measured with the Mueller matrix derived. INTRODUCTION A polarization lidar system is an important tool to measure aerosols and clouds by deriving depolarization ratios based on the two-component parallel and perpendicular polarization states. However, as demonstrated a century ago, the 4-component Stokes vector is a more complete description of the polarization state, which can be used to derive the Mueller matrix and so provide information about the optical properties of various scattering media. The Stokes vector consists of four parameters, S = [I, Q, U, V]^T, defined in terms of the components of the electric fields of the light waves. Here a vector A^T indicates the transpose of A. The components of T are determined by the polarization filters and the retarder of the lidar system, as will be described later. The Mueller matrix Mc describes the state of the cloud, which consists of randomly oriented particles. As shown in previous works [3][4][5], the Mueller matrix for randomly oriented particles can be written in the block-diagonal form

Mc = [ a1   b1   0    0
       b1   a2   0    0
       0    0    a3   b4
       0    0   -b4   a4 ],

which at backscattering can be further simplified as

Mc = a1 diag(1, 1-d, d-1, 2d-1),

where d is the depolarization ratio [3][4][5]. The lidar system The lidar system includes a 532 nm laser and a Cassegrain telescope of 20 cm diameter. Signals are measured by the detector system, which consists of a photomultiplier tube (Hamamatsu R9880) and filter systems that include polarizers and interference filters (0.3 nm FWHM at 532 nm). Signals are treated by a transient recorder (Licel system) with a spatial resolution of 7.5 m. Each profile is accumulated for 30-60 sec. The lidar system has been described in previous papers [1][2]. The signals are accumulated for 1 min for each channel, and a complete measurement of the 5 channels takes about 5 min. The 5 polarization channels are defined by using linear polarizers at orientations of 0° (#6), ±45° (#3, 4) and 90° (#5), which are arranged by comparing their intensities with the laser emission, whose polarization direction exiting to the sky is set by using a half-wave plate. The 532 nm filter is placed in front of the polarizing filter system to receive the laser backscattered light. The sixth channel is a dark channel to check the background. Signals are recorded by a multichannel analyzer (LICEL system). The telescope and photomultiplier system are considered to have an essentially constant response for any polarizer setting. We take Sout = [I0, Q0, U0, V0] as the Stokes vector of the backscattered signal from the cirrus clouds. As shown previously, the measurements produce another Stokes vector T. The combined Mueller matrix of the detector is Mx = M1(θ) M2(φ), with M1 and M2 the Mueller matrices of the linear polarizer at a specific angle (θ = 0°, 90°, ±45°) and of the retarder, respectively. When the measurement involves only a linear polarizer without a retarder (such as channels 1 and 2), we have φ = 0.
T = M1 M2 Sout. (3)

For example, the Mueller matrix of a linear polarizer at θ = 0° is

M1 = (1/2) [ 1  1  0  0
             1  1  0  0
             0  0  0  0
             0  0  0  0 ].

The four components of T = [T0, T1, T2, T3] are measured by the lidar system as shown in Fig. 1. The first term, T0, is the most interesting since it is the intensity term [6]. We can derive T0 for a linear polarizer at angle θ and a retarder of phase φ = π/2. After the expansion, we get

T0(θ, φ) = (1/2)(I0 + Q0 cos 2θ + U0 sin 2θ cos φ + V0 sin 2θ sin φ),

where θ is the orientation of the linear polarizer. Again, φ = 0 means without the retarder. T0(θ, φ) is the signal read from the transient recorder, shown in Fig. 2. Therefore, a complete determination of the Stokes vector Sout can be made from a few measurements with a quarter-wave plate and a linear polarizer set at 0°, 90° and ±45°. Normally, we normalize these quantities to set I0 = 1, so the measurements are relative. In order to determine a3 and b4, we have to use circular polarization as the light source. At backscattering the matrix elements satisfy a2 - a3 + a4 = 1. For atmospheric applications, the circular polarization is very small, as shown in Hansen and Travis [7]. In practice, the 4th row and column can be ignored, leaving Mc as a 3x3 matrix. Under this condition, we find a3 = a2 ~ 0.5. So the Mueller matrix of the 16 km cloud is:
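As a hedged numerical sketch of the retrieval chain described above — with illustrative values rather than the measured 16 km cloud data, and channel angles matching the five polarizer settings — the Stokes vector can be reconstructed from the intensity channels and the simplified backscatter Mueller matrix built from a depolarization ratio d:

```python
# Illustrative reconstruction of Sout = [I0, Q0, U0, V0] from the
# T0(theta, phi) relation above; values are made up, not measured data.
import numpy as np

def T0(S, theta_deg, phi_deg=0.0):
    """Intensity behind a linear polarizer at angle theta (plus an
    optional retarder of phase phi) for input Stokes vector S."""
    I0, Q0, U0, V0 = S
    t, p = np.radians(2 * theta_deg), np.radians(phi_deg)
    return 0.5 * (I0 + Q0 * np.cos(t)
                  + U0 * np.sin(t) * np.cos(p)
                  + V0 * np.sin(t) * np.sin(p))

S_true = np.array([1.0, 0.6, 0.1, 0.02])   # assumed cloud return, I0 = 1

# Five measurements: 0, 90, +45, -45 deg, and +45 deg with a pi/2 retarder.
m0, m90 = T0(S_true, 0), T0(S_true, 90)
m45, m135 = T0(S_true, 45), T0(S_true, -45)
mqw = T0(S_true, 45, 90)

I0 = m0 + m90
Q0 = m0 - m90
U0 = m45 - m135
V0 = 2 * mqw - I0
print(np.round([I0, Q0, U0, V0], 3))        # recovers S_true

# Simplified backscatter Mueller matrix for randomly oriented particles;
# note the identity a2 - a3 + a4 = 1 quoted in the text.
d = 0.3
Mc = np.diag([1.0, 1 - d, d - 1, 2 * d - 1])
assert np.isclose(Mc[1, 1] - Mc[2, 2] + Mc[3, 3], 1.0)
```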
SPARE-Tau: A flortaucipir machine-learning derived early predictor of cognitive decline Background Recently, tau PET tracers have shown strong associations with clinical outcomes in individuals with cognitive impairment and in cognitively unremarkable elderly individuals. Flortaucipir PET scans measure tau deposition in multiple brain areas as the disease progresses. This information needs to be summarized to evaluate disease severity and predict disease progression. We, therefore, sought to develop a machine learning-derived index, SPARE-Tau, which successfully detects pathology in the earliest disease stages and accurately predicts progression compared to a priori-defined region of interest (ROI) approaches. Methods 587 participants of the Alzheimer's Disease Neuroimaging Initiative (ADNI) cohort had flortaucipir scans, structural MRI scans, and an Aβ biomarker test (CSF or florbetapir PET) performed on the same visit. We derived the SPARE-Tau index in a subset of 367 participants. We evaluated associations with clinical measures for CSF p-tau, SPARE-MRI, and flortaucipir PET indices (SPARE-Tau, meta-temporal, and average Braak ROIs). Bootstrapped multivariate adaptive regression splines regression analyzed the association between the biomarkers and baseline ADAS-Cog13 scores. Bootstrapped multivariate linear regression models evaluated associations with clinical diagnosis. Cox-hazards and mixed-effects models investigated clinical progression and longitudinal ADAS-Cog13 changes. The Aβ-positive cognitively unremarkable participants, not included in the SPARE-Tau training, served as an independent validation group. Results Compared to CSF p-tau, meta-temporal, and averaged Braak tau PET ROIs, SPARE-Tau showed the strongest association with baseline ADAS-Cog13 scores and diagnosis. SPARE-Tau also presented the strongest association with clinical progression in cognitively unremarkable participants and with longitudinal ADAS-Cog13 changes. Results were confirmed in the Aβ+ cognitively unremarkable hold-out sample participants. CSF p-tau showed the weakest cross-sectional associations and longitudinal prediction. Discussion Flortaucipir indices showed the strongest clinical association among the studied biomarkers (flortaucipir, florbetapir, structural MRI, and CSF p-tau) and were predictive in the preclinical disease stages. Among the flortaucipir indices, the machine-learning derived SPARE-Tau index was the most sensitive clinical progression biomarker. The combination of different biomarker modalities better predicted cognitive performance. Introduction Alzheimer's disease (AD) is neuropathologically defined by the presence of tau neurofibrillary tangles and Aβ plaques [1]. Among these defining histopathological lesions, neurofibrillary tangles have been associated with faster clinical progression than Aβ plaques [2,3]. Tau has historically been measured in cerebrospinal fluid (CSF); however, this method does not provide sufficient information on the spatial distribution of tangle accumulation throughout the brain. On the other hand, advances in Positron Emission Tomography (PET) have yielded several tau tracers, which have recently become available to quantify regional brain neurofibrillary tangle deposition precisely. These new tracers can detect protein deposits present years before cognitive decline manifests.
Tau tangles have been shown to capture stages of Alzheimer's disease [4], leading to diagnostic frameworks enabling the categorization of subjects along the AD continuum [5] using a biomarker-based definition of AD [5,6]. Neuroimaging techniques capture changes across the whole brain that can be successfully summarized using machine-learning derived approaches [7][8][9]. Machine-learning algorithms generate optimal weightings for the different brain regions, deriving summary indices with better classification accuracy and conversion prediction than simple anatomical summary metrics [10,11]. Previous work has developed neuroimaging-based machine-learning indices using magnetic resonance imaging (MRI) [7][8][9]. These indices have multiple uses in clinical practice and trials, in which they can facilitate recruitment and evaluate outcomes [12][13][14]. However, studies have relied on a priori defined anatomical composites (i.e., meta-temporal regions of interest (ROI)) to evaluate the association with longitudinal outcomes [15][16][17][18][19][20]. This selection might not provide the optimal weighting of the individual brain regions involved throughout the disease. There is also limited information regarding biomarker-associated outcomes [15]. In this work, we developed a new machine-learning derived tau PET index, the SPARE-Tau (Spatial Pattern of Abnormality for Recognition of Early Tau pathology), and compared it to previously established biomarkers. We evaluated the clinical associations and prognostic value of CSF p-tau, a priori-defined regional tau PET indices (meta-temporal ROI and average Braak score), and a machine learning-derived MRI index (SPARE-AD) [8,9]. We hypothesize that [1] machine learning derived flortaucipir PET imaging composites offer stronger associations with cross-sectional and longitudinal clinical measures than a priori-defined tau PET ROIs, and [2] they correlate better with clinical outcomes than MRI-defined indices and CSF p-tau. Methods Participants with flortaucipir scans, structural MRI scans, and CSF or florbetapir PET Aβ testing during the same study visit were included (S1 Table). Our study included 344 cognitively unremarkable (CU), 182 MCI, and 61 dementia participants. Participants had yearly neuropsychological battery testing and clinical assessments [21]. The median follow-up was 1.9 years (IQR: 0.79-2.21 years). Further details on the clinical core, recruitment, and diagnostic methods have been previously published [22,23], and details can be found at http://adni.loni.usc.edu/. All the data are available at http://adni.loni.usc.edu/. Participants were stratified as having normal (Aβ-) or pathological (Aβ+) Aβ biomarker status if either their cerebrospinal fluid (CSF) or florbetapir PET scan indicated pathological Aβ values (see PET and CSF sections below). The demographic and biomarker information of the participants is summarized in S1 Table. We downloaded the anonymized data from the ADNI website. Patients gave written informed consent; no minors were recruited into the study. The study was approved by the local institutional review boards (IRBs). MRI acquisition and processing 3T sagittal MP-RAGE scans for each subject were selected at the same clinical visit as the flortaucipir scan and were segmented and parcellated with Freesurfer (v 5.3) [24].
Additional details on the image processing can be found on the ADNI website (http://www.adni-info.org/). The Spatial Pattern of Abnormality for Recognition of Early Alzheimer's disease [25] (SPARE-AD) index is a previously validated imaging signature used to estimate Alzheimer's disease-like atrophy patterns in the brain [8,11]. A support vector machine (SVM) was used to maximally differentiate between dementia and CU participants. The SVM classifier with a linear kernel was trained with structural MR scans to classify participants as dementia or CU. The training data included only healthy controls with known-negative Aβ status and only dementia participants with known-positive Aβ status. Higher positive SPARE-AD values indicate a more Alzheimer's disease-like brain structure, and lower negative values indicate normal brain structure. The SPARE-BA model was trained with CU data only and applied to all participants included in this study. A model having a radial basis function kernel was evaluated with leave-one-out cross-validation using structural region-of-interest volumes from 352 CU participants and had a mean absolute error of 4.22. The predicted brain age for the CU participants was then adjusted for age using a linear regression model, as in previous work [9]. PET acquisition and processing For the flortaucipir PET scans, 370 MBq (10.0 mCi) ± 10% of 18F-flortaucipir were administered, with a 30-minute (6 × 5-minute frames) acquisition at 75-105 min post-injection. Each flortaucipir scan was co-registered to its corresponding MP-RAGE scan, and the mean flortaucipir uptake within each Freesurfer-defined brain region was calculated. Data were corrected for partial volume effects using the geometric transfer matrix approach. Mean regional uptake was normalized by inferior cerebellar gray matter as a reference region to generate the flortaucipir SUVRs. Further information can be found on the ADNI website (http://adni.loni.usc.edu/). We included partial volume corrected ROIs, which were normalized to the inferior cerebellum. The meta-temporal ROI was calculated as previously described (see supplementary material). The average Braak score was calculated as the average of Braak I, Braak III-IV, and Braak V-VI areas. For the florbetapir PET scans, 370 MBq (10.0 mCi) ± 10% of 18F-florbetapir were administered, with a 20-minute (4 × 5-minute frames) acquisition at 50-70 min post-injection. SPM8 software was used to co-register the florbetapir PET scans with the corresponding MRI scans. Florbetapir means from the gray matter were extracted within subregions of four large regions (frontal, anterior/posterior cingulate, lateral parietal, lateral temporal) [26,27], and weighted means for each of the four main regions were created. A composite was used as a reference region, based on the whole cerebellum, brainstem/pons, and eroded subcortical white matter (http://www.adni-info.org/). A value ≥0.78 in the summary composite florbetapir index classified participants as Aβ+.
The SPARE-Tau index. A classification model using a support vector machine (SVM) with a linear kernel was developed and trained to predict the clinical status of 367 participants, defined as a control group (n = 218, CU individuals with normal Aβ biomarker values) or a pathologic group (n = 149, MCI and dementia individuals with pathological Aβ values). The model was trained with 50-fold cross-validation and used the Freesurfer-parcellated ROIs' SUVR values. Similar machine learning models have been previously described and validated on MRI [8,11]. More positive SPARE-Tau indices indicate pathological tau deposition, and more negative indices imply lower tau deposition. Areas included in the final model are summarized in S1 Fig. Cerebrospinal fluid collection and Aβ1-42 measurements CSF samples were obtained in the morning after an overnight fast and processed as previously described [28,29] (http://adni.loni.usc.edu/). Roche Elecsys Aβ1-42 and tau CSF immunoassay measurements were performed at the UPenn/ADNI biomarker laboratory following the Roche Study protocol [22]. The cutoff for pathological values was 977 pg/mL for Aβ1-42 and 27 pg/mL for p-tau [30]. Measurements performed during the same ADNI visit as the flortaucipir scans were selected (median time interval between CSF draw and PET scan: 12 days). Statistical analysis We calculated median and interquartile range (IQR) values to summarize quantitative variables and proportions for categorical variables. Kruskal-Wallis analyses and chi-square tests were applied to compare continuous and categorical variables between the groups. Spearman rank correlations evaluated the associations between the different measures. CU participants with normal Aβ biomarker values and MCI and dementia individuals with pathological Aβ values were included in the SPARE-Tau training. CU participants with pathological Aβ biomarker values were not used in the training of the SPARE-Tau index and therefore served as an independent testing group. Multivariate analyses included standardized biomarker values to allow comparison of the coefficients. We applied multivariate adaptive regression splines (MARS) models to evaluate the association between the different biomarkers and baseline ADAS-Cog13 scores. For each biomarker, we performed 1,000 bootstraps with replacement. We analyzed 1,000 bootstrapped linear regression models with biomarker values as dependent variables and age, gender, education, and clinical diagnosis as predictors. We compared the R² and coefficient values from the bootstrapped models using Friedman tests, followed by post-hoc comparisons with Wilcoxon signed-rank tests to evaluate which biomarker offered the best fit. A linear discriminant model with 10-fold cross-validation identified cutoffs to define normal and pathological tau PET indices and SPARE-AD scores used in the longitudinal analyses. Cox hazards models evaluated the progression from CU to MCI (sex, age, and education included as covariates). We used mixed-effects models that included ADAS-Cog13 as the outcome to evaluate longitudinal disease progression. These models included time, sex, age, education, clinical diagnosis, and biomarkers as fixed effects. We included interactions of clinical diagnosis and biomarkers with time. Participants and time were included as random effects. Power transformations were used in parametric analyses as needed to achieve normal distribution. P-values < 0.05 (two-sided) were considered statistically significant. Bonferroni-Holm correction was applied to correct for multiple comparisons and the post-hoc comparisons. Analyses were performed using R version 4.2.
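As a minimal sketch of the linear-SVM step behind SPARE-Tau described above — not the authors' exact pipeline, and using synthetic stand-ins for the ROI SUVR feature matrix (the original analyses used R) — the continuous index can be read off the SVM decision function:

```python
# Hypothetical sketch: linear-kernel SVM separating control vs. pathologic
# participants from flortaucipir ROI SUVRs, with cross-validation.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
n_rois = 100                                   # assumed number of ROIs
X_ctrl = rng.normal(1.0, 0.1, (218, n_rois))   # CU, Abeta-negative
X_path = rng.normal(1.2, 0.2, (149, n_rois))   # MCI/dementia, Abeta-positive
X = np.vstack([X_ctrl, X_path])
y = np.array([0] * 218 + [1] * 149)

clf = SVC(kernel="linear")
scores = cross_val_score(clf, X, y, cv=50)     # 50-fold CV, as in the paper
print(f"mean CV accuracy: {scores.mean():.3f}")

# The continuous index is the signed distance from the separating
# hyperplane: positive = tau-like, negative = normal.
clf.fit(X, y)
spare_tau_like_index = clf.decision_function(X)
```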
Correlation between AD biomarkers We evaluated correlations between the biomarkers included in this study (SPARE-Tau, average Braak areas, meta-temporal ROI, CSF p-tau, SPARE-AD, and the florbetapir composite score) in groups stratified by Aβ status. Associations were stronger in the Aβ+ participants than in the Aβ- participants (Fig 1A and 1B). Aβ+ participants showed strong correlations between the tau PET indices and moderate correlations of the tau PET indices with the other biomarkers (CSF p-tau, SPARE-AD, and the florbetapir composite score). Aβ- participants presented moderate correlations between the different tau indices, but correlations with the other biomarkers (CSF p-tau, SPARE-AD, and the florbetapir composite score) were weak or absent (≤0.25). Baseline clinical associations SPARE-Tau best explained ADAS-Cog13 values in the Aβ+ participants when we compared the R² values (explained ADAS-Cog13 variance) of the bootstrapped MARS splines (Fig 1C and 1D, S2 and S3 Tables). In the Aβ- participants, only the florbetapir summary composite showed an association with ADAS-Cog13 (Fig 1D and S3 Table) similar to that of SPARE-Tau, whereas all other indices explained less ADAS-Cog13 variance (p-value < 0.0001). Combining SPARE-Tau and SPARE-AD (global 1) led to an increase in the explained ADAS-Cog13 variance in the Aβ+ (R² difference 0.14, p-value < 0.00001) and Aβ- participants (R² difference 0.10, p-value < 0.0001). Further adding the florbetapir summary composite (global 2) led to an increase in the explained ADAS-Cog13 variance in the Aβ- participants (R² difference 0.14, p-value < 0.00001), with a minimal but significant improvement in the Aβ+ participants (R² difference 0.009, p-value < 0.0001). We excluded CSF p-tau from further analyses due to its weak association with the clinical measures. All flortaucipir indices were higher in Aβ+ participants (including the Aβ+ CU group for SPARE-Tau and the meta-temporal ROI), with a progressive increase in the Aβ+ MCI and dementia participants (Fig 1E). SPARE-Tau presented the highest z-scored differences in all the Aβ+ groups compared to the Aβ- CU group (p-value < 0.0001). The average Braak score showed the highest value for the Aβ- MCI group (p < 0.0001) and was also the only index that showed higher values in the Aβ- MCI than in the Aβ+ CU group. SPARE-Tau showed the highest R² (0.48, IQR = 0.45-0.51), compared to the average Braak score (R² = 0.41, IQR = 0.38-0.44) and the meta-temporal ROI (R² = 0.41, IQR = 0.38-0.44). Longitudinal clinical associations To evaluate the association with longitudinal changes, we estimated SPARE-Tau, average Braak score, meta-temporal ROI, and SPARE-AD cutoffs based on classifying CU Aβ- participants versus Aβ+ MCI and dementia participants. For the florbetapir Aβ PET, we used the previously derived florbetapir composite score. All the biomarkers predicted progression from CU to MCI/dementia when all the CU participants were included (Table 1), but when we evaluated clinical progression in the Aβ+ CU participants, the meta-temporal ROI did not predict clinical progression, and SPARE-Tau remained the strongest predictor. All three flortaucipir PET measures and SPARE-AD predicted longitudinal changes in ADAS-Cog13 in the whole cohort (Table 2), but only SPARE-Tau predicted longitudinal changes in the Aβ+ CU participants. None of the biomarkers predicted longitudinal ADAS-Cog13 changes in the Aβ- CU participants.
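As an illustration of the Cox proportional hazards analysis behind the progression results (Table 1), here is a hedged Python sketch using the lifelines package with synthetic data; the column names are hypothetical, not ADNI variable names, and the original analysis was run in R.

```python
# Hypothetical sketch of the CU -> MCI progression analysis: Cox
# proportional hazards with dichotomized SPARE-Tau status plus sex,
# age, and education as covariates. All data below are synthetic.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n = 300
df = pd.DataFrame({
    "years_followup": rng.uniform(0.5, 3.0, n),
    "progressed": rng.integers(0, 2, n),          # 1 = converted to MCI
    "spare_tau_abnormal": rng.integers(0, 2, n),  # above the LDA cutoff
    "age": rng.normal(73, 6, n),
    "male": rng.integers(0, 2, n),
    "education_years": rng.normal(16, 2, n),
})

cph = CoxPHFitter()
cph.fit(df, duration_col="years_followup", event_col="progressed")
print(cph.summary[["exp(coef)", "p"]])            # hazard ratio per covariate
```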
Discussion Among the three tested flortaucipir measures (SPARE-Tau, meta-temporal ROI, and average Braak score), our novel SPARE-Tau index offered the best classification accuracy. SPARE-Tau showed the largest differences between the Aβ+ and the Aβ- CU participants, best predicted baseline ADAS-Cog13 scores, and presented the strongest association with longitudinal clinical progression (including in the CU Aβ+ participants). AD biomarker models and studies of participants with AD autosomal dominant mutations indicate that Aβ biomarker changes precede tau biomarker changes [4,31]. About 30% of CU elderly individuals are Aβ+ in the seventh decade of life [32,33]. In turn, tau changes are closer to the onset of cognitive decline and have been considered a marker for the disease [5]. Neuropathological studies showed a strong association of tau pathology with cognition, explaining a large part of the cognitive changes present in cognitively impaired individuals [34]. Flortaucipir binding correlates with neurofibrillary tangle deposition in AD and with the regional neurofibrillary pathology burden [35]. Therefore, we expected tau PET tracers to outperform Aβ biomarkers in predicting clinical outcomes. Imaging-based biomarkers reflect changes across the whole brain. This information needs to be summarized to facilitate its clinical application. Previous flortaucipir PET measures have been developed as averages of ROIs [27,36]. This follows previous MRI approaches that identified hippocampal atrophy as a measure of neurodegeneration in AD. A limitation of these analyses is that they select a subset of the regions and do not weight them according to their importance. We previously developed a support vector machine-derived MRI index, SPARE-AD, which showed improved classification and prediction of clinical progression compared to ROI-based MRI indices [10]. Here we expanded the SPARE framework to include the SPARE-Tau index. These machine-learning approaches combine the information derived from multiple brain regions to provide global, easily interpretable, sensitive and specific measures compared to a single ROI, like the hippocampus. Flortaucipir has shown an inverse correlation with brain atrophy, stronger than that observed for Aβ PET scans [17,18], in line with our finding. We identified a correlation (r = 0.62) between our SPARE-AD and SPARE-Tau indices. Our previously developed MRI index (SPARE-AD) underperformed all flortaucipir indices when evaluating clinical progression and cognitive decline. This finding might be counterintuitive because structural MRI reflects atrophy in AD-specific regions, and those might be injured later in the AD timing model [37]. Additionally, potential interactions with cognition should be studied in future work, evaluating in vivo the different mechanisms of tau-related cognitive impairment (local structural damage versus functional network dysfunction). Nevertheless, neuropathological studies indicate that AD pathology is the primary driver of cognitive impairment [38]. Among the flortaucipir indices, the meta-temporal ROI (or other ROIs) is the most commonly used measure when clinical associations of flortaucipir scans are evaluated [15,19,35,39]. We also included an index reflecting global flortaucipir burden, the average Braak index, based on the staging defined by Braak [40].
We evaluated several cross-sectional metrics (clinical diagnostic accuracy and ADAS-Cog13) and clinical progression (clinical progression of CU and MCI participants and cognitive decline measured using ADAS-Cog13). SPARE-Tau outperformed the commonly used ROI-based indices (meta-temporal ROI and average Braak index). Moreover, SPARE-Tau identified the largest effect-size difference when we compared Aβ+ CU participants (not used for training) to Aβ- CU participants and was the strongest predictor of clinical progression and cognitive decline in the Aβ+ CU participants (hold-out validation group). We also stratified our analyses by Aβ status, analyzing Aβ- and Aβ+ participants separately in several analyses, whereas our training groups compared CU Aβ- versus cognitively impaired Aβ+ participants.

CSF p-tau underperformed all flortaucipir indices in our cross-sectional analyses, and we therefore excluded it from the longitudinal analyses. It can be expected that CSF tau measurements underperform ligand-based PET tau estimates, as CSF tau is a more indirect measure of overall brain tau deposition, and tau is deposited intracellularly in the form of neurofibrillary tangles. Other studies found a stronger cross-sectional association of PET ROI metrics with clinical measures than that observed for CSF tau assays [41]. Alternatively, it is possible that CSF p-tau identifies changes at an earlier preclinical stage than SPARE-Tau (23.6% abnormal CSF p-tau and 12% abnormal SPARE-Tau in the CU Aβ+ group). This could also explain why CSF p-tau underperforms SPARE-Tau given a short follow-up. One study has described inconsistent findings, with CSF p-tau better predicting cognition in CU participants than tau PET [42]. These differences might be due to differences in cohort composition, CSF assays, and length of follow-up. Further studies with longer longitudinal follow-up that include plasma, CSF, and PET tau measures in CU participants are needed. We expand the previous findings by additionally evaluating the ADAS-Cog13 scale, predicting clinical conversion in CU participants, and assessing CSF p-tau, which surprisingly showed the lowest clinical associations.

One recent study evaluated the longitudinal correlates of structural MRI and flortaucipir PET [15]. That study indicated that the meta-temporal flortaucipir ROI showed the strongest association with longitudinal MMSE scores, followed by MRI (using a predefined temporal lobe ROI), with Aβ PET least associated. Adding MRI information increased the MMSE variability explained by the biomarkers. The authors acknowledged several limitations, such as the lack of more detailed clinical measures, the lack of diagnostic conversion outcomes, and the need to evaluate biofluid biomarkers. Other studies have considered flortaucipir scans in preclinical stages, selecting a single ROI and identifying longitudinal clinical decline based on increased uptake in that ROI [16,43]. In addition, we included sophisticated machine-learning derived measures that improve diagnostic performance over a priori-defined ROIs. The meta-temporal ROI also underperformed the average Braak score. We also confirmed that including our MRI measure, SPARE-AD, improved the model evaluating longitudinal ADAS-Cog13 changes; however, when we looked at the model's different components, SPARE-AD was only associated with baseline ADAS-Cog13 values and not with longitudinal ADAS-Cog13 changes.
In agreement with previous neuropathological studies and disease models, recent studies have confirmed that flortaucipir PET scans (based on predefined ROIs) have a stronger association with longitudinal outcomes than Aβ PET scans [16,43]. The finding of larger SPARE-Tau and average tau PET changes during follow-up in Aβ+ participants agrees with recent findings of a ceiling effect in lower Braak-stage regions as the disease progresses [44]. Therefore, it is expected that indices tracking areas beyond the temporal lobe will better identify AD-related tau deposition.

This manuscript's strengths are the large sample size and the comparison of multiple tau indices (including CSF and PET), the florbetapir composite, and MRI structural measures using machine-learning derived indices. We also evaluated ADAS-Cog13, which offers more information than the MMSE used in other analyses, and we evaluated longitudinal outcomes. There are several limitations to our study: first, only a small number of participants progressed from CU to MCI (Table 1). Second, CSF tau and florbetapir scans were not available for all participants. Finally, although we used leave-one-out cross-validation, a commonly used procedure to ensure generalization of results, no independent validation cohort was accessible to us to confirm our results. However, we designed our study to leave Aβ+ CU participants out of the training sample, and this group served as an independent test sample.

This manuscript presents a novel machine-learning derived flortaucipir index that outperforms other previously utilized flortaucipir indices on multiple cross-sectional and longitudinal clinical outcomes, detecting and better prognosticating changes in the preclinical disease stages. We further compared its performance to other biomarker modalities, confirming that SPARE-Tau showed the best prediction and that MRI, but not florbetapir, added value to predicting baseline cognitive scores in Aβ+ participants.

Table. Cross-sectional prediction of ADAS-Cog13 scores using multivariate adaptive regression splines models.
Iatrogenic Delayed Pneumothorax After Transbronchial Biopsy

Transbronchial biopsy (TBB) is one of the procedures commonly performed by pulmonologists in everyday practice. Although the procedure has a very low-risk profile, complications develop in certain patients. Pneumothorax is one such complication of TBB. Because only a small percentage of procedures are complicated by pneumothorax, only a handful of cases of its delayed occurrence have been reported in the past 5 decades. The purpose of our report is to highlight another uncommon yet interesting case of delayed iatrogenic pneumothorax in an immunocompromised patient after TBB. Although the chain of events behind the pathophysiology of delayed pneumothorax largely remains a mystery, its development has been linked to altered immune mechanics, as such cases are frequently recognized in immunocompromised patients.

Introduction

Transbronchial biopsy (TBB) is a procedure routinely performed by pulmonologists for diagnosing conditions such as sarcoidosis, infections, and cancerous etiologies. Under rare circumstances, it can lead to pneumothorax as a potential complication. The incidence of pneumothorax following a TBB ranges from 1% to 5%. [1][2][3][4] While pneumothoraces occurring shortly after a TBB are suspected and diagnosed readily, serious problems may arise when a pneumothorax presents as a delayed complication, defined as a pneumothorax presenting more than 4 hours after a TBB. The reported incidence of these delayed pneumothoraces varies from 1% to 4%. [5][6][7] In our case report, we describe a case of delayed pneumothorax that came to recognition about 22 hours following TBB.

Case Description

A 39-year-old man presented with worsening shortness of breath. The patient had a past medical history significant for follicular lymphoma (diagnosed 3 years earlier), for which he had completed chemotherapy. He had a relapse of follicular lymphoma, which continued to progress despite chemotherapy with rituximab, cyclophosphamide, hydroxydaunorubicin, vincristine, and prednisone (R-CHOP); he eventually received an allogeneic stem-cell transplant 1 year ago. The posttransplant course was complicated by graft-versus-host disease of the gastrointestinal tract and skin, cytomegalovirus viremia, BK virus-associated hemorrhagic cystitis, varicella-zoster dermatitis, polymicrobial bloodstream infections, and hypogammaglobulinemia. He was being treated with oral budesonide, ruxolitinib, and tacrolimus for graft-versus-host disease. The patient was a former smoker with a 5 pack-year smoking history and no other primary lung diseases such as chronic obstructive pulmonary disease, bronchiectasis, emphysema, or pulmonary fibrosis. His pulmonary function tests showed normal spirometry and lung volumes with mildly reduced DLCO (carbon monoxide diffusing capacity). One month prior to this presentation, a computed tomography (CT) scan of his chest showed multifocal pulmonary nodules suggesting invasive fungal infection. The patient was started on AmBisome and isavuconazole and scheduled for bronchoscopy with TBB. Multiple TBBs were performed in the lateral-basal segment of the right lower lobe of the lung. The procedure was performed without any complications, but no infectious etiology was identified, and the patient was discharged.
When he presented again to the hospital with worsening shortness of breath, a repeat chest CT was ordered. In comparison with the CT scan from 1 month earlier, it showed variably changed pulmonary opacities, with new or increased areas of more focal consolidation in the right upper lobe (Figure 1A) and left lower lobe (Figure 1B), and with decreased nodular and ground-glass opacities elsewhere in the lungs. We repeated the bronchoscopy with TBB, obtaining a total of 7 biopsies from the lateral-basal segment of the left lower lobe of the lung. A chest X-ray (CXR) obtained 35 minutes after the procedure was negative for pneumothorax (Figure 2). Twenty-two hours postprocedure, the patient complained of acute-onset left-sided pleuritic chest pain. CXR revealed a small apical pneumothorax that remained unchanged on serial CXR evaluations, and the patient was discharged home. A follow-up CXR performed the following day on an outpatient basis revealed a worsening pneumothorax (Figure 3), and the patient was readmitted for further management. A 12-French pigtail chest tube was placed under ultrasound guidance, leading to resolution of the pneumothorax. The patient was discharged home after a hospital stay of 4 days.

Discussion

Pneumothorax is an uncommon but known complication of TBB. Rarely, it can present as a delayed complication; however, the time interval to its presentation varies remarkably. 2,3 Narula et al described a case in which the patient presented with respiratory distress and was diagnosed with iatrogenic pneumothorax secondary to TBB as late as 7 weeks after undergoing the procedure. 8 Their patient had a past medical history of germ-cell tumor and persistent pulmonary infiltrates refractory to multiple courses of antibiotics. Kwan et al illustrated another similar case in which the patient (18 months post-lung transplant) developed a delayed pneumothorax 5 months after TBB. 9 Table 1 provides an overview of reported cases of delayed pneumothoraces following transbronchial biopsies. The exact mechanism behind this delayed occurrence is yet to be determined. Levy et al hypothesized that the TBB procedure results in the formation of a bronchopleural fistula that is sealed by a temporary fibrin clot. When this fibrin plug undergoes fibrinolysis gradually over days, it can cause a delayed pneumothorax as air egresses through the defect. 3 However, no concrete evidence exists to support this hypothesis. Other proposed mechanisms and associated risk factors known to contribute to the development of delayed pneumothorax include absence of emphysematous changes in the lung parenchyma, persistence of a tissue flap after biopsy (obstructing the air flow), and microbial seeding through the puncture site. 1,9 Interestingly, the majority of these delayed presentations of pneumothorax are attributed to altered immune mechanics (typically seen in immunocompromised patients) and poor wound healing, as repeatedly evidenced by their occurrence in organ transplant recipients and tuberculosis patients. 3,9,10 The experience gained from our patient also suggests that immunologically weak patients are more prone to this serious complication of TBB. The guidelines laid down by the British Thoracic Society recommend obtaining a post-biopsy CXR in symptomatic patients and advising patients of the potential delayed complications of TBB. 11
Ahmad et al recommend that a 1-hour observation after TBB prior to obtaining a CXR may be reasonable in outpatients, although complications rarely arise in this subset. 12 Izbicki et al, based on their level of evidence, suggest that performing CXRs in asymptomatic patients after routine TBB adds minimal diagnostic value and that avoiding them can be considered safe. 4,13 However, there are no strict guidelines that mandate a specific observation duration or define "high-risk" patient characteristics so as to minimize the occurrence of delayed pneumothoraces. We are of the opinion that physicians should remain cautious about patients undergoing transbronchial biopsies; moreover, patients should be educated about the possible risk of delayed pneumothorax and encouraged to seek immediate medical attention should they develop symptoms.

Declaration of Conflicting Interests

The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Funding

The author(s) received no financial support for the research, authorship, and/or publication of this article.

Ethics Approval

Our institution does not require ethical approval for reporting individual cases or case series.

Informed Consent

Verbal informed consent was obtained from the patient's mother for anonymized patient information to be published in this article because the patient had expired.
Four new endemic species of Nolana (Solanaceae-Nolaneae) from Arequipa, Peru

Abstract

In preparation of a monographic treatment for Nolana L. ex L. f. (Solanaceae-Nolaneae), four new species are described from the department of Arequipa, southern Peru: N. bombonensis Quip. & M. O. Dillon, prov. Islay, district of Punta de Bombon, Lomas de Alto La Punta; N. callae Quip. & M. O. Dillon, prov. Islay, district of Punta de Bombon, Lomas de Jesus; N. quicachaensis Quip. & M. O. Dillon, prov. Caraveli, dist. Quicacha; and N. tricotiflora Quip. & M. O. Dillon, prov. Camana, dist. Quilca, Lomas de Quilca. These species are diagnosed, described, illustrated, and compared to their nearest geographic neighbors in southern Peru. To aid in recognition, a key to the Nolana species reported from Arequipa is provided.

Keywords: Nolana, Nolaneae, new species, department of Arequipa, Peru, Solanaceae.

The area occupied by Nolana species in the department of Arequipa is an arid strip bordering the Pacific Ocean, ca. 500 km long and ca. 50 km wide, an area of approximately 25,000 km² (Dillon, 1997; Dillon et al., 2003). The desert is essentially continuous, but there are changes in physiognomy and discontinuities created by intervening river valleys. The major rivers dissecting the coast are the Río Ocoña, Río Camaná, Río Quilca, Río Tambo, and Río Osmore. The distribution of the 32 species recorded from the department includes some taxa with wide distributions, occurring essentially throughout, for example Nolana spathulata Ruiz & Pav., or the aforementioned species recorded from northern Chilean localities (i.e., N. adansonii, N. gracillima, N. lycioides), but most are narrow endemics.

The remarkable species diversity in Nolana has been stimulated by ecological changes within the study area, driven by both long-term (glacial cycles, ~15,000 years) and short-term (ENSO events, ~15 years) causational phenomena (Dillon et al., 2003). Current-day distributions belie the dynamic history of the coastal region over the last 4 my (Dillon et al., 2009). It is assumed that taxa are products of allopatric speciation models, where isolation plays an important role in ensuring geographic fidelity in breeding populations. Today, instances of sympatry at specific localities are deemed to be the result of transport of mericarps downslope by rain and wind. While rare, the periodic coastal rains move mericarps and provide germination opportunities.
Nolana is easy to identify as a genus by its unique mericarp fruits (Knapp 2002); however, species delimitations are open to interpretation, and the number of accepted species has varied widely (Mesa, 1981; Dillon et al., 2009). This variation in the number of species recognized is due to widespread homology in easily observed characters and the dramatic loss of discriminant characters observable only in living material. Upon drying, Nolana specimens lose characters; i.e., working with herbarium or dried material is much more difficult. In this study and in the preparation of a monograph, virtually all taxa have been examined and photographed in the living state. Recent discoveries during field studies in the Department of Arequipa have led to the recognition of four new species considered morphologically distinct and geographically circumscribed.

As illustrated in Dillon et al. (2009), members from three clades are represented in Arequipa, i.e., Clades D, E, and F. It is surmised that the four species described here would all fall within Clade F. Clade F was recovered as a well-supported but poorly resolved group (Dillon et al., 2009), with 27 species confined to Peru and one Chilean species, N. intonsa. Of the new species proposed here, only one has been included in phylogenetic analyses (Dillon et al., 2007; 2009; Tu et al., 2008). Comments concerning phylogenetic relationships are largely postulated from comparative morphology.

Materials and methods

Descriptions were made from both living material encountered during field studies and dried herbarium specimens deposited in HSP, F, and USM. All acronyms follow those in Index Herbariorum (http://sweetgum.nybg.org/science/ih/). Conservation status was assigned using IUCN criteria (2017) combined with field observations and geographic distribution based on herbarium specimens.

We utilize the "morphological cluster" concept in the recognition of species in Nolana (see Mallet, 1995), defined as "assemblages of individuals with morphological features in common and separate from other assemblages by correlated morphological discontinuities in a number of features". In addition to the diagnoses provided for the new species, specific characters useful in the recognition of species are detailed in the Key to Species of Arequipa.

Diagnosis

Nolana bombonensis is most similar to N. volcanica and differs from that species in its cinereous habit, oblong densely lanuginous leaves, unequal calyx lobes that are apically obtuse or blunt, and pale lilac or light lavender corollas lacking prominent dark-purple nectar guides in the inner throat.

Etymology

The specific epithet is derived from the geographic area of Punta de Bombón, near the town of Cocachacra in the southern Department of Arequipa.

Distribution and ecology

Nolana bombonensis has been recorded from several locations south of Punta de Bombón, Department of Arequipa (Figure 4), at the mouth of the Río Tambo. The type locality is the most northwestern population, with additional localities extending about 30 km to the south along a low strip of land near the ocean.

Putative relationships

Nolana bombonensis is distinctive among its congeners in Peru with its dense, gray, tomentose pubescence and light lavender corollas. It is apparently a narrow endemic restricted to a small, environmentally distinct habitat, and sympatric at some localities with other Nolana species, e.g., N. adansonii, N. pilosa, N. spathulata, and N. thinophila I. M. Johnst. When this plant was first encountered in 2003, it was mistaken for N. volcanica,
a species originally described from above Mollendo, i.e., the Lomas of Yuta (Quipuscoa et al., 2016). When detailed sampling more clearly defined the range of phenotypic variation in N. volcanica, the population at Punta de Bombón was deemed distinct.

This species was included in the molecular studies under the name N. volcanica (Quipuscoa et al. 2930), and its relationships were with other southern Peruvian species. Utilizing a variety of DNA markers, N. volcanica was recovered with congeners: GBSSI sequences (Dillon et al., 2007) found N. lycioides as its sister taxon; the LEAFY second intron (Tu et al., 2008) recovers it in a clade with N. cerrateana, N. intonsa (Chilean), and N. lycioides; and a variety of chloroplast markers recovered it in an unresolved clade with over a dozen other species from Arequipa (Dillon et al., 2009).

The growth form is not unique among Nolana, but the character of dense, gray, tomentose pubescence is not common in the genus. Among Peruvian taxa, only Nolana tomentella Ferreyra shares this character. There are tomentose taxa found in Chile, e.g., Nolana diffusa I. M. Johnst., N. tocopillensis (I. M. Johnst.) I. M. Johnst., N. sedifolia Poepp., and N. villosa (Phil.) I. M. Johnst.; however, these species are very different in their floral and vegetative morphology and have no clear relationships with any Peruvian species (Dillon et al., 2009).

Conservation status

Critically Endangered (CR); overall distribution <10 km² (CR) and perhaps <250 individuals. See IUCN (2017) for explanation of measurements. Agriculture and poultry farming are expanding rapidly in this area, severely impacting coastal ecosystems; the future of this and other plants is very uncertain.

Notes

Nolana bombonensis was initially confused with Nolana volcanica Ferreyra (1960), a species based upon a collection by Ms. Dora B. Stafford (holotype: K000532281) from a locality ca. 40 km north of the Río Tambo. That collection was gathered from the quebrada above Mollendo at ca. 600 m (2000 ft), from habitats of "sand and volcanic ash" in the Lomas of Yuta. Sampling N. volcanica throughout its range and over a period of years showed that the density of pubescence is variable, with glabrescence typical. The floral morphology and corolla coloration pattern of Nolana volcanica are significantly different from those of N. bombonensis. In contrast to N. bombonensis, N. volcanica comprises spreading perennials appearing green, with flowers having attenuate calyx lobes and shorter, pale blue corollas with a dark purple band and nectar guides within the throat.

Etymology

The specific epithet is dedicated in homage to the professor of Botany of the National University of San Agustin de Arequipa, Abraham Calla Paredes, for his dedication to the teaching of algae and shared friendship over many years.

Distribution and ecology

Nolana callae is considered endemic to Arequipa and is restricted to dry, rocky slopes at the lower part of the Lomas de Jesus, between Punta de Bombón and Ilo (Figure 4). To date, it has been recorded only from the type locality, together with Nolana adansonii, N. bombonensis, N. spathulata, and Solanum peruvianum L. It was discovered in disturbed roadside localities and, with continued exploration, it is anticipated that the known distribution may be expanded upslope.

Putative relationships

Nolana callae has not been included in phylogenetic analysis, and its putative relationships are here based upon comparative morphology and distribution. It has similarity with N. cerrateana,
sharing its habit and lanuginous leaves; however, N. cerrateana has longer pedicels to 50 mm, more fasciculate leaves, a calyx with purple coloration, and 10-14 mericarps.

Notes

Nolana callae most closely resembles N. cerrateana, a species from the area of Camaná, further north in Arequipa; however, it also shares some superficial similarity with N. intonsa I. M. Johnst. from northern Chile. These species also have a prominent dark band and nectar guides in the throat of the corolla (Figure 6B).

Etymology

The specific epithet is derived from the geographic area of Quicacha, near the town of Cháparra in the northern Department of Arequipa.

Distribution and ecology

Nolana quicachaensis is known only from the type locality, between the towns of Caramba and Quicacha (Figure 4). It was found growing between granitic rocks in the lower part of south-facing slopes. Associates included members of desert vegetation such as species of Cactaceae (Melocactus, Cumulopuntia, Weberbauerocereus), Asteraceae (Helogyne, Baccharis), and annual grasses.

Putative relationships

Nolana quicachaensis has not been included in phylogenetic analysis, and its putative relationships are here based upon comparative morphology and distribution. It has similarity with N. lycioides but differs in a range of characters.

Conservation status

Critically Endangered (CR); overall distribution <10 km² (CR) and perhaps <250 individuals. Before a rational status can be determined, further studies in the area are needed to determine population size and distribution.

Diagnosis

Nolana tricotiflora differs from all other members of the genus by a unique combination of characters: erect, crooked, woody trunks to 50 cm tall; numerous spirally arranged leaves; and terminal, three-branched, scorpioid cymes.

Etymology

The specific epithet refers to the inflorescence of three terminal, lax scorpioid cymes.

Putative relationships

Nolana tricotiflora has not been included in phylogenetic analysis, and its putative relationships are difficult to establish from comparative morphology and distribution.

Notes

Nolana tricotiflora possesses a combination of characters not met in any other member of the genus. No woody species approaches its overall habit, with crowded cauline leaves and very long villous pubescence with glandular apical cells. Most unusual is the inflorescence of three-branched, weak scorpioid cymes with large flowers. In the majority of Peruvian Nolana species, the flowering stems are unmodified and flowers are borne solitary in leaf axils. The only exceptions are southern Peruvian species with flowering stems modified into recognizable inflorescences with modified bracts subtending individual flowers, but arising from a basal rosette of modified leaves: N. inflata and N. weissiana. In N. scaposa, the condition reaches its maximum development, where the inflorescence is a modified branch with subtending floral bracts. None of these species remotely resembles N. tricotiflora.

Table 1. Alphabetical list of accepted names and authorities, distribution, and phylogenetic position as suggested by membership in clades of Nolana in South America. Membership in clades is adapted from Dillon et al. 2019. [* designates distribution recorded from the Department of Arequipa]
Issues on Applying Knowledge-Based Techniques in Real-Time Control Systems

At present, knowledge-based systems are used in almost all aspects of life. The main reason for trying to use knowledge-based systems in real-time control is to reduce the cognitive load (overload) on users, and their application proves important when conventional techniques have failed or are not sufficiently effective [1]. The development of automated diagnosis techniques and systems can also help to minimize downtime and maintain efficient output. This paper presents some issues of applying knowledge-based systems to real-time control systems. It describes and analyzes the main issues concerning the real-time domain and provides possible solutions, such as a set of requirements that a real-time knowledge-based system must satisfy. The paper proposes a possible architecture for applying knowledge-based techniques in real-time control systems. Finally, a way of employing knowledge-based techniques for extending the existing automatic control and monitoring system of the geothermal plant at the University of Oradea is presented.

Introduction

Real-time systems generally consist of a series of complex, heterogeneous, and critical processes. They are also closely coupled systems, consisting of a physical system part and a control computer part. While the physical part reacts to the control signals from the computer, the control computer part and its software must interact with the dynamic properties of the physical part. Real-time systems are also, by definition, reactive systems. To increase their efficiency, different monitoring programs, tools, algorithms, and rules can be utilized. Generally, these programs are used for detecting abnormal behaviors, tracing workflow progress, and generating alerts and reports during the different phases of the system.

However, using knowledge-based techniques in real-time applications represents a major challenge for several reasons [10]: problems related to time representation and reasoning about time; problems related to deadlines, because a knowledge-based system should provide the best solution within a given deadline; problems related to handling asynchronous events, which could interrupt the inference process; and problems related to integrating conventional real-time programming with knowledge-based programming.

Moreover, the specific nature of real-time systems, which implies interaction with an external physical system, also implies specific features when knowledge-based components are included [7]: the knowledge-based decision-making system is tightly coupled to the external system; the knowledge-based decision-making system should itself be a real-time system, to ensure that decisions are made before deadlines; it requires both knowledge of control and knowledge of the external physical system, each having a specific form; and, when dealing with complex real-time systems consisting of several sub-components, decision making needs to be based on distributed knowledge.
Knowledge-based and real-time control technologies are complementary rather than competitive technologies. Control technologies are generally oriented to quantitative processing, while knowledge-based systems integrate both qualitative and quantitative processing [6]. Separating the description (the knowledge) of a process from the control algorithm allows knowledge to be more explicit, visible, and analyzable, instead of being hidden inside procedural programming code.

Knowledge-based systems can be used for different purposes in real-time process control; the main domains of their applicability include [5][9]: fault diagnosis, which implies detection, cause analysis, and repetitive problem recognition; complex control schemes; process and control performance monitoring and statistical process control; real-time Quality Management (QM); and control system validation. For real-time control systems, an important issue is their capability for fault detection and diagnosis, because availability and productivity can be significantly improved by shortening downtime. Moreover, because personnel's observations may be incomplete or wrong, leading to incorrect diagnoses, intelligent system approaches need to be investigated and applied.

Knowledge-based techniques in real-time applications

Real-time applications can have different structures, and consequently, different approaches for using knowledge-based techniques can be employed. One typical structure of real-time control systems stems from the need to meet the quest for automation and flexibility in complex manufacturing systems and is based on PLC (Programmable Logic Controller) usage. Such real-time control system architectures are widespread because PLCs offer an adaptable and modular solution to the control problem. However, there are some shortcomings of this approach, generated by the PLC's inflexible programming system, which does not support automatic analysis of logic circuits in order to look for a fault. Even if some diagnosis functions are available in today's modern PLCs, their usage is limited and needs to be extended. Consequently, developing a knowledge-based system for diagnosis purposes could represent a solution for implementing automated diagnosis techniques in complex manufacturing real-time systems. In order to create efficient solutions, several specific issues should be considered: knowledge representation and acquisition, real-time reasoning, knowledge validation, and integration with real-time software [11][12].
Knowledge representation and acquisition

The main reason for using real-time knowledge-based systems is to reduce the cognitive load on users. Therefore, such systems require a knowledge representation that integrates several kinds of knowledge taken from several sources: analytical models developed using differential equations, material or energy balances, or overall process behavior kinetics. Generally, each object can have a behavior that is represented by a combination of analytic (model-based) and heuristic (rule-based) statements. In PLC-based systems, the automatic monitoring elements available for faults are represented by discrete state signals in the PLC memory. These signals indicate different operating states of the controlled plant, and based on their values further diagnosis can be carried out. The values can be obtained by accessing PLC memory via a link from the computer on which the diagnosis system is implemented. The diagnosis system must then use specific reasoning algorithms to search all possible fault causes with the help of relevant knowledge and real-time data. Therefore, the knowledge acquisition task is very important and can be done in two ways: artificial knowledge acquisition and model-based knowledge acquisition [3].

Artificial knowledge acquisition is based on knowing specific facts about the controlled plant. For example, in each plant there are several alarms whose purpose is to protect the plant's equipment or prevent the plant from operating in error conditions. These alarms are normally indicated by one PLC signal or a combination of PLC signals, for example: temperature too high, pressure too low, or a combination of these. Model-based knowledge acquisition is based on knowledge gained during modeling of the system behavior and construction of the PLC program, resulting in improvement of knowledge acquisition and diagnosis efficiency.

Let S denote the space representing the fundamental set of all possible configurations of the control system variables; a specific configuration set s is an element of this space. Let B denote the behavior space, the fundamental set of all determinable behavioral attributes. Every subset F of B that comprises a specific required profile for faulty behavior can be expressed by a combination of PLC signals associated with the corresponding control system variables and their specific states [2]. For every fault defined in the faulty-behavior profile, all possible sources generating the fault are considered and specified; for a specific fault occurrence from the fault space F, a device mapping inferring the functional relationship is considered, where D represents the device space and C_relevant represents the relevant cause space. The relationship between the behavior space B, the configuration space S, and the required profile for faulty behavior F is presented in Figure 1. Based on an evaluation mapping between a specific behavior and the profile of faulty behavior, if a match is found then the corresponding fault can be identified together with all possible causes.
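To make the formalism concrete, the sketch below models a faulty-behavior profile as a required combination of PLC signal states and maps each matched fault to candidate devices and causes. This is a minimal illustration of the idea under assumed data structures; the signal names, faults, devices, and causes are hypothetical, not taken from the paper:

```python
# Minimal sketch of fault-profile matching against a PLC signal snapshot.
# Signal names, faults, devices and causes are illustrative assumptions.

# A fault profile: the signal states that must hold for the fault to be present.
FAULT_PROFILES = {
    "overheating": {"temp_high": True, "pump_running": True},
    "low_pressure": {"pressure_low": True, "valve_open": True},
}

# Device/cause mapping: fault -> (devices, relevant causes).
FAULT_SOURCES = {
    "overheating": (["heat_exchanger", "pump_P3"], ["fouled exchanger", "cooling failure"]),
    "low_pressure": (["pump_P4", "valve_V1"], ["pump wear", "leak in piping"]),
}

def diagnose(snapshot):
    """Return (fault, devices, causes) for every profile matched by the snapshot."""
    findings = []
    for fault, profile in FAULT_PROFILES.items():
        if all(snapshot.get(sig) == state for sig, state in profile.items()):
            devices, causes = FAULT_SOURCES[fault]
            findings.append((fault, devices, causes))
    return findings

# Example snapshot as it might be read from PLC memory.
snapshot = {"temp_high": True, "pump_running": True,
            "pressure_low": False, "valve_open": True}
for fault, devices, causes in diagnose(snapshot):
    print(fault, "->", devices, causes)
```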
Knowledge validation

In traditional real-time control systems, the control problem and its implementation through control algorithms are based on exact knowledge of the controlled plant, usually determined from the plant's mathematical model. Real-time knowledge-based control systems combine the analytical process model with conventional process control while reasoning about current, past, and future situations in order to assess ongoing developments and plan appropriate actions. Such systems allow the application to be structured into a model that is capable of behaving and using its reasoning when taking a decision, as human specialists do. Generally, a full control strategy requires not only variable (parameter) identification, state estimation, and control, but also checking the validity of the data and process models before they are used in estimations. However, there is a relatively high degree of uncertainty concerning the plant, starting from the mathematical model itself: there is no a priori knowledge of some parameters (for example, parameters for achieving the stability conditions of a feedback control), or the plant behavior may not be deterministic [4]. An important concern when using knowledge-based techniques for real-time control systems is the need to validate the system's knowledge, i.e., to determine whether it accurately represents an expert's knowledge in the particular domain. Here, simulation, when available, can be a very useful tool that provides a general overview of the system dynamics.

Integrating knowledge-based software with real-time software

Real-time control software and rule-based software differ in their underlying execution models. Procedural software generally uses an imperative model in which the software engineer determines the sequence of actions, while rule-based systems follow a general control scheme of matching, selecting, and executing rules. Consequently, a knowledge-based real-time diagnosis system should be thought of as an extension of the existing control software that interacts both with a knowledge database constructed using artificial and model-based acquisition and with the real-time data acquired from PLC source code execution. A possible structure is presented in Figure 2 (structure of a diagnostic system for a real-time control system).

Diagnostic reasoning is based on the knowledge base as well as real-time data from the real-time database. The reasoning mechanism is based on the logic control of faults. Thus, it uses the logical expression for faults, in which each term represents a possible cause of a fault indicated by a specific PLC signal. By comparing the fault state with the current state of the signals in the PLC's real-time database, the occurrence of a fault state can be identified. Furthermore, by using the system's profile mapping to devices and associated causes, the devices and causes associated with the fault can be reported.
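The match-select-execute cycle mentioned above can be sketched in a few lines. This is a toy forward-chaining loop, not the paper's implementation; the rule and fact names are invented for illustration:

```python
# Toy forward-chaining loop illustrating the match/select/execute rule cycle.
# Rules: (name, condition over the fact set, action producing new facts).
rules = [
    ("raise_alarm", lambda f: "temp_high" in f, lambda f: f | {"alarm"}),
    ("log_alarm",   lambda f: "alarm" in f,     lambda f: f | {"logged"}),
]

facts = {"temp_high"}
fired = set()
while True:
    # Match: collect rules whose conditions hold and that have not fired yet.
    agenda = [(n, a) for n, c, a in rules if c(facts) and n not in fired]
    if not agenda:
        break
    # Select: simple strategy, take the first applicable rule.
    name, action = agenda[0]
    # Execute: apply the rule's action and record it.
    facts = action(facts)
    fired.add(name)

print(facts)  # {'temp_high', 'alarm', 'logged'}
```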
Real-time reasoning

The first attempts at using knowledge-based systems for real-time process control involved static expert systems that take a snapshot of plant data. Static expert systems use pattern matching over a set of facts and rules. With no time constraints, this approach proves practical, but when time constraints come into the picture it may not be a good idea. The elements that should be considered in this situation are temporal reasoning and responding within a given response time.

In the controlled system, variables such as temperatures and pressures vary in time; therefore, a time representation, together with the possibility of reasoning about time, is essential. Thus, additional time information should be attached to the elementary entities of the knowledge base. Also, the rules defined in the expert system may be extended with specific temporal-extended rules, such as: operators like "always", to formulate the premise of a rule; qualitative statements that refer to the relation of time points or time intervals without exact time specification ("earlier", "after", and similar); and quantitative statements that allow expressing conditions with specification of an exact point in time, for example at 12:00:00 p.m.

Generally, the basic characteristic of a system that guarantees a certain response time is its determinism. Knowledge-based systems are by their nature non-deterministic, because the time of inference depends on the given situation. If the imposed real-time deadlines are shorter than the maximum search time needed for a certain inference, the response time requirements cannot be met. In order to meet those requirements, three strategies can be applied: implementing algorithms to quantitatively estimate the maximum search time, reducing the inference search time, or defining an embedded diagnosis approach that integrates the diagnosis models into the PLC control program so that faults can be diagnosed in real time.

Including all diagnosis in the PLC also has disadvantages, as it creates a much more complex control program. Integrating inference rules into the control program itself complicates the rules and makes the introduction of new rules more difficult. In addition, the process of integrating the different knowledge and information of these systems is tedious. A mixed approach could be a better solution: only for critical faults should the diagnosis part with corrective actions be included in the PLC, with the rest remaining in the charge of the knowledge-based system. In the next section, a way of employing knowledge-based techniques for extending the existing automatic control and monitoring system of the geothermal plant at the University of Oradea is presented.

The automatic control and monitoring system structure

The automatic control and monitoring system of the geothermal plant at the University of Oradea is an example of a real-time control system that has been developed using a combination of a PLC (Programmable Logic Controller) and a PC (for the user interface and supervisory control). From the structural point of view, the controlled plant is composed of 3 parts: the well station, the pump station, and the heat station.
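As a small illustration of the temporal extensions discussed above, the sketch below time-stamps facts and evaluates an "always within the last N seconds" rule premise over them. The fact names, values, and window length are illustrative assumptions, not part of the paper's system:

```python
import time

# Facts stored as (timestamp, name, value); names and values are illustrative.
history = []

def assert_fact(name, value, ts=None):
    history.append((ts if ts is not None else time.time(), name, value))

def always(name, predicate, window, now=None):
    """True if predicate held for every observation of `name` in the last `window` seconds."""
    now = now if now is not None else time.time()
    samples = [v for t, n, v in history if n == name and now - t <= window]
    return bool(samples) and all(predicate(v) for v in samples)

# Simulated temperature readings over the last few seconds.
t0 = 1000.0
for i, temp in enumerate([71.0, 72.5, 73.1]):
    assert_fact("temperature", temp, ts=t0 + i)

# Rule premise: temperature always below 75 degC during the last 10 s.
if always("temperature", lambda v: v < 75.0, window=10, now=t0 + 3):
    print("premise holds: temperature stayed below the limit")
```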
The system functions in the following way: first, the geothermal water is extracted from the well station using a deep-well pump if the necessary flow rate is greater than the artesian one; the water is then stored in a reservoir tank, which acts as an accumulator and also separates the production network from the distribution network. From the reservoir tank the water is pumped through the pump station to the heat station, where it is not used directly but through 4 heat exchangers; the water that comes out of these heat exchangers flows into the distribution network and heats the university campus buildings.

The control system consists of a PLC controlling the geothermal heating system based on a control program embedded in the controller, connected to a PC that hosts the operator's user interface, implemented using Wonderware InTouch software [15]. The InTouch display management subsystem handles display call-up, real-time display update, data entry, and process schematics. It also maintains the PLC real-time and historical database, which can be used to follow the time evolution of certain parameters or for statistical calculations. The real-time database includes maintenance of historical data in addition to current values of process variables. In the current implementation, the decision process for finding possible solutions to several faults, such as sensor and other equipment faults, involves switching the plant operation into a safety mode and tasking the operator with tracing the fault, its effects, and related causes. Also, the PLC control program deals with the diagnosis of only a few critical faults. Based on the current situation of the geothermal plant and current research in the domain, a proposal for extending the existing control system with a knowledge-based system is developed and described further in this paper.

Analysis phase.
Knowledge acquisition and validation

The main issue when constructing a knowledge-based system is the way the description (knowledge) is built up in accordance with the plant's behavior and structure. Consequently, the development of a knowledge-based system for the existing plant and control system implies selecting the most important characteristics of the system, which will be used to construct the knowledge database. Moreover, there is the need to validate the system's knowledge, that is, to determine whether it accurately represents an expert's knowledge in the particular domain. Here, the simulator developed for the geothermal plant of the University of Oradea proves to be of great help. The simulator, developed previously, provides a simplified physical model of the plant dynamics together with the PLC control, formulated as an easier-to-operate computer simulation: the ACSL solution to the model equations. Key elements of the improved ease of operation are the use of the general-purpose simulation language ACSL (Advanced Continuous Simulation Language) and pre-programmed modules of all important plant components, including control elements [16]. In the development phase of the control system, the simulator provided a useful tool for testing the system specifications, including the adopted control strategy; it can also be employed in the process of step-by-step knowledge acquisition and for further validation and updating. Based on information gathered from simulation and from the control program development, knowledge acquisition can be achieved. For example, if we refer to possible faults, a structured, table-based notation can be used for knowledge acquisition, as shown in Table 1.

Design phase. Knowledge representation

Knowledge acquisition is the most important part of developing a knowledge-based system, but the way knowledge representation is done is also an important point. Representation is tied to the production rules, which should be expressed according to the knowledge system that is used. Current knowledge-based industrial systems are generally built within shells, which package a combination of tools. Different shells may include different features useful for real-time control applications, such as: hierarchies for objects; associative knowledge relating objects in the form of connections and relations; rules and an associated inference engine; analytic knowledge, such as functions, formulas, and differential equation simulation; real-time features such as time stamping and validity intervals for variables; history-keeping; and a run-time environment. The current literature presents several expert systems that can be used for real-time process control, the best known being G2 and JESS.

The G2 real-time expert system [13] allows the integration of models and rules, combining model-based and artificial knowledge representation, and is based on an inference engine that can use generic forms of knowledge, interpreted for specific instances in the domain. It is specifically designed for process control and related applications and allows the process engineer to implement and manage the expert system. However, even though G2 is claimed to be real-time, there is no mention of verifiability from the temporal point of view.
The Java Expert System Shell, or JESS [13], is inspired by the artificial intelligence production rule language CLIPS and is a fully developed Java API for creating rule-based expert systems. Even if it is architecturally inspired by CLIPS, it exhibits a LISP-like syntax. It consists of three components: the rules (knowledge base), the working memory (fact base, corresponding to the real-time base), and an inference engine (rule engine). JESS uses the Rete ("ree-tee") algorithm to match patterns. The Rete algorithm is not time-predictable; two newer algorithms, Treat and Leaps, have introduced some optimizations from this point of view, but they are likewise not time-predictable, although, with some restrictions, they could be made time-predictable.

Consequently, we chose to develop a solution based on JESS for our system. Thus, a knowledge model can be constructed starting from observing the main problems associated with every specific item (for example, pumps P3/P4 from the pump station) in the operational system and creating the model (Table 2) that correlates problems and items that can be solved by the expert system (in the tables, NF stands for Not Found). JESS's basis of knowledge, composed of rules, can then be built from the frame definitions described previously. The slots in the JESS rule structure will be unified by pattern matching performed by the inference engine through the Rete algorithm [8].

Overall proposed architecture

The proposed modifications of the existing architecture are illustrated in Figure 3. An additional computer running the JESS expert system is connected to the PC on which the user interface was developed and on which the PLC historical and real-time databases reside. The historical and real-time databases are, in our situation, created and updated by InTouch in real time, based on the associated PLC inputs. In the proposed architecture, the main issue is the way collaboration between the components is achieved, because the PLC real-time database has a specific storage format defined by InTouch. A conversion from this format is needed in order to interact with the knowledge base that resides on the expert system's PC. One possibility is to use an XML file that stores all the tagnames of the faults, generated from the PLC database. The idea is to use the XML file to feed the knowledge base and to act as an intermediate level between the expert system and the PLC real-time database. This file can then be used as input by a JESS function that checks its values; afterwards, for all tagnames that are active (indicating a fault), the rule patterns are evaluated. Consequently, the collaboration of the architecture's components can be achieved by integrating the JESS engine with the real-time database through the XML file.

Several issues are not completely defined in our architecture. For example, the way in which the XML file is generated from the PLC real-time database is still in question. Also, after completing the implementation, the system performance should be evaluated in order to prove that the solution performs satisfactorily.
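To illustrate the intended data flow, the sketch below parses a hypothetical XML file of fault tagnames and collects the active ones, the step that would precede asserting facts into the JESS working memory. The XML layout and tag names are assumptions made for illustration; the paper does not specify the file schema:

```python
import xml.etree.ElementTree as ET

# Hypothetical XML snapshot of fault tagnames exported from the PLC database.
xml_snapshot = """
<faults>
  <tag name="TEMP_HIGH_P3" value="1"/>
  <tag name="PRESSURE_LOW_HX2" value="0"/>
  <tag name="FLOW_FAIL_WELL" value="1"/>
</faults>
"""

root = ET.fromstring(xml_snapshot)

# Collect tagnames whose value indicates an active fault; in the real system
# these would be asserted as facts into the JESS working memory.
active = [tag.get("name") for tag in root.findall("tag")
          if tag.get("value") == "1"]
print("active fault tags:", active)
```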
Conclusions and Future Works

Knowledge-based systems are making significant contributions to real-time process control applications. Their applications are often in areas that complement traditional process control technology, such as diagnosis and the handling of abnormal situations. They integrate knowledge-based techniques with conventional control, with significant benefits for overall quality management. However, a knowledge-based system operating in a real-time situation will typically need to respond to a changing environment involving an asynchronous flow of events and dynamically changing requirements, with limitations on time, hardware, and other resources. Determining how fast such a system can respond under all possible situations is a difficult problem that requires a flexible software architecture to provide the necessary reasoning over rapidly changing data. In this paper, various issues of applying knowledge-based techniques in real-time control systems have been presented; starting from this foundation, an implementation structure of a knowledge-based system for the existing automatic control and monitoring system of the geothermal plant of the University of Oradea was analyzed, and an overall architecture was proposed for further implementation. The proposed knowledge-based system approach focuses mainly on the general architecture and component collaboration. The architecture does not include time-constraint validation; it relies only on the performance of the JESS engine. A further solution for creating a specific expert system that includes temporal reasoning will also be investigated in the future.

Figure 3: The Knowledge System architecture

Table 1: Faults and associated switches in PLC (partial)

Table 2: Correlation model
SLOT      | ITEM                     | PROBLEMS
Situation | Engine (for pumps P3/P4) | Overheating, Vibration, Lack of voltage

Afterwards, for each identified problem, the possible causes are identified and associated, generating a knowledge model (Table 3).

Table 3: Knowledge model

After establishing the knowledge model, the design phase is dedicated to structuring the rules according to the requirements of the JESS inference engine. JESS uses the notion of frames, hierarchical representations that include several components (slots, facet, datum, and how).
Investigation of antifouling properties of polypropylene/TiO2 nanocomposite membrane under different aeration rates in a membrane bioreactor system

Introduction

Today, water scarcity is a serious problem all over the world due to the increase in population and the expansion of industrial activities [1]. Wastewater treatment and reuse therefore appear necessary. Among the various wastewater reuse and recycling processes, membrane-based technologies show great potential to overcome water scarcity [2]. In this context, membrane bioreactor (MBR) technology is widely used for the treatment of various municipal and industrial wastewaters because of its small footprint and the high quality of its effluent compared to other conventional wastewater treatment systems [3,4]. However, membrane fouling and prohibitively high cost compared with the more established conventional technologies are the major problems impeding the widespread adoption of MBR in full-scale plants [5][6][7].

According to the literature, two strategies, optimization of operational conditions and membrane modification, have been widely used to improve the antifouling properties of membranes in MBR systems. Among operational conditions, many studies have focused on the effect of aeration rate on membrane fouling. It is well known that the aeration intensity strongly impacts the mixed liquor organic matter fractions and correspondingly influences the membrane fouling rate [8]. Ivanovic et al. [9] showed the relationship between sufficient aeration and minimized membrane fouling; they also proposed an approach to define optimal operating conditions with respect to aeration rates. Meng et al. [10] investigated the effect of aeration rate on membrane fouling in a submerged MBR. They concluded that aeration had a positive effect on cake layer removal, but pore blocking became severe as aeration intensity increased to 800 L/h. In other words, under a low aeration rate, foulants on the membrane surface were not removed effectively, while high aeration could induce severe breakage of sludge flocs.

Among the membrane modification methods, nanocomposite membranes demonstrate promising performance and are expected to combine the intrinsic properties of both polymeric and inorganic membranes, giving the hybrid membrane interesting advantages such as great thermal and chemical resistance, good antifouling and separation performance, and excellent adaptation to severe operating conditions [11]. In this regard, various nanoparticles have been used to improve the antifouling properties of membranes in MBR systems. On the other hand, using a low-cost membrane material such as polypropylene (PP) can reduce MBR costs. PP is a good candidate for membrane preparation due to its high mechanical strength, high chemical stability, thermal resistance, and low cost [11,12]. Therefore, PP is a very promising material for separation membranes. Although PP membranes exhibit many advantages, they still have several disadvantages, such as low porosity, poor hydrophilicity, and high fouling [12,13]. These disadvantages reduce the water flux of PP membranes and limit their applications in wastewater treatment. Therefore, PP membrane modification is essential for use in MBR systems. For this purpose, as mentioned above, incorporation of hydrophilic nanoparticles into the polymer matrix is one of the effective methods to enhance membrane antifouling properties.

Among the various inorganic nanoparticles, titanium dioxide (TiO2) has received most of the attention because of its unique characteristics, such as stability under harsh conditions, commercial availability, and ease of preparation. In our previous work [13], a PP/TiO2 nanocomposite membrane was prepared via the thermally induced phase separation (TIPS) method and tested in an MBR system. In the current work, the effect of aeration rate on the antifouling properties of the PP/TiO2 nanocomposite membrane was investigated in an MBR system using oil refinery wastewater influents obtained from Tabriz Oil Refinery Co. In this case, the fouling mechanism as well as the antifouling performance of both fabricated membranes (neat and nanocomposite) was investigated.

Materials

Isotactic PP of commercial grade (EPD60R) was supplied by Arak Petrochemical Co., Iran. The melt flow index of the PP was 0.35 g/10 min. TiO2 nanoparticles (particle size of ca. 21 nm) were purchased from Sigma-Aldrich (Germany). Mineral oil as diluent, acetone as extracting agent, and Irganox 1010 as heat stabilizer were purchased from Acros Organics (Belgium), Merck (Germany), and Ciba Co. (Switzerland), respectively. All materials were used as received unless otherwise described.

Membrane preparation

The neat PP and nanocomposite membranes were fabricated by the TIPS method using a sealed glass vessel kept in a silicone oil bath. A certain amount of TiO2 nanoparticles (0.75 wt.%) with Irganox (1 wt.% of the solid phase) was dispersed into 60 g of mineral oil using bath sonication (Woson, China) for 60 min. Then, PP was added to the diluent-TiO2 suspension and melt blended at 170 °C for 90 min. The solution was then allowed to degas for 30 min and cast on a preheated glass sheet using a doctor blade with a film thickness of 250 μm. The plate was immediately quenched in a water bath (30 ± 3 °C) to induce phase separation. The membrane was then immersed in acetone for 24 h to extract the diluent.

Membranes characterization

The microscopic morphology of the neat PP and nanocomposite membranes was characterized by scanning electron microscopy (SEM) (VEGA3, TESCAN). The hydrophilicity of the membranes was evaluated by measuring the contact angle between the membrane surface and a water droplet using a goniometer (PGX, Thwing-Albert Instrument Co., USA). All reported contact angle data are the average of five different tests on each membrane sample. Membrane porosity was measured using the gravimetric method, in which the PP membranes were immersed in i-butanol for 24 h and then immediately weighed after removing the i-butanol from the surface. The porosity was calculated using the following equation [12]:

ε (%) = [(W_wet − W_dry)/D_i] / [(W_wet − W_dry)/D_i + W_dry/D_p] × 100

where W_dry is the initial membrane weight, W_wet is the membrane weight after 24 h of immersion in i-butanol, and D_p and D_i are the densities of PP (0.91 g/cm³) and i-butanol (0.8 g/cm³), respectively. The tensile strength of the membranes was analyzed with a tensile test machine (STM-5, SANTAM) at an extension rate of 50 mm/min. At least three measurements were carried out, and the mean value for each case is reported.

MBR set-up

In this study, a lab-scale submerged MBR (12 L working volume) was used. The flat-sheet membrane modules had a volume of 50 mL and an effective membrane filtration area of 14.7 cm². Fig. 1 shows the flat-sheet modules submerged in the MBR test system. An air diffuser was installed beneath the membrane module to provide dissolved oxygen as well as efficient agitation of the activated sludge in the MBR.
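For concreteness, the gravimetric porosity calculation in the equation above can be scripted directly. A minimal sketch, with made-up sample weights rather than measurements from the study:

```python
# Gravimetric porosity from the equation above; weights are illustrative, not measured data.
D_P = 0.91   # density of PP, g/cm^3
D_I = 0.80   # density of i-butanol, g/cm^3

def porosity(w_dry, w_wet):
    """Porosity (%) from dry weight and weight after 24 h immersion in i-butanol (g)."""
    pore_vol = (w_wet - w_dry) / D_I      # volume of imbibed i-butanol
    polymer_vol = w_dry / D_P             # volume of the polymer itself
    return 100.0 * pore_vol / (pore_vol + polymer_vol)

print(f"porosity = {porosity(0.120, 0.265):.1f} %")
```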
Among the various inorganic nanoparticles, titanium dioxide (TiO2) has received most of the attention because of its unique characteristics, such as stability under harsh conditions, commercial availability and ease of preparation. In our previous work [13], a PP/TiO2 nanocomposite membrane was prepared via the thermally induced phase separation (TIPS) method and tested in an MBR system. In the current work, the effect of aeration rate on the antifouling properties of the PP/TiO2 nanocomposite membrane was investigated in an MBR system using oil refinery wastewater obtained from Tabriz Oil Refinery Co. The fouling mechanisms as well as the antifouling performance of both fabricated membranes (neat and nanocomposite) were investigated.

Materials

Isotactic PP of commercial grade (EPD60R) was supplied by Arak Petrochemical Co., Iran. The melt flow index of the PP was 0.35 g/10 min. TiO2 nanoparticles (particle size of ca. 21 nm) were purchased from Sigma-Aldrich (Germany). Mineral oil as diluent, acetone as extracting agent and Irganox 1010 as heat stabilizer were purchased from Acros Organics (Belgium), Merck (Germany) and Ciba Co. (Switzerland), respectively. All materials were used as received unless otherwise stated.

Membrane preparation

The neat PP and nanocomposite membranes were fabricated by the TIPS method using a sealed glass vessel kept in a silicone oil bath. A given amount of TiO2 nanoparticles (0.75 wt.%) together with Irganox (1 wt.% of the solid phase) was dispersed in 60 g of mineral oil using bath sonication (Woson, China) for 60 min. PP was then added to the diluent-TiO2 suspension and melt blended at 170 °C for 90 min. The solution was allowed to degas for 30 min and cast on a preheated glass sheet using a doctor blade set to a film thickness of 250 μm. The plate was immediately quenched in a water bath (30 ± 3 °C) to induce phase separation. The membrane was then immersed in acetone for 24 h to extract the diluent.

Membranes characterization

The microscopic morphology of the neat PP and nanocomposite membranes was characterized by scanning electron microscopy (SEM) (VEGA3, TESCAN). The hydrophilicity of the membranes was evaluated by measuring the contact angle between the membrane surface and a water droplet using a goniometer (PGX, Thwing-Albert Instrument Co., USA). All reported contact angle data are the average of five tests on each membrane sample. Membrane porosity was measured by the gravimetric method: the membranes were immersed in i-butanol for 24 h and weighed immediately after removing the i-butanol from the surface. The porosity was then calculated using the following equation [12] (see the short computational sketch below):

ε (%) = [(W_wet − W_dry)/D_i] / [(W_wet − W_dry)/D_i + W_dry/D_p] × 100

where W_dry is the initial membrane weight, W_wet is the membrane weight after 24 h of immersion in i-butanol, and D_p and D_i are the densities of PP (0.91 g/cm³) and i-butanol (0.8 g/cm³), respectively. The tensile strength of the membranes was analyzed with a tensile test machine (STM-5, SANTAM) at an extension rate of 50 mm/min. At least three measurements were carried out and the mean value was reported for each case.

MBR set-up

In this study, a lab-scale submerged MBR (12 L working volume) was used. The flat-sheet membrane modules had a volume of 50 mL and an effective membrane filtration area of 14.7 cm². Fig. 1 shows the flat-sheet modules submerged in the MBR test system. An air diffuser was installed beneath the membrane module to provide dissolved oxygen as well as efficient agitation of the activated sludge in the MBR.
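As a brief aside, the gravimetric porosity relation given in the characterization section above can be implemented in a few lines. The following is a minimal Python sketch; the weight values in the example are hypothetical and serve only to illustrate the calculation.

```python
# Minimal sketch of the gravimetric porosity calculation described above.
# The example weights are hypothetical, for illustration only.

def porosity_percent(w_dry, w_wet, d_polymer=0.91, d_liquid=0.80):
    """Overall porosity (%) from dry/wet membrane weights (g) and the
    densities (g/cm^3) of the polymer (PP) and the wetting liquid (i-butanol)."""
    pore_volume = (w_wet - w_dry) / d_liquid   # volume of i-butanol held in the pores
    polymer_volume = w_dry / d_polymer         # volume occupied by the PP matrix
    return 100.0 * pore_volume / (pore_volume + polymer_volume)

# Hypothetical weights giving a porosity of the order reported for the neat PP membrane:
print(porosity_percent(w_dry=0.20, w_wet=0.28))   # ~31 %
```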
Transmembrane pressure (TMP) was kept constant at 0.1 bar. The mixed liquor suspended solids (MLSS) concentration was about 7000 mg/L. The hydraulic retention time (HRT) and sludge residence time (SRT) were maintained at 24 h and 20 days, respectively. Real wastewater with a chemical oxygen demand (COD) of 178 mg/L was supplied by Tabriz Oil Refinery Company (TZ.O.R.C), Tabriz, Iran. The literature shows that air injection reduces fouling in a submerged MBR up to a critical flow rate corresponding to a specific aeration demand per membrane area (SADm) of 0.25 m³/m² h [15,16]. In this study, the lowest SADm value was accordingly set to 0.5 m³/m² h. To examine the effect of aeration rate on membrane fouling, three SADm values were selected: 0.5, 1, and 1.5 m³/m² h.

Antifouling performance of membranes

The antifouling performance of the neat PP and PP/TiO2 (0.75 wt.%) membranes was evaluated by filtering activated sludge. After the pure water flux tests (J_w1, L/m² h), filtration experiments were carried out for 360 min at 0.1 bar and the activated sludge flux (J_AS, L/m² h) was measured. The membrane was then taken out, rinsed under running deionized water, and the pure water flux of the cleaned membrane, J_w2 (L/m² h), was measured again. The flux recovery ratio (FRR) was calculated as:

FRR (%) = (J_w2 / J_w1) × 100

Furthermore, the antifouling properties of the membranes were also evaluated by the total fouling ratio (TFR), reversible fouling ratio (RFR) and irreversible fouling ratio (IFR) according to the following equations [4,17] (a short computational sketch of these metrics is given after the morphology discussion below):

TFR (%) = (1 − J_AS / J_w1) × 100
RFR (%) = ((J_w2 − J_AS) / J_w1) × 100
IFR (%) = ((J_w1 − J_w2) / J_w1) × 100 = TFR − RFR

COD removal was estimated by measuring the COD of the effluent (COD_E) and influent (COD_I) by the absorbance method described elsewhere [18], using the following equation [18]:

COD removal (%) = (1 − COD_E / COD_I) × 100

Analysis of fouling mechanisms

According to Hermia's model, under constant-pressure filtration four fouling mechanisms responsible for flux decline can be described by the following equation [19]:

d²t/dV² = k (dt/dV)^m

where t (h) is the filtration time, V (m³) is the filtrate volume and k is the resistance coefficient. The specific form of each fouling mechanism is characterized by the value of m: m = 0 for cake filtration, m = 1 for intermediate blockage, m = 1.5 for standard blockage and m = 2 for complete blockage [20]. Using the flux expression (Eq. (9)),

J = (1/A) dV/dt,   (9)

the flux decline can be expressed by Eq. (10):

−dJ/dt = k A^(2−m) J^(3−m).   (10)

In Eqs. (9) and (10), A is the effective membrane area (m²).

Membrane morphology

SEM images of the surfaces of the neat PP and PP/TiO2 (0.75 wt.%) membranes are shown in Fig. 2a and b. With the addition of TiO2 nanoparticles, the nucleation density and porosity of the PP nanocomposite membrane increased compared with the neat PP membrane. In other words, the TiO2 nanoparticles acted as crystal nuclei: an appropriate TiO2 dosage increased the number of spherulites, decreased the size of the spherulites and of the cavities between them, and made the spherulites more uniform. Similar results have been reported elsewhere [11]. Fig. 2a and b also show that the addition of TiO2 nanoparticles increased the number and size of the pores on the membrane surface, which may be due to the heterogeneous nucleation effect of the TiO2 nanoparticles. Fig. 2c and d show SEM images of the membrane cross-sections, from which it can be seen that both membranes have a sponge-like porous and symmetric structure.
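Before turning to the quantitative results, here is a minimal Python sketch of the antifouling metrics and COD removal defined in the "Antifouling performance of membranes" section above. All flux inputs are hypothetical placeholders; only the influent COD of 178 mg/L comes from this study, and the effluent value in the example is invented for illustration.

```python
# Hedged sketch of the antifouling metrics (FRR, TFR, RFR, IFR) and COD removal.

def fouling_metrics(j_w1, j_as, j_w2):
    """Metrics (%) from the pure-water flux before fouling (j_w1), the
    activated-sludge flux (j_as) and the pure-water flux after simple
    water rinsing (j_w2), all in L/m^2 h."""
    frr = 100.0 * j_w2 / j_w1                # flux recovery ratio
    tfr = 100.0 * (1.0 - j_as / j_w1)        # total fouling
    rfr = 100.0 * (j_w2 - j_as) / j_w1       # fouling removed by water rinsing
    ifr = 100.0 * (j_w1 - j_w2) / j_w1       # fouling remaining after rinsing (= tfr - rfr)
    return frr, tfr, rfr, ifr

def cod_removal(cod_influent, cod_effluent):
    """COD removal efficiency (%)."""
    return 100.0 * (1.0 - cod_effluent / cod_influent)

# Hypothetical fluxes reproducing the SADm = 1 m³/m² h pair reported below:
frr, tfr, rfr, ifr = fouling_metrics(j_w1=100.0, j_as=30.0, j_w2=38.7)
print(frr, ifr)                   # 38.7, 61.3
print(cod_removal(178.0, 40.0))   # ~77.5 %; effluent COD here is hypothetical
```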
Hydrophilicity, porosity and tensile strength of membranes

The contact angle measurements, which characterize membrane hydrophilicity, are shown in Table 1. The hydrophilicity of the PP membrane improved with the addition of TiO2 nanoparticles, as reflected in the decreased water contact angle. The relatively higher hydrophilicity of the PP/TiO2 nanocomposite membranes is due to the hydroxyl functional groups on the TiO2 nanoparticle surface [21]. The overall porosities of the fabricated membranes are also presented in Table 1. With the addition of 0.75 wt.% of nanoparticles, the porosity of the PP membrane increased from 31.48% to 50.74%, decreasing its pore tortuosity. Because the TiO2 nanoparticles can accelerate the crystallization rate and act as crystal nuclei at the low quenching temperature [22], the average pore size and porosity of the nanocomposite membrane are higher than those of the neat PP membrane. The tensile strengths of the neat PP and nanocomposite membranes are shown in Table 1. The tensile strength of the PP membrane increased upon addition of the nanoparticles. As in our previous findings [23], this behavior can be attributed to the change in PP crystallinity and to the reinforcement effect of the inorganic nanoparticles.

Fouling analysis and membrane performance

To evaluate the effect of aeration rate on the antifouling properties of the neat PP and PP/TiO2 (0.75 wt.%) membranes, the permeate flux is plotted against time in Fig. 3 for the various SADm values, i.e. 0.5, 1, and 1.5 m³/m² h, at a TMP of 0.1 bar. As shown in Fig. 3a and b, for both membranes the flux at the end of filtration decreased at both the lowest and highest aeration rates. When SADm was increased from 0.5 to 1 m³/m² h, the flux through both membranes increased over the whole permeation time, whereas at the higher aeration rate of 1.5 m³/m² h the membrane permeability decreased. These results confirm the importance of aeration as a means of mitigating fouling in immersed membrane systems. Comparing Fig. 3a and b, it is clear that, for all aeration rates, the flux of the nanocomposite membrane was higher than that of the neat PP membrane, indicating that the membrane hydrophilicity and porosity, as well as the surface pore size (according to the SEM images), played a vital role in improving the activated sludge flux.

Fouling was analyzed by calculating the reversible fouling ratio (RFR), irreversible fouling ratio (IFR), total fouling ratio (TFR) and flux recovery ratio (FRR) of the membranes under the different aeration rates after the activated sludge filtration test; these parameters are shown in Fig. 4. A higher FRR indicates better flux recovery, while a lower IFR indicates better control of total fouling [24]. At the aeration rate of 1 m³/m² h, both the neat PP and the PP nanocomposite membranes showed the lowest IFR and TFR among the tested aeration rates. The IFR values of the neat PP membrane were 69.9% and 61.3% at SADm values of 0.5 and 1 m³/m² h, respectively (see Fig. 4a), and the FRR values were 30.1%, 38.7% and 32.2% at SADm values of 0.5, 1, and 1.5 m³/m² h, respectively. A similar trend was found for the PP nanocomposite membrane under the different aeration rates (see Fig. 4b).
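With the definitions used above, FRR and IFR are complementary (FRR + IFR = 100% by construction), which the reported pairs satisfy. A quick check in Python; the absolute flux J_w1 used here is a hypothetical placeholder, since only the ratio J_w2/J_w1 matters.

```python
# Consistency check against the values reported above for the neat PP membrane:
# (FRR, IFR) = (30.1, 69.9) and (38.7, 61.3) sum to 100 %, as the definitions require.
j_w1 = 100.0                                   # hypothetical pure-water flux, L/m^2 h
for frr_reported in (30.1, 38.7, 32.2):
    j_w2 = j_w1 * frr_reported / 100.0         # flux after cleaning implied by the reported FRR
    ifr = 100.0 * (j_w1 - j_w2) / j_w1
    print(f"FRR = {frr_reported:.1f} %  ->  implied IFR = {ifr:.1f} %")
```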
Higher aeration rates remove the fouling deposits or cake layer on the membrane surface more efficiently owing to the higher shear force of the air bubbles, but at the same time they increase the breakage of the components that have been identified as major contributors to fouling. Under high aeration rates the membrane fouling therefore intensifies: the small matter produced by floc and particle breakage can penetrate the membrane pores, causing pore blockage, i.e. irreversible fouling. Fig. 5 shows microscopic images of sludge flocs in the mixed liquor under low (SADm = 0.5 m³/m² h) and high (SADm = 1.5 m³/m² h) aeration rates. A low aeration rate clearly results in larger flocs and particles, while a higher aeration rate produces smaller particles and flocs because of floc breakage [25,26]. Comparing the RFR and IFR values of the neat PP and PP/TiO2 membranes, however, shows that the RFR of the nanocomposite membrane is higher and its IFR lower than those of the neat PP membrane, confirming the improved antifouling behavior of the nanocomposite membrane due to its improved hydrophilicity. In general, if foulants (such as colloidal particles, sludge flocs and cell debris) are weakly bound to the membrane surface or within its pores, reversible fouling occurs, which can easily be eliminated by water rinsing. Irreversible fouling occurs when the foulants are strongly attached within the pores or on the membrane surface, and chemical cleaning is then required to remove them [27,28]. Reducing the IFR is therefore important for the membrane separation process, since chemical cleaning entails high costs.

Fig. 6 shows the fits of the experimental data obtained with the neat PP membrane in the MBR system under the various aeration rates to the predicted fouling mechanisms: complete pore blocking (m = 2), standard pore blocking (m = 1.5), intermediate pore blocking (m = 1) and cake formation (m = 0). To identify the fouling mechanism during activated sludge filtration, the model parameter k was estimated by linear regression (a computational sketch of this fitting procedure is given below). The fitted values of k and the correlation coefficients R² obtained by solving the respective Hermia equations for m = 0, 1, 1.5 and 2 are listed in Table 2. According to Fig. 6a, under the lower aeration rate (SADm = 0.5 m³/m² h) the cake filtration model provides the best fit for the neat PP membrane. In Hermia's model, according to the study of Zhang et al. [29], the k value can be used to estimate the degree of membrane fouling. As shown in Table 2 for the neat PP membrane, the k values of the cake formation model under the different aeration rates increase in the order SADm = 1.5 m³/m² h < SADm = 1 m³/m² h < SADm = 0.5 m³/m² h, indicating that the thickness of the cake layer formed on the membrane surface follows the same order. Thus, as shown in Fig. 6, cake filtration is the dominant mechanism for the neat PP membrane under the lower aeration rate (SADm = 0.5 m³/m² h). A similar trend was found for the PP/TiO2 nanocomposite membrane (see Fig. 7a). As shown in Figs. 6 and 7, when the aeration rate was increased from 0.5 to 1 and 1.5 m³/m² h, none of the models predicted the experimental data well.
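The paper does not give the regression procedure explicitly, so the following is a hedged Python sketch of one standard way to estimate k for each mechanism: the linearized forms that follow from Eq. (10) (the fitted slope absorbs the area factor A^(2−m)). The flux-decline data below are synthetic stand-ins, not measurements from this study.

```python
# Sketch of linear-regression estimation of Hermia's k for each mechanism,
# using the standard linearised forms implied by -dJ/dt = k A^(2-m) J^(3-m):
#   m = 0   : 1/J^2   linear in t   (cake filtration)
#   m = 1   : 1/J     linear in t   (intermediate blocking)
#   m = 1.5 : 1/J^0.5 linear in t   (standard blocking)
#   m = 2   : ln(J)   linear in t   (complete blocking)
import numpy as np

def hermia_fits(t, j):
    transforms = {
        0.0: lambda J: 1.0 / J**2,
        1.0: lambda J: 1.0 / J,
        1.5: lambda J: 1.0 / np.sqrt(J),
        2.0: lambda J: np.log(J),
    }
    t = np.asarray(t, dtype=float)
    results = {}
    for m, f in transforms.items():
        y = f(np.asarray(j, dtype=float))
        slope, intercept = np.polyfit(t, y, 1)      # linear regression in transformed coords
        y_hat = slope * t + intercept
        ss_res = np.sum((y - y_hat) ** 2)
        ss_tot = np.sum((y - y.mean()) ** 2)
        results[m] = {"k": abs(slope), "R2": 1.0 - ss_res / ss_tot}
    return results

# Synthetic flux-decline data (t in h, J in L/m^2 h) that follows a cake-type law:
t = np.linspace(0.1, 6.0, 30)
j = 80.0 / np.sqrt(1.0 + 2.0 * t)
best_m, best_fit = max(hermia_fits(t, j).items(), key=lambda kv: kv[1]["R2"])
print(best_m, best_fit)   # m = 0 (cake filtration) gives the highest R^2 here
```

Selecting the mechanism with the highest R², as done here, mirrors the model-comparison logic of Table 2: a good fit for exactly one m identifies a dominant mechanism, whereas uniformly poor fits (as observed at the higher aeration rates) indicate mixed or transitional fouling behavior.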
These results indicate that the aeration rate largely determines the potential for floc breakage and the release of small fragments into the bulk liquid, which cause membrane pore blockage. As shown in Fig. 4, increasing the aeration rate beyond the optimum (SADm = 1 m³/m² h) results in more irreversible fouling. Under the lower aeration rate, the larger sludge flocs and other particles accumulate on the membrane surface and form a cake layer, which can easily be removed physically or eliminated by water rinsing. Under the very high aeration rate (SADm = 1.5 m³/m² h), however, floc and particle breakage occurs; the resulting small matter can penetrate the membrane pores, and pore blockage, i.e. irreversible fouling, occurs.

The COD removal of the activated sludge and of the membranes under the different aeration rates was also investigated; the results are shown in Fig. 8. The COD removal of the activated sludge decreased with increasing aeration rate. According to the studies of Meng et al. [10] and Temmerman et al. [25], high aeration rates lead to the release of soluble microbial products (SMP) and the breakage of particles and bacteria, so a decrease in the COD removal efficiency of the activated sludge with increasing aeration rate is expected; the same trend has been observed elsewhere [30]. In contrast, the COD removal of the neat PP and PP/TiO2 membranes increased with increasing aeration rate. As shown in Fig. 8, the highest COD removal for both membranes occurred at the highest aeration rate (SADm = 1.5 m³/m² h). A high aeration rate breaks up particles and bacteria and, as mentioned above, causes membrane pore blockage, so foulants cannot pass through the membrane and the COD removal of the membrane increases. In other words, under high aeration the floc breakage leads to pore blockage and to a thinner, denser cake layer on the membrane surface, which acts as a secondary membrane that filters the feed and prevents the penetration of foulants [31]. Under the lower aeration rate, by contrast, the larger sludge flocs in the MBR tank form a thicker but more porous cake layer, through which foulants can pass, resulting in a higher effluent COD than at the higher aeration rates. This phenomenon is shown schematically in Fig. 9. Comparing the COD removal values of the neat PP and PP/TiO2 membranes shows that the removal of the nanocomposite membrane was higher than that of the neat membrane; this more efficient COD reduction can be ascribed to the TiO2 nanoparticles, which increase the membrane hydrophilicity.

Conclusions

This study examined the effect of aeration rate on the antifouling properties of a polypropylene/TiO2 nanocomposite membrane in an MBR system treating oil refinery wastewater. PP/TiO2 nanocomposite membranes with high hydrophilicity and high porosity were successfully fabricated by the TIPS method. The results indicate that both low and high aeration rates have a negative influence on membrane permeability: low aeration could not remove foulants from the membrane surface effectively, whereas high aeration induced severe breakage of sludge flocs. A SADm of 1 m³/m² h, at which low IFR and high FRR were obtained, was selected as the optimal aeration rate for both the neat PP and the nanocomposite membranes.
According to the results obtained from Hermia's model, for both membranes the best fit to the experimental values is the cake formation mechanism under the lower aeration rate (SADm = 0.5 m³/m² h), while at higher aeration rates none of the models could predict the experimental data. The COD removal of the activated sludge decreased as the aeration rate increased, whereas the COD removal of the neat PP and PP/TiO2 membranes increased with aeration rate owing to pore blockage and the formation of a denser cake layer. Overall, the PP nanocomposite membrane showed better antifouling properties than the neat PP membrane.

Declaration of Competing Interest

We have no conflicts of interest to declare.