Effects of the Cattaneo–Christov heat flux model on peristalsis

ABSTRACT This paper addresses the influence of the newly-developed Cattaneo–Christov heat flux model on peristalsis. The analysis has been carried out in a two-dimensional planar channel with wall properties and the Soret effect. An incompressible viscous fluid fills the space inside the channel. The relevant mathematical modeling is developed and a perturbation technique is employed to obtain series-form solutions for small wave numbers. Expressions for the velocity, temperature, concentration and heat transfer are examined graphically, in particular with respect to the elasticity parameters, the relaxation time and the Prandtl number. The graphical results are found to be distinctive and offer a challenging basis for further research on the topic. Further, the results of Fourier's law can be recovered when the relaxation time of the Cattaneo–Christov heat flux model is set to zero or when the assumptions of a large wavelength and a small Reynolds number are applied.

Introduction

There is growing interest in the study of peristalsis due to its occurrence in extensive environmental, industrial and physiological processes. Some prominent processes, among many others, include the movement of chyme in the digestive tube, the movement of spermatozoa and ova in the cervical canal and fallopian tubes, the carrying of lymph in vessels, the transfer of urine to the bladder, the mixing of food, water transport from the ground to tall trees, and the spontaneous movement of blood vessels. Peristaltic transport is generated through periodic waves travelling along the walls; thus the peristaltic flow can be developed in tubes or channels. The study of peristaltic activity was initiated by Latham (1966), after which the experimental analysis of Shapiro, Jafferin, and Weinberg (1969) formalized the validity of the theoretical results. Elnaby and Haroun (2008) addressed peristaltic transport of a viscous fluid with compliant wall effects, while Vasudev, Rao, Reddy, and Rao (2010) examined the heat transfer phenomenon on peristalsis. In two separate studies, Tripathi and Beg (2014a, 2014b) investigated the peristaltic flow of nanofluid and the peristaltic flow of Berger's fluid through a porous medium. Abd elmaboud and Mekheimer (2011) presented work on second-order non-linear peristaltic flow, while Ali and Hayat (2007) investigated peristaltic motion in an asymmetric channel. The peristaltic flow comprising copper-water nanofluid has been investigated by Abbasi, Hayat, and Ahmad (2015), while Hayat, Tanveer, Yasmin, and Alsaadi (2015) explored the outcomes of Hall currents and thermal deposition on the peristaltically generated flow of Eyring-Powell fluid. Further, the channel walls possess some elastic properties due to their compliant and damping nature. In physical situations such as the movement of blood through vessels and tissues, the tension and damping of the walls play an essential role. The instabilities of a plane channel flow induced by a peristaltic wave bounded by compliant walls have been investigated under a large wavelength for small Reynolds numbers by many researchers in recent years (for some representative studies, see Gad, 2014; Javed, Hayat, & Alsaedi, 2014; Riaz, Nadeem, Ellahi, & Akbar, 2014). The compliant wall effects on peristaltic transport have also been reported by Hina, Hayat, Asghar, and Hendi (2012).
Some authors have presented their research on fluid flows induced by peristaltic activity using a numerical approach. Ali, Javid, Sajid, Zaman, and Hayat (2016) addressed the heat transfer phenomenon in a curved channel using numerical analysis, while Abbasi, Hayat, and Alsaedi (2015) conducted a numerical study on the Hall current effects on magnetohydrodynamic (MHD) Carreau-Yasuda fluid in a curved flow configuration. Mustafa, Abbasbandy, Hina, and Hayat (2014) numerically examined the Soret and Dufour effects of peristaltic flow. Moreover, some interesting studies relevant to computational fluid dynamics (CFD) are mentioned in Zhang, Huang, Zhang, Zou, and Tang (2016), Fu, Uddin, and Curley (2016) and Özkan, Wenka, Hansjosten, Pfeifer, and Kraushaar-Czarnetzki (2016). The transfer of heat is a widespread phenomenon that occurs due to the difference in temperature between a system and its environment. Whenever there is a difference in temperature between two objects, heat starts propagating from the region of higher temperature to the region of lower temperature. Considerable attention has been devoted to studying the behavior of the heat transport mechanism. Some authors have analyzed the heat transfer mechanism through the empirical law of conduction known as Fourier's law. In particular, Turkyilmazoglu and Pop (2013) reported heat transfer effects with Jeffery fluid, while Jalil, Asghar, and Imran (2013) examined the heat transfer mechanism on a moving surface in a free stream. Hayat, Rafiq, Ahmad, and Yasmin (2015) looked at the impact of melting heat transfer on peristalsis with thermal radiation and Joule heating, and Tripathi (2013) presented a transient heat flow analysis. Further, Ali, Sajid, Javed, and Abbas (2010) examined the heat transfer phenomenon in a curved channel, while Sheikholeslami, Hatami, and Ganji (2014) carried out an analysis on nanofluid heat transfer with a magnetic field. However, the above-mentioned studies addressed the heat transfer mechanism using Fourier's law. This law was considered sufficient to describe heat transfer for two centuries. However, Fourier's law predicts that a thermal disturbance is felt instantaneously throughout the medium, and the vector nature of the heat flux requires that the governing equations for heat transfer involve an objective time derivative. Cattaneo therefore proposed a new model, the modified Fourier heat conduction law, by adding a relaxation time term to Fourier's law. Christov generalized this modified Fourier heat conduction law by utilizing Oldroyd's upper-convected derivative and thus constituted a single equation for the temperature. After the development of the Cattaneo-Christov model, various attempts were made to analyze fluid flows according to this law. Christov (2009) investigated the heat equation that describes the frame-indifferent expression of the Maxwell-Cattaneo version of the heat flux. Straughan (2010) presented the thermal relaxation effects of the Christov heat flux model and found that uniform convection switches to fluctuating convection for narrower channels. Han, Zheng, Li, and Zhang (2014) explored the heat transfer phenomenon of viscoelastic fluid under the Cattaneo-Christov theory, while Haddad (2014) employed the Cattaneo-Christov model to study the thermal instabilities in a fluid saturating a porous medium. Mustafa (2015) explored the thermal relaxation aspects of the Cattaneo-Christov model in a rotating flow, while Khan, Mustafa, Hayat, and Alsaedi (2015) numerically investigated the model with an exponentially stretching surface.
Hayat, Imtiaz, Alsaedi, and Almezal (2016) carried out a study of the MHD characteristics of fluid under the Cattaneo-Christov theory with homogeneous-heterogeneous reactions, and Salahuddin, Malik, Hussain, Bilal, and Awais (2016) analyzed the Cattaneo-Christov heat flux model numerically through variable viscosity. In spite of the considerable importance of the heat transfer mechanism to peristalsis in physical and engineering systems, to date no study has been reported that considers the heat flux model based on the Cattaneo-Christov concepts in a peristaltic setting. The motivation of this work is to predict the behavior of heat flow subject to the Cattaneo-Christov heat flux model under a sinusoidally induced peristaltic wave between compliant walls. An incompressible viscous fluid is employed in a channel to predict the relaxation time effects. To further explore the extended heat effects on the concentration field, the Soret effect is also employed. Series solutions are obtained for a small wave number and the obtained results are discussed. It is found that the convective results (obtained by Fourier's law) have oscillatory behavior in terms of temperature and concentration. Moreover, in the limiting case, the exact results corresponding to Fourier's law can be recovered.

Mathematical formulation

To begin with, the peristaltically induced flow of an incompressible viscous fluid bounded by horizontal walls with a separating distance of 2d is considered. The flow configuration is such that the peristaltic wave travels with speed c in the x-direction along the channel walls, whereas the y-direction is transverse to it. Further, the elastic and damping nature of the walls is also taken into account. The configuration of the walls can be described through the expression
η(x, t) = d + a sin[(2π/λ)(x − ct)],
where a is the wave amplitude, λ is the wavelength, d is the distance of the walls from the center of the channel, t is the time, and η and −η are the displacements of the upper and lower walls, respectively. The fundamental equations are based on the conservation of mass, momentum, energy and concentration. The heat flux q appearing in the energy equation is obtained from the model proposed by Christov (2009), whereas the concentration equation accounts for the thermo-diffusion or Soret effect. The Cattaneo-Christov heat flux model has the form
q + λ2 [∂q/∂t + V·∇q − q·∇V + (∇·V) q] = −k∇T,
where V = (u, v, 0) is the two-dimensional velocity of the viscous fluid, q is the heat flux, T is the temperature of the fluid, k is the thermal conductivity and λ2 is the relaxation time parameter for the heat flux. When λ2 = 0, the simplified expression of Fourier's law can be deduced. Since we consider an incompressible fluid, the above expression takes the form
q + λ2 [∂q/∂t + V·∇q − q·∇V] = −k∇T.
Upon the elimination of q between Equations (4) and (10), the required energy equation is obtained; the concentration equation (Equation (5)) is then formulated with the thermo-diffusion term. Here ρ is the fluid density, μ is the dynamic viscosity of the fluid, ν is the kinematic viscosity of the fluid, C is the concentration of the fluid, D is the mass diffusivity coefficient, cp is the specific heat constant, k is the thermal conductivity, KT is the thermal diffusion ratio coefficient and Tm is the mean temperature of the fluid. The corresponding conditions at the boundary are such that T0 and C0 represent the prescribed values of temperature and concentration at the lower channel wall, and T1 and C1 represent the prescribed values of temperature and concentration at the upper channel wall.
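For reference, a standard form of the concentration equation with the thermo-diffusion (Soret) term, consistent with the symbols defined above, is the following; the explicit expression is indicative only, since the original Equation (5) is not reproduced in this text:

∂C/∂t + u ∂C/∂x + v ∂C/∂y = D (∂²C/∂x² + ∂²C/∂y²) + (D KT/Tm) (∂²T/∂x² + ∂²T/∂y²).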
The no-slip condition holds at the boundary, and the compliant wall condition has the form given below, where τ is the coefficient of elastic tension, m1* is the coefficient of the mass per unit area, and d is the coefficient of the viscous damping. On combining Equations (7) and (8), by taking the partial derivative of Equation (7) with respect to y and the partial derivative of Equation (8) with respect to x and then subtracting the resulting expressions, we obtain the compatibility equation. The stream function ψ(x, y, t) and the dimensionless variables are given in the following definitions. The dimensionless forms of Equations (11), (12) and (17) are then obtained, together with the corresponding boundary conditions. In the above equations, the asterisks have been dropped for the simplification of the mathematical expressions. The dimensionless amplitude ratio ε, the Prandtl number Pr, the wave number δ, the Reynolds number R, the relaxation time parameter α, the non-dimensional elasticity parameters E1, E2 and E3, the Schmidt number Sc and the Soret number Sr are defined accordingly.

Perturbation solutions

The non-linearity of the resulting system suggests the application of the perturbation method about the small wave number δ. Thus we obtain the series form of the solutions, used to plot graphs corresponding to the stream function, temperature, concentration and heat transfer coefficient (Z), as follows (Hayat, Rafiq, et al., 2015):
ψ = ψ0 + δψ1 + O(δ²), θ = θ0 + δθ1 + O(δ²), φ = φ0 + δφ1 + O(δ²), Z = Z0 + δZ1 + O(δ²).
Zero-order system: the zero-order problem is subject to the conditions ψ0y = 0 at y = ±η, and the solutions of the zero-order equations subject to the corresponding boundary conditions determine the zero-order fields and the heat transfer coefficient by definition. It should be noted that Equation (38) is identical to the result in Elnaby and Haroun (2008), which was obtained in the absence of heat and mass transfer. First-order system: the first-order solution and the corresponding heat transfer coefficient are obtained in the same manner. Here Lt, Lx, ηt, ηx, Ltt, Ltx, Lxt, Lxx, ηtt, ηtx, ηxt, ηxx, A2t, A2x, A4t and A4x represent partial derivatives with respect to the corresponding subscripts.

Results and discussion

To analyze the heat transfer effects on peristalsis we applied the concept of the Cattaneo-Christov heat flux model to the flow of a viscous fluid. The stream function equation incorporates the wall properties, whereas a pair of equations governs the temperature and concentration distributions, and the concentration equation covers the Soret effect. The variations in the fluid velocity u, temperature θ, concentration φ and coefficient of heat transfer Z are described in this section. Figure 1 presents the increasing velocity with the wall elastic parameters E1 and E2 due to their elastic behavior, whereas an increase in the damping parameter E3 reduces the velocity. Figures 2-4 show the increase in velocity with increasing values of the wave number δ, Reynolds number R and amplitude ratio parameter ε; the modified heat flux (Cattaneo-Christov model) is responsible for these results. From Figure 5 it can be seen that the temperature profile increases with the wall parameters E1 and E2, while an increase in E3 lowers the fluid temperature. Figure 6 represents a decrease in the temperature distribution for increasing values of γ near the upper wall, while the opposite behavior is noticed near the lower wall, i.e., an increase in γ increases the temperature distribution towards the lower wall and decreases the temperature distribution towards the upper wall when the relaxation time increases. These results can be compared qualitatively with Hayat et al. (2016) and Salahuddin et al. (2016) in terms of a half channel.
Figures 8 and 9 display a decrease in the temperature profile with increasing values of the wave number δ and Reynolds number R at the upper channel wall. Moreover, in the absence of δ and R we get straight lines corresponding to a viscous fluid (see Figures 8 and 9). It is noted that, by the concepts of a large wavelength and a small Reynolds number (Haddad, 2014), the results of Fourier's law can be retrieved, i.e., the Cattaneo-Christov theory becomes identical to Fourier's law. Figures 10 and 11 signify that larger values of δ and R raise the concentration distribution around the upper wall of the channel. The impact of the Soret number Sr on the concentration distribution is to cause an increase (Figure 12), since higher values of Sr increase the density of the fluid. An increase in the concentration distribution is noticed for the Schmidt number Sc, as greater values of Sc result in a decrease in the molecular diffusion that dominates the intermolecular forces between molecules; hence there is an increase in the concentration (see Figure 13). The effect of the Prandtl number Pr on the concentration is to cause an increase, because with an increase in Pr the viscosity increases (see Figure 14). Figure 15 exhibits the increasing response of the concentration profile to the relaxation time γ. The wall parameters have an effect on the concentration profile that is opposite to their effect on the temperature (Figure 16), i.e., E1 and E2 lessen the concentration of the fluid and E3 increases it. Figures 17-19 correspond to the effects of the relaxation time γ, the wall parameters E1, E2 and E3, and the Prandtl number Pr on the heat transfer coefficient Z. The results show an increase in the heat transfer coefficient as a result of these parameters, except for the wall damping parameter E3, which results in a decrease (see Figure 18). Figures 20 and 21 present the increase in the heat transfer coefficient with growing values of the wave number δ and Reynolds number R.

Concluding remarks

A study of peristaltic motion between flexible walls containing a viscous fluid, with thermal convection effects governed by the Cattaneo-Christov heat flux model, has been presented. In addition, the thermo-diffusion or Soret effect is also factored into the analysis. The results indicate that the thermal relaxation time plays a key role in the heat transfer process. Thus, an interesting relationship between the temperature and concentration has been found. The graphical results encourage further consideration of the Cattaneo-Christov theory with respect to peristalsis. This work can also be used to compare the results of the Cattaneo-Christov heat flux model to other work on peristaltic theory. The main findings are as follows:
• Velocity increases with the wave number and Reynolds number, since the speed of the wave as well as its momentum diffusivity increases with increasing values of these parameters.
• A decrease in temperature and an increase in concentration are noticed for the Prandtl number.
• In contrast to Fourier's law, the effect of the relaxation time parameter causes a decrease in temperature, but the concentration increases with γ.
• The concentration rises where the temperature falls with the wave number and Reynolds number.
Gauge turbulence, topological defect dynamics, and condensation in Higgs models

The real-time dynamics of topological defects and turbulent configurations of gauge fields for electric and magnetic confinement are studied numerically within a 2+1D Abelian Higgs model. It is shown that confinement is appearing in such systems equilibrating after a strong initial quench such as the overpopulation of the infrared modes. While the final equilibrium state does not support confinement, metastable vortex defect configurations appear in the gauge field which are found to be closely related to the appearance of physically observable confined electric and magnetic charges. These phenomena are seen to be intimately related to the approach of a non-thermal fixed point of the far-from-equilibrium dynamical evolution, signalled by universal scaling in the gauge-invariant correlation function of the Higgs field. Even when the parameters of the Higgs action do not support condensate formation in the vacuum, during this approach, transient Higgs condensation is observed. We discuss implications of these results for the far-from-equilibrium dynamics of Yang-Mills fields and potential mechanisms how confinement and condensation in non-abelian gauge fields can be understood in terms of the dynamics of Higgs models. These suggest that there is an interesting new class of dynamics of strong coherent turbulent gauge fields with condensates.

I. INTRODUCTION

Matter produced in heavy-ion collisions has been argued to form a Glasma at early times [1][2][3][4][5]. This Glasma is initially a highly coherent, stochastic ensemble of colour electric and colour magnetic fields. The colour fields are very strong and correspond to high occupancy of coherent gluon modes. The gluon states begin evolving from certain initial conditions, but soon fluctuations, amplified by instabilities, begin to dominate the form of the classical fields. The field configuration becomes turbulent, but at the same time remains highly coherent, with a high occupancy of gluon modes. Initially, one particular scale characterizes the momentum distribution of gluons.
This is the saturation momentum Q_sat ≫ Λ_QCD, which is derived from the colour glass condensates of the initial nuclei [6][7][8][9]. As the system evolves, there is a separation of momentum scales between an infrared scale Λ_s and an ultraviolet scale Λ, see e.g. [10][11][12][13]. The infrared scale Λ_s marks the momentum below which modes are maximally coherent and occupation numbers are of order n(k) ∼ α_s(Q_sat)^(-1), k ≲ Q_sat. The coupling constant of QCD is weak at that scale, α_s(Q_sat) ≪ 1, since the saturation momentum is large compared to the QCD scale for high-energy collisions of large nuclei. Above the ultraviolet scale Λ, mode occupation numbers go rapidly to zero. At the initial time, the different scales are of the same order, Λ_s ∼ Λ ∼ Q_sat. When thermalisation is reached at time t_th, one has Λ_s(t_th) ∼ α_s Λ(t_th) (2). The last relation follows because for a thermalized system, the coherence scale is given by the magnetic mass M_mag ∼ α_s T. Hence, relation (2) determines the thermalisation time. Studying these types of systems and initial conditions is interesting in its own right, as they are relevant beyond the context of heavy-ion collisions. A central feature is that the considered systems initially are dominated by modes with high occupation number, that is, each of the momentum modes with momentum below the scale Λ has an occupation number much larger than one. This is equivalent to having large coherent fields. If such a system were not described by a gauge theory, one would expect the formation of a condensate [14][15][16][17]. For gauge systems, however, it is to a certain extent difficult to precisely articulate what is meant by condensation, and one of the goals of the present article is to demonstrate how this can be realized far from equilibrium. It is well-known that in a generic evolution of a system of interacting fields towards a thermal state, turbulence can appear [18][19][20]. Also this is different for gauge systems [12,13,21-25] as opposed to most other systems since the turbulent variables describe very strong fields. Thus, even though the coupling can be very weak, one has both strong fields and strongly fluctuating fields. Such a system looks somewhat like the Yang-Mills vacuum, where the strong quantum fluctuating fields can induce non-trivial phenomena such as confinement. One needs to ask here what the expected non-trivial phenomena associated with such turbulent, strong-field, but weakly coupled systems are. In this article we discuss condensation as well as topological defects and turbulence in abelian as well as non-abelian gauge theories. We present a concrete numerical study of the relation between turbulence, topological defects, and condensation in an abelian Higgs model. The simulated evolutions start from a metastable symmetric state, leading to tachyonic evolution, as well as from the above-described overpopulated initial states. Our results show the transient formation of (quasi-)topological vortex defects. Vortices formed during the early evolution after the initial quench are stabilized by magnetic fluxes in the gauge sector. At later times, these destabilize due to strong short-wavelength fluctuations in the gauge and matter sectors. For sufficiently weak gauge coupling we find soliton-like defect formation, separating spatial domains of opposite-sign homogeneous charge distributions and thus exhibiting a type of electric confinement.
Also these patterns are due to vortex and vortex-sheet defects in the gauge-invariant field correlations of the Higgs field. These defects mark the onset of turbulent cascades [26][27][28][29][30]. In the form of inverse cascades they lead to symmetry breaking and condensation of the Higgs field [16,17]. The universal scaling in the turbulent occupation number distributions signals the approach of a non-thermal fixed point [31]. Dynamics near such fixed points has recently been studied for non-relativistic Bose systems [26,27,29,30,32-34], pure scalar relativistic theories [16,28,31,35-37] as well as pure gauge systems [12,13,21,23,24], which have been studied extensively in the recent past. Most remarkably, increasing the gauge coupling causes the condensation of the Higgs field to disappear. Our article is organized as follows. In Sect. II, we discuss generic issues related to condensation phenomena that are peculiar to gauge theories, as well as possible observables for condensation. We furthermore discuss how topological objects such as vortices and defects may appear in turbulent gauge systems. In Sect. III we lay out our results of numerical semi-classical simulations of the equilibration dynamics of the Abelian Higgs model starting from different initial states far from equilibrium. These results show the formation of vortical topological defects, related to magnetic as well as electric confinement phenomena, the approach of a non-thermal fixed point, as well as transient Bose-Einstein condensation.

A. Confinement in Higgs models

For gauge theories, condensation is conventionally formalized within Higgs models. For example, in the Abelian Higgs model, a gauged vector potential couples to a scalar field. The abelian U(1)-symmetric model is described by its classical action (3), where ∫_x = ∫ d^(1+2)x, D_µ = ∂_µ + ieA_µ, and v = |m|√(6/λ) is the Higgs field's equilibrium expectation value. The scalar field is in the fundamental representation of the gauge group, i.e., it acquires a phase under gauge transformations. In the most basic setup, one considers the Higgs potential and looks for a minimum at a non-zero value of the scalar field. The scalar field, however, rotates under local gauge transformations and hence cannot have a non-zero expectation value. The root of this problem is that the Higgs field possesses two degrees of freedom, its phase and its modulus, of which the phase is gauge dependent. The modulus is positive definite, so that when fluctuations are included, it will always have an expectation value. However, at first glance, there is no gauge-invariant way of characterizing the condensed phase in terms of spontaneous symmetry breaking. The Abelian Higgs model, nevertheless, shows several phases. In the condensed phase, one observes the Meissner effect of magnetic fields being expelled by the charged Higgs field which, thinking in the Landau-Ginzburg condensed matter context, describes the Cooper pair condensate. Moreover, depending on the value of the Ginzburg-Landau parameter which measures the relative strength of the Higgs and the gauge couplings, the superconductor is in its type I or type II phase. In the type II phase, magnetic fields exceeding a critical strength become confined within flux tubes, centered at Abrikosov vortices in the Cooper pair condensate. Despite the fact that the phase of the Higgs field is gauge dependent, it is twisted by a non-zero integer multiple of 2π when following it around a vortex core. Gauge transformations cannot unwind this defect structure.
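For reference, a standard form of the Abelian Higgs action consistent with the definitions just given reads as follows; the overall conventions, in particular the normalization of the potential (chosen so that its minimum lies at |φ|² = v²/2), are assumptions of this presentation rather than the exact expression of Equation (3):

S = ∫_x [ −(1/4) F_{µν} F^{µν} + (D_µ φ)* (D^µ φ) − (λ/6) ( |φ|² − v²/2 )² ],   F_{µν} = ∂_µ A_ν − ∂_ν A_µ.

Expanding the potential around φ = 0 gives the mass term m² |φ|² with m² = −λv²/6, matching the tachyonic scenario described in Sect. III B below.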
Finally, the Abelian Higgs model can also be in the symmetric or Coulomb phase where magnetic flux is not confined. Little is known about the role of topological defects for Higgs models excited far from thermal equilibrium. In Sect. III we will study numerically time-evolutions of the Abelian Higgs model in view of the role of defects in the equilibration dynamics after a strong initial quench. Before we proceed with this, we discuss, in the remainder of this section, potential implications for non-Abelian gauge theories, in particular the relation between condensation, confinement, and topological defects. We will discuss monopole parametrisation defects in the non-Abelian gauge field which are understood to be closely related to electric confinement. Interestingly, similar, vortex-type defects will show up in the simulations of the Abelian Higgs model in Sect. III. In non-Abelian Higgs models, condensation is associated with an analogous but in general richer spectrum of topological defects. A way to formalize this structure is in terms of the following two order parameters: one is the conventional Wilson loop measuring confinement of electric charges; the other is the 't Hooft loop that measures the confinement of magnetic monopoles. The Wilson and 't Hooft loops are convenient parameters to use for systems in thermal equilibrium, which can be formulated in terms of a Euclidean path integral. Three possible phases can be associated with these order parameters in gauge theories [38]:
- The confined phase shows electric confinement and magnetic deconfinement. The Yang-Mills theory vacuum and the strong-coupling limit of compact electrodynamics provide example realisations of the confined phase.
- The magnetic confinement phase shows electric deconfinement and magnetic confinement. The Abelian Higgs model in the Higgs phase is an example of the magnetic confinement phase.
- The Coulomb phase shows electric and magnetic deconfinement. Electrodynamics without condensation is an example of the Coulomb phase, as there is confinement neither in the electric nor in the magnetic sector. A non-trivial example of the Coulomb phase occurs for SU(2) Yang-Mills theory with an adjoint-representation Higgs field as it is realized in the (Georgi-Glashow) non-Abelian Higgs model. An expectation value of the Higgs field causes two vector bosons to become massive. One direction remains unbroken, so that both colour electric and colour magnetic charges are deconfined.
The dynamics of all of these models is entirely non-trivial in the infrared. To generate electric confinement, one presumably needs condensation or degeneracy with some type of colour magnetic monopole excitations. In analogy to the non-Abelian Higgs model, colour electric charge condensation is expected to be required for magnetic confinement.

B. Topological defects, condensation, and confinement in Yang-Mills theories

From the above discussion of condensation and confinement in Higgs models it remains unclear whether analogous relations exist in pure Yang-Mills theories which contain only adjoint-representation fields. It has already been pointed out that the condensation of adjoint-representation fields as in the Georgi-Glashow model generates a Coulomb phase. While the confinement of magnetic or electric flux can be fairly easily imagined if there is condensation of fundamental-representation fields, it remains unclear how this is realized in a model with adjoint-representation fields only.
In this subsection we present a possible order parameter for the confinement-deconfinement transition in the framework of a non-Abelian Higgs reformulation of the Yang-Mills Lagrangian. The main difficulty in describing condensation phenomena in gauge theories is to find suitable gauge-invariant order parameters. Gluon descriptions in terms of the gauge fields A^a_µ are not gauge invariant, and it may be difficult to directly read off physical mechanisms of condensation from correlation functions of the gauge field. This problem already occurs in a static setting and has been discussed at length in the context of the confinement-deconfinement phase transition [38]. Standard confinement scenarios are based on condensation or percolation involving topological defects, i.e. colour-magnetic monopoles or center vortices. In QCD, in four dimensions, these topological defects are not stable objects as the related topological invariant vanishes. The only stable configurations known are instantons. In fact, gauge field configurations which contain monopoles or vortices have infinite action. Nevertheless, non-trivial vacuum configurations are possible which carry a non-vanishing topological density well described in terms of these defects instead of instantons. This favors a description of the ground state in terms of defects which are simplified within appropriate gauge fixings. In order to describe the static confinement-deconfinement phase transition in this way, an appropriate gauge is the Polyakov gauge where the temporal gauge field is locally rotated into the Cartan subalgebra and made static. For the non-equilibrium evolution we consider in the following, we take a spatial component, say A3, and apply such a gauge fixing. More precisely, we apply a diagonalisation transformation to the Wilson loop W3 in the x3-direction, Equation (4), where P denotes path ordering and L3 is the spatial extent along this direction. Now we write the Wilson line in terms of an algebra-valued field φ, to wit W3 = exp(iφ), where φ is referred to as a Higgs field. Under gauge transformations U ∈ SU(N) the Wilson loop (4) transforms as in Equation (6), where U(0) and U(L3) are the gauge-group elements evaluated at x3 = 0 and x3 = L3, respectively. The SU(N) rotation in (6) can be used to diagonalise the Wilson loop, and hence φ, up to defects, see e.g. [39]. A diagonal W3 takes a particularly simple form in terms of φ; for example, for SU(2), its trace is given by Equation (7). The above diagonalisation can be achieved within a whole class of gauges. The natural one is a type of Polyakov gauge where the 3-component of the gauge field is rotated into the Cartan subalgebra, Equation (8), where {τ_c} are the generators in the Cartan subalgebra, i.e. for SU(2) we have one Cartan component with τ_c = σ3/2, see also (7). In the gauge (8), we have the relation (9). The definition of φ implies that its eigenvalues ϕ_n with n = 1, ..., N_c in the fundamental representation are gauge invariant. They are directly related to the eigenvalues of the Wilson line W3, which read exp{iϕ_n} and do not change under gauge rotations (6) with periodicity U(L3) = U(0). We emphasise that W3 is in general not gauge invariant. In SU(2), a center flip combined with an adjungation of the Polyakov loop is provided by the transformation ϕ → 2π − ϕ [39], with the fixed point ϕ = π. We conclude that in the center-symmetric phase, where the trace of the Polyakov loop vanishes, we have ϕ = π. In Euclidean space, a vanishing ground-state expectation value of the trace of the Polyakov loop implies confinement.
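The Wilson line and its parametrisation referred to in Equations (4)-(7) take, in standard conventions, a form along the following lines; the precise factors, e.g. a coupling constant in the exponent, are assumptions of this sketch rather than the paper's exact expressions:

W3(x) = P exp( i ∫₀^{L3} dx3 A3(x, x3) ),   W3 = e^{iφ},
W3 → U(L3) W3 U†(0) under gauge transformations,
tr W3 = 2 cos(ϕ/2) for SU(2), with φ = ϕ τ_c = ϕ σ3/2,

so that the center-symmetric point tr W3 = 0 indeed corresponds to ϕ = π.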
It follows that ⟨φ⟩, or tr W3(⟨φ⟩), serves as an order parameter of confinement as well as ⟨tr W3⟩ does. Indeed one can also show a direct relation between the two, with saturation in the confined phase, see [40,41]. The related order-parameter potential has been computed perturbatively [42,43] and non-perturbatively [40,41,44] in Yang-Mills theory, see also [45][46][47] for recent lattice computations. As a result of the above gauge prescriptions one obtains the Higgs field φ as an order parameter field for the confinement-deconfinement phase transition. The parametrisation of this phase transition in terms of φ allows one to relate confinement to the vortex percolation picture: the gauge described above which diagonalizes W3 potentially has defects that are located at the points where W3 is an element of the center Z of the gauge group, Z ≃ Z_N for SU(N), i.e., when ϕ ∈ {0, 2π} for SU(2). This is easily seen for the case W3 = 1. Assume that the Higgs field vanishes, φ = 0. This can happen either homogeneously, or with a non-trivial angular dependence around the point in the (x1, x2)-plane where φ = 0. In this case the normalized field φ̂ possesses a non-vanishing winding. The related topological invariant is the standard Hopf winding number (12), with ε_ij being the antisymmetric tensor. Hence, magnetic (anti-)monopole defects occur where W3 = 1 (W3 = −1). If ⟨tr W3⟩ = 0, then monopoles and anti-monopoles are condensed in equal proportions, i.e., no net magnetic charge exists and the system is in the confined phase. Note that the winding numbers (12) are parametrisation windings and not windings of the Wilson line W3. However, they are related to the monopole number of 2+1-dimensional Yang-Mills theory. In four-dimensional Yang-Mills theory they are known to be related to the instanton number, see e.g. [39]. For vortex-free configurations, that is, those where φ sustains no Hopf windings, the diagonalisation can be performed in the entire space. Rewriting the Yang-Mills action in terms of the scalar field φ in an expansion in φ leads to a gauge action of the remaining spatial components of the gauge field coupled to the scalar field φ. In order to describe confinement, the effective potential of this field must exhibit, in the confined phase, a non-trivial minimum at ϕ = π. The Yang-Mills action S_YM = (1/2) ∫_x tr F²_µν, written in terms of a reduced gauge theory with Lorentz indices µ̄ = 0, 1, ..., d − 1 and a Higgs field formed from the remaining gauge field, reads, in the Polyakov gauge (8), as a reduced gauge-Higgs action. If we restrict ourselves to configurations that do not depend on x_d but only on x̄ = (x0, x1, ..., x_{d−1}), this action further reduces to the Glasma action. In summary the following picture emerges. Yang-Mills theory, if parameterized in diagonalisation gauges, resembles a non-Abelian Higgs model, with the Higgs field in the adjoint representation. The Higgs field carries information about a confinement-deconfinement phase transition. The diagonalisation gauges feature topological configurations/defects which, at face value, are nothing but parameterisation defects, even though the global information about these defects relates to the stable topological charge in these systems. In Yang-Mills theory, the single defects are not stable and can decay. Despite this fact, they are still related to stable topological configurations in Yang-Mills theory and can be used to extract the topological density. Interestingly, similar, vortex-type defects will show up in the simulations of the Abelian Higgs model in Sect. III.
This establishes a close link between the two classes of theories, pure Yang-Mills and Higgs, and makes it even more relevant to study the far-from-equilibrium dynamics of the Higgs model.

A. Model and observables

In the following we present our results of real-time semi-classical simulations of an abelian U(1)-symmetric Higgs model. We study the dynamical equilibration of 2+1-dimensional systems starting from different initial conditions far from thermal equilibrium and for different values of the Landau-Ginzburg parameter. The model is described by the classical action (3). The corresponding equations of motion (EOM) for the gauge and scalar fields follow from the action, together with the current of the Higgs field. The action and the EOM are discretized on a cubic lattice using the compact formulation for U(1) gauge fields, in terms of the plaquette variables U_µν, and the ratio of spatial and temporal lattice spacings γ = a_s/a_t. The variation of the action with respect to the fields yields the discretised EOM as well as the Gauss constraint. Rescaling the scalar field as φ → √λ φ, one finds that only the ratio of the couplings, ξ = e²/λ, is relevant for the dynamics. This is called the Ginzburg-Landau parameter, which controls the transition between the type I (ξ > 1/2) and type II (ξ < 1/2) superconducting phases. During the time evolution we measure photon number distributions using the two-point correlators of the gauge fields, where the brackets ⟨...⟩_cl denote averages over the classical ensemble defined by averaging over the initial conditions. The occupation number is calculated from the spatial Fourier transform of the two-point correlators, which can be conveniently calculated using the Fourier transform E_i(k, t) = ∫ d²x E_i(x, t) e^{ikx} of the field variables. We consider a definition of the occupation number in terms of the field and its canonical conjugate which does not involve the dispersion relation explicitly. The Coulomb gauge is used to evaluate the occupation number, where we also use angle averaging within the ⟨·⟩_cl brackets to improve statistics; A^C is the gauge field in Coulomb gauge, ∇·A = 0, and V is the volume. The above expression can be conveniently calculated from the magnetic field using |A^C(k, t)|² = |B(k, t)|²/k² and thus does not require knowledge of the dispersion. We have typically chosen a two-dimensional grid of 512² to 2048² spatial points. Correspondingly, we consider the dispersion of the gauge modes. We will also be interested in the two-point function of the Higgs field, but since this is not gauge invariant, it is not suitable for defining a meaningful occupation number. In contrast, the correlator G_U(x, y, t) (no angle averaging yet), which involves the parallel transport operator along some path γ(x, y) between x and y, is a gauge-invariant two-point function and thus better suited, despite the fact that it will have some residual path dependence. In practice, we use the zig-zag path along the lattice close to the straight line connecting x and y. In analogy to the gauge sector we furthermore define the two-point correlation H_U of the (gauge-covariant) time derivative of the fields. This serves to define the angle-averaged scalar-field occupation number and the dispersion in terms of the angle-averaged two-point functions H_U(k, t) and G_U(k, t).

B. Initial conditions

We consider two different types of initial conditions and ranges of parameters. The first is the 'tachyonic' scenario, where we have −λv²/6 ≡ m² < 0, and the expectation value of the Higgs field vanishes initially.
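As a concrete illustration of the gauge-sector measurement described in Sect. III A, the following minimal Python sketch shows how an angle-averaged occupation-number spectrum could be extracted from two-dimensional field data, using |A^C(k)|² = |B(k)|²/k². This is an illustrative reconstruction under simple assumptions (field layout, normalization), not the authors' code:

import numpy as np

def occupation_spectrum(E, B, dx, nbins=64):
    """Angle-averaged occupation number n(k) from 2D field data.

    E : array (2, N, N), electric field components on the lattice
    B : array (N, N), out-of-plane magnetic field (2+1D)
    dx: lattice spacing
    Returns bin centres |k| and a schematic n(k) built from the field and its
    conjugate, without using the dispersion relation explicitly."""
    N = B.shape[0]
    vol = (N * dx) ** 2
    Ek = np.fft.fft2(E, axes=(-2, -1)) * dx**2      # Fourier transforms of the fields
    Bk = np.fft.fft2(B) * dx**2
    k = 2 * np.pi * np.fft.fftfreq(N, d=dx)
    KX, KY = np.meshgrid(k, k, indexing="ij")
    kmag = np.sqrt(KX**2 + KY**2)
    k2 = kmag**2
    k2[0, 0] = np.inf                               # avoid division by zero at k = 0
    A2 = np.abs(Bk) ** 2 / k2                       # |A_C(k)|^2 = |B(k)|^2 / k^2
    E2 = np.sum(np.abs(Ek) ** 2, axis=0)
    nk = np.sqrt(E2 * A2) / vol                     # schematic occupation number
    # angle averaging in bins of |k|
    bins = np.linspace(0.0, kmag.max(), nbins + 1)
    idx = np.digitize(kmag.ravel(), bins) - 1
    flat = nk.ravel()
    spec = np.array([flat[idx == i].mean() if np.any(idx == i) else 0.0
                     for i in range(nbins)])
    centres = 0.5 * (bins[1:] + bins[:-1])
    return centres, spec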
For a purely scalar model in this tachyonic scenario, the evolution is well known to give rise to a non-zero expectation value of the Higgs field following the tachyonic instability, in which modes with k < |m| become strongly populated [48][49][50]. All modes are initially populated with random amplitude and phase fluctuations to account for the quantum ground-state fluctuations with energy ω/2. After the rescaling φ → √λ φ and using a small coupling, this in practice gives field fluctuations with very small amplitude, and since the dynamics will be governed by the tachyonic instability, the actual value of the coupling (and thus the level of initial fluctuations) has very little impact on the dynamics. The negative mass-squared implies that the equilibrated system will be in the Higgs phase. Second, in the 'overpopulation' scenario, we choose the scalar modes to be occupied up to a cutoff Q_s with a constant particle number n_0 satisfying n_0 λ ≫ 1. We choose m² = 0, such that the system will equilibrate to a state in the Coulomb phase at late times, but as we will see below, the system shows a transient behavior exhibiting signs of being in the Higgs phase before thermalisation. The initial gauge field modes are chosen empty. Hence, we expect early-time dynamics in the locally gauge-invariant model that resembles that of a purely complex scalar field theory.

C. Vortex defects

Pure scalar theories have recently been studied with respect to vortex formation and the interpretation of the corresponding field dynamics in terms of non-thermal fixed points [26][27][28][29]. The example of a non-relativistic Gross-Pitaevskii model served to illustrate that the evolution of a diluting vortex ensemble with vanishing total winding number corresponds to a self-similar process within which the system approaches a non-thermal fixed point, experiences critical slowing down and eventually moves away again from the critical point towards final thermalisation [17,29]. The vortices appearing in the superfluid thereby take the important role of slowing down the evolution and stabilizing the field configuration against equilibration. While scattering of vortices leads to a redistribution of the inter-vortex distances and initiates mutual defect annihilation, the final departure from the fixed point becomes possible only through the much weaker exchange of energy through sound modes propagating on the quasi-coherent background. Coupling the evolving scalar field to a gauge sector, we expect the gauge field to become relevant at some stage during and after the initial evolution which resembles the dynamics of the pure scalar theory. We specifically expect the scalar-gauge-field interaction to induce the typical counter-winding which is present in static solutions. In a pure scalar relativistic theory, Derrick's theorem [51] forbids the formation of stable vortex solutions. In the Higgs phase, the above dynamic equations possess, however, stationary Abrikosov/Nielsen-Olesen vortex solutions [52,53] with spatially asymptotic behavior
φ_as(x) = v e^{iϕ} [(x1 + i x2)/|x|]^n,   A_as = (i/e) ∇ ln φ_as,   (29)
for |x| → ∞, with a constant phase ϕ and winding number n, |n| ≥ 1. The asymptotic form (29) exhibits an important property of a vortex solution in a gauge theory. Asymptotically, the vortex winding carried by the scalar field is countered by that in the gauge field. This is how Derrick's theorem is circumvented and a finite, and even minimum, energy for the vortex solution is arranged for.
In the center of the vortex core the ansatz (29) is no longer valid. The Higgs field has a zero at this point; the vortex configuration therefore possesses a finite energy, and thus the appearance of dynamical vortices in (1 + d)-dimensional scalar theories with d > 1 is possible.

D. Magnetic confinement in the tachyonic scenario

In Sect. II we have discussed the occurrence and stability of vortices in the stationary limit and pointed to their importance for the scalar field dynamics. Starting from tachyonic initial conditions we find the scalar theory and the Higgs model to exhibit closely related dynamical evolution. In both cases, with and without gauge coupling, the phase distribution of the scalar field exhibits vortex defects with winding number n = ±1. The phase pattern shown in the right column of Fig. 1 for a case with gauge coupling exhibits vortex defects, i.e., singular points around which the colour-encoded phase angle wraps from −π to π. The formation of these vortex-type configurations follows very similar patterns in the pure scalar and the gauge theory shown here. However, after the formation of the vortices in the scalar field, the dynamics differs. In the Higgs theory we find a reaction to occur in the magnetic field, forming typical magnetic Aharonov-Bohm-type fluxes which in the stationary limit signal the formation of stable Nielsen-Olesen vortices in the coupled gauge-scalar system, as seen in the second row of Fig. 1. The respective left-column panels show the magnetic flux which, in the second row, is seen to become confined within the vortex cores. During the progressing evolution, we find the confined magnetic flux to dissipate, however, due to the formation of additional vortex-antivortex pairs in the vicinity of the initial vortices. These pairs seem to appear due to counter-flow being induced by the rising magnetic flux and because the hybrid scalar-gauge vortex has not yet adapted to a Nielsen-Olesen equilibrium shape. Note that for a pure scalar vortex, the field modulus asymptotically approaches the bulk as |φ_bulk| − |φ| ∼ 1/r², where r is the distance from the vortex core, while for a Nielsen-Olesen vortex both the Higgs and the gauge field approach their bulk moduli exponentially in r. Eventually, the system enters a phase of wildly fluctuating magnetic fields and a considerably changing phase of the Higgs field, see the bottom row of Fig. 1. This means that the intermediate magnetic confinement disappears. The reason for this behavior is that the initial energy is too high to allow for a stable configuration bearing Nielsen-Olesen vortices. To check whether the dynamics evolves to the expected equilibrium states we have also studied the evolution of a single scalar vortex with large winding number n = 5. We have found that it breaks up into five n = 1 vortices for ξ < ξ_LG, while it stays confined for ξ > ξ_LG, indicating a transition from an Abrikosov (type II) to a Meissner phase (type I) at ξ_LG ≈ 0.5. In both situations, type I and type II, the equilibrium magnetic field is confined, in analogy to the Meissner expulsion from a superconductor. During the ensuing dynamical evolution the phase of the scalar field develops strong variations across the sample. Considering, however, gauge-invariant correlations, we find that the coherence present in the early-time field configuration is preserved at later times. Specifically, the rôle of the covariance (22) in the scalar theory is taken over by its gauge-invariant counterpart defined in Eq. (23).
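To make the vortex identification concrete, the following is a minimal sketch of how defects with winding number n = ±1 could be located in a two-dimensional phase field, by summing wrapped phase differences around each plaquette. The function and variable names are assumptions of this illustration, not the authors' diagnostics:

import numpy as np

def plaquette_windings(phi):
    """Winding number of the phase of a complex scalar field phi (N x N, periodic)
    around each lattice plaquette; +1/-1 marks a vortex/antivortex."""
    theta = np.angle(phi)

    def wrap(d):
        # map phase differences into (-pi, pi]
        return (d + np.pi) % (2 * np.pi) - np.pi

    dx = wrap(np.roll(theta, -1, axis=0) - theta)   # link (i, j) -> (i+1, j)
    dy = wrap(np.roll(theta, -1, axis=1) - theta)   # link (i, j) -> (i, j+1)
    # circulation around the plaquette with lower-left corner (i, j)
    circ = dx + np.roll(dy, -1, axis=0) - np.roll(dx, -1, axis=1) - dy
    return np.rint(circ / (2 * np.pi)).astype(int)

# usage: w = plaquette_windings(phi); defect positions at np.argwhere(w != 0)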
One may speculate that these two observables, the covariance (22) and its gauge-invariant counterpart (23), behave similarly in both theories, which would imply the possibility of quasi-universal similarities between them. This will be discussed more in the following section.

E. Defects and electric confinement

A particularly interesting question which arises with regard to previous results for scalar models [28] is how the dynamics occurring after the initial evolution due to the instability is "distributed" between the scalar and the gauge sectors. In the following we will show that, for weak gauge coupling, i.e., a Landau-Ginzburg parameter ξ ≪ 1/2, the universal dynamics known to appear in a self-interacting Klein-Gordon model with global U(1) symmetry is recovered in the locally gauge-invariant theory. It is found, in particular, that the pattern of the modulus of the scalar field as well as the gauge-invariant pattern of relative phases between two given points in the system resembles the structure found in the purely scalar theory. This implies the spontaneous spatial separation of regions with opposite electric, i.e., Higgs, charge, separated by sharp boundaries, without the occurrence of (meta)stable confined magnetic fluxes. As we will discuss further below, the appearance of this pattern can be related to the approach of a non-thermal fixed point of the dynamical evolution towards equilibrium. Fig. 2 shows the time evolution of the phase angle of the complex scalar field starting from the overpopulation initial condition, with m² = 0 and a weak gauge coupling ξ = 0.025². Similarly to the tachyonic scenario discussed in the previous section, the pattern soon becomes dominated by strong phase rotations. At the same time, the modulus squared of the magnetic field develops strong spatial fluctuations, as is shown in a series of snapshots in Fig. 3. On the contrary, depicting the evolution of the relative phase distribution as obtained from the gauge-invariant correlator (23), we find, at later times, much weaker variations, see Fig. 4. The system rather quickly develops long-range phase coherence which is disrupted by defect lines separating large areas of almost equal phase. The phase jumps by approximately π at these defect lines. Along these defect lines the modulus squared of the Higgs field is found to be suppressed to near zero, see Fig. 5. Looking at the details of the time evolution we find that the phase jump by π seen in Fig. 4 is in fact the maximum of a phase jump oscillating sinusoidally in time. This is the result of the oppositely rotating phases on either side of the boundary. Due to an additional spatial oscillation of the phase along the lines, the observed phase jump and Higgs-field suppression near this kink propagates in time along the boundary between the oppositely charged regions, which are shown in Fig. 6. The propagating defect lines can be considered as elongated "vortex sheets". These sheets have the form of phase defect lines of finite length, delimited by semi-vortex configurations at both ends. The observed pattern is analogous to the formation of long-lived domains of opposite charge found in purely scalar simulations [28]. It is emphasized that, here, the gauge-field pattern is related to the phase pattern in the lower two panels, as is seen by comparing with Fig. 2. The clear phase pattern is only seen in the gauge-covariant quantities.
Stated differently, while the charge J⁰, in our temporal gauge, is identical to the pure Higgs charge, the spatial current distribution J receives a distinct contribution from the gauge field A. As we will show in Sect. III G, for stronger gauge coupling, ξ ≳ 1/2, this gauge-field contribution leads to the deterioration of the observed long-range charge separation. At late times, the charge difference gradually vanishes, see the lower right panel of Fig. 6, and only much weaker fluctuations of the Higgs field around zero remain, as is seen in Fig. 5. The system approaches a thermal configuration in the Coulomb phase, as will be seen in the following results for momentum-space spectra. Back to the intermediate-time pattern in the gauge field: given the almost flat, up to line defects, gauge-covariant phase distribution of the Higgs field seen in the lower panels of Fig. 4, it becomes clear that the gradient of the phase distribution seen in Fig. 2 reflects, at the later times (lower panels), the distribution of the vector potential A across the sample. Interestingly, this distribution shows clear vortex-type defects. These, however, do not give rise to Aharonov-Bohm phases when integrated around a defect, and therefore do not correspond to type II Abrikosov vortices enclosing magnetic flux. (Figure caption: At short times, Q_s t ≲ 10³, the pattern is very close to that shown in Fig. 2, i.e., the build-up of long-range coherence in the Higgs field is not yet modified by the gauge potential. At the later stage, sharp boundaries appear in the gauge-covariant phase where the phase jumps by approximately π. Along these defect lines the modulus squared of the Higgs field vanishes, cf. Fig. 5. The boundary lines separate regions of opposite phase rotation of G_U in time and thus of opposite charge of the Higgs field. At late times the defects vanish and the system approaches a thermal configuration. It is emphasized that the gauge-field contribution is relevant for the phase pattern in the lower two panels, as is seen by comparing with Fig. 2.) Comparing with Figs. 4 and 6, it becomes clear, however, that the position of these vortex defects is correlated with the pattern of the solitary defect lines which separate the regions of opposite charge, see also [54]. In summary, at intermediate times of the equilibration process, a comparatively stable, slowly evolving defect structure is present in the gauge field. It is clear that no smooth gauge transformation can be found to unwind the defects in the gauge field unless the vortices of opposite charge approach each other and mutually annihilate. This annihilation process is not seen in our simulations. Comparing Figs. 2 and 6 we find that even at late times, when the electric confinement has disappeared, the vortices in the gauge potential remain, with equal numbers and density. Comparing this to the cases of larger gauge coupling and thus ξ, just below as well as above the transition [54], one finds that the density of vortices increases while the size of the charged domains decreases. For ξ ≳ 1/2 there are no clearly separated defects visible anymore. These observations can be traced back to the slowly evolving electric fields between the regimes of opposite Higgs charge. As we chose temporal gauge, A0 = 0, the electric fields are proportional to the time derivative of the vector potential A. Hence, the spatial pattern of the vector potential reflects the time-integrated electric field pattern.
Since the latter, to a first approximation, evolves only slowly in time, and since at late times it simply degrades in strength, owing to the charge difference between the regimes levelling off [28] while keeping its pattern stable (see Fig. 6), the pattern in A also survives to late times and conserves the previously existing structure. We conclude that the vortex defects in the gauge field seen in our simulations do not directly give rise to physically visible magnetic confinement. They rather play an important part in the physics of electric confinement of nearly homogeneous charge distributions within sharply bounded regions. In contrast to this, as discussed in the previous section, Nielsen-Olesen vortices and anti-vortices confining magnetic flux are also possible if the fluctuations in the short-wavelength modes are suppressed, i.e., if there is considerably less energy in the system than in the realisations described here. We will discuss, in the next section, that it is the formation of the charge-separated regimes which reflects the approach of a non-thermal fixed point, rather than the presence of a dilute ensemble of Nielsen-Olesen (anti-)vortices, as is the case in a non-relativistic Gross-Pitaevskii system without gauge fields [26,27].

F. Relation to a non-thermal fixed point

The appearance of the above coherent spatial configurations is accompanied by characteristic gauge-invariant power-law momentum distributions of the coupled gauge-scalar fields. Fig. 7 shows the time evolution of the momentum distribution obtained by taking the Fourier transform of G_U(x, y) with respect to x − y and angle-averaging over the direction of these vectors. Following the early-time overpopulation at intermediate momenta, together with an underpopulation at large k, the spectrum develops power-law regimes at intermediate times. Comparing with the real-space patterns we find that this scaling, n(k) ∼ k^(−ζ), with a characteristic infrared power ζ = 3...3.5 for k ≲ k_L = 0.3 Q_s, corresponds to the onset of the formation of separate oppositely charged domains. In particular, the extent of the scaling regime in the infrared is roughly of the order of the inverse size of the domains. At large times, the domains gradually disappear, as does the infrared scaling, leaving an essentially thermalized classical ensemble of fluctuations which according to the Rayleigh-Jeans law scales as n(k) ∼ k^(−1). In Refs. [28,30] the infrared scaling was shown to signal the approach of a non-thermal fixed point. It was, in particular, argued in Ref. [30] that the strongest infrared scaling n(k) ∼ k^(−ζ) is related to the appearance of vortex-type defects. The scaling thereby can be explained as arising from the geometric nature of the phase-angle gradient around the vortex defect, which falls off as 1/r as a function of the distance r from the vortex core. A non-integer power of ζ = 3.5 had been reported in Ref. [28], which is likely to be explained as a mixed effect of an infrared power of ζ = 4 caused by vortices and ζ = 3 which indicates elongated soliton-like phase kinks as they are present in the domain walls. In our case, the ensemble of "vortex sheets" is expected to show both effects, the steeper vortex-induced power law in the infrared due to the vortex-type behavior at the ends of the sheets, in particular the contribution from short sheets and point-like defects, and the less steep power law due to the elongated, soliton-type nature of the sheets.
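As a simple illustration of how an infrared exponent of this kind could be extracted from an angle-averaged spectrum, the following sketch fits n(k) ∼ k^(−ζ) over the window k < k_L by a log-log linear regression; it is an illustrative aid with assumed variable names, not the authors' analysis code:

import numpy as np

def infrared_exponent(k, nk, k_max):
    """Fit n(k) ~ k**(-zeta) for 0 < k < k_max and return zeta."""
    mask = (k > 0) & (k < k_max) & (nk > 0)
    slope, _ = np.polyfit(np.log(k[mask]), np.log(nk[mask]), 1)
    return -slope

# usage, with (k, nk) an angle-averaged spectrum and k_L = 0.3 * Q_s:
# zeta = infrared_exponent(k, nk, 0.3 * Q_s)   # expect zeta ~ 3 ... 3.5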
We remark that a dilute ensemble of Nielsen-Olesen vortices and anti-vortices, which is possible in the Higgs phase for low total energies, does not give rise to the infrared scaling seen for the non-gauged non-relativistic Gross-Pitaevskii model [26,27], because the confined magnetic field and the Cooper current around the defects counteract each other. This leads to a current which decays exponentially as a function of the distance r from the core, as compared to the algebraic decay ∼ 1/r around a Gross-Pitaevskii vortex. We finally emphasize that the steep scaling with ζ = 3 (i.e., ζ = d + 1 in d dimensions) was predicted within quantum field theory, extending kinetic-theory results from weak-wave-turbulence theory to the infrared regime where Boltzmann-type kinetic equations break down [31,33,36]. Analyzing Kadanoff-Baym dynamic equations derived from a non-perturbatively resummed two-particle-irreducible (2PI) effective action, the above power law was derived as the momentum-space signature of a non-thermal fixed point [31] and of strong wave turbulence in the infrared limit of strong occupation numbers.

G. Strong gauge coupling

Fig. 8 shows the evolution of the same momentum spectra as above for different gauge couplings e. Increasing the gauge coupling, i.e., the Landau-Ginzburg parameter, to ξ > 1/2, which in equilibrium implies realisation of the type I Meissner phase, we find neither the characteristic infrared momentum scaling to appear during the chaotic evolution towards thermal equilibrium nor the clear domain formation in the gauge-invariant spatial phase pattern. The spectrum of the gauge fields for ξ = 1 is shown in the upper panel of Fig. 9. Since the gauge fields are unoccupied initially, they are excited by the scalar fields, which have their dominant contribution to the energy density of the system at the scale Q_s. One observes, at early times, a peak around k = 0.5 Q_s in the gauge-field spectrum, which slowly evolves into the thermal distribution n_k ∼ k^(−1). In the lower panel, we also show the dispersion of the gauge modes, changing quickly from an initial massive behavior into the expected ω_k ∼ k for high modes. In the infrared, a discrepancy from the free dispersion is visible. A gauge-invariant way to characterize the phase of the system, independent of the quasiparticle definition [55], may be possible in terms of Wilson and 't Hooft loop variables, which are beyond the scope of this study, or via the screening properties of (static) gauge fields, as described in the next section.

H. Screening dynamics of test charges

As indicated in Sect. II, one can characterize the Higgs phase with strong electromagnetic fluctuations by the screening of electric and magnetic charges. The screening behavior can be directly obtained by introducing test charges into the system. The test charges should be small enough that they do not influence the physical state of the system. We choose them, on the other hand, sufficiently large that the excess electric field can be detected clearly on the background of the fluctuations in the plasma. In the initial state, we introduce the test charges by solving the Poisson equation (30), adding a positive and a negative test charge at positions x_1 and x_2, respectively, such that the total charge in our box with periodic boundary conditions still vanishes. In the U(1)-symmetric model, in which the gauge fields have no self-interactions, the contribution of the initial plasma state can be added to E_tc directly.
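The following sketch illustrates, under simplifying assumptions, how a test-charge field of the kind just described can be initialised: it solves the lattice Poisson equation for a +q/−q pair on a periodic two-dimensional grid by Fourier transformation, in the spirit of Eq. (30). Grid size, charge magnitude and positions are illustrative choices, not the values used in the simulations.

```python
# Sketch: FFT solution of laplacian(phi) = -rho on a periodic 2D lattice for a
# pair of opposite test charges, followed by E = -grad(phi).
import numpy as np

def poisson_periodic(rho, dx=1.0):
    """Solve laplacian(phi) = -rho on a periodic lattice (total charge must vanish)."""
    N = rho.shape[0]
    rhok = np.fft.fft2(rho)
    k = 2 * np.pi * np.fft.fftfreq(N, d=dx)
    kx, ky = np.meshgrid(k, k, indexing="ij")
    # eigenvalues of the standard 5-point lattice Laplacian
    lap = -(4.0 / dx**2) * (np.sin(kx * dx / 2)**2 + np.sin(ky * dx / 2)**2)
    lap[0, 0] = 1.0                      # avoid division by zero for the zero mode
    phik = -rhok / lap
    phik[0, 0] = 0.0                     # fix the undetermined constant mode
    return np.real(np.fft.ifft2(phik))

N, q = 128, 0.01
rho = np.zeros((N, N))
rho[N // 4, N // 2] += q                 # positive test charge
rho[3 * N // 4, N // 2] -= q             # negative test charge (total charge = 0)
phi = poisson_periodic(rho)
Epar, Eperp = np.gradient(-phi)          # field components along axis 0 and axis 1
# component parallel to the line joining the charges, evaluated on that line:
print(Epar[N // 4:3 * N // 4, N // 2][:5])
```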
For SU(N) gauge fields, one has to solve the above equation for the full system, with the contribution of the plasma added to the r.h.s. of Eq. (30). The solution of Eq. (30) can be calculated easily by performing a Fourier transformation on the lattice. The solution of the EOM then supplies a time-dependent electric field whose divergence satisfies the Gauss constraint ∇·E(x, t) = ρ(x, t) + ρ_ex(x, t), where ρ(x, t) is the charge-density contribution of the Higgs field (15). This equation is satisfied automatically once the initial configuration is chosen to include a contribution according to Eq. (30). This is a consequence of the property of the EOM that the external charge (defined as the 'excess divergence' of the electric field) does not change as the fields evolve according to the EOM. Usually one uses the special case where ρ_ex(x, t) = 0, which just means there are no charges other than the particles in the plasma.

To measure the decay of the electric field, we insert two test charges of opposite sign onto the lattice. We then measure the component of the time-averaged electric field parallel to the line connecting the test charges, evaluated on the line connecting the two charges. In Fig. 10 we show the electric field of two test charges in the plasma after the Higgs field gets populated starting from the tachyonic initial condition. (Note that the time-averaged electric field points in the negative direction between the charges and therefore does not show up on the plot.) At t = 0 we see the electric field of the charges in vacuum, described by the Coulomb law, while at later times we observe screening, i.e., the electric field decays exponentially with distance. This is consistent with the expectation that the system ends up in the Higgs phase. Fig. 11 shows the electric field of the test charges developed from the overpopulation initial condition for the scalar fields, with initially unoccupied gauge fields, except for the field of the test charges. We find that, despite the unscreened field at t = 0, the electric fields become screened at intermediate times, while at late times the field no longer decays exponentially, implying that the fields are no longer screened. This suggests a transient condensation of the scalar fields, as has been seen for non-gauged systems [16], which leads to a screening of the electric fields at intermediate times. At late times, when the condensate has decayed, the fields become unscreened again.

I. Condensation

The overpopulation initial state of a pure scalar field theory leads to non-equilibrium transient condensation even though the theory has a non-negative mass term [16,28]. As we have seen in the previous section, the screening properties of the electric fields suggest that the same phenomena also happen in a gauged system. The question naturally arises whether one can see the analogous condensation in this system by studying the occupation numbers of the scalars directly. In the non-gauged scalar model, condensation can be detected through the volume independence of the quantity defined in Eq. (34). The local gauge symmetry allows the fields to have any direction in field space, which means that, after averaging over gauge rotations, this integral has only diagonal contributions left, i.e., those at x = y. As a consequence, the integral is simply proportional to the volume, and thus one would argue that no condensation is possible. We can make this quantity gauge invariant by introducing a parallel transporter U(x, y), as in Eq. (35). This corresponds to using the particle number definition in Eq. (26).
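To make the idea of the parallel-transported two-point function more concrete, here is a schematic U(1) lattice example: a transporter built from link phases along a staircase path is inserted between the scalar field at two sites, and a random gauge rotation is applied to check invariance numerically. The conventions (orientation of the transporter, names, normalisation) are assumptions for illustration and need not coincide with the definition in Eq. (35).

```python
# Schematic U(1) example of a parallel-transported two-point function.
import numpy as np

def transporter(theta, x, y):
    """Product of U(1) link phases along a staircase path from x to y.
    theta[mu, i, j] is the angle of the link leaving site (i, j) in direction mu.
    For simplicity this sketch assumes y[0] >= x[0] and y[1] >= x[1]."""
    U = 1.0 + 0.0j
    i, j = x
    while i < y[0]:
        U *= np.exp(1j * theta[0, i, j]); i += 1
    while j < y[1]:
        U *= np.exp(1j * theta[1, i, j]); j += 1
    return U

def transported_two_point(phi, theta, x, y):
    """phi(x) * U(x->y) * conj(phi(y)); with the link transformation below this
    combination is invariant under local gauge rotations."""
    return phi[x] * transporter(theta, x, y) * np.conj(phi[y])

# numerical check of gauge invariance on a random configuration
rng = np.random.default_rng(1)
N = 16
phi = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
theta = rng.uniform(-np.pi, np.pi, size=(2, N, N))

alpha = rng.uniform(-np.pi, np.pi, size=(N, N))       # local gauge rotation
phi_g = np.exp(1j * alpha) * phi
theta_g = theta.copy()
theta_g[0] += np.roll(alpha, -1, axis=0) - alpha       # theta_mu += alpha(x+mu) - alpha(x)
theta_g[1] += np.roll(alpha, -1, axis=1) - alpha

x, y = (2, 3), (7, 9)
print(np.allclose(transported_two_point(phi, theta, x, y),
                  transported_two_point(phi_g, theta_g, x, y)))   # True
```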
The gauge-invariant definition introduces a path dependence which can become important when the gauge-field fluctuations are strong. In practice we choose the shortest possible path between the two end points, i.e., in general, a zig-zag path on the lattice. With the gauge-invariant definition of the two-point function, it is possible to recover the scenario which is realized for non-gauged scalars by considering ξ < 1/2. In Fig. 12 the two-point functions introduced above are shown for a simulation with ξ = 0.09. One observes that, while the observable (34) fails to show condensation, the gauge-invariant quantity (35) confirms that transient condensation is present in the system. Note the relatively large deviation for the largest system size N = 256. This occurs because the coherence of the dynamics of the system is limited by the speed of light, and thus the build-up of coherence within a given time is only possible on scales smaller than the light propagation distance for that time. This means that exact volume independence is satisfied only for small volumes. Larger volumes show a large growth, but the condensate starts to decay before volume independence is reached. In Fig. 13 we show the gauge-invariant two-point function (35) for couplings ξ = 0.36 and ξ = 1.0. One observes that, as the photons become more relevant, the volume independence of the condensate is no longer satisfied. It is remarkable that the quantity still shows a strong growth in the first part of the time evolution, but cannot reach volume independence. Whether this signals that condensation does not happen because of stronger decay processes, or whether too-strong contributions to Eq. (35) from the gauge fields cause it to fail to signal condensation, is an open question. Note that the zero-momentum two-point function shows a strong growth at short times, suggesting a strong IR particle cascade, independent of the coupling ξ, see Fig. 14.

IV. CONCLUSIONS

In this paper we have investigated the far-from-equilibrium dynamics of the Abelian Higgs model in two spatial dimensions. Using classical simulations we have studied initial conditions triggering a tachyonic instability as well as initial conditions with a strong overpopulation in the IR modes of the scalar field. We have compared the equilibration dynamics of the system for different choices of the Landau-Ginzburg parameter which, in equilibrium, determines whether the system in the phase of spontaneous U(1)-symmetry breaking is of type I (Meissner effect with magnetic fields expelled from the superconductor) or of type II (appearance of Abrikosov vortices within which confined magnetic flux can intrude into the superconducting region). For sufficiently weak gauge coupling, corresponding to a type II equilibrium Higgs phase, we find transient magnetic or electric confinement, topological defect formation in the gauge and Higgs fields, and turbulent scaling, even if the system at large times reaches an equilibrium state in the U(1)-symmetric or Coulomb phase. We find, in particular, that electric charge separation appears in the system, which has no net total charge, and that the regions with almost homogeneous charge density are separated by sharp soliton-like boundary walls. These boundaries are caused by vortex sheets appearing in the gauge-invariant Higgs phase correlator and are spatially correlated with vortex configurations in the gauge field which cannot be removed by a smooth gauge transformation.
Hence, the topological structure present in the gauge field is crucially relevant for the appearance and dynamics of transient confinement in the electromagnetic quantities. We have shown that the appearance of these defect structures is intimately related to the approach of a non-thermal fixed point [31], showing up in universal infrared momentum scaling of the gauge-invariant Higgs correlation function. This relation is consistent with earlier studies of non-equilibrium non-relativistic [26,27,29,30] as well as relativistic [28] scalar field theories, in particular the dynamics of defect formation during the approach of a non-thermal fixed point [29]. The build-up of topological configurations corresponds to a shift of occupation numbers towards the infrared and causes an infrared strong-wave-turbulence cascade [16,17,33]. We emphasise that the relation between the appearance of topological defects and non-thermal fixed points was also found in 3+1-dimensional theories [26][27][28]. We furthermore find transient Bose-Einstein condensation, as proposed for a Glasma in Ref. [10]. In Ref. [16], such condensation was shown, for the un-gauged version of the theory we study here, to result from turbulent cascades. According to our results, for weak gauge coupling ξ = 6e²/λ ≲ 0.5, using a gauge-invariant definition of particle numbers (26), one recovers the known behavior of the scalar theory. We have also studied the screening of static electric fields as a signal of the transient behavior in the system, by inserting test charges into the plasma and measuring the decay of their fields. The results suggest that the overpopulation scenario indeed leads to transient condensation and to a non-thermal fixed point being approached. In summary, condensation, turbulence, the appearance of topological defects in the gauge fields, and confinement are all found to be closely related.
Fecal Short Chain Fatty Acids and Dietary Intake in Italian Women With Restrictive Anorexia Nervosa: A Pilot Study Nutritional disorders such as Anorexia Nervosa (AN) can shape the composition of gut microbiota and its metabolites such as short chain fatty acid (SCFA). This study aims to compare fecal SCFA along with dietary intake of women with restrictive AN (r-AN = 10) and those of sex-matched lean controls (C = 8). The main fecal short chain fatty acids (SCFA) were assessed by gas chromatography equipped with a flame ionization detector. All participants completed 7-day food record and underwent indirect calorimetry for measuring resting energy expenditure (REE). Butyrate and propionate fecal concentrations were significantly reduced in r-AN patients compared to controls. The intake of carbohydrate and fat was significantly lower in r-AN patients than controls as well as energy intake and REE; whereas the amount of protein and fiber did not differ between groups. These preliminary results showed that r-AN patients had a reduced excretion of fecal SCFA, likely as a mechanism to compensate for the lower energy and carbohydrate intake observed between groups. Therefore, further studies need to be performed in patients with AN to explore the link between nutritional disorders, gut microbiota and its metabolites. INTRODUCTION Nutritional disorders such as anorexia nervosa (AN) can shape the composition and activity of the gut microbiota (1). Restrictive Anorexia nervosa (r-AN) is the most serious clinical subtype of AN characterized by severe dietary restriction and/or other weight loss behaviors (2), but the pathophysiological mechanisms are still unclear. Generally, patients with r-AN have insufficient energy intake with inadequate intake of certain macronutrients and micronutrients (3); however, malnutrition secondary to eating disorders develops slowly over time due to adaptive metabolic mechanisms to chronic underfeeding (4). A growing body of evidence recognizes the role of gut metabolites in affecting host metabolism and appetite through a variety of pathways (5), many of which are dependent on the diet of host, such as short chain fatty acids (SCFA) (6). SCFA are mainly produced by the fermentation of indigestible carbohydrates, especially dietary fibers and resistant starch, in the large intestine and are an important source of energy for colonocytes (7). The most abundant are acetic, propionic and butyric acids, representing 90-95% of the total SCFA (8). However, the final balance of SCFA production can be affected by some mechanisms such as the bacterial cross-feeding (9) beside substrate cross-feeding. Previous studies conducted in subjects with r-AN reported specific alteration of the gut microbiota and its metabolites when compared to both obese and lean subjects (10); specifically, SCFA increased in overweight/obese individuals (11) and decreased in AN subjects (12)(13)(14) than lean subjects. Several mechanisms as colonic SCFA absorption, colonic transit time, and differences in dietary intake and/or in colonic microbiota can modulate fecal SCFA concentration. Hence, the aim of this pilot study was to assess fecal SCFA concentration along with dietary intake, collected by 7day food records, in women with r-AN compared to lean subjects. Participants Recruitment The present study was part of an observational study that explored gut microbiota and its metabolites in different diseases condition (15). 
In this pilot study, we compared fecal SCFA and dietary intake of r-AN patients to a control group, asking them to collect their feces after recording food intake for 7 consecutive days. Fourteen young women with a diagnosis of r-AN, according to the Diagnostic and Statistical Manual of Mental Disorders (DSM-V), were screened for recruitment in this study from outpatient visits at the Internal Medicine and Clinical Nutrition Unit of Federico II University Hospital in Naples, Italy. At enrollment, 4 participants dropped out for personal reasons; therefore, 10 r-AN women were finally recruited. On the other hand, 10 sex- and age-matched healthy subjects were screened for the control group (C), but only 8 agreed to participate in this study. Participants with a history of digestive disease such as inflammatory bowel disease, use of antibiotics or probiotics within 3 months of study participation, smoking habits, intensive physical activity, or use of laxatives during the week before were excluded. The protocol was approved by the Local Ethical Committee of the Federico II University Hospital (Prot. Numb. 155/14). All subjects gave written informed consent in accordance with the Declaration of Helsinki.

Energy Intake and Nutrients Assessment

Participants were instructed by a registered dietitian to fill in a food diary for 7 consecutive days and were trained to estimate food portions by using household measurement tools. Specifically, participants were taught how to use tools such as bowls, cups, spoons, and plates to quantify food portions. In addition, pictures of varying portion sizes (small, medium, and large) of the most widely consumed foods were shown to participants. A dedicated dietitian reviewed the completed 7-day food diary upon return for clarification of portions, missing or unclear data, and food preparation methods. All diaries were calculated using the WINFOOD database (3.4 version; Medimatica, Teramo, Italy).

Resting Energy Expenditure (REE) Measurement

REE was measured by indirect calorimetry (16) (Vmax29, Sensor Medics, Anaheim, California) with a ventilated hood and canopy system. Measurements of REE were made early in the morning, and patients were instructed to follow a standardized fasting procedure on the day before (i.e., abstention from alcohol and intense physical activity). The indirect calorimeter was checked by burning ethanol, and the oxygen and carbon dioxide analyzers were calibrated using nitrogen and standardized gases (mixtures of nitrogen, carbon dioxide and oxygen) before every run (17). Patients were asked to lie down for at least 10 min in a quiet environment at a room temperature of 23–25 °C; then oxygen consumption and carbon dioxide production were determined for 30 min. EE was then calculated using the abbreviated Weir formula, neglecting protein oxidation (18).

Short Chain Fatty Acid (SCFA) Measurement

Fecal samples were collected from all participants, stored in sterile, hermetically sealed plastic boxes and processed for the analysis. One gram of feces was suspended in 5 ml of distilled water and mixed for 5 min. The fecal sample was homogenized in perchloric acid (0.15 mol/L) and centrifuged at 4,000 rpm for 10 min. The aqueous fecal phase (980 µl) was collected and 20 µl of methacrylic acid (2.5 mmol/mL) was added.
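Returning briefly to the REE measurement above: the abbreviated Weir equation converts the measured gas exchange directly into an energy expenditure, neglecting protein oxidation. A minimal sketch follows, with made-up VO2/VCO2 values rather than patient data.

```python
# Illustration of the abbreviated Weir equation (protein oxidation neglected).
# The example VO2/VCO2 values are invented, not measurements from this study.
def ree_weir(vo2_l_min, vco2_l_min):
    """Resting energy expenditure in kcal/day from O2 consumption and CO2
    production, both expressed in litres per minute."""
    kcal_per_min = 3.941 * vo2_l_min + 1.106 * vco2_l_min
    return kcal_per_min * 1440           # minutes per day

print(round(ree_weir(0.16, 0.13)))       # ~1115 kcal/day for these example values
```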
The concentrations of SCFA (butyrate, acetate, and propionate) were measured using gas chromatography with a flame ionization detector (Hewlett Packard 5890 Series II) (19), by injecting 1 µl of stool sample into the capillary column (Supelco SPB™, 30 m × 0.25 mm × 0.25 mm), and data were evaluated using an integrator (Hewlett Packard 3396 Series II).

Statistical Analysis

All data are shown as mean ± standard deviation (SD), unless otherwise stated. All dependent variables were checked for normal distribution by the Shapiro-Wilk test. If the distribution of a variable was skewed, it was log-transformed prior to analyses and back-transformed before presentation. Differences in SCFA were analyzed using Wilcoxon's rank test. Pearson's correlations were used to test associations between variables with normal distributions; otherwise, Spearman's correlation was applied. Statistical analyses were performed using SPSS version 18.0 (Chicago, IL) and the significance level was set at p < 0.05.

RESULTS

All recruited participants completed the study; however, 2 out of 8 control subjects did not deliver a food diary. The groups differed in body weight (r-AN = 37.3 ± 3.8 vs. C = 55.8 ± 3.4 kg; p = 0.01) and BMI (r-AN = 14.5 ± 1 vs. C = 22.1 ± 1.9 kg/m²; p = 0.01) and, even though the range of age was similar between groups (20–32 years), age also differed (r-AN = 23.5 ± 2.8 vs. C = 29.2 ± 2.9 years; p = 0.02). As expected, the total energy intake was significantly lower in r-AN patients than in C, and macronutrient composition differed significantly between groups, as reported in Table 1. r-AN patients had a lower intake of carbohydrate and fat compared to controls, while the intake of both protein and dietary fiber was similar. Likewise, trace element and vitamin intakes did not differ significantly between groups. None of the participants reported any alcohol consumption. As expected, REE measured by indirect calorimetry was lower in r-AN than in C, but these values were consistent with their self-reported energy intake. The concentrations of fecal SCFA are reported in Figure 1, showing that butyrate and propionate were significantly lower in r-AN patients than in controls, whereas acetate and total SCFA concentration did not differ. Correlation tests were run for weight, dietary intake and the SCFA analyzed in the study, showing that weight and BMI were positively correlated with both butyric (r = 0.76 and r = 0.72, p < 0.01) and propionic acid (r = 0.71 and r = 0.77; p < 0.01). Regarding dietary intake, the amounts of fat and starch were directly correlated with propionic (r = 0.70 and r = 0.59; p < 0.01) and acetic acid (r = 0.70 and r = 0.61; p < 0.01), while butyrate was correlated with carbohydrates and oligosaccharides (r = 0.70 and r = 0.61; p < 0.01).

DISCUSSION

The aim of this pilot study was to assess fecal SCFA and self-reported dietary intake in a small group of r-AN patients compared to control subjects. Our preliminary findings showed that fecal butyrate and propionate concentrations as well as dietary intake differed between the two groups. Anorexia is characterized by altered gut microbiota composition and activities. In fact, several studies have so far explored the fecal excretion of SCFA in r-AN patients compared to healthy participants, showing reduced fecal concentrations of mainly propionic and butyric acid (12)(13)(14), in accordance with our results. On the other hand, acetic acid was the most abundant and did not differ between groups. Interestingly, Borgo et al.
(14) assessed SCFA concentrations in plasma as well, reporting that acetate was the only metabolite found; however, no significant relationship was observed between systemic and fecal concentrations. Fecal SCFA concentration can be influenced by nutrients availability of the diet (20). Therefore, a hypocaloric diet typically characterized by high protein and low carbohydrate intake could result in lowering fecal SCFA levels in patients with r-AN (12)(13)(14); likely by developing improved mechanisms in absorption and digestion of nutrients in the gut (21) and/or prolonging the colonic transit time due to constipation (22). Data, obtained by 7-day food records, revealed that diet in r-AN patients was low in fat and carbohydrates, but not in protein and dietary fibers, in comparison to control subjects, as already reported (3,13). Although dietary fibers are the main substrate for bacteria fermentation in the colon, it is likely that also other indigestible dietary substrates reached the colon in much smaller amount (10), due to the overall food reduction (13) that occurred in undernourished patients. Furthermore, it has been reported that a lower amount of carbohydrate, specifically starch, in the diet significantly decreased numbers of the butyrate-producing species, with a concomitant reduction in butyrate formation and excretion in the feces (23). The present study has several limitations. First and most important, the sample size is small and may therefore affect our results; and another limitation is that both groups differ for the mean age, although they had the same age range (20-32 years). In conclusion, these preliminary results confirmed that women with r-AN show a reduced excretion of fecal butyrate and propionate, most likely to compensate for the lower energy and carbohydrate intake, as reported previously. However, these results need to be further investigated to clarify the link between nutritional disorders, gut microbiota and its metabolites. AUTHOR CONTRIBUTIONS FP, FC, and ES designed the study. CDC, ES, and MN collected the data. MM and IC analyzed the data. EF, IC, and LS interpreted the data. IC and ES wrote the manuscript. All authors participated to the discussion of results and commented the manuscript and agreed to be accountable for all aspects of the work.
Valuation Networks and Conditional Independence

Valuation networks have been proposed as graphical representations of valuation-based systems (VBSs). The VBS framework is able to capture many uncertainty calculi including probability theory, Dempster-Shafer's belief-function theory, Spohn's epistemic belief theory, and Zadeh's possibility theory. In this paper, we show how valuation networks encode conditional independence relations. For the probabilistic case, the class of probability models encoded by valuation networks includes undirected graph models, directed acyclic graph models, directed balloon graph models, and recursive causal graph models.

INTRODUCTION

Recently, we proposed valuation networks as a graphical representation of valuation-based systems [Shenoy 1989, 1992a]. The axiomatic framework of valuation-based systems (VBS) is able to represent many different uncertainty calculi such as probability theory [Shenoy 1992a], Dempster-Shafer's belief-function theory [Shenoy 1993], Spohn's epistemic belief theory [Shenoy 1991a, 199?a], and Zadeh's possibility theory [Shenoy 1992b]. In this paper, we explore the use of valuation networks for representing conditional independence relations in probability theory and in other uncertainty theories that fit in the VBS framework.

Conditional independence has been widely studied in probability and statistics [see, for example, Dawid 1979, Spohn 1980, Lauritzen 1989, Pearl 1988, and Smith 1989]. Pearl and Paz [1987] have stated some basic properties of the conditional independence relation. (These properties are similar to those stated first by Dawid [1979] for probabilistic conditional independence, those stated by Spohn [1980] for causal independence, and those stated by Smith [1989] for generalized conditional independence.) Pearl and Paz call these properties 'graphoid axioms,' and they call any ternary relation that satisfies these properties a 'graphoid.' The graphoid axioms are satisfied not only by conditional independence in probability theory, but also by vertex separation in undirected graphs (hence the term graphoids) [Pearl and Paz 1987], by d-separation in directed acyclic graphs, by partial correlation [Pearl and Paz 1987], by embedded multi-valued dependency models in relational databases [Fagin 1977], by conditional independence in Spohn's theory of epistemic beliefs [Spohn 1988, Hunter 1991], and by qualitative conditional independence [Shafer, Shenoy and Mellouli 1987]. Shenoy [1991b, 1992c] has defined conditional independence in VBSs and shown that it satisfies the graphoid axioms. Thus the graphoid axioms are also satisfied by the conditional independence relations in all uncertainty theories that fit in the VBS framework, including Dempster-Shafer's belief-function theory and Zadeh's possibility theory.

The use of undirected graphs and the use of directed acyclic graphs to represent conditional independence relations in probability theory have been extensively studied [see, for example, Darroch, Lauritzen and Speed 1980, Lauritzen 1989a,b, Wermuth and Lauritzen 1983, Kiiveri, Speed and Carlin 1984, Pearl and Paz 1987, Pearl, Geiger and Verma 1990, Lauritzen and Wermuth 1989, Frydenberg 1989, and Wermuth and Lauritzen 1990]. The use of graphs to represent conditional independence relations is useful since an exponential number of conditional independence statements can be represented by a graph with a polynomial number of vertices.
In undirected graphs (UGs), vertices represent variables, and edges between variables represent dependencies in the following sense. Suppose a, b, and c are disjoint subsets of variables. The conditional independence statement 'a is conditionally independent of b given c,' denoted by a ⊥ b | c, is represented in an UG if every path from a variable in a to a variable in b contains a variable in c, i.e., if c is a cut-set separating a and b. One can also represent a conditional independence relation by a set of UGs [Paz 1987]. A conditional independence relation is represented by a set of UGs if each independence statement in the relation is represented in one of the UGs in the set. In general, one may not be able to represent a conditional independence relation that holds in a probability distribution by one UG. Some probability distributions may require an exponential number of UGs to represent the conditional independence relation that holds in it [Verma 1987].

In directed acyclic graphs (DAGs), vertices represent variables, and arcs represent dependencies in the following sense. Pearl [1988] has defined d-separation of two sets of variables by a third. Suppose a, b, and c are disjoint subsets of variables. We say c d-separates a and b iff there is no path from a variable in a to a variable in b along which (1) every vertex with an outgoing arc is not in c, and (2) every vertex with incoming arcs is either in c or has a descendant in c. The definition of d-separation takes into account the direction of the arcs in a DAG. The conditional independence statement a ⊥ b | c is represented in a DAG if c d-separates a and b. One can also represent conditional independence relations by a set of DAGs [Geiger 1987]. A conditional independence relation is represented by a set of DAGs if it is represented in one of the DAGs in the set. As in the case of UGs, one may not be able to represent a conditional independence relation that holds in a probability distribution by one DAG. Some probability distributions may require an exponential number of DAGs to represent the conditional independence relations that hold in it [Verma 1987].

Shafer [1993a] has defined directed balloon graphs (DBGs) that generalize DAGs. A DBG includes a partition of the set of all variables. Each element of the partition is called a balloon. Each balloon has a set of variables as its parents. The parents of a balloon are shown by directed arcs pointing to the balloon. A DBG is acyclic in the same sense that DAGs are acyclic. A DBG implies a probability model consisting of a conditional for each balloon given its parents. A DAG may be considered as a DBG in which each balloon is a singleton subset. Independence properties of DBGs are studied in Shafer [1993b].

UGs and DAGs represent conditional independence relations in fundamentally different ways. There are UGs such that the conditional independence relation represented in an UG cannot be represented by one DAG. And there are DAGs such that the conditional independence relation represented in a DAG cannot be represented by one UG. In fact, Ur and Paz [1991] have shown that there is an UG such that representing the conditional independence relation in it requires an exponential number of DAGs, and there is a DAG such that representing the conditional independence relation in it requires an exponential number of UGs.
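The separation test for undirected graphs described above is easy to mechanise: a ⊥ b | c holds in the UG exactly when removing the vertices in c disconnects a from b. A minimal sketch, with a made-up example graph, is:

```python
# Separation (cut-set) test in an undirected graph, the graphical criterion
# for a ⊥ b | c in UG models. The example graph below is illustrative only.
from collections import deque

def separated(adj, a, b, c):
    """True if c separates a from b in the undirected graph `adj` (dict of sets)."""
    blocked = set(c)
    frontier = deque(v for v in a if v not in blocked)
    seen = set(frontier)
    while frontier:
        v = frontier.popleft()
        if v in b:
            return False                 # found a path from a to b avoiding c
        for w in adj[v] - blocked:
            if w not in seen:
                seen.add(w)
                frontier.append(w)
    return True

# Example: chain W - V - X with an extra edge V - Z
adj = {"W": {"V"}, "V": {"W", "X", "Z"}, "X": {"V"}, "Z": {"V"}}
print(separated(adj, {"W"}, {"X"}, {"V"}))   # True:  W ⊥ X | V
print(separated(adj, {"W"}, {"X"}, {"Z"}))   # False: path W-V-X avoids Z
```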
In valuation networks (VNs), there are two types of vertices. One set of vertices represents variables, and the other set represents valuations. Valuations are functions defined on variables. In probability theory, for example, a valuation is a factor of the joint probability distribution. In VNs, there are edges only between variables and valuations. There is an edge between a variable and a valuation if and only if the variable is in the domain of the valuation. If a valuation is a conditional for r given t, then we represent this by making the edges between the conditional and the variables in r directed (pointed toward the variables). (Conditionals are defined in Section 2 and correspond to conditional probability distributions in probability theory.) Thus VNs explicitly depict a factorization of the joint valuation. Since there is a one-to-one correspondence between a factorization of the joint valuation and the conditional independence relation that holds in it, VNs also explicitly represent conditional independence relations.

The class of probability models included by VNs includes UGs, DAGs and DBGs. Given an UG, there is a corresponding VN such that all conditional independence statements represented in the UG are represented in the VN. Given a DAG, there is a corresponding VN such that all conditional independence statements represented in the DAG are represented in the corresponding VN. And given a DBG, there is a corresponding VN such that all conditional independence statements represented in the DBG are represented in the corresponding VN.

Besides UGs, DAGs, and DBGs, there are other graphical models of probability distributions. Kiiveri, Speed, and Carlin [1984] have defined recursive causal graphs (RCGs) that generalize DAGs and UGs. Recursive causal graphs have two components, an UG on one subset of variables (exogenous), and a DAG on another subset of variables (endogenous). Given an RCG, there is a corresponding VN such that all conditional independence statements represented in the RCG are represented in the VN. Lauritzen and Wermuth [1989] and Wermuth and Lauritzen [1990] have defined chain graphs that generalize recursive causal graphs. Conditional independence properties of chain graphs have been studied by Frydenberg [1990]. It is not clear to this author whether VNs include the class of probability models captured by chain graphs.

Jirousek [1991] has defined decision tree models of probability distributions. These models are particularly expressive for asymmetric conditional independence relations, i.e., relations that hold only for some configurations of the given variables and not for others. VNs, as defined here, do not include the class of models captured by decision trees. Heckerman [1990] has defined similarity networks as a tool for knowledge acquisition. Like Jirousek's decision tree models, similarity networks allow representations of asymmetric conditional independence relations. VNs, as defined here, do not include the class of models captured by similarity networks.

An outline of this paper is as follows. Section 2 sketches the VBS framework and the general definition of conditional independence. The definition of conditional independence in VBS is a generalization of the definition of conditional independence in probability theory. Most of the material in this section is a summary of [Shenoy 1991b, 1992c]. Section 3 describes the valuation network representation and shows how conditional independence relations are encoded in valuation networks. Section 4 compares VNs to UGs, DAGs, DBGs, and RCGs.
Finally, Section 5 contains some concluding remarks. VBSs AND CONDITIONAL INDEPENDENCE In this section, we briefly sketch the axiomatic framework of valuation-based systems (VBSs). Details of the ax iomatic framework can be found in [Shenoy 199 lb, 1992c]. In the VBS framework, we represent knowledge by enti ties called variables and valuations. We infer conditional independence statements using three operations called combination, marginalization, and removal. We use these operations on valuations. Variables. We assume there is a finite set$ whose el ements are called variables. Variables are denoted by up per-case Latin alphabets, X, Y, Z, etc. Subsets of $ are denoted by lower-case Latin alphabets, r, s, t, etc. Valuations. For each s � $, there is a set 'lJ •. We call the elements of 'lJ s valuations for s. Let 'lJ denote u { 'lJ s I s � $ } , the set of all valuations. If cr e 'lJ •• then we say s is the domain of u. Valuations are denoted by lower-case Greek alphabets, p, u, 't, etc. Valuations are primitives in our abstract framework and, as such, require no definition. But as we shall see shortly, they are objects that can be combined, marginalized, and removed. Intuitively, a valuation for s represents some knowledge about variables in s. Zero Valuations. For each s �$,there is at most one valuation 1;. e 'lJ s called the zero valuation for s. Let Z denote {I;. Is�$}, the set of all zero valuations. We call valuations in 'lJ -z nonzero valuations. Intuitively, a zero valuation represents knowledge that is internally inconsistent, i.e., knowledge that is a contradic tion, or knowledge whose truth value is always false. The concept of zero valuations is important in the theory of consistent knowledge-based systems [Shenoy 1990b]. Proper Valuations. For each s � $, there is a subset � s of 'lJ .-{1;.}. We call the elements of� .proper valua tions for s. Let� denote u{ � s I s � $ }, the set of all proper valuations. Intuitively, a proper valuation repre sents knowledge that is partially coherent. By coherent knowledge, we mean knowledge that has well-defmed se mantics. Normal Valuations. For each s � $, there is another subset ')'\. s of 'lJ 8-{ 1;.}. We call the elements of ')'\. s nor mal valuations for s. Let 'J1. denote u{ ')'\. s I s � $}, the set of all normal valuations. Intuitively, a normal valua tion represents knowledge that is also partially coherent, but in a sense that is different from proper valuations. We call the elements of �n'J\. proper normal valuations. Intuitively, a proper normal valuation represents knowl edge that is completely cohe�ent, i.e., knowledge that has well-defmed semantics. Intuitively, combination corresponds to aggregation of knowledge. If p and cr are valuations for rands represent ing knowledge about variables in r and s, respectively, then pEElu represents the aggregated knowledge about vari-abies in rus. It follows from the defmition of combination that the set 'Jl..u { 1;.} together with the combination operation E9 is a commutative semigroup. Identity Valuations. We assume that for each s � $, the commutative semigroup ')'\. ,u { 1;.} has an identity de noted by ls· In other words, there exists 1.s e 'J\. ,u{ 1;.} such thatforeachcre 'J\.8u{l;.}, cr Ea l,;= cr. Notice that a com mutative semigroup may have at most one identity. Intuitively, identity valuations represent knowledge that is completely vacuous, i.e., they have no substantive con tent. Marginalization. 
We assume that for each nonempty s � $, and for each X e s, there is a mapping J.. If we regard marginalization as a coarsening of a valuation by deleting variables, then we assume that the order in which the variables are deleted does not matter. Also we assume that marginalization preserves the coher ence of knowledge. Suppose p E 'lf rand 0' E 'lJ s· Suppose X e: r, and X E s. where p = Gl{pi I Y e rd. and r= u{ri I Y e rd. After fu sion, the set of valuations is changed as follows. All val uations whose domains include Yare combined, and the resulting valuation is marginalized such that Y is elimi nated from its domain. The valuations whose domains do not include Y remain unchanged. The following lemma describes an important consequence of the fusion opera tion. We assume that the removal operation is an "inverse " of the combination operation in the sense that arithmetic di vision is inverse of arithmetic multiplication, and in the sense that arithmetic subtraction is inverse of arithmetic multiplication. Conditional Independence. Suppose t e en. w• and suppose r, s, and v are disjoint subsets of w. We say ra nd s are conditionally independent given v with respect to -r, written as r Lc s I v, if and only if t(rusuv) = a.vv EBa.uv, where <X,vv e 'If ruv• and !X.suv e 'If suv· When it is clear that all conditional independence state ments are with respect tot, we simply say 'r and s are conditionally independent given v' instead of 'rand s are conditionally independent given v with respect tot,' and use the simpler notation r ..L s I v instead of r Lc s I v. Also, if v = 0, we say 'rand s are independent' instead of 'rand s are conditionally independent given 0' and use the simpler notation r ..Ls instead of r ..L s 1 0. Shenoy [1991b] shows that the conditional independence relation generalizes the conditional independence relation in probability theory. In particular, all characterizations of it given by Dawid [1979] (including the graphoid axioms) follow from the above definition. VALUATION NETWORKS In this section, we defme a valuation network representa tion of a VBS and explain how a valuation network en codes conditional independence statements. A valuation network (VN) consists of a four-tuple { $, 'If, e, et } where e � 'lfx$, and et �'lf x$. We call the elements of$ vertices and they represent variables. We call the elements of 'If nodes and they represent valua tions. We call the elements of e edges, and they denote ei ther domains of valuations, or tails of domains of condi tionals. We call the elements of et arcs and they denote the heads of domains of conditionals. In VNs, vertices are denoted by circles, nodes by diamonds, edges by lines joining the respective nodes and vertices, and arcs by a di rected edge pointing to the corresponding vertex. When a VN contains conditionals, we will assume that all condi tionals are with respect to valuation t obtained by com bining all valuations in the network. Example 3. Consider a VBS consisting of variables X1o ... , X10, and conditionals a.1 for X1 given 0, Uz for {Xz, X3) given X1o a.3 for )4 given X2, �for {Xs, X6, X,} given X2, a5 for X8 given X3, � for X9 given Xs, and a.7 for X10 given {X6, X7}. Figure 3 shows the VN for this VBS. If t denotes a.1E9 ... E9a.7, then a.1 = t(Xl), Uz = t(X2, X31 X1), a.3 = t(X4 I Xz}, a.4 = t(Xs. X6, X1l Xz}, a5 = t(X81 X3), a.6 = t(X91 Xs). a.7 = t(XIO I X6, X7). Example 4. 
Consider a VBS consisting of variables V, W, X, Y and Z, valuations a for (V, W}, and 13 for {V, X}, and conditionals yfor Y given {W, X}, and() for Z given X. Figure 4 shows the VN for this VBS. If t de notes a.EBI3EayE9CS, then a.EBI3 = t(V, W, X), y = t(Y I W, X), and () = t(Z I X). Conditional Independence in Valuation Networks. How is conditional independence encoded in VNs? Let us exam ine the defmition of conditional independence graphically. Suppose r, s, and v are disjoint subsets of variables, and suppose 't is a normal valuation for rusuv. Our defmition of conditional independence states that 't = CXrvv E9CX.uv iff r ..Lc s I v, where CXrvv e 'V' rvv • and CX.uv e 'lf svv· Suppose 't = CXrvv E9ex svv is a normal valuation for rusuv, where CXrvv e 'V' rvv • and CX.uv�E 'V' suv· Figure 7 shows the VN representation of this situa tion. Notice that all paths from a variable in r to a variable in s go through a variable in v, i.e., vi s a cut-set separating r from s. ations not included in either p or a. Clearly, the domain of e does not contain variables in either r or s. Since 't(rusuv) = pEBaEB6, it follows from the delmition of conditional in dependence that r l...c s I v. To summari ze, suppose we are given a VN representation of 'te «Jl.w· Suppose vi s a cut set separating r and s in the marginalized net work for variables in rusuv. Then r l...c s I v. COMPARISON In this section, we briefly compare VNs with UGs, DAGs, DBGs, and RCGs. We start with UGs. In UGs, the cliques of the graph (maximal completely connected vertices) denote the factors of the joint valuation. For example, consider the UG shown in Figure 8. ordering of the variables, and a conditional for each variable given a subset of the variables that precede it in the given ordering. Figure 9 shows an example of a DAG with 5 variables. An ordering consistent with this DAG is VWX¥Z. The DAG implies we have a condi tional for V given 0, a conditional for W given V, a conditional for X given V, a conditional for Y given {W, X}, and a conditional for Z given Y. The VN representation of the DAG model is also shown in Figure 9. Suppose 't denotes the joint probability distribution. Then a = 't(V), 13 In the DAG of Figure 9, using Pearl's defmition of d-separation, we cannot conclude, for example, that ration in VNs. If we fuse the VN with respect to Y, then {V, Z} is not a cut-set separating W and X. Therefore we cannot conclude that W l...c X I {V, Z}. If we further fuse W l...c X I {V, Z}. However, we can conclude that W l...c X I V. We can draw the same conclusion using sepa-the VN with respect to Z, then Vi s a cut-set separating W and X. Therefore, W -4X IV. The technique we have proposed for checking for conditional independence in VNs is an al ternative to the d-separation method proposed by Pearl [1988] for DAGs. Whether we have conditionals or not, checking a conditional in dependence statement in a VN is a matter of first fusing the VN to remove variables not in the conditional independence statement and then checking for separation in the fused VN. The information about conditionals is used in the fusion operation. Lauritzen et al. [1990] describes yet another method for checking for conditional indepen dence in DAGs. Their method consists of converting a DAG to an equivalent UG and then checking for conditional independence in the UG using separation. 
In short, their method consists of examining a subgraph of the DAG (after eliminating the variables that succeed all variables in the conditional independence statement in an ordering consistent with the DAG), moralizing the graph, dropping directions, and then checking for separation.

Next, let us compare VNs and DBGs. DBGs are defined in [Shafer 1993a]. A DBG includes a partition of the set of all variables. Each element of the partition is called a balloon. Non-singleton balloons are shown as ellipses encircling the corresponding variables. Each balloon has a set of variables as its parents. The parents of a balloon are shown by directed arcs pointing to the balloon. A DBG is acyclic in the same sense that DAGs are acyclic. A DBG implies a probability model consisting of a conditional for each balloon given its parents. A DAG may be considered as a DBG in which each balloon is a singleton subset. Figure 10 implies a conditional for X1 given ∅, a conditional for {X2, X3} given X1, a conditional for X4 given X2, a conditional for {X5, X6, X7} given X2, a conditional for X8 given X3, a conditional for X9 given X5, and a conditional for X10 given {X6, X7}. The corresponding VN is also shown in Figure 10. The conditional independence theory of DBGs is described in [Shafer 1993b], and is analogous to the conditional independence theory of DAGs. In the DBG and VN of

Finally, we compare VNs to RCGs. RCGs are defined in [Kiiveri, Speed, and Carlin 1984]. An RCG consists of two kinds of vertices (variables), exogenous and endogenous, and two kinds of edges, undirected and directed. An undirected edge always connects two exogenous variables, and a directed edge always points to an endogenous variable. RCGs generalize DAGs in the sense that a DAG is an RCG with at most one exogenous variable. Directed edges pointing to Y imply a conditional for Y given {W, X}, and the directed edge pointing to Z implies a conditional for Z given X. The corresponding VN is also shown in Figure 11. Conditional independence properties of RCGs are given in [Kiiveri, Speed and Carlin 1984]. Briefly, if we look at the subgraph of an RCG restricted to the exogenous variables, the subgraph is an UG and its conditional independence properties are the same as those given by the UG models. On the other hand, the conditional independence relation in the complete RCG is given by the d-separation relation of DAGs. Since the basis of the conditional independence relations in RCGs is the underlying factorization and the additional information about conditionals, and since this information is encoded in VNs, a corresponding VN encodes the same conditional independence relation as an RCG. For example, in the RCG and VN of Figure 11

CONCLUSION

We have described valuation networks and how they encode conditional independence. Given a valuation network, r ⊥ s | v if v is a cut-set separating r from s in the marginalized valuation network for r∪s∪v. We have compared valuation networks to undirected graphs, directed acyclic graphs, directed balloon graphs, and recursive causal graphs. All probability models encoded by one of these graphs can be represented by corresponding valuation networks. Factorization is fundamental to conditional independence. The power of the valuation network representation arises from the fact that it represents factorization explicitly. Also notice that valuation networks encode conditional independence not only in probabilistic models, but also in all uncertainty theories that fit in the VBS framework.
This includes Dempster-Shafer's belief-function theory, Spohn's epistemic belief theory, and Zadeh's possibility theory [Shenoy 1991b].
A novel WDR62 mutation causes primary microcephaly in a large consanguineous Saudi family

BACKGROUND: Primary microcephaly (MCPH) is a rare developmental defect characterized by impaired cognitive functions, retarded neurodevelopment and reduced brain size. It is genetically heterogeneous, and more than 17 genes have so far been identified that are associated with this disease.

OBJECTIVE: To study the genetic defect in a consanguineous Saudi family with primary microcephaly.

DESIGN: Cross-sectional clinical genetics study of a Saudi family.

SETTING: Medical genomics research center.

PATIENTS AND METHODS: Blood samples collected from six members of a family of healthy consanguineous parents were analyzed by whole exome sequencing to identify the underlying pathogenic mutations in two members of the family (a 23-year-old female and a 7-year-old male) who presented with primary microcephaly, intellectual disability, delayed psychomotor development, walking difficulty, speech impediments and seizures.

MAIN OUTCOME MEASURE(S): Detection of a mutation in the WD repeat domain 62 (WDR62) gene in a family segregating autosomal recessive primary microcephaly.

RESULTS: The exome variant analysis identified a novel missense mutation (c.3878C>A) in exon 30 of the WDR62 gene, resulting in an amino acid change from alanine to aspartate (p.Ala1293Asp). Further validation in the affected patients, the healthy members of the family and 100 unrelated healthy persons as controls confirmed it to be pathogenic.

CONCLUSIONS: Functional impairment of the WDR62 gene can lead to severe neurodevelopmental defects, brain malformations and reduced head size. A missense mutation in exon 30 changed alanine to aspartate in the WDR62 protein, leading to the typical MCPH phenotype.

LIMITATIONS: The mutation was identified in a single family.

Microcephaly (MCPH) is an abnormally small head size. There are two subcategories: primary microcephaly is present at birth and is a static developmental anomaly; secondary microcephaly develops later in life (postnatally) and is a progressive neurodegenerative condition. Primary microcephaly is a congenital condition associated with incomplete brain development, where a baby's head is smaller than expected when compared to babies of the same sex and age. It serves as an important neurological indication or warning sign, but no uniformity exists in its definition. It is usually defined as a head circumference (HC) more than four standard deviations (SD) below the mean for age and sex. 1 Primary microcephaly (MCPH) is very rare; the rate is estimated to be 1 in 30 000 in Japan 2 and 1 in 250 000 in Holland. 3 This rate increases to 1 in 10 000 in regions where interfamily and cousin marriages are common.
4 The birth incidence varies from 1.3 to 150 per 100 000 persons, depending on various factors, mainly population type and consanguinity. 5 So far, 17 genes have been identified that underlie autosomal recessive primary microcephaly. These include microcephalin at MCPH1, 6 WDR62 at MCPH2, [7][8][9] CDK5RAP2 at MCPH3, 10 CASC5 at MCPH4, 1 [23][24][25] Furthermore, the recently identified syndromic microcephaly genes AGMO, RTTN and PGAP2 have been reported in the Saudi population. [26][27][28] However, the majority of mutations have been identified in two genes: ASPM, accounting for more than half of all mutations, and WDR62, which accounts for around 10% of all cases. 29 All of the WDR62 mutation cases presented up to now have shown the presence of mental retardation as well as prominent microcephaly on physical examination, while some of the patients also suffered from seizures. Examination of the brain of the patients under high-field-strength (3 Tesla) magnetic resonance imaging (MRI) has identified hallmarks of a wide range of severe cortical malformations. 8

In our study, we have ascertained the presence of a novel mutation in a consanguineous family of Saudi origin. There were two affected siblings born to a consanguineous union in this family. Whole exome sequencing was performed to identify the underlying genetic cause because of the potential of this technology in the molecular diagnostics of similar genetic disorders. 30 We identified a novel mutation in the WDR62 gene in these Saudi patients.

Sample collections

The pedigree (family chart) of the family was drawn by obtaining information from elders of the family (Figure 1). Pedigree construction suggested an autosomal recessive pattern of inheritance (Figure 1). Before the study was initiated, written informed consent was taken from all participants of the study. The study was also approved by the ethical committee of the Center of Excellence in Genomic Medicine Research, King Abdulaziz University, Jeddah (013-CEGMR-2-ETH). Blood samples were collected from six members of the family (two affected and four normal individuals) and one hundred unrelated healthy people of Saudi origin as controls. Both affected individuals underwent medical examination at King Abdulaziz University Hospital, Jeddah. We ruled out possible environmental factors and infections such as rubella, toxoplasma, cytomegalovirus, and Zika virus that may lead to microcephaly.

Figure 1. A pedigree of a consanguineous family from Saudi Arabia showing the disease phenotype segregating in an autosomal recessive manner. The samples available for genetic testing are marked with asterisks.

Patient 1

Proband (IV-1) was a 23-year-old female who presented with MCPH and suffered from mental retardation. She was the first child of consanguineous healthy parents (Figure 1). She had normal weight and height, but could not walk properly, had speech problems, had brain atrophy and reported seizures. Her head circumference was less than 5 standard deviations below the mean.

Patient 2

Proband (IV-4) was a 7-year-old male who presented with MCPH and suffered from mental retardation. This individual was born after five normal offspring, as shown in the pedigree (Figure 1). He was not able to move due to muscular dystrophy, could not speak properly, and had a history of seizures. The head circumference was less than 5 standard deviations below the mean.
Whole exome sequencing To identify the underlying pathogenic mutation behind this disease phenotype we planned whole exome sequencing (HiSeq 2500 System, Illumina, San Diego, CA, United States). Extraction of genomic DNA (gDNA) was performed using standard procedures. Briefly, the blood sample was used for direct gDNA isolation using the QIAamp DNA Blood Mini Kit, Cat. Nr. 51106 (Qiagen, Hilden, Germany) following manufacturer's instructions and modified where necessary. The gDNA was analyzed by the Bioanalyzer system (Agilent Technologies, Santa Clara, CA, United States) and quantified by a NanoDrop Spectrophotometer and the genomic DNA was stored under appropriate conditions for future analysis. The samples were prepared according to an Agilent SureSelect Target Enrichment Kit preparation guide (Capture kit, SureSelect_v6) by using genomic DNA directly. The genomic DNA libraries were sequenced using the Illumina HiSeq 2000/2500 sequencer. The resulting VCF (variant call format) file contained 89064 variants. These variants were filtered based on quality, frequency, genomic position, protein effect, pathogenicity and previous associations with the phenotype. Sanger sequencing To confirm the mutation in patients and family, we did Sanger sequencing (ABI 3700). To confirm the mutation as pathogenic, we also sequenced this DNA variant in 100 unrelated control people. RESULTS Candidate variants were first searched in a broad panel of genes which had been previously associated with microcephaly, intellectual disability, and one of the reported features (in human and model organisms). The exome variant analysis yielded a plausible candidate variant in the WDR62 gene where cytosine (C) at position 3878 is replaced by adenine (A), resulting in conversion of amino acid alanine at position 1293 to aspartate, thus showing a novel missense mutation 3878C>A, at exon 30 in affected members of the family as shown in Figure 2. Furthermore, in the Greater Middle East variome, the minor allele frequency was 0.0005 and there was single heterozygous and no homozygous individuals found in the database. Moreover, PolyPhen, MutationTaster and SIFT predicted this disease-causing mutation. This mutation was absent in the Human Gene Mutation database (HGMD, www.hgmd.cf.ac.uk/) and MIM. 1000 genome (http://www.internationalgenome.org/) and Exome Aggregation Consortium (http://exac.broadinstitute.org/) database. Other mutations in this gene are known to cause primary autosomal recessive microcephaly type 2, with or without cortical malformations (MCPH2). This disorder shows phenotypic overlap with the symptoms reported for this individual. Hence, this variant was considered as a plausible candidate, but needed further investigation to validate its clinical significance. To check the other mutations elsewhere in the whole exomes, we also broadened the analysis to all genes applying all inheritance modes. However, no additional potential candidate variant that might be of relevance for the reported phenotype could be identified.To confirm this novel mutation in WDR62 gene, further validation was carried out in all available affected patients and healthy members of family and 100 unrelated healthy persons as controls. This mutation was not detected in any healthy individual, which confirmed it to be pathogenic. The parents of the affected members were heterozygous, which also confirms the autosomal recessive mode of inheritance. DISCUSSION To date about 25 mutations have been identified in WDR62 gene ( Table 1). 
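As a rough illustration of how the coding-DNA nomenclature above maps onto the reported protein change, the short Python sketch below recomputes the codon number and amino-acid substitution for c.3878C>A. It assumes the standard genetic code and, purely for illustration, a GCC reference codon at position 1293; the actual WDR62 reference codon is not stated in the text.

CODON_TABLE = {"GCT": "Ala", "GCC": "Ala", "GCA": "Ala", "GCG": "Ala",
               "GAT": "Asp", "GAC": "Asp"}  # only the codons needed for this example

def codon_consequence(cds_position, ref_codon, alt_base):
    """Return (codon number, new codon, old residue, new residue) for a single-base change."""
    codon_number = (cds_position - 1) // 3 + 1   # 1-based codon index: 3878 -> 1293
    offset = (cds_position - 1) % 3              # position within the codon: 3878 -> 2nd base
    new_codon = ref_codon[:offset] + alt_base + ref_codon[offset + 1:]
    return codon_number, new_codon, CODON_TABLE[ref_codon], CODON_TABLE[new_codon]

codon, new_codon, old_aa, new_aa = codon_consequence(3878, "GCC", "A")
print(f"c.3878C>A -> codon {codon}: GCC>{new_codon}, p.{old_aa}{codon}{new_aa}")
# prints: c.3878C>A -> codon 1293: GCC>GAC, p.Ala1293Asp

Note that the reported aspartate outcome implies the reference codon ends in C or T (GCC or GCT), since a C>A change at the second position of GCA or GCG would instead yield glutamate (GAA/GAG).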
In our family, a rare, homozygous missense variant was detected in the WDR62 gene in patients in a homozygous state. This gene encodes a protein which is required for proper neurogenesis and cerebral cortical development. 31 It is proposed to play a role in neuronal proliferation and migration. 7,8 The expression of WDR62 was found to be widespread in the developing mouse brain, with highest expression in the forebrain. 7 Mechanistically, WDR62 associates and genetically interacts with Aurora A to regulate spindle formation, mitotic progression and brain size. It is also reported that WDR62 interacts with Aurora A to control mitotic progression, and loss of these interactions leads to mitotic delay and cell death of neural progenitor cells (NPCs) which could be a potential cause of human microcephaly. 32 Mutations in this gene have been associated with microcephaly 2, primary, autosomal recessive, with or without cortical malformations (MCPH2). [7][8][9][33][34][35][36] This is a disease characterized by microcephaly associated with other manifestations and shows wide phenotypic variability. 37 Associated features include moderate to severe mental retardation, and various types of cortical malformations in most patients. Cortical malformations may include pachygyria with cortical thickening, microgyria, lissencephaly, hypoplasia of the corpus callosum, and schizencephaly. All affected individuals have delayed psychomotor development. Some patients have seizures. The variant detected here is a substitution that affects a highly conserved alanine residue that is the last amino acid of the last WD domain (WD stands for tryptophan-aspartic acid dipeptide). The underlying common function of all WD-repeat proteins is coordinating multi-protein complex assemblies, where the repeating units serve as a rigid scaffold 2083delA; 2472_2473delAG; c.390G>A; c.2527dupG; p.R438H; p.D955Afs*112). 8,9,34,35,[38][39][40] The frameshift mutations reported by Murdock et al were reported to cause nonsense-mediated mRNA decay and loss of function. 34 However, the effect of the missense variant detected here remains to be elucidated. Interestingly, this variant is predicted to be deleterious by the majority of in silico prediction tools. This variant is absent from population databases such as 1000 Genomes Project and the Exome Aggregation Consortium. The variant was tested for likely pathogenic effect by in silico tools like MutationTaster, 44 SIFT and Polyphen, and it was predicted as "disease causing" with high pathogenicity scores ( Table 2). Further functional analysis is needed to determine the functional effect of this variant and its possible contribution to the reported phenotype. Conflict of interest The authors declare no conflict of interest.
2018-04-03T04:48:27.240Z
2017-01-21T00:00:00.000
{ "year": 2017, "sha1": "409c221553a0ba4a3c5d113544e81b740e36d998", "oa_license": "CCBYNCND", "oa_url": "https://www.annsaudimed.net/doi/pdf/10.5144/0256-4947.2017.148", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "409c221553a0ba4a3c5d113544e81b740e36d998", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
145869154
pes2o/s2orc
v3-fos-license
Set-Valued Additive Functional Equations A BSTRACT . In this paper, we introduce set-valued additive functional equations and prove the Hyers-Ulam stability of the set-valued additive functional equations by using the fixed point method. INTRODUCTION AND PRELIMINARIES Set-valued functions in Banach spaces have been developed in the last decades. The pioneering papers by Aumann [4] and Debreu [11] were inspired by problems arising in Control Theory and Mathematical Economics. We can refer to the papers by Arrow and Debreu [2], McKenzie [24], the monographs by Hindenbrand [18], Aubin and Frankowska [3], Castaing and Valadier [7], Klein and Thompson [22] and the survey by Hess [17]. The stability problem of functional equations originated from a question of Ulam [37] concerning the stability of group homomorphisms. Hyers [19] gave a first affirmative partial answer to the question of Ulam for Banach spaces. Hyers' Theorem was generalized by Aoki [1] for additive mappings and by Rassias [35] for linear mappings by considering an unbounded Cauchy difference. A generalization of the Rassias theorem was obtained by Gȃvruta [16] by replacing the unbounded Cauchy difference by a general control function in the spirit of Rassias' approach The functional equation f (x + y) = f (x) + f (y) is called an additive functional equation. In particular, every solution of the additive functional equation is said to be an additive mapping. The functional equation 2f x+y 2 = f (x) + f (y) is called a Jensen additive functional equation. In particular, every solution of the Jensen additive functional equation is said to be a Jensen additive mapping. The stability problems of several functional equations have been extensively investigated by a number of authors and there are many interesting results concerning this problem (see [15,16,20,36]). Let (X, d) be a generalized metric space. An operator T : X → X satisfies a Lipschitz condition with Lipschitz constant L if there exists a constant L ≥ 0 such that d(T x, T y) ≤ Ld(x, y) for all x, y ∈ X. If the Lipschitz constant L is less than 1, then the operator T is called a strictly contractive operator. Note that the distinction between the generalized metric and the usual metric is that the range of the former is permitted to include the infinity. We recall the following theorem by Margolis and Diaz. Theorem 1.1. [8,12] Let (X, d) be a complete generalized metric space and let J : X → X be a strictly contractive mapping with Lipschitz constant L < 1. Then for each given element x ∈ X, either d(J n x, J n+1 x) = ∞ for all nonnegative integers n or there exists a positive integer n 0 such that (1) d(J n x, J n+1 x) < ∞, ∀n ≥ n 0 ; (2) the sequence {J n x} converges to a fixed point y * of J; (3) y * is the unique fixed point of J in the set Y = {y ∈ X | d(J n0 x, y) < ∞}; (4) d(y, y * ) ≤ 1 1−L d(y, Jy) for all y ∈ Y . In 1996, Isac and Rassias [21] were the first to provide applications of stability theory of functional equations for the proof of new fixed point theorems with applications. By using fixed point methods, the stability problems of several functional equations have been extensively investigated by a number of authors (see [9,10,13,26,32,34]). Let Y be a Banach space. We define the following: On 2 Y we consider the addition and the scalar multiplication as follows: It is easy to check that Furthermore, when C is convex, we obtain (λ + µ)C = λC + µC for all λ, µ ∈ R + . 
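For readability, the scalar functional equations and the set operations referred to above can be written out in standard notation; the display below follows the usual Minkowski-type definitions from the set-valued stability literature and is a hedged reconstruction, not a quotation of the paper's own displays.

\[
f(x+y) = f(x) + f(y), \qquad 2\,f\!\left(\tfrac{x+y}{2}\right) = f(x) + f(y),
\]
\[
C + C' = \{\, c + c' : c \in C,\; c' \in C' \,\}, \qquad \lambda C = \{\, \lambda c : c \in C \,\},
\]
\[
\lambda(C + C') = \lambda C + \lambda C', \qquad (\lambda + \mu)C \subseteq \lambda C + \mu C,
\]
with equality \((\lambda+\mu)C = \lambda C + \mu C\) for all \(\lambda, \mu \in \mathbb{R}^{+}\) whenever \(C\) is convex, as stated in the text.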
For a given set C ∈ 2 Y , the distance function d(·, C) and the support function s(·, C) are respectively defined by For every pair C, C ∈ C b (Y ), we define the Hausdorff distance between C and C by where B Y is the closed unit ball in Y . The following proposition reveals some properties of the Hausdorff distance. Lemma 1.1. [11] Let C(B Y * ) be the Banach space of continuous real-valued functions on B Y * endowed with the uniform norm · u . Then the mapping j : (C cb (Y ), ⊕, H) → C(B Y * ), given by j(A) = s(·, A), satisfies the following properties: [6]). In this case, the Debreu integral of f in Ω is the unique element Then we obtain that ((Ω, C cb (Y )), +) is an abelian semigroup. Set-valued functional equations have been extensively investigated by a number of authors and there are many interesting results concerning this problem (see [5,27,28,29,30,31,33]). Using the fixed point method, we prove the Hyers-Ulam stability of the following set-valued additive functional equations Throughout this paper, let X be a real normed space and Y a real Banach space. x for all x, y ∈ X. Every solution of the set-valued Jensen additive functional equation is called a setvalued Jensen additive mapping. be a function such that there exists an L < 1 with for all x, y ∈ X. Suppose that F : X → (C cb (Y ), H) is a mapping satisfying for all x, y ∈ X. Let r and M be positive real numbers with r > 1 and diam F (x) ≤ M x r for all x ∈ X. Then there exists a unique set-valued additive mapping A : X → (C cb (Y ), H) such that for all x ∈ X. Consider S := {g : g : X → C cb (Y ), g(0) = {0}} and introduce the generalized metric on X, where, as usual, inf φ = +∞. It is easy to show that (S, d) is complete (see [ Let g, f ∈ S be given such that d(g, f ) = ε. Then for all x ∈ X. Hence for all x ∈ X. So d(g, f ) = ε implies that d(Jg, Jf ) ≤ Lε. This means that for all g, f ∈ S. It follows from (2.6) that d(F, JF ) ≤ L 2 . By Theorem 1.1, there exists a mapping A : X → Y satisfying the following: (1) A is a fixed point of J, i.e., for all x ∈ X. The mapping A is a unique fixed point of J in the set This implies that A is a unique mapping satisfying (2.7) such that there exists a µ ∈ (0, ∞) satisfying (2) d(J n F, A) → 0 as n → ∞. This implies the equality lim n→∞ 2 n F x 2 n = A(x) for all x ∈ X; (3) d(F, A) ≤ 1 1−L d(F, JF ), which implies the inequality for all x, y ∈ X. Since diam F (x) ≤ M x r for all x ∈ X, diam 2 n F x 2 n ≤ 2 n 2 rn M x r for all x ∈ X and so A(x) = 2 n F x 2 n is a singleton set and A(x+y) = A(x)⊕A(y) for all x, y ∈ X. Corollary 2.1. Let p > 1 and θ ≥ 0 be real numbers, and let X be a real normed space. Suppose that F : X → (C cb (Y ), H) is a mapping satisfying for all x, y ∈ X. Let r and M be positive real numbers with r > 1 and diam F (x) ≤ M x r for all x ∈ X. Then there exists a unique set-valued additive mapping A : X → Y satisfying Proof. The proof follows from Theorem 2.2 by taking L := 2 1−p and for all x, y ∈ X. for all x ∈ X. Proof. It follows from (2.5) that for all x ∈ X. The rest of the proof is similar to the proof of Theorem 2.2. for all x, y ∈ X. STABILITY OF THE SET-VALUED JENSEN ADDITIVE FUNCTIONAL EQUATION (1.2) Using the fixed point method, we prove the Hyers-Ulam stability of the set-valued Jensen additive functional equation (1.2). Theorem 3.4. Let ϕ : X 2 → [0, ∞) be a function such that there exists an L < 1 with for all x, y ∈ X. Suppose that F : for all x, y ∈ X. Let r and M be positive real numbers with r > 1 and diam F (x) ≤ M x r for all x ∈ X. 
Then there exists a unique set-valued Jensen additive mapping A : for all x ∈ X. Proof. Let y = 0 in (3.9). Since F (x) is convex, we get for all x ∈ X. Consider S := {g : g : X → C cb (Y ), g(0) = {0}} and introduce the generalized metric on X, where, as usual, inf φ = +∞. It is easy to show that (S, d) is complete (see [ for all x, y ∈ X. Let r and M be positive real numbers with r > 1 and diam F (x) ≤ M x r for all x ∈ X. Then there exists a unique set-valued Jensen additive mapping A : X → Y satisfying Proof. The proof follows from Theorem 3.4 by taking L := 2 1−p and ϕ(x, y) := θ( x p + y p ) for all x, y ∈ X.. Theorem 3.5. Let ϕ : X 2 → [0, ∞) be a function such that there exists an L < 1 with ϕ(x, y) ≤ 2Lϕ x 2 , y 2 for all x, y ∈ X. Suppose that F : X → (C cb (Y ), H) is a mapping satisfying F (0) = {0} and (3.9). Let r and M be positive real numbers with r < 1 and diam F (x) ≤ M x r for all x ∈ X. Then there exists a unique set-valued Jensen additive mapping A : X → (C cb (Y ), H) such that for all x ∈ X. Proof. It follows from (3.11) that for all x ∈ X. The rest of the proof is similar to the proofs of Theorems 2.2 and 3.4. for all x, y ∈ X.
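Two quantities used throughout the stability arguments above can likewise be stated explicitly: the Hausdorff distance on bounded closed sets (written here in its standard form, with B_Y the closed unit ball of Y, as indicated in the text) and the limit defining the additive mapping A obtained from the fixed point theorem.

\[
H(C, C') = \inf\bigl\{\, \lambda \ge 0 : C \subseteq C' + \lambda B_Y \ \text{and}\ C' \subseteq C + \lambda B_Y \,\bigr\},
\]
\[
A(x) = \lim_{n \to \infty} 2^{n} F\!\left(\frac{x}{2^{n}}\right) \quad \text{for all } x \in X .
\]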
2019-05-07T14:05:55.804Z
2019-06-01T00:00:00.000
{ "year": 2019, "sha1": "0bc9f9beada71659c2a62384d89a812cdf51297b", "oa_license": "CCBYNC", "oa_url": "https://dergipark.org.tr/en/download/article-file/690998", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "5df2f958c051f6c842971dcac090f9751448a9f5", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
90240935
pes2o/s2orc
v3-fos-license
Physico-chemical and Functional Properties of Fresh Eel (Mastacembelus armatus) Muscle Proteins Introduction The freshwater spiny eel (Mastacembelus armatus) has great economic value especially in inland areas of India (Talwar and Jhingran, 1991). Among the commercially exploited fresh water fishery resources, Indian major carps, minor carps, Chinese carps and eels constitute an important commercial fishery in riverine and reservoir fisheries. Eels represent an important commercially exploited fish in the local fish markets in the inland areas. Proteins are endowed with a number of physico-chemical and functional characteristics, which make them suitable for varied food applications such as thickeners, emulsifiers, etc. Functional properties of proteins play a significant role in food applications and are very much influenced by their structure in food systems (Ramachandran et al., 2007). The molecular basis of functionality is related to their structure and ability to interact with other food ingredients (Zayas, 1997). Actin and myosin constitute the basic functional components of the myofibrillar proteins (MFPs). There are reports that fish undergo dramatic changes in fiber type composition (myosin expression and organization of fiber type) and in the isoforms of myofibrillar molecules during post-hatching growth (Watabe and Ikeda, 2006). Understanding physico-chemical characteristics is of utmost importance as it is directly related to the final quality of products like surimi, sausage and battered products. The functional properties of meat depend mainly on myofibrillar proteins (Goll et al., 1977) and are related to the composition and structure of proteins and their interaction with other substances present in the food (Colmenero and Borderias, 1983). The functional properties of myofibrillar proteins are important in determining the quality of the product (Roura and Crupkin, 1995). Functional properties of various freshwater fish proteins have been studied, viz. tilapia (Parthiban et al., 2005, 2015) and rohu (Mohan et al., 2006). Materials and Methods Fresh eel (Mastacembelus armatus) harvested from the Krishna river in Sangli district and brought in iced condition to the Ratnagiri fish landing center were purchased. The fish had a total length of 30 to 55 cm and a weight of 225±13.22 g. Fishes were de-skinned and filleted. The fillets were minced in a kitchen mixer/grinder and the boneless meat was kept at a temperature of 2-4 °C throughout the experiment.
Extraction of muscle protein fraction was estimated according to the method of King and Poulter (1985). Protein determination of MFP and SPP extracts were estimated according to Gornall et al., (1948) by Biuret method. Extraction of natural actomyosin was prepared according to the method described by Benjakul et al., (1997). The ATPase assay of actomyosin was estimated according to the method of MacDonald and Lanier (1994). Inorganic phosphates were estimated by the method of Fiske and Subbarow (1925). The total SH groups of myofibrillar protein fraction were estimated according to Sedlak and Lindsay (1968). The ability of the proteins, SPP and MFP to form emulsion was estimated as emulsion activity index (EAI) according to the method of Pearce and Kinsella (1978) and as per modification of Cameron et al., (1991). About 20 ml sample was prepared for estimation of viscosity of salt soluble and water-soluble protein at different concentrations (2.5 and 5.0 mg/ml). It was determined with a (Model DV II + Pro, Brookfield) viscometer at shear rate 100 rpm as described by Mohan et al., (2006). Foam ability of the protein was determined by the method of Wild and Clark, (1996).The waterwashed fish mince was used to get the concentrate of MFP. Heat-induced gels were prepared from MFP concentrate by grinding with 3% Sodium chloride for 2 min at 4 o C (Lan et al., 1995). Water holding capacity (WHC) of mince was carried out by the method of Kocher and Foegeding (1993) with slight modification. Solubility of proteins The functional properties of proteins are often affected by protein solubility and those most affected are thickening, foam expansion, emulsifying and gel strength. As an indication whether or not denaturation has taken place in myofibrillar protein, a method commonly used is to measure the quantity of myofibrillar protein extracted from the muscle by salt solution with 0.45-0.6 ionic strength (Suzuki, 1981). Extractability is related to the solubility of the protein and characteristics of the muscle structure. The solubility characteristics of proteins are related to the amino acid composition at protein surface and its interaction with the solvent (Bigelow, 1967). In the present study, solubility of sarcoplasmic and myofibrillar fractions of fresh eel were observed to be 44.93 and 59.16 mg/g respectively. Partiban et al., (2005) observed the 66% solubility of MFP and 34% of SPP of total soluble protein of fresh tilapia fish, Mohan et al., (2006) reported the solubilities of SPP and MFP of rohu fish were 47.6 and 76.5 mg/g respectively. Ramachandran et al., (2009) Parthiban et al., (2015) observed the solubilities of SPP and MFP of tilapia fish were 21.26 and 35.23 g/100g respectively. Ca 2+ ATPase activity of actomyosin of eel The myosin globular has ATPase activity (which releases the energy for muscle contraction) and binds to actin in the absence of ATP (post-mortem). Therefore, Ca 2+ ATPase activity can be used as an indicator of the integrity of myosin molecules (Benjakul et al., 1997) as the globular heads of myosin are responsible for Ca 2+ ATPase activity (Benjakul et al., 2003;Ramachandran et al., 2007). In the present study, the Ca 2+ ATPase activity of actomyosin of fresh eel was 0.50 µmole Pi/mg protein/min. Ramachandran et al., (2009) (Parthiban et al., 2015). 
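Since the Ca2+ ATPase values above are specific activities (micromoles of inorganic phosphate liberated per milligram of protein per minute), a minimal sketch of that arithmetic is given below; the input numbers are hypothetical and were chosen only so that the result lands at the 0.50 value reported for fresh eel actomyosin.

def specific_atpase_activity(pi_released_umol, protein_mg, incubation_min):
    """Specific activity in umol Pi per mg protein per min."""
    return pi_released_umol / (protein_mg * incubation_min)

# Hypothetical assay readout: 1.25 umol Pi liberated by 0.5 mg actomyosin in 5 min.
print(specific_atpase_activity(pi_released_umol=1.25, protein_mg=0.5, incubation_min=5.0))
# -> 0.5  (umol Pi / mg protein / min)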
Total sulfhydryl content of MFP of eel Sulfhydryl groups are considered to be the most reactive functional groups in proteins and any destruction of cysteine or cystine during storage of fish is indicated by disappearance of SH groups. The SH groups represent the reactivity of the proteins and the content of surface reactive SH groups increases with the unfolding of protein during exposure to extreme conditions (Sankar and Ramachandran, 2005). In the present study, the content of total sulfhydryl groups of MFP fraction of fresh eel was 48.27 µmole SH/g protein. The Mohan et al., (2006) reported SH groups content of 88 μmoles/min/mg in AM from fresh rohu fish. Ramachandran et al., (2009) the concentration of reactive sulphydryl groups ranged from 23.5 µ moles SH/g protein to 44.7 µ moles SH/g protein among fishes and the highest values were recorded in MFP from silver carp and the lowest in common carp. Parthiban et al., (2015) observed the total SH content of actomyosin of fresh tilapia fish was 23μmoles/min/mg AM. Emulsion activity index (EAI) and Emulsion stability (ES) of SPP and MFP of eel The emulsifying properties of proteins are evaluated by several methods, such as size distribution of oil droplets formed, emulsifying activity, emulsion capacity and emulsion stability (Kinsella and Melachouris, 1976). The physical and sensory properties of protein-stabilized emulsion depend on the size of the droplets formed and the total interfacial area created. The ability of proteins to bind fat in comminuted meat is of great importance. Proteins being amphoteric molecules, are surface-active agents and thus concentrate on fat-water interface. Emulsion stability refers to the ability of a protein to form an emulsion that remains unchanged for a particular duration, under specific conditions. Protein-stabilized emulsions are often stable for days. In the present study, the EAI of MFP and SPP of eel at concentration of 2.5 mg/ml were 2.8 and 2.5 m 2 /g respectively. Whereas, the EAI of MFP and SPP of fresh eel recorded at concentration of 5.0 mg/ml were 3.1 and 2.2 m 2 /g respectively. Partiban et al., (2005) observed the EAI value was 124 m 2 /g for extracted total soluble protein from fresh tilapia fish. Mohan et al., (2006) reported the EAI of SPP at concentration of 2.5 mg/ml was 11.86 m 2 /g whereas, the EAI content of MFP at concentration of 2.5 and 5.0 mg/ml were 1.09 and 6.25 m 2 /g respectively. Ramachandran et al., (2009) In the present study, the ES of MFP and SPP of fresh eel at concentration of 2.5 mg/ml were 55 and 38 min respectively. The ES of MFP and SPP at concentration of 5.0 mg/ml was 60 and 40 min respectively. Partiban et al., (2005) observed the ES value was 570 sec of fresh tilapia sample. Mohan et al., (2006) reported the ES values for SPP at concentration of 2.5 mg/ml was 52 min whereas, the ES content of MFP at concentration of 2.5 and 5.0 mg/ml were 87 and 364 min respectively. Ramachandran et al., (2009) Viscosity of SPP and MFP of eel proteins The consumer acceptability of several liquid and semisolid type foods (e.g., gravies, soups, beverages, etc.) depends on the viscosity or consistency of the product. Viscosity is a functional property which is greatly exploited when proteins are added to liquid foods as thickeners, and it affects several other functional properties. Myosin present in muscle proteins is the major contributor to the viscosity of aqueous muscle extracts. 
In the present study, the viscosity of eel MFP at concentration of 2.5 and 5.0 mg/ml was 2.3 and 2.5 cP respectively whereas, the viscosity of eel SPP at concentration of 2.5 and 5.0 mg/ml was 1.9 and 2.3 cP respectively. Partiban et al., (2005) reported the viscosity of tilapia fish protein was 3.25 mm/sec. Mohan et al., (2006) observed the viscosity of rohu SPP at concentration of 2.5 mg/ml was 1.36 cP and for eel MFP at concentration of 2.5 and 5.0 mg/ml was 4.45 and 16.20 cP respectively. Ramachandran et al., (2009) Foam expansion (FE) and foam volume stability (FVS) of SPP and MFP of eel proteins Foams consist of an aqueous continuous phase and a gaseous (air) dispersed phase. The unique textural properties and mouthfeel of these products stem from the dispersed tiny air bubbles. In most of these products, proteins are the main surface active agents that help in the formation and stabilization of the dispersed gas (foam) phase. In the present study, the foam expansion of MFP of eel at concentration of 2.5 and 5.0 mg/ml were 3.1 and 3.0% respectively. The foam expansion of SPP of eel at concentration of 2.5 and 5.0 mg/ml were 2.4 and 2.3% respectively. Foam stability refers to the ability of proteins to stabilize foam against gravitational and mechanical stresses whereas, the foam volume stability of MFP of eel at concentration of 2.5 and 5.0 mg/ml were 60.5 and 65.3% whereas, the FVS of SPP of eel at concentration of 2.5 and 5.0 mg/ml were 28.8 and 35.6% respectively. Mohan et al., (2006) observed the FE of SPP of rohu at concentration of 2.5 mg/ml was 41.33% and the FE of MFP of rohu at concentration of 2.5 and 5.0 mg/ml were 105.33 and 134.66%. The FVS of SPP of rohu at concentration of 2.5 mg/ml was 27.33% whereas, the FVS of MFP of rohu at concentration of 2.5 and 5.0 mg/ml were 89.00 and 87.00% respectively. Ramachandran et al., (2009) Gel strength of muscle proteins of MFP of eel fish Gel is made up of polymers cross-linked via either covalent or non -covalent bonds to form network i.e capable of entrapping water and other small molecular weight substances. The strength of the gel depends on the extent of cross links that occur in the polypeptide chain. Proteins from fish differ in their ability to cross link to form network and found to be highly species specific (Mehta et al., 2011). In the present study, the gel strength of fresh eel was 250 g.cm. The similar study done by other researcher, Partiban et al., (2005) recorded the gel strength of fresh tilapia fish was 710 g.cm. Ramachandran et al., (2009) Water holding capacity of muscle proteins of eel fish (WHC) In food applications, the WHC of the protein is more important than water binding capacity (WBC). It refers to the sum of bound water, hydrodynamic water and physically entrapped water. The physically entrapped water contributes more to the water holding capacity. Many functional properties of proteins depend on water-protein interaction as water molecules bind to several groups in proteins. Myofibrils are the largest waterholding filament lattices and most of the water in the meat is held within the myofibrils in the narrow channels between the filaments (Sankar, 2009). In the present study, the water holding capacity (WHC) of fresh eel mince was 2.53 g / g muscle. Similar study was done by Partiban et al., (2005) observed the WHC of fresh tilapia was 2.8 g / g muscle.
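The concentration-dependent measurements reported in the preceding paragraphs are easier to compare side by side; the short Python sketch below simply collects the values stated in the text (MFP = myofibrillar fraction, SPP = sarcoplasmic fraction; concentrations in mg/ml) into one structure and prints them.

eel_functional_properties = {
    "EAI (m2/g)":         {"MFP": {2.5: 2.8,  5.0: 3.1},  "SPP": {2.5: 2.5,  5.0: 2.2}},
    "ES (min)":           {"MFP": {2.5: 55,   5.0: 60},   "SPP": {2.5: 38,   5.0: 40}},
    "Viscosity (cP)":     {"MFP": {2.5: 2.3,  5.0: 2.5},  "SPP": {2.5: 1.9,  5.0: 2.3}},
    "Foam expansion (%)": {"MFP": {2.5: 3.1,  5.0: 3.0},  "SPP": {2.5: 2.4,  5.0: 2.3}},
    "FVS (%)":            {"MFP": {2.5: 60.5, 5.0: 65.3}, "SPP": {2.5: 28.8, 5.0: 35.6}},
}

for prop, fractions in eel_functional_properties.items():
    for fraction, by_conc in fractions.items():
        for conc, value in by_conc.items():
            print(f"{prop:20s} {fraction}  {conc} mg/ml: {value}")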
2019-04-02T13:13:38.299Z
2018-03-20T00:00:00.000
{ "year": 2018, "sha1": "20b2717354eb79c868e8486fe0605a45901d319c", "oa_license": null, "oa_url": "https://www.ijcmas.com/7-3-2018/Rohini%20Mugale2,%20et%20al.pdf", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "454b92ab18db638fb7dc95d0f3bd109ab52b039c", "s2fieldsofstudy": [ "Environmental Science", "Biology" ], "extfieldsofstudy": [ "Chemistry" ] }
221722384
pes2o/s2orc
v3-fos-license
Creatine Supply Attenuates Ischemia-Reperfusion Injury in Lung Transplantation in Rats Ischemia-reperfusion injury (IRI) is one of the factors limiting the success of lung transplantation (LTx). IRI increases death risk after transplantation through innate immune system activation and inflammation induction. Some studies have shown that creatine (Cr) protects tissues from ischemic damage by its antioxidant action. We evaluated the effects of Cr supplementation on IRI after unilateral LTx in rats. Sixty-four rats were divided into four groups: water + 90 min of ischemia; Cr + 90 min of ischemia; water + 180 min of ischemia; and Cr + 180 min of ischemia. Donor animals received oral Cr supplementation (0.5 g/kg/day) or vehicle (water) for five days prior to LTx. The left lung was exposed to cold ischemia for 90 or 180 min, followed by reperfusion for 2 h. We evaluated the ventilatory mechanics and inflammatory responses of the graft. Cr-treated animals showed a significant decrease in exhaled nitric oxide levels and inflammatory cells in blood, bronchoalveolar lavage fluid and lung tissue. Moreover, edema, cell proliferation and apoptosis in lung parenchyma were reduced in Cr groups. Finally, TLR-4, IL-6 and CINC-1 levels were lower in Cr-treated animals. We concluded that Cr caused a significant decrease in the majority of inflammation parameters evaluated and had a protective effect on the IRI after LTx in rats. Introduction Lung transplantation (LTx) is a well-established therapeutic option for the treatment of several diseases, such as chronic obstructive pulmonary disease, pulmonary fibrosis, bronchiectasis, and primary pulmonary hypertension [1][2][3]. The major cause of post-transplant death during the first 30 days is graft failure. Other causes include multiple organ and cardiovascular failure and technical failures related to the transplant procedures, as well as bronchiolitis obliterans syndrome [3]. The success Surgical Procedures Donor animals: twenty-four hours after the last gavage, donors were anesthetized with isoflurane 5% (Isothane, Baxter), orotracheally intubated, and mechanically ventilated (FlexiVent, SCIREQ, Montreal, CA) with 10 mL/kg, 90 cycles/min, and PEEP 3 cm H2O. General anesthesia during surgical procedures was maintained with isoflurane (2%). After median laparotomy, 50 IU of heparin was injected into the inferior vena cava. Then, median sternotomy was performed, and the pulmonary artery was cannulated for anterograde perfusion with 20 mL of low-potassium dextran (LPD) solution (Perfadex, Vitrolife, Sweden) at 4 • C with constant pressure (20 cm H2O) [13]. Before perfusion, the inferior vena cava was sectioned to decrease venous return, and the left atrial appendage was amputated to drain the LPD. Animals were euthanized by exsanguination. After perfusion, the cardiopulmonary block was excised and kept inflated by tracheal occlusion at the end of inspiration. The cardiopulmonary block was allocated in a moistened petri plate with Perfadex solution at 4 • C, and the left hilum was dissected and cuffs were fixed in the artery, vein, and bronchus, as previously described [14]. The grafts were kept inflated during cold ischemia for 90 or 180 min. Receptor animals: anesthesia, intubation, and ventilation were performed in the same way as in the donors. Left thoracotomy at the fifth intercostal space was performed, and the left hilum was dissected and clamped as proximal as possible using a stereomicroscope (Model SZ61, Olympus, Tokyo, Japan). 
Then, graft implantation was performed by introducing graft cuffs into a small hole made in the ventral wall of the artery, vein, and bronchus. After cuff fixation using a 7.0 polypropylene silk, the clamps of the bronchus, vein, and artery were removed to re-establish airflow and circulation in the graft. After surgery, the animals received analgesia (dipyrone 400 mg/kg) through the orogastric duct. Closure of the receptor incision was performed using 2.0 monofilament nylon sutures. The period of reperfusion was 120 min for all animals. All procedures are summarized in Figure 1B. Respiratory Mechanics After immediate reperfusion (re-established airflow and circulation) and 120 min of reperfusion of the graft, impedance of the respiratory system of the animals was calculated by the forced oscillation model [15]. We used airway resistance (RAW), tissue elastance (HTIS), and tissue damping (GTIS) parameters. Exhaled Nitric Oxide (NOex) After respiratory mechanics, anesthesia was closed, and ventilation was performed using only ambient air. After 5 min, a mylar balloon was connected to the FlexiVent expiratory way, and NOex was collected for 3 min [8,13]. The NOex concentrations were measured by chemiluminescence using a fast-responding analyzer (NOA 280, Sievers Instruments, Inc., Boulder, CO, USA). Blood Gases After NOex sample collection, median laparotomy was performed to collect blood by puncturing the abdominal aorta with a heparinized syringe. Analysis of the blood samples was performed on the Stat Profile 10 apparatus. Peripheral Blood Cell Count and Euthanasia Five milliliters of blood from the inferior vena cava were collected to determine the total leukocyte number and differential leukocyte count (200 cells/slide). Blood was centrifuged, and creatinine levels were measured in plasma. Subsequently, the animal was euthanized by abdominal aorta artery section, and the cardiopulmonary block was excised. Bronchoalveolar Lavage Fluid (BALF) and Inflammatory Mediators BALF was selectively performed on the left lung by instillation of 5 mL of saline solution. We evaluated the total and differential number of inflammatory cells [16]. The supernatant of BALF was used to measure the levels of IL-6, IL-10, TNF-α, and CINC-1 by ELISA (RD Systems, CA, USA). Histomorphometric and Immunohistochemistry Study The left lobe was kept in paraformaldehyde solution for 24 h. After that, it was included in paraffin and slides were prepared with 5 µm thickness sections. The slides were stained with hematoxylin-eosin for the analysis of the density of mono/polymorphonuclear cells in the lung parenchyma compartment and for the perivascular edema index [8,13,16]. The lung parenchyma corresponds to lung areas without airways, constituted by alveoli and alveolar saci and pulmonary vessels, including capillaries. For analysis of the perivascular edema index, the area considered was between the external border of the vascular smooth muscle until the adventitia of the vessel. For immunohistochemistry, histological sections were incubated with anti-PCNA (Dako M0879, 1:50), anti-Caspase-3 (NB500-210, 1:200), anti-TLR-4 (SC30002, 1:100), and anti-TLR-7 (SC30004, 1:25); point-counting histomorphometry was applied to quantify inflammatory cells (macrophages as representant of mononuclear cells and neutrophils as representant of polymorphonuclear cells) in the lung parenchyma. Statistical Analysis The normal distribution and homogeneity of variances were evaluated with the Shapiro-Wilk and Levene tests, respectively. 
We used the Student's t-test or Mann-Whitney Rank test according to the normality of data distribution, and the statistical analysis was performed using SigmaPlot 11 statistical software. Statistical significance was defined as p-value < 0.05. Results Eight animals per group were evaluated. The mean weight of the recipient animals was 394 ± 40 g, while the lung and heart weights were 3081 ± 599 mg and 1208 ± 104 mg, respectively. There was no difference between groups (Table 1). Respiratory Mechanics Lung mechanics data were evaluated at the beginning of reperfusion (immediate reperfusion) and after 2 h (final reperfusion). There was an increase in RAW and a decrease in GTIS and HTIS in animals treated with Cr in the immediate reperfusion. However, there was no change in RAW in the final reperfusion (Table 2). Data from donor rats were collected prior to exsanguination (Appendix A Table A1). NOex Concentration The creatine-treated animals showed a lower NOex concentration in both ischemia times ( Figure 2). NOex Concentration The creatine-treated animals showed a lower NOex concentration in both ischemia times ( Figure 2). Creatinine and Blood Gas Concentration There was an increase in plasma creatinine concentration in creatine-treated animals in the two evaluated ischemia times. There was an improvement in the oxygenation of creatine-treated animals after 90 min of ischemia, with a decrease in pCO2 and increase in pO2. There was no difference in lactate concentration ( Table 3). The creatinine concentration in donor rats was measured prior to exsanguination (Appendix Table A1). Creatinine and Blood Gas Concentration There was an increase in plasma creatinine concentration in creatine-treated animals in the two evaluated ischemia times. There was an improvement in the oxygenation of creatine-treated animals after 90 min of ischemia, with a decrease in pCO2 and increase in pO2. There was no difference in lactate concentration (Table 3). The creatinine concentration in donor rats was measured prior to exsanguination (Appendix A Table A1). Inflammatory Cells in the Peripheral Blood and BALF There was a decrease in the total number of leukocytes, neutrophils, and monocytes/macrophages counted in the blood smears and BALF of creatine-treated animals. There was a difference in the number of lymphocytes in the blood smears only in creatine-treated animals after 180 min of ischemia (Table 4). Lung Parenchyma Inflammation and Edema Index The number of mononuclear and polymorphonuclear cells in the lung tissue and the perivascular edema index was lower in creatine-treated animals compared to those in the control groups ( Figure 3A-C or Figure 4). The illustrative photomicrograph of perivascular edema is shown in Figure 3D. Proliferation, Apoptosis, and Immune Response Creatine-treated animals at the two ischemia times had decreased proliferation, apoptosis, and TLR-4 expression of macrophages and neutrophils in the lung parenchyma. There was no change in TLR-7 expression (Figures 5 and 6). Proliferation, Apoptosis, and Immune Response Creatine-treated animals at the two ischemia times had decreased proliferation, apoptosis, and TLR-4 expression of macrophages and neutrophils in the lung parenchyma. There was no change in TLR-7 expression (Figures 5 and 6). Levels of Inflammatory Mediators in BALF Creatine-treated animals showed lower levels of IL-6 and CINC-1. The IL-10 level was higher in creatine-treated animals after 180 min of ischemia ( Figure 7A-D). 
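The group comparisons reported in these results follow the decision rule described in the statistical analysis section (check normality and variance homogeneity, then use a parametric or non-parametric test). A minimal Python sketch of that rule is given below; it is illustrative only, the study itself used SigmaPlot 11, and the example measurements are invented.

from scipy import stats

def compare_groups(control, creatine, alpha=0.05):
    """Two-group comparison: Shapiro-Wilk for normality, Levene for variances,
    then Student's t-test (normal data) or Mann-Whitney (non-normal data)."""
    _, p_norm_c = stats.shapiro(control)
    _, p_norm_t = stats.shapiro(creatine)
    _, p_var = stats.levene(control, creatine)
    if p_norm_c > alpha and p_norm_t > alpha:
        _, p = stats.ttest_ind(control, creatine, equal_var=(p_var > alpha))
        test = "Student's t-test"
    else:
        _, p = stats.mannwhitneyu(control, creatine, alternative="two-sided")
        test = "Mann-Whitney"
    return test, p, p < alpha

# Invented NOex-like values, eight animals per group as in the experiment:
control  = [12.1, 10.8, 13.5, 11.9, 12.7, 14.0, 11.2, 12.9]
creatine = [ 8.3,  7.9,  9.1,  8.8,  7.5,  9.4,  8.0,  8.6]
print(compare_groups(control, creatine))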
Levels of Inflammatory Mediators in BALF Creatine-treated animals showed lower levels of IL-6 and CINC-1. The IL-10 level was higher in creatine-treated animals after 180 min of ischemia ( Figure 7A-D). Discussion Cr supplementation in donor animals for five days before LTx attenuated the effects of IRI. The main findings in the Cr groups were an improvement in pulmonary function and oxygenation as well as a decrease in inflammatory cells in peripheral blood, BALF, and lung parenchyma. Additionally, Cr appears to promote the regulation of inflammatory mediators and immune system. Although further studies are needed to establish the causality of these findings, several previous studies provide evidence that energetic changes and deregulation of the creatine/creatine kinase (Cr/CK) pathway are closely linked to the etiology of hypoxic disorders and inflammatory drugs [17,18]. All isoenzymes catalyze the reversible transfer of gamma-phosphate from ATP to the guanidine group of Cr to generate phosphocreatine and ADP, thereby mediating efficient storage in the cytosol of high-energy phosphates for rapid and focal replenishment of ATP [18][19][20]. Clinical studies highlight the neuroprotective properties of Cr and the beneficial effects of phosphocreatine on the attenuation of cardiovascular stress [18]. Therefore, we believe that the inclusion of Cr in the diet could be a promising candidate for prophylactic treatment or as a complement to conventional therapies for ischemic diseases or even surgical procedures that require temporary organ ischemia, such as organ transplantation. IRI is the main cause of PGD and it is associated with the high morbidity and mortality rate of recipient patients in the first days after LTx. Several structural and functional changes modify the integrity of the alveolar-capillary barrier and cause intra-alveolar and interstitial edema [21]. In contrast, IRI is attenuated by the reduced expression of chemokines and proinflammatory cytokines and a decrease in alveolar macrophages [22]. Discussion Cr supplementation in donor animals for five days before LTx attenuated the effects of IRI. The main findings in the Cr groups were an improvement in pulmonary function and oxygenation as well as a decrease in inflammatory cells in peripheral blood, BALF, and lung parenchyma. Additionally, Cr appears to promote the regulation of inflammatory mediators and immune system. Although further studies are needed to establish the causality of these findings, several previous studies provide evidence that energetic changes and deregulation of the creatine/creatine kinase (Cr/CK) pathway are closely linked to the etiology of hypoxic disorders and inflammatory drugs [17,18]. All isoenzymes catalyze the reversible transfer of gamma-phosphate from ATP to the guanidine group of Cr to generate phosphocreatine and ADP, thereby mediating efficient storage in the cytosol of high-energy phosphates for rapid and focal replenishment of ATP [18][19][20]. Clinical studies highlight the neuroprotective properties of Cr and the beneficial effects of phosphocreatine on the attenuation of cardiovascular stress [18]. Therefore, we believe that the inclusion of Cr in the diet could be a promising candidate for prophylactic treatment or as a complement to conventional therapies for ischemic diseases or even surgical procedures that require temporary organ ischemia, such as organ transplantation. 
IRI is the main cause of PGD and it is associated with the high morbidity and mortality rate of recipient patients in the first days after LTx. Several structural and functional changes modify the integrity of the alveolar-capillary barrier and cause intra-alveolar and interstitial edema [21]. In contrast, IRI is attenuated by the reduced expression of chemokines and proinflammatory cytokines and a decrease in alveolar macrophages [22]. After LTx the majority of alveolar macrophages in the allograft are donor-derived. Donor-derived alveolar macrophages are predominant for at least 2 to 3 years after LTx [23,24]. The early resident leukocyte responses are likely to play a significant role in "initiating" IRI during LTx, and represent potentially important therapeutic targets for reducing PGD [25]. Macrophages maintain high amounts of intracellular creatine. Ji et al. showed that through transporter Slc6a8-mediated uptake, macrophages accumulate high amounts of intracellular creatine that reprogram their polarization by inhibiting IFN-γ while promoting IL-4 [26]. Our study on experimental lungs points to an important role for creatine treatment in donor lungs. Cr supplementation probably improves the conditions against IRI, and studies at the molecular level are required to prove our findings. In the period of ischemia, the pulmonary parenchyma cells release chemotactic substances [27], resulting in massive adhesion of inflammatory cells connected to arterioles, venules, and alveolar capillaries. The accumulation of red blood cells in IRI is not fully understood [27]. Wolf et al. studied a model of anoxia/reoxygenation in the lung in normothermia and showed a significant increase in the accumulation of red blood cells in the lung [28]. Eppinger et al. showed the accumulation of red blood cells in the non-ischemic collateral lung after 30 min of reperfusion, and suggested the role of chemotactic signaling from the ischemic lung reperfused to the non-ischemic lung [29]. This process of vascular congestion can result in acute edema and impairment of the gas exchange function at the alveolar-capillary membrane level. We observed an improvement in oxygen and carbon dioxide levels in the samples collected from creatine-treated animals. After reperfusion, pulmonary vascular resistance can also be increased up to three times in relation to normal levels due to vasoconstriction of the pre-capillary pulmonary system after lung IRI [30]. Moreover, increased pulmonary vascular resistance in association with increased vascular permeability [31] results in pulmonary edema in the ischemic [32] and reperfusion periods [33]. As a result, the increase in total and extravascular water content in the lung causes poor gas exchange and worsening of lung mechanics, which leads to low pO2 [33], increased peak airway pressure, and a high alveolar-arterial oxygen gradient [27]. Our results showed the attenuation of vascular edema in creatine-treated animals. Previously, we demonstrated that Cr was able to decrease the deleterious effects of IRI on pulmonary mechanics and edema formation [8]. Moreover, there was an improvement in lung tissue resistance and elastance in animals supplemented with Cr. However, in the present study, these beneficial effects of Cr on pulmonary mechanics were partially observed. Creatine-treated lung function was preserved in immediate reperfusion, but after 2 h of reperfusion, only tissue elastance decreased in both Cr groups (90 and 180 min). 
We believe that the absence of difference in other parameters in lung mechanics, as observed in the previous study, is due to the fact that the unilateral LTx model is much more complex than the IRI model. For example, this model implies several physiological interactions between the graft and new body. In addition, the graft remained in cold ischemia for 90 or 180 min, where it is not free of damage to cells and tissues. Apoptosis is a process of programmed cell death during which cells retain membrane integrity and do not release danger-associated molecular patterns (DAMPs), however, these apoptotic cells actively produce anti-inflammatory signals [34]. According to Lockinger et al. [30], after cold ischemia, apoptosis is only found during reperfusion and influenced by the duration of cold ischemia. A moderate time of cold ischemia of 6-12 h before reperfusion triggers more apoptosis in the lung tissue than necrosis. However, a longer cold ischemia time (24 h before reperfusion) resulted in necrosis-dominated cell death [27]. In our study, in which the animals remained in ischemia for 90 or 180 min, increased apoptosis (as denoted by increased caspase-3 expression) and alveolar proliferation of macrophages/neutrophils in lung parenchyma (as denoted by increased density of these cells) were observed at both times in control animals, indicating that damage occurs even within shorter periods of time. Furthermore, the Cr group showed attenuation of this cellular damage. Innate-immune cells such as macrophages and neutrophils are pivotal in the pathogenesis of IRI. Some studies have documented that, in addition to adaptive immunity, innate immune cells play an important role in transplantation rejection, as allograft loss [34]. IRI is attenuated when alveolar macrophages are reduced, which occurs due to reduced expression of proinflammatory cytokines and chemokines. Reports point to the fundamental role of alveolar macrophages as orchestrators of innate immune responses within the lung [22]. The M1 macrophage is a mainly proinflammatory and tissue destructive subset, which is characterized by increased expression of CD86, inducible nitric oxide synthase (iNOS), TNF-α, IL-1, and IL-6. On the other hand, the M2 macrophage is the anti-inflammatory and tissue-repairing subset, which is characterized by high expression of CD163, CD206, Arg1, and IL-1028. In the present study, we could not perform such analysis, however, it will be addressed in further studies. The production of TNF-α by alveolar macrophages increases the secretion of proinflammatory cytokines and chemokines by alveolar epithelial cells [35]. The initial phase of IRI is neutrophil independent and characterized by a predominance of TNF-α and IL-1β, whereas the late phase is dependent on recruitment and activation of neutrophils and characterized by increased vascular permeability, chemokines and heterogeneous cytokines [36]. In our study, there was no change in TNF-α in creatine-treated animals. If we relate these data to inflammation of lung tissue and BALF, we can observe that, in our model, there was inflammation mediated by the increase in mononuclear/macrophages cells. We believe that more studies are needed to investigate the production pathway of proinflammatory interleukins in the LTx model in rats. IL-6 is produced by innumerable cells, including bronchial and lung epithelial cells, whereas IL-10 is released primarily by T-helper type 2 cells. 
The high level of these two markers means an increased inflammatory state and may have potential for prognosis [37]. Strieter et al. (2002) found that IL-10 gene expression increases when proinflammatory cytokines are regulated in acute inflammatory responses [38]. In our model, we observed a decrease in IL-6 production and increase in IL-10 production in the creatine groups, which suggests an equilibrium in the regulation of interleukins in cell protection. This can also be observed due to decreased CINC-1 production. The innate immune system recognizes a wide variety of pathogens such as viruses, bacteria, and fungi by standard recognition receptors. These receptors may be connected to the membrane, such as the TLRs, which are expressed by a variety of immune cells including macrophages, monocytes, dendritic cells, and neutrophils [39]. Inflammatory stimuli have been associated with increased expression and activation of TLRs [40]. In our study, there was a decrease in TLR-4 expression in both Cr groups and a tendency to increase TLR-7 expression in 90 min in the Cr group. As in our previous study [8], Cr was able to modulate innate immune system response. The role of TLRs has been investigated in organ transplants and especially with regard to IRI, acute and chronic rejection, and infection [41]. However, the exact role of TLRs in organ transplantation is not fully understood. Another parameter that is indicative of the inflammatory process is the NOex level. NO is a biological mediator produced by various cell types, including vascular endothelium. It is also an inhibitor of platelet aggregation and neutrophil adhesion and modulates vascular permeability. Additionally, NO acts as a bronchodilator and neurotransmitter [42]. Previous studies have shown that NOex can be used as an indirect marker of pulmonary inflammation and is correlated with severity and response to treatment [43]. In addition, NO is a highly reactive free radical gas that reacts with a wide variety of biomolecules to produce reactive nitrogen species [44]. Almeida et al. [8] showed that Cr is able to ameliorate oxidative damage caused by pulmonary IRI due to the decrease in NOex levels. In this study, with the application of Cr in the IRI model in LTx, we also observed a reduction in oxidative damage at the different ischemia times. Despite understanding the role of oxidants in inflammatory progression, treatments that use a variety of antioxidant approaches have not yet been successful. The failure of some antioxidant clinical trials indicates a gap in our general understanding of whether oxidative stress is beneficial or prejudicial in inflammation [39,45]. More studies that use these antioxidant agents in several experimental models are needed in order to assert their efficacy in simple models of inflammation or more complex models such as IRI in LTx. New preclinical studies should be conducted to test the best moment, best way and best dosage of Cr administration for improving IRI. For example, Cr could be also used in the graft preservation solution for either flushing or cold storage before the surgical procedure, or it could be administered as an adjuvant therapy for recipients in the postoperative period. Besides, considering species to specific differences, other animal models should be tested in new studies. 
Based upon the beneficial effects of Cr in attenuating the inflammatory cell influx after IR and in promoting cellular homeostasis presented in this study and in previous ones, clinical studies investigating the effects of Cr supplementation are warranted. Conclusions In conclusion, Cr supplementation in rats submitted to unilateral pulmonary transplantation attenuated the deleterious effects caused by IRI, as observed through a reduction in inflammation and the preservation of the structure and function of the lung tissue. Conflicts of Interest: The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.
2020-09-16T13:06:19.351Z
2020-09-01T00:00:00.000
{ "year": 2020, "sha1": "c1efb4408215f93e01d2e98fb4bf0d2003c2460d", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2072-6643/12/9/2765/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "09624a0be8f0f97e434ffaf33c6fa364b86e5f19", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
4375406
pes2o/s2orc
v3-fos-license
On Algorithms Employing Treewidth for L-bounded Cut Problems Given a graph G = (V,E) with two distinguished vertices s, t ∈ V and an integer parameter L > 0, an L-bounded cut is a subset F of edges (vertices) such that every path between s and t in G \ F has length more than L. The task is to find an L-bounded cut of minimum cardinality. Though the problem is very simple to state and has been studied since the beginning of the 1970s, it is still not well understood. The problem is known to be NP-hard to approximate within a small constant factor even for L ≥ 4 (for L ≥ 5 for the vertex-deletion version). On the other hand, the best known approximation algorithm for general graphs has approximation ratio only O(n^{2/3}) in the edge case, and O(√n) in the vertex case, where n denotes the number of vertices. We show that for planar graphs, it is possible to solve both the edge- and the vertex-deletion version of the problem optimally in O((L+2)^{3L} · n) time. That is, the problem is fixed-parameter tractable (FPT) with respect to L on planar graphs. Furthermore, we show that the problem remains FPT even for bounded genus graphs, a superclass of planar graphs. Our second contribution deals with approximations of the vertex-deletion version of the problem. We describe an algorithm that, for a given graph G, its tree decomposition of width τ and vertices s and t, computes a τ-approximation of the minimum L-bounded s-t vertex cut; if the decomposition is not given, then the approximation ratio is O(τ · √(log τ)). For graphs with treewidth bounded by O(n^{1/2−ε}) for any ε > 0, but not by a constant, this is the best approximation in terms of n that we are aware of. Submitted: September 2017 Reviewed: November 2017 Revised: December 2017 Reviewed: December 2017 Revised: January 2018 Accepted: January 2018 Final: February 2018 Published: February 2018 Article type: Regular paper Communicated by: D. Wagner This research was partially supported by project GA15-11559S of GA ČR. E-mail address: petr.kolman@mff.cuni.cz (Petr Kolman) Introduction The subject of this paper is a variation of the classical s-t cut problem, namely the minimum L-bounded edge (vertex) cut problem: given a graph G = (V, E) with two distinguished vertices s, t ∈ V and an integer parameter L > 0, find a subset F of edges (vertices) of minimum cardinality such that every path between s and t in G \ F has length more than L. The problem has been studied in various contexts since the beginning of the 1970s (e.g., [1,26,2]) and occasionally it appears also under the name of the short paths interdiction problem [19]. Closely related is the shortest path most vital edges and vertices problem (e.g., [3,4,5]): given a graph G, two distinguished vertices s and t and an integer k, the task is to find a subset F of k edges (vertices) whose removal maximizes the increase in the length of the shortest path between s and t. If we introduce an additional parameter, the desired minimum distance of s and t, we obtain a parameterized version of the L-bounded cut problem: given a graph G, two distinguished vertices s and t and integers k and L, does there exist a subset F of at most k edges (vertices) such that every path between s and t in G \ F has length more than L? We also note that NP-hardness of the shortest path most vital edges (vertices) problem immediately implies NP-hardness of the L-bounded edge (vertex) cut problem, and vice versa.
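To make the definition concrete, the following short Python sketch computes a minimum L-bounded edge cut by brute force, trying edge subsets in order of increasing size and checking the remaining s-t distance with a BFS. It is exponential in the number of edges and intended only as a reference implementation of the definition, not as one of the algorithms discussed in the paper.

from itertools import combinations
from collections import deque

def st_distance(adj, removed, s, t):
    """BFS distance from s to t in the graph with the edges in `removed` deleted."""
    seen, queue = {s}, deque([(s, 0)])
    while queue:
        u, d = queue.popleft()
        if u == t:
            return d
        for v in adj[u]:
            if v not in seen and frozenset((u, v)) not in removed:
                seen.add(v)
                queue.append((v, d + 1))
    return None  # t is unreachable

def min_l_bounded_edge_cut(vertices, edges, s, t, L):
    adj = {v: [] for v in vertices}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    for k in range(len(edges) + 1):
        for F in combinations(edges, k):
            removed = {frozenset(e) for e in F}
            d = st_distance(adj, removed, s, t)
            if d is None or d > L:
                return list(F)   # smallest F whose removal makes dist(s, t) > L

# s and t are joined by a direct edge and by two paths of length 2; with L = 1
# only the direct edge is too short, so the optimum is the single edge (s, t).
print(min_l_bounded_edge_cut(
    ["s", "a", "b", "t"],
    [("s", "a"), ("a", "t"), ("s", "b"), ("b", "t"), ("s", "t")],
    "s", "t", 1))
# -> [('s', 't')]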
In contrast to many other cut problems on graphs (e.g., multiway cut, multicut, sparsest cut, balanced cut, maximum cut, multiroute cut), the known approximations of the minimum L-bounded cut problem are substantially weaker. In this work we focus on algorithms for restricted graph classes, namely planar graphs, bounded genus graphs and graphs with bounded, yet not constant, treewidth, and provide new results for the L-bounded cut problem on them; the results for planar graphs solve one of the open problems suggested by Bazgan et al. [5]. We also remark that the L-bounded cut problem does not fit into the framework of Czumaj et al. [9] that is applicable to some NP-hard problems in graphs with superlogarithmic treewidth. Related Results. NP-hardness of the shortest path most vital edges problem (and, thus, as noted above, also of the L-bounded cut problem) was proved by Bar-Noy et al. [4]. The best known approximation algorithm for the minimum L-bounded cut problem on general graphs has approximation ratio only O(min{L, n/L}) ⊆ O(√n) for the vertex case and O(min{L, n^2/L^2, √m}) ⊆ O(n^(2/3)) for the edge case, where m denotes the number of edges and n the number of vertices [2]. On the lower bound side, the edge-deletion version of the problem is known to be NP-hard to approximate within a factor of 1.1377 for L ≥ 4, and the vertex-deletion version for L ≥ 5 [2]; for smaller values of L the problem is solvable in polynomial time [26,27]. Independently, Khachiyan et al. [19] proved that a version of the problem with edge lengths is NP-hard to approximate within a factor smaller than 1.36. Recently, assuming the Unique Games Conjecture, Lee [23] proved that the problem is NP-hard to approximate within any constant factor. An instance of the L-bounded edge (vertex, resp.) cut problem on a graph G = (V, E) of treewidth τ can be cast as an instance of the constraint satisfaction problem (CSP) with |V| variables, domain of size L + 2 (L + 3, resp.) and treewidth τ. As CSP instances with n variables, treewidth bounded by τ and domain size by D can be solved in O(D^τ n) time [15] (when a tree decomposition of width τ of the constraint graph is given), the problem is fixed-parameter tractable with respect to τ. Dvořák and Knop [10] provide a direct proof of the same result with a slightly worse dependence on L and τ; they also prove that the problem is W[1]-hard when parameterized by the treewidth only. From the point of view of parameterized complexity, the problem was also studied by Golovach and Thilikos [16], Bazgan et al. [5] and by Fluschnik et al. [14]. For planar graphs, the problem is known to be NP-hard [13,30], too, and the edge-deletion version of the problem has no polynomial-size kernel when parameterized by the combination of L and the size of the optimal solution [14]. For a more detailed overview of other related results and applications, we refer to the papers [19,2,27]. For more background on parameterized algorithms, we refer to the textbook by Cygan et al. [8]. Our Contribution. We show that on planar graphs, both the edge- and the vertex-deletion version of the problem are solvable in O((L + 2)^(3L) n) time. That is, we show that on planar graphs the minimum L-bounded cut problem is fixed-parameter tractable (FPT) with respect to L.
Furthermore, we show that the problem remains FPT even for bounded genus graphs, a superclass of planar graphs. This is in contrast with the situation for general graphs - the problem is NP-hard even for L = 4 and L = 5, for the edge- and vertex-deletion versions, respectively. Our second contribution is a τ-approximation algorithm for the vertex-deletion version of the problem, if a tree decomposition of width τ is given. If the decomposition is not given, then using the best known algorithm to compute a tree decomposition of a given graph, we obtain an O(τ √(log τ))-approximation for general graphs with treewidth τ, and an O(τ)-approximation for planar graphs, graphs excluding a fixed minor and graphs with treewidth bounded by O(log n). For graphs with treewidth bounded by τ = O(n^(1/2−ε)) for any ε > 0, but not by a constant, this is, in terms of n, the best approximation we are aware of. Our results are based on a combination of observations about the structure of L-bounded cuts and various known results. The proofs are straightforward but apparently non-obvious, considering the attention given to the problem in recent years. Preliminaries Throughout the paper, given a graph G = (V, E), we use m to denote the number of edges in G, that is, m = |E|, and for u, v ∈ V, we use d(u, v) to denote the shortest path distance between u and v, that is, the number of edges on a shortest path. For a graph G = (V, E) and a subset of vertices W ⊂ V, the subgraph of G induced by W is the graph (W, F) where F is the subset of edges with both vertices in W, that is, F = {{u, v} ∈ E | u, v ∈ W}. For a graph G = (V, E) and a subset of edges F ⊂ E, we use G \ F to denote the graph (V, E \ F), and for a subset of vertices W ⊂ V, we use G \ W to denote the subgraph of G induced by V \ W. Given a graph G = (V, E) with two distinguished vertices s and t, a subset of vertices U ⊆ V \ {s, t} is an L-bounded s–t vertex cut if every path between s and t in G \ U has length more than L (and analogously a subset of edges F ⊆ E is an L-bounded s–t edge cut if every path between s and t in G \ F has length more than L). For notions related to the treewidth of a graph and tree decomposition we stick to the standard terminology as given in the book by Kloks [21]. A tree decomposition of a graph G = (V, E) is a tree T with a node set V(T) in which each node a ∈ V(T) has an assigned set of vertices B(a) ⊆ V, called a bag, such that the union of all bags is V, i.e., ∪_{a ∈ V(T)} B(a) = V, with the following properties: • for any {u, v} ∈ E, there exists a node a ∈ V(T) such that u, v ∈ B(a), • for any v ∈ V and any two nodes a, b ∈ V(T) with v ∈ B(a) and v ∈ B(b), also v ∈ B(c) for all nodes c on the path between a and b in T. The tree decomposition is rooted if one of the nodes in the tree T is specified as the root. The treewidth of a tree decomposition T is the size of the largest bag of T minus one. The treewidth of a graph G is the minimum treewidth over all possible tree decompositions of G. To distinguish vertices of a graph G and of a tree decomposition T of G, we call the vertices of the tree decomposition nodes. A tree decomposition satisfies the non-containment condition if no bag is contained in any other bag. A simple yet important property of tree decompositions is stated in the following lemma. Lemma 1 (Folklore) Let G be a graph and T a tree decomposition of G satisfying the non-containment condition. Then, for every edge {a, b} of T, the set B(a) ∩ B(b) is a vertex cut in G separating the vertices that appear only in bags of the component of T − {a, b} containing a from the vertices that appear only in bags of the component containing b. Note that the size of the cut B(a) ∩ B(b) in Lemma 1 is at most the width of the tree decomposition T. In a rooted tree, the parent of a node is the node connected to it on the path to the root; every node except the root has a unique parent. A child of a node v is a node of which v is the parent. A descendant of any node v is any node which is either the child of v or is (recursively) the descendant of any of the children of v. A leaf is a vertex having no child.
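As a small illustration of these definitions (our own sketch, not from the paper; the decomposition is assumed to be given as a dictionary of bags plus a list of tree edges), the following Python snippet checks the defining properties of a tree decomposition and computes its width:

def is_tree_decomposition(adj, bags, tree_edges):
    # adj: graph as a dict of adjacency lists.
    # bags: dict mapping each decomposition node to a set of graph vertices.
    # tree_edges: list of pairs of decomposition nodes forming a tree.
    vertices = set(adj)
    tree_adj = {a: [] for a in bags}
    for a, b in tree_edges:
        tree_adj[a].append(b)
        tree_adj[b].append(a)
    # every vertex appears in some bag
    if set().union(*bags.values()) != vertices:
        return False
    # every edge of G has both endpoints in some common bag
    for u in adj:
        for v in adj[u]:
            if not any({u, v} <= B for B in bags.values()):
                return False
    # for every vertex v, the nodes whose bags contain v induce a connected subtree
    for v in vertices:
        hosts = {a for a, B in bags.items() if v in B}
        start = next(iter(hosts))
        seen, stack = {start}, [start]
        while stack:
            a = stack.pop()
            for b in tree_adj[a]:
                if b in hosts and b not in seen:
                    seen.add(b)
                    stack.append(b)
        if seen != hosts:
            return False
    return True

def decomposition_width(bags):
    # the size of the largest bag minus one
    return max(len(B) for B in bags.values()) - 1

For any edge {a, b} of the decomposition tree, the separator B(a) ∩ B(b) referred to in Lemma 1 has size at most the width computed here.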
Fixed-parameter Tractability on Planar and Bounded Genus Graphs Our main tools are the following two well-known results. Theorem 1 (Robertson and Seymour [28], Bodlaender [6]) The treewidth of a planar graph with radius d is at most 3d. Theorem 2 (Freuder [15]) CSP instances with n variables, treewidth bounded by τ and domain size by D are solvable in O(D^τ n) time. Since the minimum L-bounded edge (vertex, resp.) cut problem on a graph G = (V, E) of treewidth τ can be cast as a CSP instance with |V| variables, treewidth τ and domain of size L + 2 (L + 3, resp.), both versions are solvable in O((L + 2)^τ |V|) time (in O((L + 3)^τ |V|) time, resp.), as already stated in the introduction and explained in Appendix A. The main result of this section says that the L-bounded cut problem on planar graphs is fixed-parameter tractable with respect to the parameter L. Theorem 3 Both the edge- and the vertex-deletion versions of the minimum L-bounded cut problem on planar graphs are solvable in O((L + 2)^(3L) n) time. Proof: We prove the theorem for the edge-deletion version; the proof for the vertex-deletion version is analogous. Given a planar graph G = (V, E) with distinguished vertices s and t, let V' = {v ∈ V | d(s, v) + d(v, t) ≤ L}. In words, V' is the subset of vertices lying on paths of length at most L between s and t. Without loss of generality we assume that d(s, t) ≤ L - otherwise the problem is trivial. Let G' be the subgraph of G induced by V'. Note that the radius of G' is at most L as, by definition, d(s, v) ≤ L for every v ∈ V'. The set V' (and, thus, the subgraph G') can be computed using the O(n)-time algorithm for single-source shortest paths on planar graphs by Klein et al. [20] that we run twice, once for s and once for t. Note that both s and t belong to V'. Obviously, G' is a planar graph, and by Theorem 1, its treewidth is at most 3L. We solve the L-bounded problem for G' and s and t by Theorem 2 in O((L + 2)^(3L) n) time. Let F be the optimal solution for G'. We claim that F is an optimal solution for the original instance of the problem on G as well. To prove feasibility of F, assume, for contradiction, that there exists an s–t path p of length at most L in (V, E \ F). As there is no such path in G' \ F, p has to use at least one vertex v from V \ V'. However, this yields a contradiction: on the one hand, d(s, v) + d(v, t) ≤ L as v is on an s–t path of length at most L; on the other hand, d(s, v) + d(v, t) > L as v is not in V'. Concerning the optimality of F, it is sufficient to note that the size of an optimal solution for the subgraph G' is a lower bound on the size of an optimal solution for G. Theorem 1 was generalized by Eppstein [11] to graphs of bounded genus and this result makes it possible to generalize Theorem 3 also to graphs of bounded genus. Theorem 4 (Eppstein [11]) There exists a constant ĉ such that the treewidth of every graph with radius d and genus g is at most ĉdg. In the same way as we used Theorem 1 to prove fixed-parameter tractability for the L-bounded cut problem on planar graphs (Theorem 3), we can use Theorem 4 to prove fixed-parameter tractability of the L-bounded cut problem on graphs of bounded genus. The only other change is that instead of the O(n)-time single-source shortest path algorithm for planar graphs [20] we use the O(n + m)-time single-source shortest path algorithm for general graphs [29]. Considering the fact that by Euler's formula, genus g graphs have O(n + g) edges [17], we obtain the following theorem. Theorem 5 For every fixed genus g, both the edge- and the vertex-deletion versions of the minimum L-bounded cut problem on graphs of genus g are fixed-parameter tractable with respect to L. τ-Approximation for L-bounded Vertex Cuts In this section we describe an algorithm for the L-bounded s–t vertex cut problem whose approximation ratio is parameterized by the width τ of a tree decomposition T of the input graph G. Throughout this section we assume that the vertices s and t are not connected by an edge - in such a case there is no L-bounded s–t vertex cut in G.
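Returning briefly to the proof of Theorem 3 above, the pruning step can be sketched in a few lines of Python (our own illustration; a generic BFS stands in for the linear-time planar shortest-path algorithm of Klein et al. that the paper relies on, and all names are ours):

from collections import deque

def bfs_distances(adj, source):
    # unweighted single-source shortest-path distances
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def prune_to_short_paths(adj, s, t, L):
    # Keep exactly the vertices v with d(s, v) + d(v, t) <= L, i.e. those lying
    # on s-t paths of length at most L, and return the induced subgraph.
    ds, dt = bfs_distances(adj, s), bfs_distances(adj, t)
    keep = {v for v in adj if v in ds and v in dt and ds[v] + dt[v] <= L}
    induced = {v: [w for w in adj[v] if w in keep] for v in keep}
    return keep, induced

The minimum L-bounded cut is then computed on the resulting planar subgraph of radius at most L, exactly as in the proof.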
Without loss of generality we also assume that tree decompositions in this section satisfy the non-containment condition. Consider a graph G and a rooted tree decomposition T of G of width τ .By d(G, s, t) we denote the distance between s and t in G. Given a subset R ⊆ V (T ) of nodes inducing a connected subtree of T , a deepest node in R is a node in R with no child in R. The following simple observation captures the main properties of G and T that make the algorithm of this section work.For notational simplicity, in the rest of the section we use the term L-bounded path for an s − t path of length at most L. The L-bounded cut is computed using the recursive procedure Lbounded cut(G, T, s, t, L) described in Algorithm 1.In step 12, prune(G, T, C) is a procedure that for a graph G = (V, E), a tree decomposition T and a vertex set C ⊂ V , deletes from G the vertices in C and all adjacent edges, and modifies the tree decomposition T by deleting the vertices in C from all bags. Algorithm 1 L-bounded cut(G, T, s, t, L) # no need to remove anything b ← a deepest node in R (G , T ) ← prune( Ḡb , Tb , B(b) \ {s, t}) 13: S ← L-bounded cut(G , T , s, t, L) The main result of this section is obtained from Lemmas 3 and 4. Lemma 3 Given a graph G = (V, E), two vertices s, t ∈ V and a tree decomposition T of G of width τ , Algorithm 1 finds in polynomial time an L-bounded s − t vertex cut. Proof: To prove the correctness of Algorithm 1, we proceed by induction on the recursion depth.We start by showing the correctness of the final recursive calls.To this end we distinguish the following three cases dealt with in the algorithm: Case 1. d(G, s, t) > L. As there is no need to remove anything in this case, the correctness is obvious from the description of the algorithm.Inductive step.Consider a run of the procedure with a graph G and its tree decomposition T , and let R and b be the objects defined by the procedure in steps 3 and 4. Note that the set R induces a connected subgraph of T . The inductive assumption (i.e., S is an L-bounded cut in G ) combined with the second point of Claim 2 implies that the set S ∪B(b)\{s, t} is an L-bounded s − t cut in G, completing the inductive step in the proof of the correctness. Concerning the running time, the second point of Claim 2 implies that the vertex b selected as a deepest node from R in some iteration will not belong to the set R in any of the future recursive calls.Thus, the size of the set R decreases by at least one with each new recursive call, yielding an upper bound V (T ) on the number of recursive calls.Apart from the recursive call, each level of recursion can be implemented in time O(τ • |V (T )|), yielding an upper bound O(τ • |V (T )| 2 ) on the total running time. Let cost(G, T ) be the size of the solution computed by Algorithm 1 for a graph G and a tree decomposition T of G, and let opt(G) be the size of an optimal solution for the graph G. Lemma 4 Given a graph G = (V, E), two vertices s, t ∈ V and a tree decomposition T of G of width τ , then cost(G, T ) ≤ τ • opt(G) . Proof: Similarly as in the proof of Lemma 3, we proceed by induction on the recursion depth.We start by showing the correctness of the bound for the final recursive calls and, as before, we distinguish the following three cases: Case 1. d(G, s, t) > L. For graphs with no L-bounded s − t path, the claim is obvious as cost(G, T ) = 0 in this case.Inductive step. 
From the description of the algorithm we know that cost(G, T ) ≤ τ + cost(G , T ).Points 1 and 3 of Claim 2 imply opt(G) ≥ 1 + opt(G ).Combining these observations with the inductive assumption cost(G , T Putting Lemmas 3 and 4 together, we get the main result of this section. Theorem 6 Given a graph G, a rooted tree decomposition T of G of width τ , vertices s and t and an integer L, Algorithm 1 finds in polynomial time a τ -approximation of the minimum L-bounded s − t vertex cut. Remark.At the cost of increasing the approximation ratio to τ + 1, the steps 5-10 of the algorithm can be simplified as follows: In the case that we are not given a tree decomposition on input, we start by constructing it using one of the known algorithms: Feige et al. [12] describe a polynomial time algorithm that yields, for a given graph of treewidth τ , a tree decomposition of width O(τ √ log τ ); for planar graphs and for graphs excluding a fixed minor, the width is in O(τ ).Similarly, for graphs with treewidth bounded by O(log n), Bodlaender et al. [7] describe how to find in polynomial time a tree decomposition of width O(τ ).Depending on the input graph, one of these algorithms is used to obtain a desired tree decomposition.Thus, we obtain the following corollary. Corollary 7 There exists an O(τ √ log τ )-approximation algorithm for the minimum L-bounded vertex cut on graphs with treewidth τ ; for planar graphs, graphs excluding a fixed minor and graphs with treewidth bounded by O(log n), there exists an O(τ )-approximation algorithm. Open problems Having shown fixed-parameter tractability of the L-bounded cut problem on planar and bounded genus graphs by giving L O(L) n time algorithms, the question arises whether the presented bounds are optimal.Could the dependence on the parameter L be improved to 2 O(L) ?As our proofs of fixed-parameter tractability rely on the existence of the algorithm for CSP, a much more general class of problems, on graphs of bounded treewidth, it is conceivable that a better bound is possible; on the other hand, under the Strong Exponential Time Hypothesis [18], matching lower bounds for some problems expressible as CSP (e.g., q-coloring) do exist [25]. A natural open problem for planar graphs is whether the shortest path most vital edges (vertices) problem is fixed-parameter tractable on them, with respect to the number k of deleted edges (vertices).Despite the close relation of the Lbounded cut problem and the shortest path most vital edges (vertices) problem, fixed-parameter tractability of one of them does not seem to easily imply fixedparameter tractability of the other problem. The τ -approximation for L-bounded vertex cuts is based on the fact that bags in a tree decomposition yield vertex cuts of size at most equal the width of the decomposition.Unfortunately, this is not the case for edge cuts -one can easily construct bounded treewidth graphs with no small balanced edge cuts.Thus, another open problem is to look for better approximation algorithms for minimum L-bounded edge cuts, for graphs with treewidth bounded by τ . Yet another challenging and more general open problem is to narrow the gap between the upper and lower bounds on the approximation ratio of algorithms for the L-bounded cut for general graphs: the best upper bound for the edgeand vertex-deletion version of the problem is O(n 2/3 ) and O( √ n), resp., while the best lower bound is constant. 
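As a short arithmetic aside on the bounds just quoted (ours, not from the paper), both containments follow by balancing the competing terms in the O(min{...}) expressions from the Related Results section:

\min\{L,\ n/L\} \le \sqrt{n}, \qquad \min\{L,\ n^2/L^2,\ \sqrt{m}\} \le n^{2/3},

since the first term bounds the minimum whenever L \le \sqrt{n} (resp. L \le n^{2/3}) and the second term does so otherwise, as n/L < \sqrt{n} and n^2/L^2 < n^{2/3} in that case.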
Finally, we note that the edge-deletion version of the L-bounded cut problem in a graph G = (V, E) is a kind of a vertex ordering problem.We are looking for a mapping from the vertex set V to the set {0, 1, . . ., L, L + 1} such that (s) = 0, (t) = L + 1 and the objective is to minimize the number of edges {u, v} ∈ E for which | (u) − (v)| > 1; given a solution F ⊆ E, the lengths of the shortest paths from s to all other vertices in G \ F yield such a mapping of cost |F |.There are plenty of results dealing with linear vertex ordering problems where one is looking for a bijective mapping from the vertex set V to the set {1, 2, . . ., n} minimizing some objective function (e.g., the minimum cut linear arrangement problem, the minimum feedback arc set problem [24]).However, the requirement that the mapping is a bijection to a set of size n seems crucial in the design and analysis of approximation algorithms for these problems.The question is whether it is possible to obtain good approximations for some nontrivial non-linear vertex ordering problems. A Appendix L-bounded Cut as a CSP Instance An instance Q = (V, D, H, C) of CSP [22] consists of • a set of variables z v , one for each v ∈ V ; without loss of generality we assume that V = {1, . . ., n}, • a set D of finite domains D v (also denoted D(v)), one for each v ∈ V , For a vector z = (z 1 , z 2 , . . ., z n ) and Given an edge-deletion version of the L-bounded cut instance G = (V, E) with s, t ∈ V and an integer L, we construct the corresponding minimization CSP instance Q = (V, D, H, C) as follows.The set of variables of Q coincides with the set V of vertices of G and for each v ∈ V , the corresponding domain D v is {0, 1, . . ., L, L + 1}.The set of hard constraints H consists of two constraints, for each edge {i, j} ∈ E of the graph G. To see that a feasible solution for the constructed instance Q of CSP corresponds to a feasible solution of the L-bounded cut problem of the same cost, and vice versa, we observe the following. Given an optimal solution F ⊂ E of the edge-deletion version of the Lbounded cut problem, we distinguish two cases.If s and t belong to the same component of connectivity in (V, E \ F ), then the vector of shortest path distances from s to all other vertices in (V, E \ F ) yields a feasible solution for the CSP instance Q (to be more precise, if some of the distances are larger than L + 1, we replace in the vector every such value by L + 1); if s and t do not belong to the same component of connectivity in (V, E \ F ), we obtain a feasible solution for Q by assigning the value 0 to every vertex in the s-component and the value L + 1 to every vertex in the t-component.Note that in both cases the cost of the L-bounded cut and the cost of the CSP instance Q are the same.We also note that for every feasible solution (z 1 , . . ., z n ) of the instance Q, the set F = {{u, v} ∈ E | |z u − z v | > 1} is an L-bounded cut of the same cost.Finally, we note that the constraint graph of Q coincides with the original graph G. For the vertex-deletion version of the L-bounded cut problem in G = (V, E), the corresponding minimization CSP instance Q = (V, D, H, C) is defined similarly.For each v ∈ V , we have D v = {−1, 0, . . 
., L, L + 1} -the domain of every vertex is extended by an extra element −1 representing the fact that v belongs to the L-bounded cut.The set of hard constraints H contains constraints C {s} = {0} and C {t} = {L + 1}, and for each edge {i, j} ∈ E also a constraint The set of soft constraints contains for each vertex u other than s and t a constraint C {u} = {0, 1 . . ., L, L + 1} . Given an optimal solution U ⊂ V of the vertex-deletion version of the Lbounded cut problem, we distinguish two cases.If s and t belong to the same component of connectivity of G = G \ U , then assigning to every v ∈ U the value −1 and assigning to every v ∈ V \ U its distance from s in G yields a feasible solution for the CSP instance Q (to be more precise, if some of the distances are larger than L + 1, we replace in the vector every such value by L + 1); if s and t do not belong to the same component of connectivity in G , we obtain a feasible solution for Q by assigning the value 0 to every vertex in the s-component, the value L + 1 to every vertex in the t-component and the value −1 to every v ∈ U .Note that in both cases the size of the L-bounded cut and the cost of the CSP instance Q are the same.We also note that for every feasible solution (z 1 , . . ., z n ) of the instance Q, the set U = {v ∈ V | z v = −1} is an L-bounded cut of size equal the cost of Q. Given a node b of the rooted tree decomposition T , we denote by T b the subtree of T consisting of b and of all its descendants, and by G b the subgraph of G induced by vertices in bags of T b ; similarly, we denote by Tb the subtree of T consisting of all nodes in T including b and excluding the descendants of b and by Ḡb the subgraph of G induced by vertices in bags of Tb .Note that b is the only node of the tree T that appears in both subtrees T b and Tb . Claim 2 If b is a deepest node in the set R = {a ∈ V (T ) | d(G a , s, t) ≤ L} and G = Ḡb \ (B(b) \ {s, t}), then the following holds: 1.There is at least one L-bounded path in G b .2. There is no L-bounded path in G b \ (B(b) \ {s, t}). 3 . The L-bounded paths in G b are internally vertex disjoint with the Lbounded paths in G .Proof: The first point follows from the membership of b in the set R. The second point is obvious if b has none or exactly one child.Assume that b is a node with two or more children and that there is an L-bounded path p between s and t in G b \ (B(b) \ {s, t}).Then, by the choice of b (i.e., a deepest node in R), there exist children c and c of b and vertices x, x on the path p such that x ∈ V (G c ) \ V (G c ) and x ∈ V (G c ) \ V (G c ).Consider the vertex cut B(b) (cf.Lemma 1) and note that x and x belong to different components of connectivity of G b \ B(b).Thus, the sub-path of p between x and x has to contain as an inner vertex a vertex from B(b) \ {s, t}, a contradiction.We conclude that there is no L-bounded path in G b \ (B(b) \ {s, t}).For the third point, note that any L-bounded path in G either intersects the set B(b) \ {s, t} or appears in G \ (B(b) \ {s, t}).As every L-bounded path in G b intersects, by the second part of this claim, the set B(b) \ {s, t}, and as G is a subgraph of G \ (B(b) \ {s, t}), the third part of the Claim follows. Case 2 . 
|B(b) ∩ {s, t}| = 1 (where b is the node selected in step 4).As there exists at least one L-bounded path in G b , both vertices s and t appear in G b , and as |B(b) ∩ {s, t}| = 1, one of the vertices s and t appears in G b \ B(b).By the second point of Claim 2, B(b) \ {s, t} is an L-bounded cut in G b , and every L-bounded path in G disjoint with B(b) \ {s, t} has to use a vertex that does not appear in G b .However, as B(b) is a vertex cut in G separating G b \ B(b) from the rest of the graph, there is no L-bounded path in G disjoint with B(b)\{s, t}.We conclude that G \ (B(b) \ {s, t}) is an L-bounded s − t vertex cut in G. Case 3. B(b) ∩ {s, t} = ∅ (where b is the node selected in step 4).The argument is similar as in the previous case.On one hand, as there exists at least one L-bounded path in G b , both vertices s and t appear in G b , and as none of them belongs to the set B(b), there must be a child c of b such that s ∈ G c .On the other hand, the second point of Claim 2 implies that every Lbounded path in G disjoint with B(b) \ {s, t} has to use a vertex that does not appear in G b .As B(b) ∩ B(c) is a vertex cut in G separating G c from the rest of the graph, we conclude that there is no L-bounded path in G \ (B(b) ∩ B(c)). if |B(b) ∩ {s, t}| ≤ 1 then return B(b) \ {s, t} By this change, if |B(b) ∩ {s, t}| = 1, the output of the algorithm will not change.If B(b) ∩ {s, t} = ∅, the modified algorithm outputs B(b) = B(b) \ {s, t} instead of the original output B(b) ∩ B(c); obviously, this will not break the correctness of the algorithm but the bound in Lemma 4 will change to cost(G, T ) ≤ (τ + 1) • opt(G) as B(b) may be of size τ + 1. we define the projection of z on U as z| U = (z i1 , z i2 , . . ., z i k ).A vector z = (z 1 , z 2 , . . ., z n ) satisfies the constraint C U ∈ C ∪ H if and only if z| U ∈ C U .We say that a vector z = (z 1 , . . ., z = {{u, v} | ∃C U ∈ C ∪ H s.t.{u, v} ⊆ U }.We say that a CSP instance Q has bounded treewidth if the constraint graph of Q has bounded treewidth. n ) is a feasible solution for Q if z ∈ D 1 × D 2 × ...×D nand z satisfies every hard constraint C ∈ H.In the maximization (minimization, resp.)version of CSP, the task is to find a feasible solution that maximizes (minimizes, resp.) the number of satisfied (unsatisfied, resp.)softconstraints; the cost of a feasible solution is the number of satisfied (unsatisfied, resp.)softconstraints.The constraint graph of Q is defined as H = (V, E) where E
2018-04-03T00:30:52.819Z
2018-01-01T00:00:00.000
{ "year": 2018, "sha1": "21603c74f69a05d0c70da37735a53d095231b4d8", "oa_license": "CCBY", "oa_url": "https://doi.org/10.7155/jgaa.00462", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "21603c74f69a05d0c70da37735a53d095231b4d8", "s2fieldsofstudy": [ "Computer Science", "Mathematics" ], "extfieldsofstudy": [ "Computer Science", "Mathematics" ] }
19119242
pes2o/s2orc
v3-fos-license
Role of mast cells in inflammatory and reactive pathologies of pulp, periapical area and periodontium various inflammatory states of the oral cavity. These cells play a key role in the inflammatory process and, as a consequence, their number changes in different pathologic conditions of the oral cavity, such as gingivitis and periodontitis. By understanding the role of MCs in the pathogenesis of different inflammatory diseases of the oral cavity, these cells may become therapeutic targets that could possibly improve the prognosis. Therefore, this review summarizes the current understanding of the role of MCs in various inflammatory pulpal, periapical and periodontal pathophysiological conditions. INTRODUCTION Mast cells (MCs) are important cells of the immune system and are of the hematopoietic lineage. [1] MCs are found in all connective tissue types of the oral cavity, including the periodontal ligament, the dental pulp and the gingiva. [2] The induction of inflammation by MCs is consequent upon the release of preformed biological mediators as well as secondary mediators. [3] Most of the periapical and gingival lesions are inflammatory in origin, and these lesions are a response to periapical tissues to the microbial and chemical stimuli coming from the pulp through root canal system. Among the cells found in periapical lesions, MCs have been detected in inflammatory infiltrate of periapical granulomas and cysts, suggesting a role of MCs in inflammatory mechanism of these lesions. Chronic generalized inflammatory gingival hyperplasia and pyogenic granuloma are common inflammatory gingival lesions. They represent inflammation and repair attempts that are stymied due to ongoing etiological stimulation. [4] Among the cells found in periodontal tissues, MCs are detected both in healthy and inflamed gingiva. [5] The role of MCs in allergic diseases, anaphylaxis and autoimmunity has been well documented. However, their role in the pathogenesis of oral pathologies is still debatable. Hence, the present review article aims to explore the role of MCs in the initiation and progression of inflammatory pulpal, periapical and periodontal conditions. MAST CELL MEDIATORS Each MC typically contains between 80 and 300 granules. When activated (induced by a range of stimuli [ Table 1]), they may either undergo explosive degranulation and then re-synthesize their granules or they may release solitary granules into their environment on an ongoing basis, a process termed "piecemeal degranulation" that has been observed in both the oral mucosa and skin. [6] Following degranulation, MC mediators [ Table 2] are deposited in large quantities in the extracellular environment, where they exert effects on endothelial cells and other cell types. PULP INFLAMMATION AND MAST CELLS The inflammatory process in human dental pulp consists in vascular changes and migration of inflammatory cells to the site of inflammation. No MCs are normally present in human dental pulp, but they are known as active cells in the inflammatory response. [10] According to the studies of Miller et al. in 1978, MCs are occasionally found in inflammed pulp. [11] Some evidence concerning the oral cavity suggested that neurogenic inflammation involved the participation of MCs. [2] In the human dental pulp tissue that has been subject to inflammation, high concentrations of TNF-α have been detected. The source of TNF-α may be oral MCs granules, which is released upon degranulation process. 
[12] Some data suggest that MC histamine, which is a strong vessel dilator and mediator of vascular permeability, may play a role in the initiation of dental pulp inflammation. [13] [8,9] Substance P may mediate MC degranulation following MC activation at the level of the dental pulp. [14] Bacterial invasion of the pulp during caries formation may also provoke MC activation. In 2009, Karapanou et al. came up with a hypothesis that MC activation may take place through the neuropeptides that are locally released in the pulp, and, afterward, pro-inflammatory mediators that are liberated from the MC granules may participate in the inflammatory process of the dental pulp and may serve as diagnostic markers for inflammatory pulpal diseases. [15] INFLAMMATORY PERIAPICAL LESIONS Cyst and granulomas are chronic periapical lesions mediated by a set of inflammatory mediators that develop to contain a periapical infection. [16] Inflammatory infiltrate of periapical lesions is composed mostly of plasmacytes, lymphocytes, macrophages and MCs. [17] In a study by de Oliveira Rodini et al., MCs were present in greater number in periapical cysts than in granulomas. In cysts MCs were more numerous in the region of active inflammation. Furthermore, MCs tended to be more common in peripheral regions of both the periapical lesions and were often found in close proximity to lymphocytes. [18] Role of mast cells in lymphocyte recruitment Following activation, MCs induce T-cell migration either directly by release of exosomes and chemokines such as lympholactin, IL-16 and MIP-1 or indirectly by induction of adhesion molecule expression on endothelial cells. [18] Histamine increases vascular permeability through structural changes which include endothelial contraction and intercellular gap formation. In addition, histamine promotes leukocyte adhesion to endothelium through transient mobilization of adhesion molecule P selectin to the endothelial surface. [19] This functional relationship between MC and T-lymphocytes has been shown to be bidirectional, fulfilling mutually regulatory and/or modulatory roles, including influences on cellular processes such as growth, proliferation, activation and antigen presentation. In addition, T-cell-derived mediators, such as β-chemokines, directly induce MC degranulation. These findings led to propose a functional relationship between these two cell population that may facilitate elicitation of an immune response contributory to the initiation of pathogenesis of periapical lesions. [18] CYST INITIATION Gao et al. stated that the presence of immune cells in periapical granuloma and cysts initiates cell-mediated and humoral immunoreactions in these lesions which may be associated with epithelial proliferation within periapical lesions. [20] Role of mast cells in epithelial proliferation The stimulated MCs may release IL-1, which causes increased epithelial proliferation. This property of MC-derived IL-1 may act as one of the factors in the proliferation of epithelium (cell rests of Malassez) in periapical granuloma leading to cyst formation. [21] CYST ENLARGEMENT/EXPANSION The hydrostatic pressure of the luminal fluid is important in cyst enlargement, and MC activity might contribute to this by increasing the osmotic pressure. [22] Teronen et al. stated that MCs were found to be located in inflammatory cell-rich tissue areas and just beneath the cyst epithelium. 
MCs located at the border of bone were observed to be degranulated and release tryptase at the regions of early bone destruction, suggesting that the MC tryptase may contribute significantly to jaw cyst tissue remodeling during growth of cyst and to the destruction of surrounding bone, resulting in jaw cyst expansion. [23] Smith et al. observed high staining activity of heparin in connective tissue just beneath the epithelium of cysts and suggested that local tissue metabolism and inflammation will contribute to the release of glycosaminoglycans from connective tissue into the luminal fluid and are thought to be important in the expansile growth of cysts. [24] Role of mast cells in increasing the hydrostatic pressure of cystic luminal fluid The increase in osmotic pressure could be in three ways as follows: 1. By direct release of heparin into luminal fluid 2. By release of hydrolytic enzymes which could degrade capsular extracellular matrix (ECM) components, thereby facilitating their passage into the fluid 3. By the action of histamine on smooth muscle contraction and vascular permeability, encouraging transudation of serum proteins. [22] Role of mast cells in degradation of extracellular matrix and bone resorption MCs contain tryptase and chymases (proteolytic enzymes) that take part in the degradation of ECM. [25] Thus, they help in the breakdown of proteoglycans of connective tissue capsule of periapical lesions leading to expansion. The MC-derived mediators such as histamine, heparin, tryptaes, TNF, IL-1, IL-6, PGs and other arachidonic acid metabolites are known to cause bone resorption. [24] TNF-α and IL-1 and IL-6 increase the bone resorption by intensifying the osteoclastic activity. [25] Heparin is involved in bone resorption and also has been associated with the inhibition of collagen synthesis. [26] MC-derived tryptase can activate matrix metalloproteinase (MMP) which help in ECM degradation and bone resorption for the enlargement of the lesions. [25] GINGIVA AND MAST CELLS Gingival inflammation It has been reported that MCs were constantly present in healthy gingival tissue and located between epithelial cells and connective tissues. [27] In gingival lesions, the products from the bacterial plaque at the gingival margin may directly or indirectly induce proliferation of MCs. The proposed source of allergens includes not only bacteria and their products but also denatured host tissue. [28] Pro-inflammatory cytokines that are released during the initial stage of the inflammation infuence the migration of MCs. Following degranulation, MC mediators are deposited in large quantities in the extracellular environment, where they exert effects on endothelial cells and other cell types. Early events in this process include key growth factors, transforming growth factor-ββ and fibroblast growth factor as well as inflammatory cytokines and chemokines, facilitating MC recruitment and activation. The release of proteolytic enzymes such as MMPs mediates fibrogenic injury, and the overall balance of activators and inhibitors is altered in a manner favoring net matrix deposition. MC effectively supports this process by elaborating the cytokines and chemokines. MC produces mediators such as histamine, heparin and TNF-α which can influence fibroblast proliferation, ECM synthesis and degradation. TNF-α also upregulates C-C chemokine receptor Type 1 and RANTES expression which in turn triggers further MC degranulation. 
MC chymase is also known to stimulate MMP-9 which mediates basement membrane disruption. On the whole, the cyclic activity of inflammatory mediator secretion and MC degranulation may result in the chronicity of gingival inflammation and fibrosis. [27] Role of mast cells in collagen synthesis and fibrosis MC-derived IL-1 causes increased fibroblastic response and tryptase causes increased production of Type 1 collagen and fibronectin, thereby leading to increased fibrosis. [21,29] Histamine may also interact with and promote increased fibroblastic proliferation. This could suggest a possible role of MC in enhanced synthesis and fibrosis seen in later stages of inflammatory periapical and gingival lesions [ Figure 1]. [30][31][32] PYOGENIC GRANULOMA Pyogenic granuloma represents a reactive lesion resulting from local etiological factors such as gingival inflammation, calculus or trauma, which activates MC resulting in the release of mediators, which leads to subsequent changes in tissue leading to the formation of pyogenic granuloma. MC on stimulation undergoes degranulation and causes inflammatory and vascular changes leading to the formation of pyogenic granuloma [ Figure 2]. [33] Haritanont et al. stated that the number of MCs and blood vessel counts in oral pyogenic granuloma was significantly higher than those in normal oral tissues. This suggests that MCs may function as one of the factors for angiogenesis in oral pyogenic granulomas. [34] Role of mast cells in angiogenesis MCs play a significant role in angiogenesis, probably by secreting several potent angiogenic factors including heparin, histamine, vascular endothelial growth factor and tryptase. Tryptase, in particular, could directly induce proliferation of human dermal microvascular endothelial cells. [35] Heparin increases the half-life of basic fibroblastic growth factor, which is a potent angiogenic substance. These findings indicate that the MC may function as one of the factors for angiogenesis in pyogenic granuloma, chronic inflammatory gingival hyperplasia and also in periapical lesions. [21,36] INFLAMMATORY PERIODONTAL DISEASES Bacterial plaque has been implicated as the primary etiological factor in inflammatory periodontal disease, but recently, several studies have focused on the role of the immune system in the development of periodontal disease, indicating that bacterial antigens trigger an immunopathological reaction. [37] Among the cells found in the periodontal tissues, MCs have been detected in both healthy and inflamed gingiva. [38] When triggered by locally produced cytokines or bacterial products, for example, lipopolysaccharides, the cells can release a large number of prestored mediators. One of the biological and biochemical factors is histamine, which breaks down the tissue barrier, causes edema and helps cellular infiltration. [39] Another reason is that the expression of MMPs 1, 2 and 8 is strongest in MCs. MMPs are crucial in the degradation of the main components in ECMs. Furthermore, tryptase can cleave the third component of collagen and activate latent collagenase that can participate in tissue destruction in periodontitis. A change from gingivitis to periodontitis involves a shift from predominantly T-cell lesion to a B-cell/plasma cell lesion. MCs seem to be able to present antigens to T-cells. The resultant T-cell activation would activate MCs, leading to both degranulation and cytokine release. 
In summary, periodontitis is not unidirectional, but rather it is interactive; the same cells that produce the destructive pro-inflammatory cytokines can also produce mediators that activate the healing process. [40] CONCLUSION MCs play a critical role in the development of inflammation at the levels of the dental pulp and periodontium, both during the early stages and during the transition from acute to chronic inflammation. MCs in these inflammatory lesions are associated with increased vascular permeability, angiogenic response, collagen synthesis, regulation of inflammation, bone resorption and ECM destruction. Based on the concept that MCs play an important role in the chronicity of inflammation, it may be possible to use drugs therapeutically in order to influence MC secretion and thereby thwart inflammation. In the future, it may be possible to develop novel approaches that influence the release of pro-inflammatory molecules or neuropeptides to ameliorate MC-driven inflammation. Financial support and sponsorship Nil. Conflicts of interest There are no conflicts of interest.
2018-05-09T00:43:46.005Z
2018-01-01T00:00:00.000
{ "year": 2018, "sha1": "2dd7a8f51a70f012c302cc7f9f11962a6f50d09a", "oa_license": "CCBYNCSA", "oa_url": null, "oa_status": null, "pdf_src": "ScienceParsePlus", "pdf_hash": "02d3645358a6e378d9cd33510c0f34d9b08c0a03", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
208190949
pes2o/s2orc
v3-fos-license
Uptake of subsoil water below 2 m fails to alleviate drought response in deep-rooted Chicory (Cichorium intybus L.) Deep-rooted agricultural crops can potentially utilize deep soil moisture to reduce periods where growth is water limited. Chicory (Cichorium intybus L.) is a deep-rooted species, but the benefits of deep roots to water uptake has not been studied. The aim of this study was to investigate the value of deep roots (>2 m) under topsoil water limitation. Chicory grown in 4 m deep soil-filled rhizotrons was exposed to either topsoil drought or resource competition from the shallow-rooted species ryegrass (Lolium perenne L.) and black medic (Medicago lupulina L.). The effect on deep water uptake was assessed using non-destructive measurements of roots, soil water and tracers. Water uptake occurred below 1.7 m depth in 2016, and below 2.3 m depth in 2017 and contributed significantly to chicory water use. However, neither surface soil drying nor intercropping increased deep water uptake to relieve water deficit in the shoots. Chicory benefits from deep-roots during drought events, as it acceses deep soil moisture unavailable to more shallow rooted species, yet deep water uptake was unable to compensate for the reduced topsoil water uptake due to soil drying or crop competition. Introduction Minimizing water limitation during growth of agricultural crops is crucial to achieve full yield potential. Crop yield losses vary according to the timing and severity of water limitations, but even short-term drought can be a major cause of yield losses (Zipper et al. 2016). Deeprooted crops can potentially utilize otherwise inaccessible deep-water pools as a source of water for transpiration during periods where crop production is water limited. In areas where precipitation is sufficient to rewet the soil profile during a wet season, more shallow-rooted crops still experience water limitation during the growing season, as they do not have access to the water stored deeper in the profile. The potential influence of deep roots on water uptake has been highlighted numerous times (e.g. Canadell et al. 1996;Lynch and Wojciechowski 2015), yet information about the contribution of deep roots to water uptake remains scarce. Maeght et al. (2013) suggest that this is related to the absence of tools to measure deep root activity with sufficient throughput and standardization at affordable costs, and to the widespread assumption that as deep roots only represent a small fraction of the overall root system, their contribution to root system function is marginal. Deep root growth under field conditions has been suggested to be restricted by high soil strength, and unfavourable conditions such as e.g. hypoxia, acidity, and low nutrient availability, to substantially benefit the crop (Lynch and Wojciechowski 2015;Gao et al. 2016). While some soils can restrict deep root growth, others allow roots to grow in the deeper soil layers (Sponchiado et al. 1989;Thorup-Kristensen and Rasmussen 2015). In addition, even though the majority of the root biomass is found in the topsoil, deep roots can contribute significantly to water supply in crops, as there is little connection between root biomass and root activity (Mazzacavallo and Kulmatiski 2015). Gregory et al. (1978) found that in the field, winter wheat had less than 5% of its root biomass below 1 m depth, and as long as the water supply was sufficient in the 0-1 m layer, the biomass reflected the water uptake well. 
However, when the topsoil dried, the roots between 1 and 2 m depth supplied the plants with up to 20% of the total water use. It has also been demonstrated that relatively small amounts of subsoil water can be highly valuable to wheat grain yield under moderate post-anthesis drought stress (Kirkegaard et al. 2007). Indirectly, deeper root growth in crops has also been associated with deep-water uptake, as rooting depth has been shown to correlate positively with yield under drought in the field in wheat (Lopes and Reynolds 2010), bean (Sponchiado et al. 1989;Ho et al. 2005), rice (Uga et al. 2013) and maize (Zhu et al. 2010). In addition, modelling studies indicate that selection for deeper roots in grain crops could significantly improve deep-water acquisition and yield in dry seasons (Manschadi et al. 2006;Lilley and Kirkegaard 2011). Common to most of these studies is that deep root growth is considered to be in the range of 0.5 to 2 m depth. But several agricultural crops have the capability to grow roots below 2 m depth or even deeper within a season (Canadell et al. 1996;Ward et al. 2003;Thorup-Kristensen 2006;Rasmussen et al. 2015), and thereby access an extra pool of water originating from wet season surplus precipitation stored in the soil. For example, lucerne has shown to decrease the soil water content at 5 m depth (Fillery and Poulter 2006). Hydrological isotope tracer techniques have over the last two decades become an increasingly popular tool to acquire information on temporal and spatial water use patterns in plants (e.g. Bishop and Dambrine 1995;Peñuelas and Filella 2003;Beyer et al. 2016). Injection of tracer into specific soil depths has proven to be a precise method to detect the relative water uptake from the chosen depth (Kulmatiski et al. 2010;Kulmatiski and Beard 2013;Bachmann et al. 2015;Beyer et al. 2016). The hydrological tracer techniques utilize the fact that no isotopic fractionation against isotope forms of hydrogen or oxygen occurs during soil water uptake by roots (Wershaw et al. 1966;Dawson and Ehleringer 1991;Bariac et al. 1994;Mensforth and Walker 1996). The anthropocentric discussion of the importance of deep root growth for crop production is put into perspective by the fact that some plant species have evolved the potential to grow deep roots. Under what circumstances is that strategy beneficial? In this study, we hypothesize that deep root growth can help plants escape topsoil drought. Using chicory as a case study species, we aimed to test the following hypotheses; 1) Chicory can grow roots below 3 m depth within a single growing season. 2) Chicory has significant water uptake from the deeper part of the root zone despite low root intensity. 3) When chicory is exposed to either topsoil drying or resource competition from shallow-rooted species, deep water uptake can increase to compensate for the decreased topsoil water uptake. Chicory is commonly grown in pasture mixtures for animal fodder or as a cash crop to produce chicons. It is known to be able to reach at least 2.5 m depth (Thorup- Kristensen and Rasmussen 2015) and to be drought resistant (Monti et al. 2005;Skinner 2008;Vandoorne et al. 2012a). To test the hypotheses, we grew chicory as a sole crop and in an intercrop with two shallow-rooted speciesryegrass (Lolium perenne L.) and black medic (Medicago lupulina L.) in 4 m deep rhizotrons. 
We allowed extensive root development before imposing a drought, as our focus was on the potential of deep roots to acquire water during drought, rather than on deep root growth during drought. Experimental facility We conducted the experiment in a facility at University of Copenhagen, Taastrup (55°40′08.5"N 12°18′19.4″E), Denmark and repeated it for the two consecutive seasons, 2016 and 2017. We grew the crops in 4 m deep rhizotrons placed outside on a concrete foundation. The rhizotrons where 1.2 × 0.6 m rectangular columns constructed of steel frames. A waterproof plywood plate divided the rhizotrons lengthwise into an east-and a west-facing chamber with a surface area of 1.2 × 0.3 m. The rhizotrons stood on a north-south axis, narrow sides facing towards one another (Fig. 1). On the east-and the west facing fronts of the rhizotrons, 20 transparent acrylic glass panels allowed inspection of root growth at the soil-rhizotron interface on the entire surface. Each panel was 1.2 m wide and could be removed to allow direct access to the soil column. Every third panel was 0.175 m tall, and the rest were 0.21 m tall. We used the narrow panels for placement of equipment and soil sampling. The tall panels were only used for root observations. To avoid disturbance of root growth, these panels were not removed during the experiment. All sides of the rhizotrons where covered in white plates of foamed PVC of 10 mm thickness to avoid light exposure of soil and roots. On the fronts, the foamed PVC plates were also divided into 20 panels. These were fixed in metal rails, allowing them to slide off whenever we wished to observe the roots. A wick in the bottom of the rhizotrons allowed water to be drawn out to prevent waterlogging at depth. We used field soil as growth medium. The bottom 3.75 m of the rhizotrons was filled with subsoil taken from below the plough layer at Store Havelse, Denmark (Table 1). We filled the upper 0.25 m with a topsoil mix of 50% sandy loam and 50% fine sandy soil, both from the University's experimental farm in Taastrup, Denmark. To reach a soil bulk density comparable to field conditions we filled the soil into the rhizotrons stepwise at portions of approximately 0.15 m depth and used a steel piston to compact each portion by dropping it several times on the soil. We filled the rhizotrons in August 2015 and did not replace the soil during the two years. At the time of the experiment, average subsoil bulk density was 1.6 g m −3 , which is close to field conditions for this soil type. We constructed rainout shelters to control water supply. In 2016, we covered the soil in the drought stressed treatment with a transparent tarpaulin that had a hole for each plant stem. The tarpaulins were stretched out and fixed with a small inclination to let the water run off. It turned out that this design failed to keep out water during intense precipitation events, which happened twice during the season. In 2017, we designed barrel roof rainout shelters instead, using the same clear tarpaulin and placed them on all rhizotrons. The rain-out shelters were open at the ends and on the sides to allow air circulation but were wider than the rhizotrons to minimize the water that reached the chambers during precipitation under windy conditions. The barrel roof rainout shelters covered all treatments. We installed a drip irrigation pipe (UniRam™ HCNL) with a separate tap in each chamber. The pipe supplied 5 l hour −1 , equivalent to 14 mm hour −1 according to the surface area of the growth chambers. 
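The stated drip rate can be checked directly against the chamber surface area of 1.2 m × 0.3 m (a back-of-the-envelope conversion, ours):

\frac{5\ \mathrm{L\,h^{-1}}}{1.2\ \mathrm{m} \times 0.3\ \mathrm{m}} = \frac{0.005\ \mathrm{m^3\,h^{-1}}}{0.36\ \mathrm{m^2}} \approx 0.0139\ \mathrm{m\,h^{-1}} \approx 14\ \mathrm{mm\,h^{-1}}.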
Experimental design We had two treatments in 2016 and four in 2017. In both years we grew chicory (C. intybus L., 2016: cv. Spadona Fig. 1 The rhizotron facility, consisting of 12 columns of 4 m height each divided into an east-and a west-facing chamber. See text for a detailed description from Hunsballe frø. 2017: cv. Chicoree Zoom F1 from Kiepenkerl) in monoculture under well-watered (WW) and drought stressed (DS) conditions. In 2017, we also grew chicory intercropped with either ryegrass (L. perenne L.) or black medic (M. lupulina L.), both in a WW treatment. For chicory, we chose to work with a hybrid vegetable type cultivar in the second year to reduce the variation among plants in size and development speed observed in the forage type used the first year. In 2016, we transplanted four chicory plants into each chamber. In 2017, we increased the number to six in order to reduce within-chamber variation. For the two intercrop treatments in 2017, we transplanted five plants of ryegrass or black medic in between the six chicory plants. For the 2016 season, chicory plants were sown in May 2015 in small pots in the greenhouse and were transplanted into the rhizotrons on 30 September. Despite our attempt to compact the soil inside the rhizotron chamber, precipitation made the soil settle around 10% during the first winter, i.e. it had sunken 0.4 m. Consequently, on 29 February 2016, we carefully dug up the chicory plants, removed the topsoil, filled in subsoil before filling topsoil back in and replanting the chicory plants. A few chicory plants did not survive the replanting and in March we replaced them with spare plants sown at the same time as the original plants and grown in smaller pots next to the rhizotrons. In 2017, we sowed chicory in pots in the greenhouse on 11 April and transplanted them to the rhizotron chambers on 3 May (Table 2). Chicory is perennial, it produces a rosette of leaves in the first year and the second year it grows stems and flowers. We grew all treatments in three randomized replicates. The soil inside the six chambers not used for the experiment in 2016 but included in 2017 had also sunken during the 2015/2016 winter and the same procedure was used to top up soil in these chambers before transplanting the chicory plants. In 2016, we fertilized all chambers with NPK 5-1-4 fertilizer equivalent to 100 kg N ha −1 , half on 1 April and the other half on 21 June. In 2017, we fertilized all chambers on 3 May and 1 June following the same procedure. In 2016, we pruned the plants at 0.5 m height, several times between 24 May and 12 July to postpone flowering and induce leaf and root growth. We started drying out the DS treatments on 26 June in 2016 and on 13 July in 2017, where we stopped irrigation and mounted the rainout shelters. In 2016, we repeatedly irrigated the WW treatments. In 2017, where the rainout shelters excluded precipitation in all chambers we repeatedly irrigated all treatments apart from the DS. However, we chose to supply the same amount of water in all the irrigated chambers, which led to different levels of soil water content. We irrigated regularly to ensure sufficient water supply. Biomass and 13 C We harvested aboveground biomass on 28 July in 2016 and on 11 September in 2017. We dried the biomass at 80°C for 48 h. Belowground biomass could not be harvested, as the plants were used for further experiments after regrowth. 
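The isotope results that follow are reported in delta notation; for reference, its standard definition (our addition, consistent with the description of R_sample and R_standard given below and presumably the paper's eq. 1) is

\delta = \left( \frac{R_{sample}}{R_{standard}} - 1 \right) \times 1000 \quad \text{(values in ‰)},

where for δ13C the ratio R is 13C/12C with Vienna PeeDee Belemnite as the standard, and the same notation is used later for δ2H with Vienna Standard Mean Ocean Water as the standard.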
The biomass was analysed for 13 C/ 12 C ratio on an elemental analyser interfaced to a (2011): where R sample is the ratio of the less abundant to the more abundant isotope ( 13 C/ 12 C) in the sample and R standard the ratio in a standard solution. For δ 13 C the international standard Vienna PeeDee Belemnite (R standard = 11180.2 × 10 −6 ) was used. Analytical precision (σ) was 0.2‰. The 13 C/ 12 C ratio in plants is directly related to the average stomatal conductance during growth, as discrimination between 12 CO 2 and 13 CO 2 during photosynthesis is greatest when stomatal conductance is high. When stomates are partially or completely closed, a greater part of the CO 2 inside the leaf is absorbed resulting in less fractionation and thereby higher δ 13 C values of the plant tissue (Farquhar and Richards 1984;Farquhar et al. 1989). Root measurements We documented the development in root growth by taking photos of the soil-rhizotron interface through the transparent acrylic glass panels. For this purpose, we designed a "photo box" that could be slid on the metal rails in place of the foamed PVC panels, and thereby excluded the sunlight from the photographed area. We placed a light source consisting of two bands of LED's emitting light at 6000 K in the photo box. We used a compact camera (Olympus Tough TG 860). For each 1.2 m wide panel we took four photos to cover the full width of the panel. We photographed the roots on 21 June and 18 July 2016 and on 6 July, 16 August and 12 September 2017, corresponding to the time of drought initiation in the DS treatment, 2 H tracer injection (see below) and for 2017, harvest. In 2017, harvest was postponed until 20 days after the 2 H tracer-experiment, due to measurements of nutrient uptake later in the season (Rasmussen et al. 2019) We recorded the roots using the line intersects method (Newman 1966) modified to grid lines (Marsh 1971;Tennant 1975) to calculate root intensity, which is the number of root intersections m −1 grid line in each panel (Thorup-Kristensen 2001). To make the counting process more effective we adjusted the grid size to the number of roots, i.e. we used coarser grids when more roots were present and finer grids for images with only a few roots. This is possible because root intensity is independent of the length of gridline. We used four grid sizes: 10, 20, 40 and 80 mm. To minimize the variance of sampled data we used grid sizes that resulted in at least 50 intersections per panel (Ytting 2015). Soil water content We installed time-domain reflectometry sensors (TDR-315/TDR-315 L, Acclima Inc., Meridian, Idaho) at two depths to measure volumetric water content (VWC) in the soil. In 2016, the sensors were installed at 0.5 and 1.7 m depth. In 2017, the sensors were installed at 0.5 and 2.3 m depth. Soil water content was recorded every 5 min in 2016 and every 10 min in 2017 on a datalogger (CR6, Campbell Scientific Inc., Logan, Utah). Discrepancies in measured VWC among the sensors at field capacity (FC) let us conclude that the sensors were precise but not particularly accurate, meaning that the change over time in VWC was reliable but not the measured actual VWC. We have therefore estimated a sensor reading for each sensor at FC and reported changes in VWC from FC. We estimated FC as the mean VWC over a 48-h interval after wetting. In 2017, the measurement was made in the autumn after excess water from a heavy rainfall had drained. 
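A minimal sketch of this field-capacity normalisation (ours; it assumes the sensor readings are available as a list of (timestamp, VWC) pairs with datetime timestamps, and that the wetting time is known):

from datetime import timedelta

def field_capacity_reading(series, wetting_time, window_hours=48):
    # Mean sensor reading over the window following a wetting event,
    # taken as that sensor's value at field capacity (FC).
    end = wetting_time + timedelta(hours=window_hours)
    values = [v for t, v in series if wetting_time <= t <= end]
    return sum(values) / len(values)

def depletion_from_field_capacity(series, fc_reading):
    # Report each reading as a change from FC, as done for the (precise but
    # not particularly accurate) sensors described above.
    return [(t, fc_reading - v) for t, v in series]

def mean_daily_decrease(series, start, end):
    # Mean decrease in VWC per day between two timestamps; when VWC is below FC
    # this decline is interpreted as plant water uptake.
    window = [(t, v) for t, v in series if start <= t <= end]
    (t0, v0), (t1, v1) = window[0], window[-1]
    days = (t1 - t0).total_seconds() / 86400.0
    return 0.0 if days == 0 else (v0 - v1) / days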
In the autumn, there is little evaporation and no plant transpiration to decrease VWC below FC, making it an optimal time to estimate FC. We did not have data from autumn 2016, so instead we estimated FC in early spring.

Water uptake

We estimated water uptake from the VWC readings. We assume that water movement in the soil is negligible when VWC is below FC. Hence, the decrease in VWC can be interpreted as plant water uptake. Water uptake is therefore estimated as the mean decrease in VWC over a given time interval. We attempted to use intervals corresponding to the time of the 2H tracer studies. In 2016, the interval was postponed a few days, and in 2017, the time interval did not cover the first two days of the tracer study. For the period from onset of drought to harvest 2017, we tested whether the daily water uptake at 2.3 m depth was affected by daily mean VWC at 0.5 m depth across all treatments. For this period, the VWC at 2.3 m depth was close to FC in all treatments and therefore the water content at 2.3 m itself was unlikely to affect the water uptake. As transpiration demand is high at this time of the year and plants are large, we assumed that topsoil water limitations would limit total water uptake unless balanced by an increased water uptake lower in the profile. We excluded days in which the chambers were irrigated and one day after irrigation events to exclude periods with large soil water movement.

2H tracer

We used 2H labeled water injected at 2.3 m depth to trace water uptake from this depth. We mixed 90% 2H2O tracer with tap water 1:1, to achieve an enrichment of δ2H = 5,665,651‰, and injected 100 ml per chamber. We removed one of the acrylic panels in each chamber temporarily to allow tracer injection and distributed it over 100 injection points in the soil. The injections were made in two horizontal rows of 10 equally distributed holes each, 5 cm above and below 2.3 m depth respectively. In each of these 20 holes, we injected 5 ml tracer distributed between five points: 5, 10, 15, 20 and 25 cm from the horizontal soil surface. Tracer injection was made on 19 July 2016 and 15 August 2017. We captured the tracer signal by collecting transpiration water using plastic bags. For studies using tracers, collecting transpiration water is considered valid, as the tracers increase the enrichment level by several orders of magnitude, which makes the fractionation negligible (Thorburn and Mensforth 1993; Beyer et al. 2016). We sampled the transpiration water one day before tracer injection as a control and one, two, three, four and six days after in 2016, and three and six days after in 2017. We fixed a plastic bag over each plant with an elastic cord that minimized air exchange with the surroundings. Transpiration water condensed on the inside of the plastic bag, which was folded inwards under the elastic cord to create a gutter for the water drops. Plastic bags were mounted on the plants two hours before noon and removed at noon. We removed the plastic bags one by one, shook them to unite the drops, and transferred each sample to a closed plastic beaker. Later we filtered the samples through filter paper to remove soil and debris contamination and transferred the samples to glass vials. We collected water from all plants and in most cases mixed the individual plant samples before analysis. Plastic bags could not be removed without a certain loss of water, thus the total water collected was not a measure of total transpiration.
Therefore we chose to mix equal amounts of water from each sample. Day 2 in 2016 and day 6 in 2017, we analysed the samples from each plant separately to get data on within chamber variation. The samples were analysed for 2 H at Centre for Isotope Research, University of Groningen, The Netherlands on a continuous flow isotope ratio mass spectrometer (IRMS, Isoprime 10) combined with a chromium reduction system (Europa PyrOH, Gehre et al. 1996). Isotope values are expressed in delta notation (δ) as given in eq. 1. R sample is the 2 H/ 1 H ratio in the sample and R standard for δ 2 H is Vienna standard mean ocean water (R standard ≈ 1/6412). Analytical precision (σ) was 0.7 ‰. In order to identify whether tracer was present in a sample, we adapted the criteria proposed by Kulmatiski et al. (2010). If a sample had a δ 2 H-value at least two standard deviations higher than the control samples, tracer was assumed to be present. Statistics Data analyses were conducted in R version 3.4.4 (R Core team 2018). The effect of treatment on aboveground biomass of chicory, black medic and ryegrass was tested in a mixed effects one-way ANOVA. Separate harvest of single plants allowed the inclusion of chamber as random effect to account for the fact that the two intercropped species are not independent. The effect of soil depth and treatment on root intensity was tested in a mixed effects two-way ANOVA. We included chamber as random effect to account for the fact that the different depths are not independent. To meet assumptions of normality, depths where at least one of the treatments had no roots in any of the replicates, were excluded from the model. Separate analyses were made for each date. The effect of soil depth and treatment on water uptake during a given time interval was tested in a mixed effect three-way ANCOVA with time as covariate. In 2016, we excluded the sensors from one replicate of the DS treatment because water reached it during a cloudburst. In 2017, we excluded two of the sensors at 0.5 m depth from the analysis, one in a chicory and ryegrass intercrop treatment and one in a chicory and black medic intercrop treatment. The first due to noise in the readings and the second due to readings showing a pattern in VWC that did not resemble the pattern of any of the other sensors. The effect of treatment and time on 2 H concentration in transpiration water was tested in a mixed effect twoway ANOVA. We log-transformed the response variable to meet the assumptions of homoscedasticity. The effect of treatment on δ 13 C was tested in a oneway ANOVA. For 2017, the model is a mixed effects model because samples for each plant were analysed separately. In all cases, separate analyses were made for each year. All models met the assumptions of normality and homoscedasticity. Differences were considered significant at P < 0.05. Tukey test P-values for pairwise comparisons were adjusted for multiplicity, by single step correction to control the family-wise error rate, using the multcomp package (Hothorn et al. 2008). For root intensity, we decided to control the family-wise error rate for each root depth. For the 2 H concentration, we only made pairwise comparisons for the last date. Results Plants grew well in both years, and as hypothesized, roots were observed below 3 m depth by the end of the growing season. Both the uptake of 2 H tracer and sensor readings showed that chicory acquired water from 2.3 m depth. However, our results do not suggest that compensation takes place, i.e. 
deep water uptake was not increased to balance the decreased topsoil water uptake during drought. Biomass Plant development differed between the two experimental years. In 2016, the chicory plants were in their second growth year and went into the generative stage right from the start of the growing season. They started flowering in late May. In 2017, the chicory plants were in their first year of growth and stayed in the vegetative state. Aboveground biomass of chicory did not differ significantly between the two treatments in 2016 and was 6.52 and 6.85 t ha −1 in the WW and DS treatment, respectively. In 2017, chicory biomass was 4.65 and 3.64 ton ha −1 in the WW and DS treatment respectively and 2.80 and 2.21 t ha −1 when intercropped with either black medic or ryegrass. Biomass of black medic and ryegrass was 5.89 and 7.68 ton ha −1 respectively. Both intercrop treatments significantly reduced chicory biomass compared to the WW treatment. Ryegrass produced significantly more biomass than black medic (Fig. 2). Root growth Root growth showed a similar pattern across the four treatments; however intercropping decreased total root intensity down to around 2 m depth (only significant in few depths), except for 0.11 m depth, where the chicory and ryegrass intercrop treatment had a significantly higher root intensity than the other treatments. Roots of intercropped species could not be distinguished and the reported root intensities are thus the sum of two species in the intercrop treatments. The month-long summer drought did not influence root intensity in any depths. In 2016, roots had reached 2 m depth at the time of drought initiation, which was 3.5 months after transplanting. A month later, at the time of tracer injection the rooting depth of chicory had reached below 3 m depth ( Fig. 3a and b). In 2017, roots were observed near the bottom of the rhizotrons prior to drought initiation, 2 months after transplanting. However, only a few roots were present below 2 m depth. At the time of tracer injection, which was again 3.5 months after transplanting, root intensity had started to increase down to 2.5 m depth, and at harvest, 4.5 months after transplanting this was the case down to around 3 m depth (Fig. 3c, d, and e). Soil moisture and water uptake During the drought, 135 and 97 mm of water were excluded from the DS treatment in 2016 and 2017, respectively compared to the other treatments. In 2016, the soil dried out gradually at both 0.5 and 1.7 m depth in the DS treatment and in the WW treatment between the precipitation and irrigation events. As a result, the soil was drier in the DS than in the WW treatment at both depths recorded at the time of the tracer-experiment ( Fig. 4a and b). Although chicory WW and the two intercrop treatments in 2017 received the same amount of water, less water reached the sensors at 0.5 m in the chicory and black medic intercropping than in the WW and the chicory and ryegrass intercropping. This indicates that the soil above the sensors was drier and therefore could withhold more water compared to the two other irrigated treatments. At the time of the tracer-experiment, soil water content under the chicory and black medic intercropping was similar to the DS treatment, which was lower in comparison to two other treatments ( Fig. 4c and d). During the tracer-experiment, chicory plants in the WW treatment acquired 3.7 and 2.3 mm water m −1 soil column day −1 from 0.5 m in 2016 and 2017, respectively. 
The uptake from 0.5 m depth was reduced by more than 50% in the DS treatment compared to the WW treatment in both years. In the WW treatment, chicory took up 1.9 mm water m −1 soil column day −1 from 1.7 m depth in 2016, whereas the uptake was 0.44 mm water m −1 soil column day −1 from 2.3 m depth in 2017. In 2016, drought significantly reduced water uptake of chicory from 1.7 m depth, whereas no effect of drought was observed at 2.3 m depth in 2017. Common for both years was that the amount of water taken up from 0.5 m depth in the DS treatment was equal to the uptake from 1.7 and 2.3 m depth in 2016 and 2017 respectively. Both intercrop treatments significantly reduced water uptake at 0.5 m depth compared to the WW treatment, but no effect was seen at 2.3 m depth (Fig. 5). We did not find any effect of mean daily soil VWC at 0.5 m depth on water uptake at 2.3 m depth, giving no indication of compensatory deep water uptake (Data not shown). H enrichment Chicory took up 2 H tracer from 2.3 m depth in both years (Fig. 6a). Two days after tracer application in 2016, 21 out of 23 chicory plants demonstrated isotope ratios that were two standard deviations or more above controls. Six days after tracer application in 2017, it was 30 out of 64 chicory plants that showed the enrichment. No ryegrass or black medic plants indicated tracer uptake (Fig. 6b). In 2016, the 2 H concentration in chicory plants in the DS treatment tended to be higher compared to the WW treatment, but the difference was not significant. In 2017, no differences were seen in tracer concentration among chicory plants across the treatments. Black medic and ryegrass plants revealed significantly lower 2 H enrichment in comparison to intercropped chicory. In 2016, there was no effect of drought on the 13 C concentration of the chicory biomass (Fig. 7). Similarly, there was neither an effect of drought nor intercropping with ryegrass in 2017. However, intercropping with black medic increased the 13 C concentration in chicory indicating that chicory was more drought stressed in this treatment than in any of the other treatments. Deep root growth In accordance with our hypothesis, chicory demonstrated its capability to grow roots below 3 m depth and did so within 4.5 months. However, root intensity decreased markedly below 2 m in 2016 and below 2.5 m depth in 2017. The root intensity below 2 m depth at drought initiation, 2.5 m depth at tracer injection and 3.5 m depth at harvest in 2017 was very low and could be a result of roots from the 2016 crop still visible on the rhizotron surface. Studies covering a longer growing season have found extensive root growth in chicory down to 2.5 m, where equipment limitations prevented observations deeper down (Thorup-Kristensen 2006;Thorup-Kristensen and Rasmussen 2015). In the field, factors such as high soil strength (Stirzaker et al. 1996;Gao et al. 2016), biopores (Han et al. 2015), and hypoxia might restrict deep root growth, which is less likely in our facility with drained repacked soil. However, we did use field soil with a soil bulk density comparable to field soils. Both intercrop treatments decreased total root intensity especially from 0.5 to 2 m depth. This has to be seen in the light of a total aboveground biomass that was twice as high as in the WW sole crop treatment. 
Observing that chicory biomass, on the other hand, was reduced to almost half when intercropped, suggests that both black medic and ryegrass had much lower root intensity below 0.3 m depth than sole cropped chicory and that the interspecific competition reduced both above- and belowground growth of chicory. Black medic and ryegrass are both shallow rooted and are unlikely to reach below 1 m depth (Kristensen and Thorup-Kristensen 2004; Thorup-Kristensen and Rasmussen 2015), thus the deep roots observed in the intercrop treatments are assumed to be chicory roots.

Deep water uptake

The sensors documented water uptake in both treatments from 1.7 m depth in 2016 and 2.3 m depth in 2017. In fact, the sensors showed that in the WW treatment in 2016, chicory water uptake at 1.7 m depth was c. 30% of its water uptake at 0.5 m depth. In 2017, chicory water uptake at 2.3 m depth was c. 10% of its uptake at 0.5 m depth when well-watered. The 2H tracer uptake by chicory from 2.3 m depth in both years supports the sensor-based water uptake calculations. Furthermore, the tracer study confirmed that neither black medic nor ryegrass had roots deep enough to acquire water from 2.3 m depth. This is a clear example of resource complementarity in root competition in intercropping (Tilman et al. 2001; Postma and Lynch 2012).

(Figure caption: … drought stressed (DS) chicory sole crop treatments and in the chicory intercrop treatment with ryegrass and black medic respectively. We tested significant differences in a mixed effects two-way ANOVA. To meet the assumptions of homoscedasticity, data were log-transformed. Separate analyses were made for each year and pairwise comparisons were only made for the last date. There was no effect of treatment in 2016. In 2017, the 2H concentration in chicory and in black medic in the intercrop treatment differed, and likewise in the chicory and ryegrass intercropping. Differing treatments are marked with identical symbols.)

Response to water stress and intercropping

Water uptake from 0.5 m depth was significantly reduced in the DS treatment compared to the WW treatment, indicating that the soil water potential was low enough to limit plant water uptake in the DS treatment. Contrary to our expectations, we did not find a higher water uptake either at 1.7 m depth in 2016 or at 2.3 m depth in 2017 when plants were water limited in the topsoil. As biomass was not significantly reduced, whereas water uptake was reduced by 59 and 74%, the reduction in water uptake cannot be explained by a reduced water need. In both years the drought had little effect on aboveground biomass. The drought was initiated late in the season, after chicory had established its aboveground biomass and concluded aboveground biomass accumulation in favour of belowground biomass accumulation in the taproot. For this reason, aboveground biomass is not a good indicator of the severity of the drought. On the other hand, the equal biomass in the WW and DS treatments secured equal water uptake potential in the two treatments, and thereby revealed that the drought increased the water use efficiency, as less water was taken up per aboveground biomass in the DS than in the WW treatment. Although not significant, the 2H tracer study indicated a higher 2H concentration in the transpiration water in the DS compared to the WW treatment in 2016. This suggests a higher relative water uptake from 1.7 m depth.
A higher relative uptake from a certain depth can logically be explained by an increase in water uptake from the given depth, a decrease in water uptake somewhere else in the soil profile, or a combination of both. As the water uptake based on the sensor calculations shows a significantly lower water uptake from 0.5 m depth in the DS than in the WW treatment in 2016, it is likely that what we observed was the effect of decreased uptake in the topsoil. We only observed a significant increase in 13C concentration in chicory when intercropped with black medic. Samples were taken from the total aboveground biomass, and not from plant parts developed during the drought, which might explain why the treatment effects were only captured in the chicory and black medic intercropping, where black medic appeared to have induced drought stress in chicory before onset of the drought stress we induced. Intercropping reduced total root intensity at 0.5 m depth by over 40%. Still, water uptake from this depth was only slightly decreased, indicating that the lower root intensity did not restrict water uptake in well-watered conditions. Root density in upper soil layers of well-established crops does not correlate well with water uptake (Hamblin and Tennant 1987; Katayama et al. 2000), which can be explained by the high mobility of water in the soil, making a dense root system superfluous. Following the logic behind Walter's two-layer hypothesis (Walter 1939, 1971; Walker and Noy-Meir 1982), intercropping would lead to a vertical niche partitioning resulting in increased water uptake by the deep-rooted chicory when intercropped with a shallow-rooted species. However, we did not observe an increase in deep water uptake.

Absence of a deep water compensation effect

We suggest three possible explanations for why we did not observe the hypothesized increase in deep water uptake during drought or intercropping. 1) The hydraulic resistance is too high to increase deep water uptake. Theoretically, the ability of root systems to extract water from deep roots depends not only on root system depth but also on root system hydraulics (Javaux et al. 2013). Root hydraulic conductivity limits the potential water uptake and differs among species, but also among different roots in a root system (Meunier et al. 2018).

(Figure caption: Error bars denote standard errors, and letters indicate significant differences among treatments in a one-way ANOVA. Separate analyses were made for each year. For 2017, the model is a mixed effects model because samples for each plant in a chamber were analysed separately.)

The ability of a root system to compensate, i.e. extract water where it is easily available, for instance from deeper soil depths, is therefore a function of (1) the xylem conductance between the roots in the extraction zone and the root crown and (2) the radial root conductance in the wet zone. Compensation has been observed in chicory below 0.6 m depth, but this was in a study allowing root growth down to only 1.5 m depth (Vandoorne et al. 2012a). In our experiment, the xylem conductance might simply have been too low in the deeper part of the root zone to allow compensation, possibly because the deep soil measurements were made in a zone with a low density of young roots (McCully 1995; Meunier et al. 2018). However, chicory had 31% fewer roots in the chicory and black medic intercropping than in the WW treatment at 2.3 m, with no reduction in water uptake, not supporting such a relationship between root density and water uptake.
2) Insufficient water supply in the topsoil induces root-to-shoot signalling causing stomatal closure, despite sufficient water supply in deeper soil layers. Signals by phytohormones like Abscisic acid (ABA), produced when parts of the root system are under low water potential, might reduce plant transpiration and consequently root water uptake also from deeper depths by triggering stomatal closure Davies 1990a, 1990b;Tardieu et al. 1992;Dodd et al. 2008). Split-root experiments, where one side of the root system is under low water potential, have found reduced stomata conductance, despite sufficient water supply (Blackman and Davies 1985;Zhang and Davies 1990b). However, experiments with vertical heterogeneity in soil water content yield ambiguous results (Puértolas et al. 2015;Saradadevi et al. 2016). The hormonal signalling during topsoil drying has not been tested for chicory. But chicory does show an isohydric behaviour, decreasing stomatal conductance and maintaining leaf water potential during moderate drought stress (Vandoorne et al. 2012b). 3) Deep water uptake compensation might have occurred, but was not captured in this experimental set-up. Water uptake compensation could have happened between or below the depths covered by the sensors. At the end of the drought in 2016, VWC was not only lower at 0.5 m depth in the DS treatment compared to the WW treatment but also in 1.7 m depth, which could have impaired the water uptake from this depth, too. There is however, no indication of an increasing water uptake from 1.7 m at any point in time during the drought. As the soil is drying out from the top, the depth of a possible compensation would at some point have been at exactly 1.7 m, which was not the case. Water uptake could also have been confounded with water redistribution in the soil column, leading to an underestimation of water uptake in depths where water is moving to, and an overestimation in depths where water is moving from. We however assume that the water redistribution is small compared to the daily water uptake, and thus can be neglected. In summary, chicory can grow roots down to 3 m depth within 4.5 months and benefit from a significant water uptake from below 2 m depth both during wellwatered and drought conditions. Our study highlights the benefit of deep root growth for crop water uptake, but questions whether further compensation in deep water uptake takes place when water is limited in the topsoil. A compensation might however, be pronounced for other crop species or for crops which have had more time to establish a deep root system.
A Slice-Based Change Impact Analysis for Regression Test Case Prioritization of Object-Oriented Programs Test case prioritization focuses on finding a suitable order of execution of the test cases in a test suite to meet some performance goals like detecting faults early. It is likely that some test cases execute the program parts that are more prone to errors and will detect more errors if executed early during the testing process. Finding an optimal order of execution for the selected regression test cases saves time and cost of retesting. This paper presents a static approach to prioritizing the test cases by computing the affected component coupling (ACC) of the affected parts of object-oriented programs. We construct a graph named affected slice graph (ASG) to represent these affected program parts. We determine the fault-proneness of the nodes of ASG by computing their respective ACC values. We assign higher priority to those test cases that cover the nodes with higher ACC values. Our analysis with mutation faults shows that the test cases executing the fault-prone program parts have a higher chance to reveal faults earlier than other test cases in the test suite. The result obtained from seven case studies justifies that our approach is feasible and gives acceptable performance in comparison to some existing techniques. Introduction In the software development life cycle, regression testing is considered an important part.This is because it is essential to validate the modification and to ensure that no other parts of the program have been affected by the change [1].Regression testing [2][3][4][5][6] is defined as the selective retesting of a system or component to verify that modifications have not caused unintended effects and that the system or component still complies with its specified requirements [2].Therefore, this paper follows a selective approach [3][4][5] to identify and retest only those parts of the program that are affected by the change.Thus, it is even more important to improve the effectiveness of regression testing and reduce the cost of test case execution.Therefore, in this paper, we focus on test case prioritization (TCP) of a given test suite .Test case prioritization problem is best described using the example in Table 1.If the test cases are executed in the order {1, 2, 3, 4, 5, 6}, then we achieve 100% coverage of faults only after the sixth test case is executed, whereas if the ordering of the test case execution is changed to {6, 5, 4, 1, 3, 2}, then we achieve 100% coverage after the execution of the fourth test case.Therefore, finding the order of test case execution is essential to detect the faults [7][8][9] early during regression testing.All the existing approaches target finding an optimal ordering of the test cases based on the rate of fault detection or rate of satisfiability of coverage criterion under consideration.However, these existing techniques [3,6,[10][11][12][13] were primarily developed to target procedural programs.Very few existing works [14][15][16][17][18] focus on the test case prioritization for object-oriented programs.This paper presents a static approach of prioritizing the test cases of object-oriented programs.It is reported that a module having high coupling value is more erroneous than other modules [19,20].As a result, a test case that executes a module with high coupling value will reveal more faults than other test cases.Many techniques and metrics [21] exist to measure the coupling value of the program segments [19] 
and establish these values as an indicator of fault-proneness [20]. None of the prioritization techniques available in the literature have reported the use of a coupling measure to prioritize the test cases. Thus, this paper uses the coupling value of the affected program parts for prioritizing the selected test cases for regression testing.

Table 1. Percentage of faults detected by two sample test case orderings (cumulative, after executing the first 1, 2, ..., 6 test cases).
Ordering 1, 2, 3, 4, 5, 6: 37.5, 50.0, 75.0, 75.0, 87.5, 100
Ordering 6, 5, 4, 1, 3, 2: 62.5, 75.0, 87.5, 100, 100, 100

Based on the above motivations, we propose an approach to prioritize a selected test suite of an object-oriented program using the coupling value of the affected program parts covered by the test cases. For experimentation, we have taken a sample Java program shown in Algorithm 1. A total of twenty test cases (1-20) were taken along with their node coverage information. All those test cases that covered the affected nodes (with respect to a modification point) are selected hierarchically. Finally, five test cases (6-10) are selected for prioritization. For details of hierarchical regression test case selection, interested readers are requested to refer to [19]. In this approach, we propose a technique to prioritize the selected test cases (6-10). Thus, we fix our research objectives as follows:
(i) To identify and represent the affected program parts and compute the coupling value of these affected program parts.
(ii) To cluster the coupling values [22] into groups and assign a weight to each group based on their criticality.
(iii) To prioritize the test cases by sorting them in the decreasing order of their computed weights.
So, the contributions of this paper lie in the following:
(i) Proposing an algorithm for prioritizing the selected test cases.
(ii) Implementing the proposed algorithm for the fifteen experimental programs.
(iii) Carrying out mutation analysis.
(iv) Comparing the performance of our approach with an existing work.
The rest of the paper is organized as follows: Section 2 introduces the technique used in this paper for prioritizing the test cases. We describe our proposed process of prioritization in Section 3. We also discuss the working and complexity analysis of our algorithm in this section. The details of our implementation and experimental studies are given in Section 4. Here, we present the experimental study settings, describe the characteristics of the program samples taken for our experimentation and mutation analysis, and analyze the results. In Section 5, we discuss and compare our work with some related work. We also highlight some of the limitations of our approach in this section. We conclude the paper in Section 6 with some insights into probable extensions to our work.

Preliminary Study

In this section, we discuss the techniques that are used in this work to accomplish our research objectives.

Program Slicing. This paper uses program slicing to identify the affected program parts for change impact analysis.
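As a rough illustration of the idea, consider the following small, hypothetical Java fragment (it is ours, not the example program of Algorithm 1): the statements marked "in slice" are the only ones that can influence the value printed at the statement of interest, and a slicer would discard the rest.

public class SliceDemo {
    public static void main(String[] args) {
        int price = 40;                  // in slice: defines price, which reaches total
        int quantity = 3;                // in slice: defines quantity, which reaches total
        int discount = 5;                // not in slice: never affects total
        int total = price * quantity;    // in slice: defines total
        System.out.println(total);       // point of interest: slice computed w.r.t. total here
        System.out.println(discount);    // not in slice
    }
}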
Program slicing was originally introduced by Weiser [23] as a method for automatically decomposing programs by analyzing their data flow and control flow, starting from a subset of a program's behavior. Slicing reduces the program to a minimal form that still produces the same behavior as the original program. Program slicing is a method of separating out the relevant parts of a program with respect to a particular computation [24][25][26][27]. The input that the slicing algorithm takes is usually an intermediate representation of the program under consideration [28][29][30][31]. The first step in slicing a program involves specifying a point of interest, which is called the slicing criterion and is expressed as a tuple (s, V), where s is the statement number and V is the variable that is being used or defined at s. Li et al. [25] proposed the concept of hierarchical slicing that computes the slices at package, class, method, and statement levels. Here, we adopt an approach of slicing that is different from that given in [1, 25]. We name this slicing approach hierarchical decomposition (HD) slicing. We first construct a single intermediate graph of the program taking into account the possible dependences among different program elements. Then, we perform HD slicing to obtain the affected program parts with respect to the change made to the program. The slice thus obtained is graphically represented and named affected slice graph (ASG). The steps of HD slicing are given in [32]. A comparison of hierarchical slicing [1] versus HD slicing in terms of number of nodes selected and computation time is shown in Table 2. At each level, we obtain more accuracy in test case selection from a coarse-grain level to a finer-grain level by discarding the test cases that are not relevant.

Coupling in Object-Oriented Programming. Coupling is defined as the degree of interdependence between two modules. However, in an object-oriented programming environment, coupling can exist not only at the level of methods but also at the class level and package level. Therefore, coupling represents the degree of interdependence between methods, between classes, between packages, and so forth. Many researchers have proposed different slicing based mechanisms [19, 20, 32] to measure coupling in an object-oriented framework. There are many factors, such as information hiding, encapsulation, inheritance, and message passing, through which classes become interdependent and hence coupled. Similarly, the container packages of the coupled classes are also said to be coupled. The coupling mechanism adopted in this paper is given in Section 3.

Regression Test Case Prioritization. Testing is an important phase in the software life cycle. This phase incurs approximately 60% of the total cost of the software. Therefore, it becomes highly essential to devise proper testing techniques in order to design test cases that test the software to detect bugs early. It becomes a big challenge to manage the retesting process with respect to time and cost, especially when the test suite becomes too large. Therefore, the selective retest technique attempts to identify those test cases that can exercise the modified parts of the program and the parts that are affected by the modification, to reduce the cost of testing. However, test case prioritization can complement the selective retest technique, and faults can be detected early by prioritizing these selected test cases. Thus, the test case prioritization (TCP) problem, as stated by Rothermel et al.
[6], is as follows. Given: T, a test suite; PT, the set of permutations of T; f, a function from PT to the real numbers. Problem: Find T′ ∈ PT such that, for all T″ ∈ PT, f(T′) ≥ f(T″), where PT is the set of all possible orderings of the test cases in T and f is a function that maps each ordering to an award value. This prioritization approach can be used with the selective retest technique to obtain a version specific prioritized test suite [2]. Rothermel et al. [6] proposed a metric to assess the efficiency of any of the existing prioritizing techniques. This metric is named the Average Percentage of Faults Detected (APFD) and is used by many researchers to evaluate the effectiveness of their proposed techniques. The APFD measure is calculated by taking the weighted average of the number of faults detected during execution of a program with respect to the percentage of test cases executed. Let T be a test suite and let T′ be a permutation of T. The APFD for T′ is defined as follows:

APFD = 1 − (TF1 + TF2 + ⋯ + TFm) / (n · m) + 1 / (2n)

Here, n is the number of test cases in T, m is the total number of faults, and TFi is the position of the first test case in T′ that reveals the fault i. The value of APFD can range from 0 to 100 (in percentage). The higher the APFD value for any ordering of the test cases in the test suite is, the higher the rate at which software faults are discovered is.

Our Proposed Approach

In this section, we discuss our proposed approach to prioritize a given test suite based on the test cases selected for regression testing. We consider the example Java program shown in Algorithm 1 to discuss our proposed approach. This program defines a class named Shape which is inherited by the classes Rectangle and Triangle. The class TestShape contains the main method and computes the area of a rectangle and triangle, respectively, based on the user inputs given through the console, and displays the result. Though this program is very small in size, it represents all the important features of a Java program and is helpful in explaining the working of this approach. The prioritization steps are summarized as follows.

Step 1. Construct the ASG and compute the coupling value of each node of the ASG.
Step 2. Cluster the coupling values and assign weight to the nodes of ASG.
Step 3. Compute the weights of test cases and prioritize.

ASG and Computation of Coupling Values. ASG is the graphical representation of the slice that is computed with respect to some change made to the program. The point of change is taken as the slicing criterion to compute the slice. The slicing algorithm comprises both forward and backward traversal to discover the affected program parts. The forward traversal discovers the program parts affected by the change, and the backward traversal discovers those parts that affect the parts discovered in the forward traversal. The steps of hierarchical decomposition (HD) slicing to compute the slice and construct the ASG are given as follows:
(i) Traverse the EOOSDG in the forward direction, starting from the point of modification, that is, the slicing criterion, except along method overridden edges.
(ii) Mark and add each node of the EOOSDG that is reached by the forward traversal to a worklist W1.
(iii) Perform a two-pass backward traversal with each node w ∈ W1 as the starting point.
(1) Pass-1 and (2) Pass-2, the two backward passes over the dependence edges. This set of affected nodes and their dependences is then modeled graphically to form the affected slice graph (ASG). To avoid repetition of the concepts, details are not reproduced here; interested readers are requested to refer to [32] for details. Algorithm 2 takes the ASG as input and calculates the ACC of each node. We discuss the working of the proposed Algorithm 2 in Section 3.4. In this approach, we use the concept of information inflow and outflow for coupling measurement. The ASG represents all forms of information flow between any two nodes in the form of edges. Thus, our proposed affected component coupling (ACC) for a given node x is computed as the normalized ratio of the sum of the inflow and outflow of x to the total number of nodes in the ASG. The direction of coupling between any two nodes is given equal weight and was not considered separately. This goes with the hypothesis that a ripple change can propagate in any direction along a coupling dimension. Below, we define the terms related to the computation of affected component coupling (ACC) values.

Definition 1. To measure the coupling, we define a set Inflow(x) that comprises all those nodes on which x depends. For any node x in the ASG = (V, E),

Inflow(x) = {y ∈ V | x depends on y}

The outflow of x in the ASG is defined as the set comprising all those nodes that depend on x:

Outflow(x) = {y ∈ V | y depends on x}

Thus, the dependence set D(x) of each node x is defined as the union of Inflow(x) and Outflow(x):

D(x) = Inflow(x) ∪ Outflow(x)

Definition 2. Hence, the affected coupling of a given node x is computed as the normalized ratio of the dependence of x, |D(x)|, to the total number of affected nodes in the ASG, |V| − 1, as the node under consideration is excluded. This coupling is measured with respect to the change made to the program that was taken as the slicing criterion to generate the ASG. This coupling measure is given as

ACC(x) = |D(x)| / (|V| − 1) (6)

Definition 3. The updated coupling of a method node m in ASG = (V, E) is defined as the average of the coupling values of all its elements (parameters and statements) along with its own coupling measure. Let a method node m have k elements, that is, e1, e2, ..., ek. Thus, the coupling of the method node is given as

ACC(m) ← (ACC(m) + Σi=1..k ACC(ei)) / (k + 1)

Definition 4. The updated coupling of a class node c in ASG = (V, E) is defined as the average of the coupling values of all its elements (attributes and methods) along with its own coupling measure. Let a class node c have k elements, that is, e1, e2, ..., ek. Thus, the coupling of the class node is given as

ACC(c) ← (ACC(c) + Σi=1..k ACC(ei)) / (k + 1)

Definition 5. The updated coupling of a package node p in ASG = (V, E) is defined as the average of the coupling values of all its elements (classes and subpackages) along with its own coupling measure. Let a package node p have k elements, that is, e1, e2, ..., ek. Thus, the coupling of the package node is given as

ACC(p) ← (ACC(p) + Σi=1..k ACC(ei)) / (k + 1)

The detailed computation of the ACC value for some of the nodes is shown in Section 3.4. The reason for taking coupling into consideration is that any node having a higher ACC value is an indicator that the node is likely to be more error-prone [20]. This is because a higher ACC value of a node indicates more dependence of other nodes on this source of information.

Clustering and Assigning Weights.
Once the ACC values of all the nodes have been computed, the values are clustered. Clustering [33] of the nodes is based on the notion that not all the nodes of the ASG are equally erroneous. Some nodes are more erroneous than others. So, we need to identify the sets of nodes that can be categorized into different levels of fault-proneness. The k-means clustering technique [22, 34] is used here to cluster the ACC values. k-means clustering is a technique of automatically partitioning a set of given data into k groups. The cluster centres are chosen randomly from the data set. The value of k for our approach is 3, as we divide the coupling values into three categories as shown in Figure 3. These three categories of fault association are critical fault association, moderate fault association, and weak fault association. The computed ACC values can belong to any of these three categories. We propose an algorithm named find Weighted Affected Component Coupling (findWACC). It takes the ASG and its total number of nodes as input. It uses the formula given in (6) to compute the ACC value of each node in the ASG. It computes the outflow of a node at Line (3) and the inflow of a node at Line (4). Algorithm 2 computes the ACC values of each node and then updates these values for some specific nodes such as package nodes, class nodes, method nodes, and method call nodes. It then assigns weights to the nodes of the ASG. Any value of weights can be chosen to signify the faultiness of one set of nodes compared to the other sets. However, in this paper, we use the following weights: if the ACC value of a node x belongs to the category of critical fault association, that is, 0.7 ⩽ ACC(x) < 1.0, then x is assigned a weight 3. Similarly, if the ACC value of a node x belongs to the category of moderate fault association, that is, 0.6 ⩽ ACC(x) < 0.7, then x is assigned a weight 2. Otherwise, x belongs to the category of weak fault association and is assigned a weight 1. The ACC value of each node of the ASG and the corresponding weights assigned to them are shown in Figure 2.

Computation of Test Case Weights and Prioritization. The program under consideration is executed with each selected test case in a given test suite to find the coverage information as shown in Table 3. The weight of a test case depends upon the weights of the nodes that it covers. All the critical and moderate nodes (nodes with weights 3 and 2, resp.)
are shown in bold in the nodes covered column of Table 3. We propose Algorithm 3, named Hierarchical Prioritization of Test Cases using Affected Component Coupling (H-PTCACC), to compute the weights and prioritize the given test suite. Algorithm 3 takes the selected test cases along with their coverage information and the ACC values of each node in the ASG as its input. The output of the algorithm is a prioritized set of test cases. For any test case t ∈ T, Algorithm 3 first computes its critical weight (Wtc), that is, the sum of the weights of all the critical fault-prone nodes covered by t. Similarly, Algorithm 3 computes the moderate weight (Wtm), that is, the sum of the weights of all the moderate fault-prone nodes covered by t. In the same way, Algorithm 3 computes the weak weight (Wtw), that is, the sum of the weights of all weak fault-prone nodes for each test case. Thus, the weight of test case t is given as the sum of its critical weight (Wtc), moderate weight (Wtm), and weak weight (Wtw). Table 4 shows the different weights computed for each of the test cases 6, 7, 8, 9, and 10. Algorithm 3 assigns priority to the test cases based on their different computed weights. The test case having a higher total weight is given higher priority in the test suite. If any two test cases have the same total weight, then their priority is decided based on their critical weight. The test case with the higher critical weight is given higher priority. Similarly, if the critical weights are also equal, the moderate and then the weak weights are used to break the tie. Table 4 shows the final prioritized sequence of the selected test cases.

Working of the Algorithm. In this subsection, we discuss the working of our proposed algorithms. Algorithm 2 uses the formula given in (6) to compute the ACC value of each node in the ASG. For example, we show the ACC calculation for the class Triangle, represented as node 24 in Figure 1.
Initially, the ACC value of node 24 is computed using (6). Figure 4 shows the sets associated with the computation of the ACC of node 24. The inflow set for node 24 is shown in Figure 4(a) and the outflow set is shown in Figure 4(b). Figure 4(c) shows the set in which node 24 is a member. Once the computation of the ACC of all member nodes of node 24, shown in Figure 4(d), is complete, ACC(24) is updated. Similarly, the ACC values of all the nodes (25, 26, 27, f3, f4, 29, 30, f27 1 out, f27 2 out, 33, f3 out, and 34) associated with node 24, as shown in Figure 1, are computed. Then, Algorithm 2 updates the ACC value of each node of the ASG. The reason behind this update is that, for any node that represents a method, the statements contained inside that method also contribute to the ACC of the method. Even if a method does not have any statement inside it, it will still have some ACC value, as some other method may be overriding it. Therefore, we have taken the average of all the ACC values of all the statements and the ACC value of the method under consideration to compute the updated ACC value of the method. For example, the ACC values of nodes {24, 27, 33} are updated. The average of the ACC value of node 27 and the ACC values of all its member nodes {f3, f4, 29, 30, f27 1 out, f27 2 out} is computed and assigned to node 27. Therefore, the ACC value of the class Triangle in Algorithm 1, represented as node 24 in Figure 1, is found to be 0.68866. A similar procedure is followed to update the ACC values of all the nodes representing the classes and packages in the ASG. Algorithm 3 computes the critical fault-prone weight Wtc(t), the moderate fault-prone weight Wtm(t), the weak fault-prone weight Wtw(t), and the total weight Wt(t) for each test case t ∈ T. For example, the nodes covered by test case 8 are given in the second column of Table 3. Then, the algorithm sorts the test cases in the decreasing order of their total weights Wt(t). If there exist some test cases ti and tj such that Wt(ti) = Wt(tj), then the algorithm sorts ti and tj based on their critical fault-prone weights Wtc(ti) and Wtc(tj). If for some test cases Wtc(ti) = Wtc(tj), then ti and tj are sorted based on their moderate fault-prone weights Wtm(ti) and Wtm(tj). If, again, Wtm(ti) = Wtm(tj), then the test cases are sorted by their weak fault-prone weights Wtw(ti) and Wtw(tj). In the very unlikely case that the weak fault-prone weights are still identical, that is, Wtw(ti) = Wtw(tj), the test cases are given equal priority. The prioritized order of the test cases 6-10 based on their respective weights is obtained as {7, 8, 9, 10, 6}.

Complexity Analysis of the Algorithms. The complexity analysis of the proposed algorithms is given as follows.

Space Complexity. Let the computed slice represented as the ASG have n nodes. Each node in the ASG corresponds to a statement of the computed slice, along with the actual and formal arguments present. Hence, the space requirement is O(n). Each node may have dependences on other nodes. These dependences on other nodes are represented as edges. Since each node can be dependent on at most (n − 1) other nodes, the space requirement for the edges is O(n²). Hence, the total space requirement for the algorithm is O(n² + n) ≡ O(n²).

Time Complexity.
Let V be the set of nodes in the ASG, with |V| = n. To compute the inflow to the input node, each node is traversed only once, so the time complexity is O(n). If the time spent in each recursive call is ignored, then each node u can be processed in O(1 + |pred[u]|), where pred[u] represents the set of predecessor nodes of u. If each node has every other node in the graph as its predecessor node, then each node has (n − 1) predecessor nodes. So, the time complexity to process each node is O(1 + (n − 1)) ≈ O(n). Similarly, to compute the outflow from the input node, the time complexity is O(n). Then, the total time required to compute the coupling values of all the n nodes is O(n²). Let m, c, and p be the numbers of method nodes, class nodes, and package nodes, respectively, whose ACC values need to be updated. If each method node has k member nodes, then the time required to update the method nodes is O(mk). Since m and k are small bounded positive integers, this is bounded by O(n²). Similarly, if each class node and each package node has k member nodes, then the respective time complexities for the class nodes and package nodes are O(ck) and O(pk), which are likewise bounded by O(n²). As there are n nodes with ACC values, the time required to assign a weight to each of the nodes depending on its respective ACC value is O(n). Therefore, the worst-case run-time of the findWACC algorithm is O(n² + n² + n² + n² + n) ≡ O(n²). Let t be the number of test cases to be prioritized in the given test suite T. Suppose a test case covers at most v nodes. Let vc, vm, and vw be the numbers of critical, moderate, and weak fault-prone nodes, respectively, covered by a test case, such that v = vc + vm + vw. So, the time complexity to compute the weight of each test case is O(vc + vm + vw) ≡ O(v). As a result, the total time complexity to compute the weights of the t test cases in the given test suite is O(tv). Assuming t ≡ n and v ≡ n, the time complexity to compute the weights is O(n²). The time complexity to sort the t ≡ n test cases is O(n²). Therefore, the worst-case run-time of the H-PTCACC algorithm is O(n² + n²) ≡ O(n²).
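To make Definition 2 and the weight assignment of findWACC more concrete, the following is a minimal Java sketch under our own assumptions; it is not the authors' implementation, and the integer node identifiers and map-based graph layout are hypothetical. Only the ACC formula of eq. (6) and the thresholds 0.7/0.6 with weights 3/2/1 are taken from the text above.

import java.util.*;

public class AccSketch {

    // deps.get(x) = Inflow(x): the set of nodes on which x depends.
    static Map<Integer, Double> computeAcc(Map<Integer, Set<Integer>> deps) {
        Map<Integer, Double> acc = new HashMap<>();
        int total = deps.size();                          // |V|: number of affected nodes in the ASG
        for (Integer x : deps.keySet()) {
            Set<Integer> d = new HashSet<>(deps.get(x));  // start with Inflow(x)
            for (Map.Entry<Integer, Set<Integer>> e : deps.entrySet()) {
                if (e.getValue().contains(x)) {
                    d.add(e.getKey());                    // add Outflow(x): nodes that depend on x
                }
            }
            d.remove(x);                                  // the node under consideration is excluded
            acc.put(x, total > 1 ? (double) d.size() / (total - 1) : 0.0);   // eq. (6)
        }
        return acc;
    }

    // Weight assignment used after clustering: 3 = critical, 2 = moderate, 1 = weak fault association.
    static int weight(double accValue) {
        if (accValue >= 0.7) return 3;
        if (accValue >= 0.6) return 2;
        return 1;
    }
}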
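Likewise, the weighting and sorting step of Algorithm 3 (H-PTCACC) can be sketched as follows; the data structures are again hypothetical, while the tie-breaking order (total, then critical, then moderate, then weak weight) follows the description given above.

import java.util.*;

public class PrioritizationSketch {

    // coverage.get(t) = set of ASG nodes covered by test case t;
    // nodeWeight.get(n) = weight 3/2/1 assigned to node n from its ACC cluster.
    static List<Integer> prioritize(Map<Integer, Set<Integer>> coverage,
                                    Map<Integer, Integer> nodeWeight) {
        Map<Integer, int[]> w = new HashMap<>();          // per test case: {Wt, Wtc, Wtm, Wtw}
        for (Map.Entry<Integer, Set<Integer>> e : coverage.entrySet()) {
            int wtc = 0, wtm = 0, wtw = 0;
            for (Integer node : e.getValue()) {
                int nw = nodeWeight.getOrDefault(node, 1);
                if (nw == 3) wtc += nw;
                else if (nw == 2) wtm += nw;
                else wtw += nw;
            }
            w.put(e.getKey(), new int[] { wtc + wtm + wtw, wtc, wtm, wtw });
        }
        List<Integer> order = new ArrayList<>(coverage.keySet());
        // Decreasing total weight; ties broken by critical, then moderate, then weak weight.
        order.sort((a, b) -> {
            int[] wa = w.get(a), wb = w.get(b);
            for (int i = 0; i < 4; i++) {
                if (wa[i] != wb[i]) return Integer.compare(wb[i], wa[i]);
            }
            return 0;                                     // still identical: equal priority
        });
        return order;
    }
}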
Implementation In this section, we briefly describe the implementation of our work.We implemented our code and all the algorithms using Java and Eclipse v3.4 IDE on a standard Windows 7 desktop.The proposed approach of change impact analysis is completely based on the intermediate graph of the modified program.The identification of the dependences to construct the intermediate graph follows the build-on-build approach; that is, we use the existing APIs and tools to build the graph instead of developing the source code parser from scratch.Source code instrumentation and generation of the intermediate graph are implemented by using XPath parser on srcML (SouRce Code Markup Language) representation of the input Java program.Thus, srcML is the XML (eXtended Markup Language) representation of the input Java program.The input program is converted to srcML using src2srcml tool.This srcML representation is then used to extract the dependences between program parts by using the XPath parser.The details of the program conversion and fact extraction process can be referred to in [26,35].Many other APIs and tools (such as Document Object Model (DOM) and Simple API for XML (SAX)) can be used to extract facts from the srcML representation.In this paper, the fact and dependence extraction is done using XPath.XPath is a language support used by XSLT (extensible stylesheet language) parser [36] to address specific part(s) of the entire XML document.The choice of using XPath is because of its simplicity and easy extraction by direct tracing to the location of the information.This also works on both visioXML and srcML formats of XML.The XPath expression "U function [name = "getArea"]," directly traces to the function definition with the name "getArea."The source code is first instrumented and then dependences in the program are identified and extracted into the program dictionary to construct the intermediate graph.The modified statement (instrumented number) is taken as input along with the intermediate graph, to slice the affected nodes.Most of the dependences at package level, class level, and method level are extracted from the Imagix4D XML data.Imagix4D is a static analysis tool that gives the graphical representation of most of these dependences.The statement level dependences such as control dependence and data dependence [35] are extracted from the srcML representation of the program.The program dictionary stores the following information: A change set is maintained that refers to the set of concurrent changes carried out on the program.-means algorithm is implemented in Matlab for clustering the coupling value. Experimental Program Structure. To implement our technique and show its effectiveness, we have taken total fifteen programs of different specifications as shown in Table 5.Out of these fifteen programs, ten benchmark programs (Stack, Sorting, BST, CrC, DLL, Elevator spl, Email spl, GPL spl, Jtopas, and Nanoxml) are taken from Software-Artifact Infrastructure Repository (SIR) [37] and other five programs are developed as academic assignments.These smaller programs are chosen to ascertain the correctness and accuracy of the approach, keeping in mind that they represent a variety of Java features and applications, the test cases are available and otherwise easily developed, and coverage information can be computed. 
The smallest program has 54 LOC, and the largest program has 7646 LOC. The total LOC for all the fifteen programs is 19369, and the average LOC per program is 1291. The total number of classes in all the fifteen programs is 185, with an average of 12 classes per program. Our example program in Algorithm 1 has the smallest number of classes, and GPL spl has the highest with 111 classes. The total number of methods in all the programs is 2048, with an average of 137 methods per program. We have constructed a total of 150 ASGs for all the programs. The smallest ASG has 33 nodes, and the largest has 5233 nodes. The total number of affected nodes in all the fifteen programs is 28452, and the average number of nodes affected per change made to the programs is 152. Similarly, the total number of test cases considered for all the programs is 295, with a mean of 20 test cases per program. Only those test cases that had a coverage value of more than 90% were chosen for each of the experimental programs. The coverage of the test cases was found using JaBUTi, a coverage analysis tool for Java programs. The total number of test cases selected for prioritization using our approach for all the fifteen programs is 131. The smallest number of selected test cases for prioritization is 5, for our example program in Algorithm 1, and the highest is 14, for the GPL spl program.

Mutation Analysis. To generate the mutants for the input program, we used an Eclipse plugin of MuJava known as MuClipse [38]. Fault mutants are considered to be good representatives of real faults [37, 39, 40]. MuClipse supports both the traditional and the object-oriented operators for mutation analysis. Table 6 gives an overview of the mutation operators considered in the experimental study. A brief description is given for every operator in Table 6. The first five operators are the traditional operators. The remaining 23 operators relate to faults in object-oriented programs. Of these, JTD, JSC, JID, and JDC are specific to Java features that are not available in all object-oriented languages. Apart from this, there are some other operators, such as EOA, EOC, EAM, and EMM, that reflect the typical coding mistakes common during development of object-oriented software. The mutant generator generates the mutants for the sliced program (representing the affected program parts) according to the operators selected by the testers. A very large number of mutants is generated. The location of these mutants in the source code is visualized through the mutant viewer. It allows a tester to select an appropriate number of mutants and design test cases to kill the mutants. As the number of generated mutants is too large, we randomly selected a smaller number of mutants for our experimental programs. This process was repeated 10 times, and the rate of fault detection for the prioritized test suite was computed. The average number of mutants selected for every program is shown in Table 5. The test cases are written in a specific format such that each test case takes the form of invoking a method in the class under test. The test method has no parameters and returns the result in the form of a string. The mutant is said to be killed if the obtained output does not match the output of the original program. The test cases for the input program are generated using the JUnit Eclipse plugin, as the JUnit test cases closely match the required format. The total number of fault mutants for all the fifteen programs is 514, and the average number of mutants per program is 34.

Results.
Figure 5 shows the boxplots of the results of our mutation analysis for all the experimental programs.Figure 5(a) shows the presence of mutants in percentage in the affected parts of the programs.The presence of mutants in the affected parts of the programs ranges from a minimum of 12% (DLL program) to a maximum of 94% (Sorting program).The affected program parts in five programs have more than 90% of the mutants and four programs have little more than 10% mutants.The result shows that an average of 47% of mutants are scattered in the affected program parts of the sample programs.Figure 5(b) shows the percentage of mutants killed in each of the experimental programs.The percentage of mutants killed by the prioritized test cases varies from 70% to 95%.The average percentage of mutants killed by the prioritized test suite is 85%.This shows that our prioritized test cases are efficient in revealing the faults. The average percentage of affected nodes covered by the prioritized test cases using the approach of Panigrahi and Mall and our approach is shown in Figures 6 and 7, respectively, for the experimental program given in Algorithm 1. From Figures 6 and 7, it may be observed that the average Advances in Software Engineering percentage of nodes covered (APNC) using the approach of Panigrahi and Mall [18] is 77.2%, whereas the APNC value using our approach is 80.6%.Thus, there is an increase of 3.4% in APNC measure by our approach.Hence, our approach detects faults better than the approach of Panigrahi and Mall [18] as our approach covers more number of fault-prone nodes.We evaluated the effectiveness of our approach by using APFD metric.We named Panigrahi and Mall approach [18] as Affected Node Coverage (ANC) and our approach as Fault-Prone Affected Node Coverage (FPANC) in Figure 8.The comparison of APFD values for these fifteen different programs obtained using ANC and FPANC approaches is shown in Figure 8.The results show that our FPANC approach achieves approximately 8% increase in the APFD metric value over ANC approach. The experimental results show that the performance of our approach varies significantly with program attributes, change attributes, test suite characteristics, and their interaction.To assume that a higher APFD implies a better technique, independent of cost factors, is an oversimplification that may lead to inaccurate choices among prioritization techniques.For a given testing scenario, cost models for prioritization can be used to determine the amount of difference in APFD that may yield desirable practical benefits, by associating APFD differences with measurable attributes such as prioritization time.A prioritization technique would be acceptable provided the time taken is within acceptable limits, which also reflects the cost of retesting.Korel et al. 
Korel et al. [41] have also focused on a short execution time to decrease the overhead of the prioritization process. However, the acceptable time limit greatly depends upon the testing time available to the tester. An empirical analysis of the prioritization time is outside the scope of this paper and is kept for our future work. We have reported the prioritization time of our approach to indicate the time taken to prioritize the test cases when the precomputed test coverage information and the ASG are available to the tester. The last column of Table 5 shows the time taken for prioritizing the selected test cases. The prioritization time varies from a minimum of 1.3 seconds to a maximum of 3.87 seconds for the experimental programs. The total time taken to prioritize the test cases of all the programs is 35.48 seconds, and the average time for prioritizing the test cases is 2.4 seconds. The prioritization time includes the time for computing the weights of the test cases and the time taken to order the test cases in decreasing order of their weights.

Comparison with Related Work

In this section, we give a comparative analysis of our work with some other related works. Elbaum et al. [42] performed an empirical investigation to find out the testing scenarios in which a particular prioritization approach will prove to be efficient. They analyzed the rate of fault detection that resulted from several prioritization techniques such as total function coverage, additional function coverage, total binary-diff function coverage, and additional binary-diff function coverage. The authors considered eight C programs for their experimentation. They used the documentation on the programs, and the parameters and special effects that they determined, to construct a suite of test cases that exercises each parameter, special effect, and erroneous condition affecting program behavior. They then augmented those test suites with additional test cases to increase code coverage at the statement level. The regression fault analysis was done on faults inserted by graduate and undergraduate students with more than two years of coding experience. The experimental results show that the performance of test case prioritization techniques varies significantly with program attributes, change attributes, test suite characteristics, and their interaction. Our results also confirm similar findings. However, our approach concerns Java programs. We have considered the dependencies caused by the object-oriented features in our proposed intermediate graph. Our approach targets the coverage of those affected nodes that have a high potential of exposing faults; hence, it is more change-based than the approach in [42].
Korel et al. [41] proposed a model-based test prioritization approach. The approach is based on the assumption that execution of the model is inexpensive compared to execution of the system; therefore, the overhead associated with test prioritization is relatively small. This approach is based on EFSM system models. The original EFSM model is compared with the modified EFSM model to identify the changes. After the changes are identified, the EFSM model is executed with the test cases to collect different kinds of information that are used for prioritization. The authors propose two types of prioritization: selective test prioritization and model dependence-based test prioritization. The selective test prioritization approach assigns higher priority to the test cases that execute the modified transitions. The model dependence-based test prioritization mechanism carries out dependency analysis between the modified transitions and other parts of the model and uses this information to assign higher priorities to the test cases. EFSM models contain two types of dependences: control and data dependences. The results show that model dependence-based test prioritization (considering only these two types of dependences) gives an improvement in the effectiveness of test prioritization. The corresponding system for each model was implemented in the C language. In another work, Korel et al. [43] compared the effectiveness of different prioritization heuristics. The results show that model-based prioritization along with heuristic 5 gave the best performance. Heuristic 5 states that each modified transition should have the same opportunity of being executed by the test cases. Korel et al. [44] proposed another prioritization approach using the heuristics discussed in [43]. In this new approach, they considered the changes made to the source code and identified the elements of the model that are related to these changes in order to prioritize the test cases. In our approach, we have considered object-oriented programs in Java. The program is represented by our proposed intermediate graph. The graph is constructed by considering many more dependences that exist among the program parts in addition to control and data dependences, giving a clear visualization of the dependences. Then, we identify the effect of modifications and represent the affected program parts in another graph. Our representation is more adaptable to the frequent changes of the software, and our approach relies on the execution of these affected program parts. Thus, our prioritization approach is based on both the coverage of the affected program parts and the fault-exposing potential of the test cases.
Jeffrey and Gupta [45] proposed a prioritization approach using relevant slices. They also aimed for early detection of faults during the regression testing process. This approach considers the execution of the modified statements for prioritizing the test cases. The assumption is that if any modification results in a faulty output for a test case, then it must affect some computation in the relevant slice of that test case. Therefore, the test case having the higher number of statements is given higher priority, assuming that it has a better potential to expose the faults. However, intuitively, not all statements depending upon some modification will have the same level of fault-proneness. It may so happen that a test case executing a smaller number of statements will detect more faults than another test case that executes more statements. The level of fault-proneness of the statements executed by a test case affects the fault-exposing potential of that test case. Therefore, in our approach, we computed the coupling values of the affected program parts to identify the probable fault-proneness of these program parts. Our approach assigns a higher priority to the test case which executes the maximum number of highly fault-prone statements. Further, unlike our hierarchical decomposition slicing approach, relevant slicing depends upon the execution trace of the test cases and is proposed to work on C programs. Even though execution-trace-based slicing would result in slices of smaller sizes, the computational overhead is very high. The efficiency of our slicing approach is shown in Table 2. We have also shown the time requirement of our prioritization approach in Table 5.

The performance goal of the prioritization approach proposed by Kayes [11] is based on how quickly the dependences among the faults are identified in the regression testing process. An early detection of the fault dependences would enable faster debugging of the faults. The paper assumes that the knowledge of the fault presence is extracted from the previous executions of the test cases. A fault dependence graph is constructed using this information. However, one major limitation of this approach is that regression testing aims at discovering new faults introduced by the changes made to the software, but the prioritization approach proposed in this paper only enhances the chances of finding the faults which have already been revealed and are present in the fault dependence graph. New faults, if any, cannot be discovered. Further, this approach does not take into account the fault-proneness of the statements. Our approach, in contrast, relies on the dependences of the affected program parts represented as the affected slice graph (ASG), so that error propagation because of the change is better visualized and analyzed. We compute the fault-proneness of the statements by computing their coupling values, as coupling measures are proven to be good indicators of fault-proneness. Thus, our approach has a higher probability of exposing new faults, if any, in the software. Mei et al.
[15] proposed a static prioritization technique to prioritize JUnit test cases. This prioritization technique is independent of the coverage information of the test cases. It works on the analysis of the static call graphs of the JUnit test cases and the program under test to estimate the ability of each test case to achieve code coverage. The test cases are scheduled based on these estimates. The experiments are carried out on 19 versions of four Java programs of considerable size, considering their method- and class-level JUnit test cases. The heuristic used to prioritize the test cases in this approach is to cover system components (in terms of total components covered or components newly covered). The coverage of the system components acts as a proxy for evaluating a test case's true potential of exposing faults. If any two test cases carry the same heuristic value, then the approach randomly decides which test case is given higher priority. Though this is a scalable approach, as it works at a coarse granularity level and incurs less computational cost, it suffers from many limitations. Prioritization techniques that work at a finer granularity level give better performance (in terms of fault-exposing potential) compared to techniques that work at a coarse granularity level [42]. This approach ignores the faults caused by many object-oriented features such as inheritance, polymorphism, and dynamic binding, and focuses only on the static call relationships of the methods in the form of a call graph. Static call relationships are more suited to procedure-oriented programs. Interaction and communication between methods in the form of message passing is highly important in object-oriented programs. A single method is invoked by different objects, and the behavior of the method also differs accordingly. Any prioritization technique is efficient if it is based on the characteristics of the program to be tested; therefore, considering the object-oriented features is essential. Java supports encapsulation and provides four access levels (private, public, protected, and default) to access the data members and member methods. Any misinterpretation of these access levels forms a rich source of faults. Java supports a feature named "super" to access the base class constructor from the derived class constructor. This additional dependence between the constructors of the derived class and the base class needs the attention of the testers. Method overriding allows a method in the derived class to have the same function signature as the method in its parent. If invocations of such methods are not resolved correctly, then they can cause serious faults. Another powerful feature and a potential source of faults is variable hiding. It allows declaration of a variable with the same name and type in the derived class as in the base class and allows both variables to reside in the derived class. A problem arises when an incorrect variable is accessed. Inheritance is a powerful feature, but sometimes unintentional misuse of this feature can result in serious faults. Polymorphism in Java exists for both attributes and methods, and both use dynamic binding. A reference of a class type can access an attribute or method of its subclass type. The subclass object can also access the same attributes and methods. These attributes and methods behave differently depending upon the kind of object that refers to them. Such polymorphic dependences, if not resolved, can cause faults. Interested readers are requested to refer to [46-49] for more
details on the kinds of faults introduced by the misuse of object-oriented features. Therefore, any prioritization technique with a performance goal of revealing more faults must consider the object-oriented features, as they can induce many kinds of faults in the system. Our approach considers all the object-oriented features in the form of the intermediate graph. This intermediate graph is constructed by identifying the dependences that can exist among various program parts, as given in [32]. Our approach works at a finer granularity level and therefore may not be as scalable as [15], but it has better fault-exposing potential.

Fang et al. [14] have proposed a similarity-based prioritization technique. The authors have taken five Java programs from the Software-artifact Infrastructure Repository (SIR) [37] to validate their approach. The prioritization process is based on the ordered sequences of the program entities. They propose two algorithms: farthest-first ordered sequence (FOS) and greed-aided clustering ordered sequence (GOS). The FOS approach first selects the test case having the largest statement coverage. The next test case selected is the one that is farthest in distance from the already selected test cases. It computes two types of distances: a pairwise distance between the test cases and the distance between a candidate test case and the already selected ones. The GOS approach consists of clusters of test cases in which, initially, each cluster consists of only one test case. Then the clusters are merged depending upon the minimum distance between any two clusters. This process of merging the clusters is repeated until the size of the cluster set is less than some given threshold. Then, the algorithm iteratively chooses one test case from each cluster and adds it to the prioritized test suite until all the clusters are empty. The experimental results in this study show that statement coverage is most efficient and preferred for prioritization. When the size of the test suite is large, additional measures are taken to reduce the cost of prioritization. This approach gives equal importance to all the test cases, assuming that all the test cases have an equal potential of exposing the faults. Intuitively, a test case executing a smaller number of statements can expose more faults provided that the covered statements have a high proneness to faults. The approach also does not consider the object-oriented features and the faults generated by these features. Unlike Fang et al. [14], we consider the fault-inducing capability of the object-oriented features, based on which we detect the affected program parts. We propose to prioritize a set of change-based selected test cases that are relevant to validate the change under regression testing. We compute the fault-proneness of the affected statements and then prioritize the test cases based on the coverage of these high fault-prone statements (represented as nodes in our proposed graph).

Lou et al. [16] proposed a mutation-based prioritization technique. In this approach, they compared two versions of the same software to find the modification. Then, they generate the mutants only for the modified code. They selected for prioritization only those test cases of the original version that worked on the new version of the software. The test case that killed more mutants was given higher priority. The authors used a mutation generation tool named Javalanche. Unlike our approach, Lou et al.
[16] do not take into consideration the object-oriented features and the faults likely to occur because of these features. Their study is also silent on the type of mutation operators (faults) considered for the experimentation. Like Lou et al. [16], we generate mutants only for the sliced program (representing the affected program parts). However, we used MuClipse (an Eclipse version of MuJava) to generate the mutation faults. We use the coupling measure of the affected program parts as a surrogate to imply fault-proneness. Our hypothesis assumes that the test cases that execute the nodes with high coupling values have a higher chance of detecting faults early during regression testing. We used mutation analysis only to validate our hypothesis.

The detailed survey conducted on the available coverage-based prioritization techniques [11, 14-16, 41-45] reveals that these techniques have not considered the object-oriented features. The presence of many faults arising due to different object-oriented features is inherent to object-oriented programs and hence must be considered. Therefore, we find that the approaches contributed by Panigrahi and Mall [17, 18] relate most closely to our approach for an experimental comparison. Panigrahi and Mall proposed a version-specific prioritization technique [17] to prioritize the test cases of object-oriented programs. Their technique prioritizes the selected regression test cases. The test cases are prioritized based on the coverage of affected nodes of an intermediate graph model of the program under consideration. The affected nodes are determined from the dependences arising on account of the object relations, in addition to the data and control dependences. The effectiveness of their approach is shown in the form of the improved APFD measure achieved for the test cases. In another work, Panigrahi and Mall [18] improved their earlier work [17] by achieving a better APFD value. In this technique, the affected nodes are initially assigned a weight of 1. The weight is decreased by 0.5 whenever that node is covered by a previous execution of the test cases. In both approaches [17, 18], they have assumed that all the test cases have equal cost and that all faults have the same severity. The assumption is also that all the affected nodes have a uniform distribution of faults. As a result, a test case executing more affected nodes will detect more faults and, therefore, has a higher priority. The average percentage of affected nodes covered by this approach is shown in Figure 7. Unlike the approach in [18], which is based on node coverage only, our proposed approach is based on the fact that some nodes are more fault-prone than other nodes. We used an intermediate graph that represents only those nodes that are affected by the modification made to the program to compute the fault-proneness of the nodes. The coupling factor of each node in the ASG is computed to predict its level of fault-proneness. The test cases are then prioritized based on the fault-prone nodes that they execute. Unlike [18], a test case executing more fault-prone nodes has a higher computed weight and gets a higher priority in our approach.
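The weighting scheme just described can be summarised by a short sketch. Here node_weight is assumed to map every affected (ASG) node to a weight derived from its ACC value, and coverage maps each selected test case to the set of affected nodes it executes; both are assumed to come from the coverage analysis, and the strong/moderate/weak tie-breaking used by the approach is omitted.

def prioritize(tests, coverage, node_weight):
    """Order test cases by decreasing total weight of the fault-prone nodes they cover."""
    def weight(test):
        return sum(node_weight.get(node, 0.0) for node in coverage.get(test, ()))
    return sorted(tests, key=weight, reverse=True)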
Threats to Validity. It is natural for any newly proposed work to be associated with some threats to its validity, and that is the case for this work as well. Our approach is capable of measuring the coupling value of a class in the presence of many object-oriented features such as inheritance, interfaces, polymorphism, and templates. The coupling of classes in a subclass-superclass relationship can have a different impact on software maintainability and fault-proneness compared to the coupling of classes that are not in such a relationship. Therefore, it is essential to make a distinction between coupling within an inheritance hierarchy and coupling across inheritance hierarchies. Similarly, whether the presence of Java interfaces (which usually do not contain actual implementations) contributes to the coupling measurement or not is a matter of study that is not included in this paper. The impact of inclusion or exclusion of any of the object-oriented features on the coupling measurement has not been empirically investigated in this paper. We believe that detailed empirical research on such relationships and their impact on the proposed coupling analysis is essential, and it is left for future study. Another threat to the validity of this work is that fault prediction can be improved when both coupling and cohesion metrics are considered together [20], but this approach focuses only on the coupling measure. Slicing techniques based on intermediate graphs are always limited by the scalability of the graph for larger programs. This approach is tested to work well with programs having nearly 100,000 (1 lakh) lines of code. For larger programs it may raise some memory issues; however, it will work fine for bigger programs if the graph is restricted to method-level analysis only. The limited size and complexity of the experimental programs are considered a threat to the validity of this approach. Our approach considers only the primary mutants. It does not consider the secondary mutants, which are also important; our mutation analysis may be extended to handle secondary mutants. The use of mutation analysis for the fault manipulation of these programs may not represent the actual fault occurrence in complex industrial programs and hence is considered a threat to this approach. Though the proposed prioritization approach is efficient in detecting faults, it may not be so in terms of time requirement. However, the time requirement is within an acceptable limit if the approach is applied to the test cases selected for regression testing and the coverage information is available. An empirical study of the impact of prioritization time on the choice of prioritization technique would be interesting and may be carried out in the future.
Conclusion and Future Work

In this paper, we proposed a coupling-metric-based technique to improve the effectiveness of test case prioritization in regression testing. Analysis is done to show that the prioritized test cases are more effective in exposing faults early in the regression test cycle. We performed hierarchical decomposition slicing on the intermediate graph of the input program. The affected component coupling (ACC) value of each node of the ASG is calculated as a measure to predict its fault-proneness. In this technique, a weight is assigned to each node of the ASG based on its ACC value. The weight of a test case in a given test suite is then calculated by adding the weights of all the nodes covered by it. The test cases are prioritized based on their coverage of fault-prone affected nodes. Thus, the test case with a higher weight is given higher priority in the test suite. The results show that our FPANC approach achieves approximately an 8% increase in the APFD metric value over the ANC approach. In the future, we aim to prioritize the test cases for more complex object-oriented (OO) programs such as concurrent and distributed OO programs. We would also like to incorporate other coupling measures and metrics to predict the fault-proneness of modules and prioritize the test cases based on their coverage weights. We also aim to compute the cohesion values of the program elements and use them along with their coupling values for better fault prediction analysis and prioritization.

Figure 1: Affected slice graph (ASG) of the example Java program given in Algorithm 1.
Figure 2: The calculated ACC values of different nodes of the ASG in Figure 1 and their weights.
Figure 4: ACC computation of nodes of the ASG in Figure 1. (i) Set of all packages in the program. (ii) Set of all classes in the program. (iii) Set of all methods in the program. (iv) Set of all statements in the program. (v) Sets of each dependence type.
Figure 6: Average percentage of affected nodes covered by the prioritized test cases using the approach of Panigrahi and Mall [18].
Figure 7: Average percentage of fault-prone affected nodes covered by the prioritized test cases using our approach.
Figure 8: Comparison of APFD values for different programs.
Table 1: A sample test case distribution and the faults detected by them.
In an object-oriented program, coupling can exist between any two components due to the message passing, polymorphism, and inheritance mechanisms of object-oriented programs. These components include packages, classes, methods, and statements. Two statements s1 and s2 are said to be coupled if s1 has some dependence (control, data, or type dependence) on s2. Methods in an object-oriented program belong to the constituent classes. It implies that a method is coupled either with a method in the same class or with a method in a different class. If the methods of any two classes are coupled, then the corresponding classes are said to be coupled.
Table 3: Test case coverage of fault-prone affected nodes.
Table 4: Distribution of test case weights on the basis of fault-prone impact.
If the strong weights of any two test cases are also the same, then the moderate weights are taken into consideration for prioritization. If the moderate weights of the test cases are again the same, then the weak weights are considered for prioritization. If the weak weights are still the same for any two test cases, then both test cases are given equal priority. The last column in Table 4 shows this final case.
Table 5: Results obtained for regression testing of the different programs.
Table 6: Overview of the mutation operators.
The Stemness Gene Mex3A Is a Key Regulator of Neuroblast Proliferation During Neurogenesis

Mex3A is an RNA-binding protein that can also act as an E3 ubiquitin ligase to control gene expression at the post-transcriptional level. In intestinal adult stem cells, MEX3A is required for cell self-renewal and, when overexpressed, MEX3A can contribute to support the proliferation of different cancer cell types. In a completely different context, we found mex3A among the genes expressed in neurogenic niches of the embryonic and adult fish brain and, notably, its expression was downregulated during brain aging. The role of mex3A during embryonic and adult neurogenesis in tetrapods is still unknown. Here, we showed that mex3A is expressed in the proliferative region of the developing brain in both Xenopus and mouse embryos. Using gain and loss of gene function approaches, we showed that, in Xenopus embryos, mex3A is required for neuroblast proliferation and its depletion reduced the neuroblast pool, leading to microcephaly. The tissue-specific overexpression of mex3A in the developing neural plate enhanced the expression of sox2 and msi-1, keeping neuroblasts in a proliferative state. It is now clear that the stemness property of mex3A, already demonstrated in adult intestinal stem cells and cancer cells, is a key feature of mex3A also in the developing brain, opening new lines of investigation to better understand its role during brain aging and brain cancer development.

INTRODUCTION

In developmental processes, spatial and temporal control of gene expression occurs at transcriptional, post-transcriptional and post-translational levels. More than 1000 genes in the eukaryotic genome encode multifunctional RNA-binding proteins (RBPs), and 50% of these RBPs are expressed in the brain, where they regulate all levels of RNA biogenesis (Bryant and Yazdani, 2016). The neural-specific RBPs play a key role in post-transcriptional control, regulating RNA splicing, transport, surveillance, decay and translation (Glisovic et al., 2008). By RNA-seq analysis we identified a set of evolutionarily conserved, age-regulated genes expressed in adult neural stem cell (aNSC) niches in the short-lived fish Nothobranchius furzeri, a well-established animal model for aging studies (Baumgart et al., 2014). Among them, the RNA-binding protein mex3A emerged as a putative new neurogenic regulator, down-regulated with age and expressed in neurogenic regions of the zebrafish embryo (Baumgart et al., 2014). This RNA-binding protein belongs to the MEX3 family, and vertebrates have four distinct mex-3 orthologs (mex-3A-D). All four proteins predominantly accumulate in the cytoplasm and shuttle between the cytoplasm and the nucleus via a CRM1-dependent export pathway (Fornerod et al., 1997). MEX3 genes encode proteins containing two heterogeneous nuclear ribonucleoprotein K homology (KH) domains and one carboxy-terminal RING finger module with E3 ubiquitin ligase activity (Draper et al., 1996; Buchet-Poyau et al., 2007), sharing the highest identity with Caenorhabditis elegans mex-3, a translational repressor involved in the maintenance of germline pluripotency (Ciosk et al., 2006; Hwang and Rose, 2010). The role of mex3 genes in mammals is poorly understood, though several studies suggest their putative involvement in self-renewal/differentiation decisions with implications for stem cell and cancer biology.
In particular, human MEX3A was shown to play a key function in the gastrointestinal context by impairing intestinal differentiation and simultaneously promoting an increased expression of intestinal stem cell markers such as LGR5, BMI1, and MSI1 (Pereira et al., 2013, 2020). In mice, mex3A is expressed in the crypt base and labels a slowly cycling subpopulation of the Lgr5+ intestinal stem cell population (Barriga et al., 2017; Chatterji and Rustgi, 2018). MEX3A is overexpressed in pancreatic ductal adenocarcinoma (Wang et al., 2020) and strongly up-regulated in glioblastoma samples (Bufalieri et al., 2020). Despite this evidence, to our knowledge, there are no data available regarding the putative role of mex3A during embryonic and adult neurogenesis.

Here we used clawed frog Xenopus laevis embryos to characterize the biological function of mex3A in the developing central nervous system (CNS). Xenopus embryos gave us the unique opportunity to perform functional experiments in a tissue-specific manner without interfering with the normal development of all other tissues (Vitobello et al., 2011; Naef et al., 2018). We showed that mex3A is expressed in proliferative regions of the Xenopus and mouse developing brain, including the eye, the brain and neural crest cells. The results from gain and loss of gene function experiments suggested that mex3A plays a key role in the primary mechanisms of proliferation of neural precursors, linking cell division and neuronal differentiation during embryonic neurogenesis.

Molecular Cloning of mex3A

The available Expressed Sequence Tag (EST) clone of X. laevis mex3A (ID_6638558, GenBank BC_130195) lacks the coding region at the 5′-end. To isolate the 5′-end coding sequence, we used the SMART™ RACE cDNA Amplification kit (Clontech). The final PCR product was purified and sequenced. We obtained the full-length coding sequence of X. laevis mex3A, submitted to the National Center for Biotechnology Information (NCBI) (ID_2213511) (GenBank: MK_800014). A fragment of 975 bp of mouse mex3a cDNA (GenBank NM_001029890) was amplified and cloned into the pGEM-T vector (Promega). The full-length cDNA sequence of zebrafish mex3a (GenBank XM_009292667) was amplified and cloned into the pCS2+ vector.

Embryo Collection

Animal handling and care were performed in strict compliance with protocols approved by the Italian Ministry of Public Health and the local Ethical Committee of the University of Pisa (authorization n. 99/2012-A, 19.04.2012). X. laevis embryos were obtained by hormone-induced laying and in vitro fertilization, then reared in 0.1× Marc's Modified Ringer's Solution (MMR 1×: 0.1 M NaCl, 2 mM KCl, 1 mM MgCl2, 5 mM HEPES pH 7.5) until the desired stage according to Nieuwkoop and Faber (Nieuwkoop, 1956).

Morpholino Oligonucleotides, mRNA in vitro Transcription and Microinjections

All morpholinos (MOs) were obtained from Gene Tools, LLC (Philomath, OR, United States). The injections were performed into one side of the embryo, in a dorsal blastomere, at the four-cell stage, to target the neural tissue. The sequences of the MOs used were: mex3A MO1, 5′-CAGCAGGCTCGGCATGGCTAATAAC-3′; mex3A MO2, 5′-CATTCCTCTCCATCATCCCTGAGAG-3′; Control Standard Morpholino, 5′-CCTCTTACCTCAGTTACAATTTATA-3′. Microinjections were performed as described previously (Corsinovi et al., 2019). We injected 12 ng per embryo of the experimental and control morpholinos.
To select properly injected embryos, we co-injected the MOs with 250 pg of gfp mRNA and proceeded with the analysis of the embryos that, at neurula stage (stage 15), showed specific fluorescence in the neural plate of the injected side. The un-injected side represented an internal control in each embryo. We prepared capped mex3A and gfp mRNAs using the SP6 mMessage Machine in vitro transcription kit (Ambion), according to the manufacturer's instructions. For rescue experiments, we co-injected 12 ng of mex3A MO2 and 600 ng of full-length Xenopus or zebrafish mex3A mRNA.

In situ Hybridization on Frozen Tissue Sections (ISH)

For ISH on cryosections, Xenopus embryos were fixed in 4% paraformaldehyde in PBS, cryoprotected with 30% sucrose in PBS and embedded in Tissue-Tek OCT compound (Sakura, 4583). We prepared 12 µm cryosections, and ISH was performed according to Casini et al. (2012). Mouse embryo sections were a kind gift of Prof. Massimo Pasqualetti and were prepared as described in Pelosi et al. (2014). ISH on mouse embryo cryosections at 18 dpc was performed according to Borello et al. (2014).

Measurement of Brain Areas in Xenopus and Statistical Analysis

To determine the brain area, embryos at stage 41 (swimming larvae) were anesthetized with buffered tricaine methane sulfonate (MS222) and then fixed in 4% paraformaldehyde in PBS. Brains were isolated using fine forceps, and the areas of the un-injected and injected sides were calculated using the ImageJ64 software. P-values were calculated by paired Student's t-test using GraphPad Prism 6 software (San Diego, CA, United States). Statistical significance was indicated as: * p ≤ 0.05, ** p ≤ 0.01, *** p ≤ 0.001, **** p ≤ 0.0001.

Quantitative Reverse Transcription Polymerase Chain Reaction and Statistical Analysis

Total RNA was extracted from 30 Xenopus morphants at neurula stage (stage 18) using the NucleoSpin® RNA kit (Macherey-Nagel) according to the manufacturer's instructions. cDNA was prepared using the iScript™ cDNA Synthesis Kit (Bio-Rad), and quantitative real-time PCR was performed using the GoTaq® qPCR master mix (Promega) according to the manufacturer's instructions. Relative expression levels of each gene were calculated using the 2^−ΔΔCt method (Livak and Schmittgen, 2001). The results obtained in three independent experiments were normalized to the expression of the housekeeping gene, gapdh. The mean of the Control-Morpholino samples was set at 1. Statistical analysis for qRT-PCR experiments was performed by Student's t-test using GraphPad Prism 6 software (San Diego, CA, United States). Statistical significance was indicated as: * p ≤ 0.05. The following primers were used to perform qRT-PCR: pcna (Huyck et al., 2015); N-tubulin and sox2 (De Robertis's lab, web site: http://www.hhmi.ucla.edu/derobertis/); elrC (Seo et al., 2007); glyceraldehyde 3-phosphate dehydrogenase (gapdh) (Naef et al., 2018).

Statistical Analysis of Embryo Phenotype

Statistical analysis for the phenotypes observed after the injection of the Control-Morpholino or of mex3A-MO2 was performed by Student's t-test using GraphPad Prism 6 software (San Diego, CA, United States). We compared the percentage of embryos with altered marker gene expression between Control-Morpholino-injected embryos and mex3A-MO2-injected embryos. Statistical significance was indicated as in the preceding analyses.
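To make the normalisation explicit, the following Python sketch reproduces the 2^−ΔΔCt calculation of Livak and Schmittgen referenced above; the Ct values in the example are hypothetical, with gapdh as the housekeeping gene and the Control-Morpholino sample as the calibrator whose mean expression is set to 1.

def relative_expression(ct_target, ct_gapdh, ct_target_cal, ct_gapdh_cal):
    """Fold change of a target gene relative to the calibrator (2^-ddCt)."""
    d_ct_sample = ct_target - ct_gapdh              # normalise sample to gapdh
    d_ct_calibrator = ct_target_cal - ct_gapdh_cal  # normalise calibrator to gapdh
    dd_ct = d_ct_sample - d_ct_calibrator
    return 2.0 ** (-dd_ct)

# Hypothetical example: sox2 in a mex3A morphant vs. Control-MO embryos
fold_change = relative_expression(24.1, 18.0, 23.2, 18.1)   # 0.5, i.e. down-regulated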
Xenopus laevis Brain

We compared the X. laevis mex3A predicted protein sequence with the zebrafish, mouse and human homologs, revealing a high degree of similarity, especially in the RNA-binding domains (96%) and the C-terminal RING finger domain with E3 ligase activity (95%), suggesting a conserved function of mex3A in vertebrates (Supplementary Figure 1). As a prerequisite for functional studies, we first analyzed the spatial expression pattern of mex3A during early embryogenesis. Whole-mount in situ hybridization (WISH) revealed that mex3A is already present at the early cleavage stage (four-cell stage), before the midblastula transition, suggesting that it is maternally supplied (Figure 1A). At mid-neurula stage, mex3A could be detected in the neural plate, in the presumptive eye territory, in the pre-placodal territory and in cranial neural crest cells (NCC) (Figure 1B). At later stages of development, mex3A mRNA is present in the eye, in the CNS and in NCC migrated into the branchial arches (Figures 1C,D). In situ hybridization on cryosections at stage 41 showed mex3A expression in brain areas with high proliferative activity such as the ciliary marginal zone (CMZ) of the retina, the ventricular zone of the midbrain and the subventricular zone of the hindbrain (Figures 1E,F).

Mex3A Supports Neuroblast Proliferative State

Since the expression of mex3A suggested a role during primary neurogenesis, we overexpressed mex3A in X. laevis embryos to evaluate its possible impact on primary neuron formation. For all experiments described below, mex3A mRNA injections were done unilaterally into the animal region of one dorsal blastomere at the four-cell stage to target the neural tissue. The un-injected side served as an internal injection control, and the co-injection of gfp mRNA was used to select and analyze only embryos in which the transcripts correctly localized in the neural plate (Figure 2A). At neurula stage (stage 18), WISH experiments revealed that the overexpression of mex3A altered the expression domains of sox2 and musashi-1 (msi-1). The expression domains of sox2, a neuroblast marker (Mizuseki et al., 1998), and msi-1, commonly considered a specific marker for stem/progenitor cells (Okano et al., 2005), were markedly expanded in the injected side of the embryo as compared to the un-injected side (Figures 2B,C). Furthermore, we examined the expression of elrC, a marker of cells undergoing a transition from proliferation to differentiation (Carruthers et al., 2003), at neurula and tailbud stages. The expression domain of elrC appeared dramatically down-regulated in the injected side of the embryos compared to the un-injected side (Figures 2D,E). Given these preliminary results, which correlate well with the function of human MEX3A as a positive regulator of cell cycle progression of intestinal precursors (Pereira et al., 2013; Barriga et al., 2017), we hypothesized that mex3A might be involved in cell proliferation also in the neural context. To elucidate this possibility, we analyzed the number of mitotically active cells in mex3A-overexpressing embryos by immunostaining for mitotic Ser-10-phosphorylated Histone 3 (pH3). We observed a significant increase in mitotic cell number in the injected side of the embryo compared to the control side (Figures 2F,G). These data suggested that mex3A could maintain the proliferative state of neuroblasts, delaying or preventing neuronal differentiation during embryonic neurogenesis.
Mex3A Depletion Impairs Primary Neurogenesis

To study the role of mex3A in the primary neurogenesis context, we also performed gene loss-of-function experiments using a specific morpholino oligo designed to block mRNA translation. However, by analyzing the sequence of the unique mex3A exon, we found that there are two possible translation start codons in frame (Supplementary Figure 1). Because both codons can be used as translation initiation sites, if we block the first translation start site using a specific morpholino oligo, there is the possibility that the second start site could be used to translate a protein identical to the native one except for the first eight amino acids. The presence of a second ATG in frame and in the same position is conserved in the vertebrate orthologs of mex3A (Supplementary Figure 1). We designed two specific morpholinos to inject them individually or in combination in the same embryo: morpholino 1 (MO1), designed to block the first ATG, and morpholino 2 (MO2), designed to block the second ATG of the Xenopus mex3A mRNA. Since the injection of MO1 did not generate any type of phenotype, and the combination of MO1 and MO2 increased the mortality rate without any synergistic or additive effect, we used MO2 alone for subsequent analyses (Figure 3A). A standard control morpholino (CoMO) was used to evaluate non-specific embryo responses. By WISH experiments we showed that the expression domain of sox2 was reduced in the mex3A-MO-injected side of the embryo, whereas both the un-injected and CoMO-injected sides were unaffected (Figures 3B,C). These data were confirmed by qRT-PCR analysis, which showed a significant down-regulation of sox2 mRNA in mex3A morphants (Figure 3D). To further verify whether the loss of mex3A function could alter the regulation of neuroblast proliferation, we also examined the mRNA expression of pcna (proliferating cell nuclear antigen) (Strzalka and Ziemienowicz, 2011). Mex3A morphants showed reduced pcna expression, as detected by WISH (Figures 3B,C) and qRT-PCR experiments (Figure 3D). As a consequence of the impairment in the maintenance of the neuronal progenitor pool, we observed that the lateral stripes of the N-tubulin and elrC expression domains, corresponding to the future sensory neurons, appear expanded on the injected side of the embryos compared to the control side and to CoMO-injected embryos (Figures 3E,F). This phenotype might be due to an altered density and/or number of primary neurons. Hence, we performed qRT-PCR analysis, which revealed a significant rise of N-tubulin and elrC mRNA levels in mex3A morphants (Figure 3G). In order to verify the specificity of the mex3A-MO, we designed functional rescue experiments by co-injecting mex3A-MO together with the full-length mex3A mRNA. As the mex3A-MO could target not only the endogenous mex3A but also the in vitro transcribed Xenopus mex3A mRNA, for rescue experiments we cloned the zebrafish mex3A mRNA, which is not recognized by the mex3A-MO (Supplementary Figure 3). We previously showed that zebrafish mex3A is localized in proliferating regions of the developing brain (Baumgart et al., 2014). We further showed that the overexpression of zebrafish mex3A in Xenopus embryos reproduced the same phenotype obtained by the Xenopus mex3A mRNA injection, thus confirming its functional conservation (Supplementary Figure 3). We then analyzed 123 co-injected embryos (mex3A-MO plus zebrafish mex3A mRNA) and observed a restoration of the phenotype at neurula stage (stage 18) (Supplementary Figure 3).
Mex3A Is Required for Anterior Neuronal Development in Xenopus laevis

The analysis of the expression profile of mex3A showed a specific mex3A expression in the anterior neural tissue of Xenopus larvae, including the eye and brain (Figure 1). Therefore, to investigate in more detail the putative biological function of mex3A during anterior neural development, we analyzed embryos at later stages of development. In mex3A morphants at larval stage 41, we observed smaller and deformed eyes with variable penetrance (Figures 4A,B). In contrast, in the control side, as well as in CoMO-injected embryos, the eye was always normal (Figure 4A). To test the specificity of mex3A-MO in inducing the eye phenotype, we performed rescue experiments co-injecting the mex3A-MO with the zebrafish mex3A mRNA, and observed a restoration of the eye phenotype (Figures 4A,B). To better reveal possible alterations in larval brain development, we dissected morphant and control brains from larvae at stage 41 and measured the areas of both brain hemispheres on the injected versus un-injected side. We calculated the brain area as described in Kiem et al., 2017. In comparison to the CoMO hemisphere (Figures 4C,D), the mex3A-depleted hemisphere exhibited a significant size reduction (Figures 4C,D). This phenotype could be due to a decrease in the cell proliferation rate. To examine this hypothesis, we performed pH3 immunohistochemistry (to visualize mitotic cells) on mex3A-depleted embryos at tailbud stage (stage 24). pH3 staining showed a significant reduction in cell proliferation in mex3A morphants compared to the un-injected control side and to the CoMO injection (Figures 4F-I). These results suggested a requirement for mex3A in the control of cell proliferation at both neurula and tailbud stages.

Mex3A Is Expressed in Developing Mouse Brain

The hypothesis that the intestinal stemness-related gene mex3A could be considered a regulator of neuroblast proliferation in the CNS is intriguing, but no data are available on the expression of mex3A in the mammalian CNS. For this reason, we performed a preliminary analysis of mouse Mex3A mRNA distribution in the developing mouse brain. We revealed that at 18 dpc Mex3A mRNA is present in proliferating regions of the mouse embryonic CNS, such as the telencephalic ventricular and sub-ventricular zones and the developing hippocampus.

DISCUSSION

Mex-3 family members are mediators of post-transcriptional regulation in different organisms (Pereira et al., 2013). Several studies highlighted their involvement in different physiological processes, including the maintenance of the balance between stem cell self-renewal and differentiation. In particular, human MEX3A is necessary to post-transcriptionally regulate the levels of CDX2 mRNA, coding for an intestinal transcription factor required in gastrointestinal homeostasis (Pereira et al., 2013). Mex3A appears crucial for the maintenance of the slowly cycling subpopulation of Lgr5+ gut stem cells (Chatterji and Rustgi, 2018), and Lgr5 absence in Mex3A−/− mice leads to growth retardation, postnatal mortality, and severe impairment of intestinal crypt development (Pereira et al., 2020). Recent data showed that MEX3A is up-regulated in glioblastoma specimens (Bufalieri et al., 2020). In glioblastoma cells, MEX3A interacts with the tumor suppressor RIG-I, inducing its ubiquitinylation and proteasome-dependent degradation, thereby supporting tumor growth (Bufalieri et al., 2020).
Although MEX3A has a key role in gastrointestinal homeostasis and tumor progression, its putative role in the neural context is not yet defined. Previously, we showed mex3A expression in aNSC niches in N. furzeri and in proliferating areas of the developing brain in zebrafish embryos (Baumgart et al., 2014). In recent years, single-cell technologies have allowed us to query publicly available datasets and to obtain precious clues on gene expression and possible gene function in different animal models. Transcriptomic analysis of the ventricular-subventricular zone (V-SVZ) of the lateral ventricles of male mice at 2, 6, 18, and 22 months revealed mex3A among the genes that significantly change their expression, being down-regulated, during aging (Apostolopoulou et al., 2017). Benayoun and collaborators included Mex3A among the top genes down-regulated during mouse aging in the olfactory bulbs, another neurogenic niche of the adult brain (Benayoun et al., 2019). These data correlate nicely with our previous observation of an age-related decline of mex3A expression in aNSC niches during N. furzeri brain aging (Baumgart et al., 2014), strongly suggesting a functional conservation of the role of mex3A in brain aging among vertebrates. Despite these suggestive clues, nothing is known about mex3A function in the vertebrate nervous system.

Here we revealed, for the first time, the expression and function of mex3A during early neural development using X. laevis as a model system. We showed that, besides its widely described role in the gastrointestinal context, mex3A is additionally involved in CNS development of tetrapods. Mex3A is expressed in the neural tissue of the early X. laevis embryo, including the eye field and neural crest cells. Mex3A mRNA is localized in areas with high proliferative activity, such as the ciliary marginal zone (CMZ) of the retina, the ventricular zone of the midbrain and the subventricular zone of the hindbrain, strengthening the hypothesis that mex3A could promote proliferation of progenitor cells also in the neural context. In order to verify a possible evolutionary conservation of the mex3A role in the developing CNS, we visualized mouse Mex3A expression in 18 dpc embryos. We confirmed that Mex3A is expressed in proliferative areas of the developing mouse brain, such as the ventricular-subventricular zone of the lateral ventricles and the olfactory bulbs. These data suggested a mex3A involvement in the context of primary neurogenesis conserved among vertebrates.

Gene gain- and loss-of-function approaches in Xenopus revealed that this gene is able to keep neuroblasts in an undifferentiated and proliferative state, increasing the expression of proliferation markers and decreasing the expression of markers such as elrC (huC) and elrD (huD) during neurogenesis. This evidence suggests that mex3A could function as a potential regulator of the proliferation rate of neural progenitor cells, and this hypothesis is also supported by the increased expression of musashi-1 in mex3A-overexpressing embryos. Msi-1 was first reported to be required for the proper development of the neural sensory organ in Drosophila (Nakamura et al., 1994), whereas it is commonly considered a specific marker for stem/progenitor cells in mammals (Kaneko et al., 2000). Msi-1 maintains the stem cell proliferative state by acting as a translational repressor (Ratti et al., 2006). Interestingly, Msi-1 is regulated by Mex3A in mammalian gut cells (Pereira et al., 2013).
In Xenopus, another member of the Mex gene family, mex3B, is expressed during early development and neurogenesis (Takada et al., 2009). Even if the expression patterns of mex3A and mex3B are not overlapping, they both seem to be expressed in the neural plate and then in the neural tube during neurulation. Comparing our data with those obtained by Takada and collaborators, mex3A and mex3B seem to act non-redundantly. The overexpression of mex3B in the neuroectoderm did not affect the expression profile of sox2 (Takada et al., 2009), and the gain or loss of mex3B function suggested an involvement of the gene in antero-posterior patterning of the neural tube (Takada et al., 2009). Our results showed that the overexpression or the knockdown of mex3A did not affect antero-posterior axis formation or the regionalization of the neural tube, supporting the idea that the two genes could act independently and in different time windows during CNS development.

Several neural-specific RNA-binding proteins are key inducers of neuronal proliferation and/or differentiation through the stabilization and/or translational enhancement of target transcripts. Additionally, Mex3A seems to have an important role as a post-translational regulator, also acting as an E3 ubiquitin ligase in glioblastoma cells (Bufalieri et al., 2020). In conclusion, we showed a key role of mex3A as a new post-transcriptional regulator able to influence neuroblast proliferation during neurogenesis. Mex3A gene function is necessary and sufficient to support the expression of sox2 and msi-1, required for neuroblast self-renewal. In light of this, in the future it will be interesting to focus on the possible mex3A targets in neuroblasts and adult neural stem cells to better clarify its role in the development and aging of the CNS, with possible translational implications in brain cancer research.

DATA AVAILABILITY STATEMENT

The datasets presented in this study can be found in online repositories. The names of the repository/repositories and accession number(s) can be found below: https://www.ncbi.nlm.nih.gov/genbank/, MK800014.1.

ETHICS STATEMENT

The animal study was reviewed and approved by the Ministry of Public Health and the local Ethical Committee of the University of Pisa (authorization n. 99/2012-A, 19.04.2012).

AUTHOR CONTRIBUTIONS

VN, MD, and GT performed the Xenopus experiments. DC and UB cloned mouse mex3a and performed ISH on mouse embryo cryosections. VN and RA contributed to the manuscript discussion and writing. VN performed the data analysis. MO contributed to conceptualization, provided necessary financial resources, experimental supervision, data analysis, discussion, and writing. All authors contributed to the article and approved the submitted version.
Analysis of fractality and complexity of the planetary K-index

The objective of this research is to explore the inherent complexities and multifractal properties of the underlying distributions in the daily Planetary K-index time series collected from the NOAA Space Weather Prediction Center. In this article, the non-stationary and nonlinear characteristics of the signal have been explored using the Smoothed Pseudo Wigner-Ville Distribution and Delay Vector Variance algorithms, respectively, while the Recurrence Plot, the 0-1 test, Recurrence Quantification Analysis and correlation dimension analysis have been applied to confirm and measure the chaos in the signal under consideration. Multifractal detrending moving average analysis has been used to evaluate the multifractality and also to recognise the singularities of the signal. The results of these analyses validate the non-stationary and nonlinear characteristics of the Planetary K-index signal, while a significant presence of deterministic chaos in it has also been noticed. It has also been confirmed that the Planetary K-index exhibits multifractal behaviour with positive persistence. The long-range temporal association and the broad probability density function are found to be the primary factors contributing to the multifractal behaviour of the Kp-index.

Introduction

Immense disturbances in the geomagnetic field occur mainly due to the impact of solar storms (like solar flares, solar wind, coronal mass ejections (CME)) on the Earth's magnetosphere. This geomagnetic field perturbation, commonly termed a geomagnetic storm, has a significant impact on the backbone of our modern-day civilisation: power grids, electronic devices, navigation systems, spacecraft operations and global communications [1,2]. A geomagnetic storm (GMS) is the most evident manifestation of magnetospheric phenomena, and it can be conceived of as a series of nonlinear electromagnetic processes propagating from the Earth into the entire magnetospheric zone. The magnitude of geomagnetic storms can be characterised and quantified by the value of the Planetary K-index [3]. The Kp-index is a measure to quantify the fluctuation of the magnetic field of our planet, more specifically of its horizontal component. The K-index is an integer lying between 0 and 9. For a value of less than 5, our planet's magnetic field is considered calm; otherwise, it is a geomagnetic storm. The device used to evaluate the variations of the horizontal component of the geomagnetic field is a magnetometer. The official Kp-index is estimated by taking a weighted average of K-indices from a specified network of geomagnetic observatories, where the K-index is a code that refers to the maximum variation of the horizontal component found on a magnetometer over a three-hour period. The Planetary K-index can be considered a global indicator of geomagnetic activity around the globe. Due to the severe influence of geomagnetic storms on human civilisation, there is a need to learn about their dynamical behaviour. The study of the different characteristics and the complexity of the Planetary K-index signal with an investigative approach may help to infer the actual nature of the geomagnetic storm dynamics. The NOAA Space Weather Prediction Center derives the estimated 3-h Planetary K-index using data from ground-based magnetometers located in different countries. The daily magnitude is calculated from the averages of the eight 3-h Kp-indices per day.
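As a concrete illustration of the convention just stated (Kp below 5 quiet, 5 or above storm-level) and of how a daily value can be obtained from the eight 3-h readings, a small Python sketch with hypothetical values is given below; it is not the operational NOAA procedure.

def daily_kp(three_hour_values):
    """Daily Kp magnitude as the average of the eight 3-hour values."""
    assert len(three_hour_values) == 8, "one Kp value per 3-hour interval"
    return sum(three_hour_values) / 8.0

day = [2, 3, 3, 5, 6, 7, 5, 4]                                # hypothetical 3-h Kp readings
print(daily_kp(day), "storm" if max(day) >= 5 else "quiet")   # 4.375 storm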
Here, regular Kp-index data over the period February 1999 to December 2007 have been taken for analysis. About 88 geomagnetic storm events occurred in solar cycle 23. Only a few of these 88 storms had a significant impact on our Earth's environment. On 4-7 October 2000, there are records of extreme GMSs that had a major effect on the ionosphere. Though the ion temperatures of the terrestrial magnetotail generally remain consistent, it has been noticed that in October 2000 the estimated ion temperature was 2-3 times higher than the average value [4]. Another geomagnetic storm event was noticed on 21 October 2001, and its effect has been explained in the paper by Jordanova et al. [5]. They observed a massive loss of radiation belt electrons into the atmosphere and an electron flux dropout due to outward radial diffusion. A massive ionospheric disturbance was registered for the geomagnetic Sudden Storm Commencement (SSC) that occurred on 29-30 October 2003, which yielded a 'swirling' effect on these days. On 20-22 November 2003, a super magnetic storm hit the Earth's atmosphere and generated ionospheric disruptions in the southeast region of Asia, causing GPS navigation to stop for several hours. Also, the GMSs of 29 October 2003, 30 October 2003 and 20 November 2003 produced very high geomagnetically induced currents (GIC) of 57.05 A, 48.57 A and 23.86 A, respectively [6]. It is well known that a high-value GIC is always a threat to our power grids. One such well-known GIC event happened in March 1989 and caused a province-wide blackout in Quebec, Canada. An intense GMS was triggered on 24 August 2005 with a maximum Kp-index value of approximately 9, while another strong geomagnetic storm was triggered on 11 September 2005 with a peak Kp-index value of approximately 8. These disturbances modulated the galactic cosmic rays (GCR), which produced a severe Forbush effect (FE) on 24 August and 11 September 2005. Papaioannou et al. [7] discuss the major effects in the Earth's magnetosphere from 22 August to 17 September 2005 and also calculate cosmic ray gradients. On 14-15 December 2006, geomagnetic storm events were triggered when the Earth's magnetosphere was affected by a CME-associated interplanetary shock. Ionospheric disturbances were recorded due to these geomagnetic storm events, which produced fluctuations in electron densities and also produced GIC [8]. In space research, the Earth's magnetosphere has always been an exciting domain for researchers to explore. Various researchers have conducted studies to determine the type of interrelationship between GMSs and various solar activities [9-11]. However, in this article, an attempt has been made to discover the nature of GMS fluctuations through the analysis of the Kp-index signal. The prime objective of this work is to reveal the behavioural phenomena like stationarity, nonlinearity, chaos, complexity, self-similarity and multifractality of the daily Kp-index (represented in Fig. 1) and hence of the geomagnetic storm. For this purpose, appropriate statistical tools have been used. Non-stationary components are inherently present in real-world signals when they change over time. From the spectral perspective, a stationary signal is one whose frequency content does not change with time, while the frequency content changes in the case of a non-stationary signal.
Classical spectral analysis can only detect the frequencies present within a signal; it cannot locate the specific times at which those frequencies occur and therefore cannot be used to characterise a non-stationary signal. A technique based on time-frequency representation (TFR) is thus more appropriate for natural signals, which are usually non-stationary in nature. Various well-established TFR-based methods, such as the short-time Fourier transform (STFT) [12], the wavelet transform (WT) [13] and the Wigner-Ville distribution (WVD) [14], can be used to confirm the non-stationary characteristics of a signal. The STFT generates a time-frequency spectrum by taking Fourier transforms over a pre-specified window, which restricts its temporal and spectral resolution [15]. SPWVD [16], a TFR-based technique that is essentially a modified WVD algorithm, has been applied in this analysis to validate the non-stationary nature of the signal under investigation. The WVD has variable resolution along the time-frequency plane, offering good temporal resolution at high frequencies and good frequency resolution at low frequencies, but it generates cross-terms for multi-component signals. These unwanted components can be suppressed by the SPWVD method, in which comparatively high resolution is achieved by using independent time and frequency smoothing functions as the kernel. The SPWVD method has therefore been chosen here to validate the non-stationary nature of the signal under investigation through time-frequency energy mapping. Features such as nonlinearity and chaoticity indicate the behavioural complexity of a signal. The nonlinear characteristics of a time series are traditionally judged using signal processing tools such as the deterministic-versus-stochastic method, correlation exponents, and so on. In this work, the nonlinear characteristics of the signal have been revealed by applying the DVV algorithm [17]. It is necessary to detect the presence of any chaotic behaviour in the signal in order to obtain a full understanding of the nonlinear dynamics of the Kp-index. The '0-1 test', a better alternative to the traditional maximal Lyapunov exponent method, has been chosen to investigate traces of chaotic behaviour in the signal [18,19]. Whether any chaos present in the Kp-index dynamics is deterministic or stochastic is determined by correlation dimension analysis [20,21]. The structural complexity (scaling properties and self-similarity) of the geomagnetic storm dynamics has been explored by fractal analysis of the Kp-index signal [22]. Widely used methodologies for fractal analysis are the rescaled adjusted range (R/S) method of Hurst [23] and the detrended fluctuation analysis (DFA) method introduced by Peng et al. [24]. However, the R/S and DFA techniques are not suitable for multifractal signals. Their drawbacks have been overcome by methods such as wavelet transform modulus maxima (WTMM) [25] and multifractal detrended fluctuation analysis (MFDFA) [26], with MFDFA being the more robust of the two [27]. Recently, a modified detrending moving average (DMA) method [28] has been developed by Gu et al. [29], known as multifractal detrending moving average (MFDMA). In this investigation, the backward MFDMA method, which has advantages over MFDFA [29-31], has been used for the fractal analysis of the signal.
The possible cause of the multifractality has been identified by performing fractal analysis of shuffled and surrogated versions of the signal. The major contributory factors that make a dynamical system non-stationary, chaotic and multifractal are the hidden recurrent patterns and structural changes within it. Such recurrences are a fundamental characteristic of many dynamical systems, and they provide all pertinent details about the behaviour of the system. To visualise recurrences of the Kp-index, the recurrence plot (RP) proposed by Eckmann et al. [32] is applied in this paper. A recurrence plot is the graphical representation of the recurrence matrix of the trajectory of a dynamical system in its phase space. The RP gives a first impression of the patterns of recurrences, which allows the underlying dynamics of the process and its trajectory to be studied. To go beyond the visual impression given by RPs, further measures of complexity that quantify the small-scale structures in RPs have been proposed in [33]; their analysis is known as recurrence quantification analysis (RQA). The quantification procedure is based mostly on the concentration of recurrence points as well as on the diagonal and vertical line formations of the RP. The RQA parameters of the Kp-index have been estimated to understand its scale complexity. The SPWVD method, DVV, correlation dimension analysis, the 0-1 test, the MFDMA algorithm and RP analysis are discussed in Sect. 2. Section 3 provides the relevant findings and observations derived from the Planetary K-index signal using the signal processing tools described above, and the paper concludes with inferential comments in Sect. 4. SPWVD analysis In this work, the Smoothed Pseudo Wigner-Ville Distribution (SPWVD) has been chosen over various other TFR-based methods to evaluate the dependency of the frequency content of the signal on time, i.e. whether it is stationary or non-stationary. Widely used TFR-based methods such as the short-time Fourier transform or the wavelet transform do not offer as crisp a resolution as SPWVD analysis [34]. The WVD method was proposed by Wigner in 1932 and later modified by Ville in 1948; it is expressed as $\mathrm{WVD}(t,f) = \int_{-\infty}^{\infty} s\left(t+\tfrac{\tau}{2}\right)\, s^{*}\left(t-\tfrac{\tau}{2}\right)\, e^{-j 2\pi f \tau}\, d\tau$, where s(.) represents the analytic signal obtained by applying the Hilbert transform to the input signal, f represents frequency and τ is the lag variable. Interferences in the time and frequency domains arise in the WVD because of its bilinear structure [35]. Time-domain cross-terms are attenuated by the Pseudo-WVD (PWVD) method, which convolves the WVD with a regular window function h(t) [36], whereas the interferences in the frequency domain are attenuated by smoothing the PWVD, passing it through a low-pass filter with a second window function. This yields the smoothed PWVD (SPWVD) [37,38]. The frequency components of a stationary signal would be observed to appear continually over the entire time axis of the SPWVD spectrum, while the frequency components of a non-stationary signal would occur sporadically over time. DVV analysis For a given optimal embedding dimension m and embedding lag τ, a set of delay vectors (DVs) $x(t) = [x_{t-m\tau}, \ldots, x_{t-\tau}]$ is constructed, each with corresponding target $x_t$. The DVs that lie within a pairwise Euclidean distance d of a given DV x(t) produce the set $\Omega_t(d)$.
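To make the delay-vector construction just introduced concrete, the following is a minimal Python sketch (not the authors' implementation) of building the DVs and computing the target-variance curve used in DVV analysis; the embedding dimension, lag, minimum set size and number of distance bins are illustrative choices, and the distance range and normalisation it uses are spelled out in the text that continues after the sketch.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

def dvv_curve(x, m=3, tau=1, n_d=3, n_bins=25, min_set_size=30):
    """Minimal sketch of the Delay Vector Variance (DVV) target-variance curve.

    Builds delay vectors [x(t - m*tau), ..., x(t - tau)] with targets x(t), then
    computes the normalised target variance sigma*^2(d) over a range of pairwise
    distances d and returns it together with the standardised distance axis.
    """
    x = np.asarray(x, dtype=float)
    idx = np.arange(m * tau, len(x))
    dvs = np.column_stack([x[idx - (m - j) * tau] for j in range(m)])
    targets = x[idx]

    dist = squareform(pdist(dvs))                       # pairwise Euclidean distances
    upper = dist[np.triu_indices_from(dist, k=1)]
    mu_d, sd_d = upper.mean(), upper.std()

    # Span of distances controlled by the parameter n_d, as described in the text
    d_grid = np.linspace(max(0.0, mu_d - n_d * sd_d), mu_d + n_d * sd_d, n_bins)
    var_x = targets.var()

    sigma_star = []
    for d in d_grid:
        variances = []
        for t in range(len(dvs)):
            mask = dist[t] <= d
            mask[t] = False                             # exclude the reference DV itself
            if mask.sum() >= min_set_size:              # only well-populated sets
                variances.append(targets[mask].var())
        sigma_star.append(np.mean(variances) / var_x if variances else np.nan)

    standardised = (d_grid - mu_d) / sd_d               # x-axis of the "DVV plot"
    return standardised, np.array(sigma_star)
```

Comparing the curve obtained for the Kp-index with curves computed from surrogate series then provides the visual test of nonlinearity reported later in the Results.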
The distance d ranges from $\max\left(0,\ \mu_d - n_d\,\sigma_d\right)$ to $\mu_d + n_d\,\sigma_d$, where $\mu_d$ and $\sigma_d$ denote the mean and standard deviation of the pairwise distances between delay vectors and $n_d$ is a controlling parameter. For every set $\Omega_t(d)$, the variance $\sigma_t^2(d)$ of the corresponding targets is computed. The target variance $\sigma^{*2}(d)$ is determined by taking the mean of these variances and normalising it by the variance of the signal x(k). The plot of the target variance as a function of the standardised distance is known as a "DVV plot"; to standardise the distance axis, d is replaced by $(d - \mu_d)/\sigma_d$. Randomness can be measured by determining the minimal target variance $\sigma^{*2}_{\min}$. Surrogate data series have been computed using the iterative amplitude-adjusted Fourier transform (IAAFT) technique [10], and the DVV plots of the surrogates and the original signal are compared. If the DVV plots show no difference between the surrogates and the original signal, the original signal is considered linear. A "DVV scatter diagram" is the graphical mapping of the target variances of the surrogate signal against the target variances of the original signal at corresponding standardised distances. If the dynamical system is linear, the mapping locus bisects the plot diagonally; otherwise, it deviates significantly from the diagonal bisector line. The deviation of the locus from the bisector line, expressed as a root mean square error (RMSE) value, defines the degree of nonlinearity [39]. 0-1 Test The 0-1 test [40-44] has been applied to confirm the existence of chaotic behaviour in the signal under investigation. For measured data, this tool gives better results than the long-established maximal Lyapunov exponent method [18]. This binary test is straightforward and effective for judging the chaoticity of a system: if the output of the test tends towards 1, chaotic behaviour of the signal is confirmed, whereas if the output tends towards 0, the behaviour is not chaotic. Let x(k) be a time series of length L and $X_l$ its Fourier transform. The inherent character of x(k) can be judged as regular or chaotic depending on the relation of the smoothed mean square displacement $d_c(l)$ to l: if the signal has chaotic dynamics, then $d_c(l)$ increases continuously with l and $X_l$ traces a Brownian motion in the complex plane. The smoothed mean square displacement $d_c(l)$ is computed from x(k) for $l \le m = L/10 < L$, and 100 values of c ranging from π/5 to 4π/5 are selected [43]. To obtain optimal results for noise-affected signals, a modified displacement $d^{*}_c(l)$ is used, in which an additional term sets the sensitivity of the test. The correlation between l and $d^{*}_c(l)$ gives the asymptotic growth rate $K_c = \mathrm{corr}\left(l, d^{*}_c(l)\right)$, and the median of $K_c$ over all values of c is the final output K of the 0-1 test, based on which the time series may be judged as regular or chaotic. Correlation dimension analysis Correlation dimension analysis [20] is an important indicator that not only identifies the type of chaos (deterministic or stochastic) present in a dynamical system but also sheds light on the dimension of the system. The correlation dimension D(r, m) quantifies how a time series, embedded with dimension m and time lag τ, occupies its phase space at cell size r, where the correlation integral C(r, m) is the probability that any two randomly chosen points lie within a given distance r of each other.
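Before turning to the correlation integral, the 0-1 test just described can be sketched as follows; this minimal Python version follows the standard Gottwald-Melbourne correlation formulation with the damped mean square displacement, it is not the authors' implementation, and the exact variant used in the paper may differ in detail.

```python
import numpy as np

def zero_one_test(x, n_c=100, seed=None):
    """Sketch of the 0-1 test for chaos (Gottwald-Melbourne correlation method).

    Returns the median asymptotic growth rate K over n_c values of c drawn from
    (pi/5, 4*pi/5); K near 1 suggests chaos, K near 0 suggests regular dynamics.
    """
    x = np.asarray(x, dtype=float)
    N = len(x)
    n_cut = N // 10                            # displacements evaluated for n <= N/10
    j = np.arange(1, N + 1)
    rng = np.random.default_rng(seed)
    Ks = []
    for c in rng.uniform(np.pi / 5, 4 * np.pi / 5, n_c):
        p = np.cumsum(x * np.cos(j * c))       # translation variables p_c(n), q_c(n)
        q = np.cumsum(x * np.sin(j * c))
        n_vals = np.arange(1, n_cut + 1)
        D = np.empty(n_cut)
        for n in n_vals:
            msd = np.mean((p[n:] - p[:-n]) ** 2 + (q[n:] - q[:-n]) ** 2)
            # damped (modified) mean square displacement improves convergence
            D[n - 1] = msd - x.mean() ** 2 * (1 - np.cos(n * c)) / (1 - np.cos(c))
        Ks.append(np.corrcoef(n_vals, D)[0, 1])
    return float(np.median(Ks))
```

Applied to a strongly chaotic series the median output approaches 1, while for a regular series it stays near 0, mirroring the interpretation given above.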
The correlation integral can be written as $C(r, m) = \lim_{N \to \infty} \frac{2}{N(N-1)} \sum_{i<j} H\left(r - \lVert x_i - x_j \rVert\right)$, where $\lVert x_i - x_j \rVert$ is the distance between the i-th and j-th points in phase space for a set of N points and H is the Heaviside function; the correlation dimension at scale r is then the local slope $D(r, m) = \partial \ln C(r, m) / \partial \ln r$. The mean of D(r, m) over all r values is calculated to yield D(m) for every embedding dimension m = 1, 2, ..., up to a maximum of 10. For a stochastic time series, D(m) exhibits a monotonically increasing trend towards infinity with m. If the process is deterministic, however, this continuous increase of D(m) ceases at some small m, where it saturates in a plateau [45]; this saturation value of D(m) gives the correlation dimension. Thus, stochastic systems are 'infinite-dimensional' and deterministic systems are 'finite-dimensional'. MFDMA A signal can be divided into several sub-signals; if each sub-signal is a statistical copy of the original signal, the sub-signals can be referred to as fractals, i.e. fractal systems are typified by self-similarity. MFDMA can be considered an effective method for evaluating the fractal characteristics of any non-stationary time series [29,46]. Let U(k) be the cumulative sum of the signal x(k) of length N. The moving average function $\tilde{U}(k)$, with moving window length l, is obtained by averaging U over a window of l points whose position relative to k is set by the position parameter θ [47], where $\lfloor \cdot \rfloor$ denotes the largest integer not greater than its argument and $\lceil \cdot \rceil$ the smallest integer not smaller than it. The position parameter θ lies between 0 and 1; for θ = 0, 0.5 and 1, the MFDMA method reduces to the backward, centred and forward moving average methods, respectively. It has been established [48] that, of these, the backward moving average (θ = 0) based MFDMA outperforms the other two, and it is therefore the variant used in this work for the fractal analysis of the Planetary K-index signal. The difference between U(k) and $\tilde{U}(k)$ gives the residual series $\epsilon(k) = U(k) - \tilde{U}(k)$. The residual series is divided into $N_l = \lfloor N/l \rfloor$ disjoint segments of length l, with $\epsilon_v(k) = \epsilon(n + k)$ for $1 \le k \le l$ and $n = (v - 1)l$, and the root mean square function of each segment is $F_v(l) = \left[\frac{1}{l}\sum_{k=1}^{l} \epsilon_v^2(k)\right]^{1/2}$. The q-th order overall fluctuation function is $F_q(l) = \left[\frac{1}{N_l}\sum_{v=1}^{N_l} F_v^{q}(l)\right]^{1/q}$ for q ≠ 0, with the logarithmic average used for q = 0. If h(q) is the generalised Hurst index, a power-law relation between $F_q(l)$ and the scale l can be established as $F_q(l) \sim l^{h(q)}$. The multifractal behaviour of the signal can also be characterised by the scaling exponent $\tau(q) = q\,h(q) - 1$. Using the Legendre transform, the singularity strength $\alpha(q) = \mathrm{d}\tau(q)/\mathrm{d}q$ and the multifractal spectrum $f(\alpha) = q\alpha - \tau(q)$ are obtained [29].
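A minimal Python sketch of the backward (θ = 0) MFDMA estimator described above is given below; the boundary handling of the moving-average window and the aggregation for q = 0 follow the published algorithm only approximately, so it should be read as an illustration rather than a reference implementation.

```python
import numpy as np

def mfdma_hurst(x, q_values, scales):
    """Sketch of backward (theta = 0) MFDMA generalised Hurst exponents h(q).

    For each window length l: build the cumulative profile U, subtract its
    trailing moving average, split the residual into floor(N/l) segments,
    compute the per-segment RMS F_v(l), aggregate into F_q(l) and fit
    log F_q(l) against log l to obtain h(q).
    """
    x = np.asarray(x, dtype=float)
    U = np.cumsum(x - x.mean())                       # cumulative-sum profile
    log_scales = np.log(np.asarray(scales, dtype=float))
    logF = np.zeros((len(q_values), len(scales)))

    for si, l in enumerate(scales):
        kernel = np.ones(l) / l                       # trailing (backward) moving average
        U_tilde = np.convolve(U, kernel, mode="full")[:len(U)]
        eps = (U - U_tilde)[l - 1:]                   # residuals where the window is complete
        n_seg = len(eps) // l
        segs = eps[:n_seg * l].reshape(n_seg, l)
        F_v = np.sqrt((segs ** 2).mean(axis=1))
        for qi, q in enumerate(q_values):
            if np.isclose(q, 0.0):
                logF[qi, si] = 0.5 * np.mean(np.log(F_v ** 2))
            else:
                logF[qi, si] = np.log(np.mean(F_v ** q)) / q

    return {q: np.polyfit(log_scales, logF[qi], 1)[0] for qi, q in enumerate(q_values)}
```

With q varied from -20 to +20 in steps of 0.5 and scales from 10 to N/10, as in the paper, a q-dependent h(q) indicates multifractality while a constant h(q) indicates a monofractal series.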
RP and RQA The recurrence plot (RP) and recurrence quantification analysis (RQA) are summarised as follows. RP The RP is the graphical representation of a two-dimensional square matrix in which an element is placed at (i, j) whenever the state at time i recurs at a (possibly different) time j. If $\{\vec{x}_i\}_{i=1}^{N}$ is the trajectory of the signal in its phase space, the RP is defined as $RP_{i,j} = H\left(\varepsilon - \lVert \vec{x}_i - \vec{x}_j \rVert\right)$, $i, j = 1, \ldots, N$ [39], where H(.) represents the Heaviside step function and ε denotes the threshold distance. For i = j, $RP_{i,j} = 1$, which generates the main diagonal line of the RP, called the line of identity (LOI). The system dynamics can be explored by visualising the structural patterns of the RP; typical patterns and their interpretations are given in Table 1. Table 1 Typical RP patterns and their interpretation: (2) Fading to the upper left and lower right corners - the signal frequency changes with time, i.e. the signal is generated by a non-stationary system, and the non-stationarity is due to the existence of a trend or drift in the system. (3) Disruptions - the system is non-stationary; the non-stationarity is due to the existence of some rare states in the system, and a transition may have occurred. (4) Periodic/quasi-periodic patterns - the system has cyclicities; for a quasi-periodic system the distances between long diagonal lines vary. (5) Single solitary points - there are strong fluctuations in the signal, i.e. a random process. (6) Diagonal lines (parallel to the LOI) - the signal-generating system is deterministic; if there are isolated single points along with the diagonal lines, the process can be judged chaotic. (7) Diagonal lines (orthogonal to the LOI) - the states evolve similarly at various times but in reverse order, possibly due to insufficient embedding. (8) Vertical and horizontal lines/clusters - the time evolution of the states is either constant or varies slowly, indicating the presence of laminar states. (9) Long bowed line structures - the states evolve similarly at various times but with differing velocities. RQA After an idea of the underlying system dynamics has been obtained by visualising the RP, different recurrence variables revealed by the plot need to be determined in order to measure the complexity [33,49]. Among the many recurrence variables, four are essential for the quantification of recurrences: %REC, LMAX, %DET and ENTR. The density of recurrence points in the RP is measured by the recurrence rate $\%REC = \frac{1}{N^2}\sum_{i,j=1}^{N} RP_{i,j}$. Another RQA measure is LMAX, the longest diagonal line present in the RP: if the i-th diagonal line has length $l_i$, then $LMAX = \max\left(\{l_i\}_{i=1}^{N_l}\right)$, where $N_l$ denotes the number of diagonal lines. A short LMAX signifies rapid divergence of the trajectory segments, whereas a long LMAX denotes that the trajectory segments diverge slowly [50]. %DET measures the determinism of the system, quantified by the density of recurrence points on diagonally aligned lines of length $l \ge l_{\min}$ [51], $\%DET = \sum_{l \ge l_{\min}} l\,\mathrm{hist}(l) \big/ \sum_{l} l\,\mathrm{hist}(l)$, where hist(l) is the histogram of diagonal lines of length l. If there are no diagonals, or if the diagonals are so short that they nearly resemble isolated recurrence points, the system can be judged uncorrelated or poorly correlated and stochastic, with %DET close to 0%, while a deterministic system yields relatively long diagonals and a %DET well away from 0%. ENTR gives the Shannon entropy of the probability distribution of diagonal line lengths, $ENTR = -\sum_{l} p(l) \ln p(l)$, where p(l) is the probability of finding a diagonal of exact length l. ENTR measures how much information is required to recover the process: a low ENTR means that little information is needed to describe the process and that it has low complexity, whereas a high ENTR suggests that more information is necessary and that the system is more complex.
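The four RQA measures defined above can be computed directly from a recurrence matrix; the sketch below is a deliberately plain Python illustration in which the embedding, the threshold ε and the minimum line length l_min are user choices rather than values taken from the paper.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

def rqa_measures(traj, eps, l_min=2):
    """Sketch of four RQA measures (%REC, LMAX, %DET, ENTR) from a recurrence plot.

    traj is an (N, m) array of phase-space points (e.g. delay-embedded Kp values)
    and eps is the recurrence threshold distance.
    """
    rp = (squareform(pdist(traj)) <= eps).astype(int)
    N = rp.shape[0]
    rec = 100.0 * rp.sum() / N ** 2                   # %REC: density of recurrence points

    # Lengths of diagonal line segments; the RP is symmetric, so the upper
    # triangle (offsets k >= 1, i.e. excluding the line of identity) suffices.
    lengths = []
    for k in range(1, N):
        run = 0
        for v in np.diagonal(rp, offset=k):
            if v:
                run += 1
            elif run:
                lengths.append(run)
                run = 0
        if run:
            lengths.append(run)
    lengths = np.array(lengths)

    lmax = int(lengths.max()) if lengths.size else 0  # LMAX: longest diagonal line
    long_lines = lengths[lengths >= l_min]
    det = 100.0 * long_lines.sum() / lengths.sum() if lengths.sum() else 0.0  # %DET

    if long_lines.size:                               # ENTR: Shannon entropy of line lengths
        _, counts = np.unique(long_lines, return_counts=True)
        prob = counts / counts.sum()
        entr = float(-(prob * np.log(prob)).sum())
    else:
        entr = 0.0
    return {"%REC": rec, "LMAX": lmax, "%DET": det, "ENTR": entr}
```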
Results and discussion In Fig. 2, it is clearly visible that the frequency components present in the signal do not persist continuously along the entire time axis, which confirms that the Kp-index fluctuation is non-stationary in nature. The frequency content of the Kp-index signal varies continuously at every time instant, while the colour bar represents the energy strength of each frequency component. The detection of the nonlinear behaviour of the Planetary K-index using DVV analysis is illustrated in Fig. 3, for which a total of 99 IAAFT-based surrogates were generated. Figure 3a shows that the DVV plots of the surrogate and the original signals do not precisely match but deviate from each other, which confirms the nonlinearity of the Planetary K-index signal. Figure 3a also reveals the minimal target variance, $\sigma^{*2}_{\min} = 0.3612$, which is substantially smaller than that of the surrogates; this signifies that the Kp variation is not strongly affected by noise and hence behaves deterministically. The DVV scatter diagram in Fig. 3b clearly depicts the deviation of the mapping locus of the target variances of the surrogate and original signals from the bisector line, and this deviation validates the nonlinear character of the Kp-index profile. A nonlinear system may exhibit chaotic behaviour. To trace the existence of chaos within the time series, the 0-1 test has been performed; its outcomes are shown in Fig. 4. The computed $d_c(n)$ and $d^{*}_c(n)$ for the Planetary K-index time series rise continuously as n increases in Fig. 4a, c, which indicates that the source of the signal is affected by chaos. In Fig. 4b, Brownian motion is observed in the complex plane of the Fourier spectrum $X_n$, which indicates that the signal's underlying geometrical structure is likely random. The variation of the asymptotic growth rate $K_c$ is displayed in Fig. 4d. The final binary output for the Planetary K-index signal is computed to be 0.9745; this nearly unit value of the 0-1 test output corroborates the chaotic behaviour of the signal. To identify whether the chaos is deterministic or stochastic in nature, correlation dimension analysis has been performed and is illustrated in Fig. 5. The correlation dimension D(m) saturates at a plateau, as seen in the graph, so it can be inferred that the chaos of the geomagnetic storm is a deterministic phenomenon, i.e. little disturbed by external noise. Besides the shape of the D(m) curve, the magnitude of D(m) is also an important parameter for evaluating the nature of the system under scrutiny. If the value of the correlation dimension tends towards infinity, more specifically if it is 5 or above, the signal-generating process is generally considered stochastic, i.e. heavily governed by noise [52]. As D(m) attains its plateau at a value of 0.7348, the correlation dimension of the geomagnetic storm is 0.7348, which is very low compared with 5; this low value re-establishes the claim of a deterministic character for the geomagnetic storm. Moreover, if the correlation dimension has a finite integer value, the system dynamics exhibit a non-chaotic, strongly periodic deterministic quality, whereas a fractional value of small magnitude implies that the process can be considered a low-dimensional deterministic chaotic one [52-54]. The fractional correlation dimension of 0.7348 therefore suggests that the variation of the geomagnetic storm is a chaotic process with low-dimensional determinism. If the process is regular and deterministic then, by the Takens embedding theorem [55], the correlation dimension quantifies the minimum number of equations required to express the process's dynamics, known as the degrees of freedom (dof); for a chaotic and deterministic process, the integer value just above the fractional correlation dimension gives this number of degrees of freedom.
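For reference, the correlation-dimension estimate D(m) discussed above can be sketched with a generic Grassberger-Procaccia implementation; the embedding lag and the range of radii below are illustrative assumptions, not the authors' settings.

```python
import numpy as np
from scipy.spatial.distance import pdist

def correlation_dimension(x, m, tau=1, n_radii=20):
    """Grassberger-Procaccia sketch: D(m) as the slope of log C(r) vs log r
    for an m-dimensional delay embedding of the series x."""
    x = np.asarray(x, dtype=float)
    idx = np.arange((m - 1) * tau, len(x))
    emb = np.column_stack([x[idx - k * tau] for k in range(m)])
    d = pdist(emb)                                   # all pairwise distances
    r_vals = np.logspace(np.log10(np.percentile(d[d > 0], 1)),
                         np.log10(np.percentile(d, 50)), n_radii)
    C = np.array([(d <= r).mean() for r in r_vals])  # correlation integral estimate
    keep = C > 0
    slope, _ = np.polyfit(np.log(r_vals[keep]), np.log(C[keep]), 1)
    return slope

# Sketch of the saturation check: D(m) for m = 1..10 should plateau for a
# deterministic process, as reported for the Kp-index (plateau near 0.73).
# dims = [correlation_dimension(kp_series, m) for m in range(1, 11)]
```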
Fig. 6 (a) F(q) versus q, (b) h(q) versus q, (c) τ(q) versus q, (d) f(α) versus α. It must be kept in mind, however, that the existence of a plateau and a finite fractional correlation dimension is not enough for a conclusive inference about the process; it only lends probable support to the inferences made about it. Figures 6 and 7 present the output of the MFDMA analysis based on the backward moving average. In Fig. 6, the dependence of F(q), h(q) and τ(q) on q, as well as the shape of the singularity spectrum f(α), can be seen. The scale parameter n is set to a range of 10 to L/10 [34], and the exponent q is varied in steps of 0.5 from -20 to +20. In Fig. 6a, b, the significant variation of F(q) and the nonlinear dependence of h(q) on q reveal the multifractal behaviour of the signal under scrutiny. The computed value of h(2) = 0.93891 ± 0.0059 suggests the existence of long-range positive correlation within the data series. The existence of several structures of different scales is also suggested by the nonlinear shape of the τ(q) curve. The multifractal character of the Planetary K-index signal can be explored more thoroughly through the singularity spectrum f(α) in Fig. 6d, which estimates the degree to which the signal is occupied by singularities of varying strengths. The parameters $\alpha_{\min}$ and $\alpha_{\max}$ are the lowest and highest values of the Hölder exponent for which f(α) = 0. The width of the spectrum, $\Delta\alpha = \alpha_{\max} - \alpha_{\min}$, is often used to measure the strength of the multifractality, indicating the richness of the signal structures. The computed values of $\alpha_{\max}$, $\alpha_{\min}$, $\Delta\alpha$ and $\alpha_0$ are tabulated in Table 2. The multifractal behaviour generally observed in time series is due to (1) a broad probability density function (pdf) of the data series and/or (2) different temporal structures (nonlinearity and long-range correlations) for different fluctuations [56]. Fractal analysis of surrogated and shuffled data produced from the original data is used to evaluate the true cause of the multifractality. If shuffling the series destroys all its long-range correlations, making h(q) = 0.5 for all q while the pdf remains unchanged, the multifractality is due to reason (2), i.e. the temporal structure. If instead the multifractality of the signal exists because of reason (1), i.e. the broadness of the pdf, the signal is not influenced by the shuffling process; in this case, surrogation (amplitude-adjusted Fourier transform) of the signal will change its pdf to a Gaussian distribution while leaving the correlations undisturbed. The dependence of h(q) on q for the shuffled series would then be almost equivalent to that of the original signal, while h(q) of the surrogate would not depend on q. If both factors contribute to the signal's multifractality, h(q) for both the surrogated and shuffled signals would depend on q, and the shuffled sequence would show weaker multifractality than the original one. Figure 7 presents the comparative fractal analysis of the original, shuffled and surrogated Planetary K-index data series obtained using backward MFDMA. The dependence of h(q) on q for the surrogated and shuffled data, as seen in Fig. 7b, indicates that the multifractality present in the Planetary K-index signal is attributable both to the wideness of the pdf and to the temporal structure.
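The shuffling and surrogate comparison described above can be sketched as follows; note that the surrogate shown is a simple Fourier phase-randomised construction rather than the amplitude-adjusted (IAAFT) procedure used in the paper, and both outputs would then be fed to an MFDMA estimator such as the one sketched earlier.

```python
import numpy as np

def shuffled_series(x, seed=None):
    """Random permutation: destroys temporal correlations, preserves the pdf."""
    rng = np.random.default_rng(seed)
    return rng.permutation(np.asarray(x, dtype=float))

def phase_randomised_surrogate(x, seed=None):
    """Fourier phase-randomised surrogate: approximately preserves the power
    spectrum (linear correlations) while pushing the amplitude distribution
    towards Gaussian, used to probe the pdf contribution to multifractality."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float)
    n = len(x)
    spec = np.fft.rfft(x)
    phases = rng.uniform(0.0, 2.0 * np.pi, len(spec))
    phases[0] = 0.0                      # keep the mean (DC component) real
    if n % 2 == 0:
        phases[-1] = 0.0                 # keep the Nyquist component real
    return np.fft.irfft(np.abs(spec) * np.exp(1j * phases), n=n)
```

If h(q) of the shuffled series collapses to about 0.5 for all q, the multifractality stems from temporal correlations; if shuffling leaves the q-dependence intact while surrogation removes it, the broad pdf is responsible; and q-dependence in both, as reported above, implicates both factors.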
The magnitudes of h(2) for the three signals shown in Table 2, computed by applying the MFDMA algorithm, indicate the existence of long-range association; since they are all nearly equal to or even greater than 1, the signal under investigation is non-stationary, and this property is preserved even after randomisation. The skewness $r = (\alpha_{\max} - \alpha_0)/(\alpha_0 - \alpha_{\min})$ is also determined for the Kp-index signal and equals 1.1359. There is an inverse relationship between the value of α and the strength of the singularity: a rough signal containing high-magnitude singularities is characterised by low α values [46]. For a right-skewed profile, r > 1, signifying that singularities of lower strength predominate in the signal; for a left-skewed profile, r < 1, indicating the presence of higher-strength singularities. If singularities of high and low strength are equally distributed in the signal, its singularity spectrum is symmetric and r = 1. Since r = 1.1359 for the Kp-index, it can be assumed that the fluctuation of the geomagnetic storm is not greatly affected by singularities of higher strength but instead behaves as a smooth signal with low singularity strength [57]. The recurrence plot (RP) in Fig. 8 shows patterns that match entries 2, 3 and 6 of Table 1. It can therefore be inferred that the Planetary K-index signal is distinctly non-stationary, which may be attributed to the existence of trends or drifts that are either unusual or irregular. The presence of diagonal lines parallel to the LOI together with isolated single points indicates that the signal has a deterministic chaotic character. The low value of %REC for the time series, as tabulated in Table 3, suggests that recurring points are rare, denoting the absence of cyclicities in the process. The considerable magnitude of %DET validates the claim that the Planetary K-index signal is deterministic. A low ENTR value is obtained for the signal under investigation, which reveals its chaotic character, and the short LMAX signifies rapid divergence of the trajectory segments of the signal. As a whole, the RQA indicates that the Planetary K-index signal has deterministic chaotic behaviour with a rapidly diverging trajectory in phase space. The temporal evolution of the four parameters used for the quantification of recurrences (%REC, LMAX, %DET and ENTR) is represented in Fig. 9. Conclusion The non-stationary behaviour of the Kp-index signifies that the frequency (or periodicity) of occurrence of geomagnetic storms of nearly the same strength is not uniform through time: the time intervals between geomagnetic storms of nearly equal strength vary with time. Geomagnetic storms may be of the recurrent or non-recurrent type. The former are caused by the high-speed solar wind from co-rotating interaction regions (CIRs) of the Sun and have a uniform periodicity of nearly 27 days, corresponding to the rotational period of the Sun. Non-recurrent storms, on the other hand, are generally caused by high-speed coronal mass ejections (CMEs), whose occurrence has no uniform periodicity. As the Kp-index is found to be non-stationary, it can be concluded that non-recurrent phenomena such as CMEs dominate over the CIR solar wind in driving the fluctuation of the geomagnetic storm.
Since the Kp variation exhibits a deterministic character, it can be inferred that the geomagnetic storm is not greatly affected by causes that might introduce noise or randomness into its fluctuation. The intensity of the storms due to CIRs or CMEs is so strong that the effect of other causes on the storm fluctuation appears noisy and negligible. This non-randomness of the Kp-index fluctuation indicates a well-defined "cause-effect" relationship: if the initial state of the geomagnetic system is known, the Kp-index (effect) in a future state can be predicted for a given change of inputs such as CIR or CME (cause), which helps in geomagnetic forecasting. The nonlinearity of the Kp-index establishes that this cause-effect association is not governed by simple, proportional polynomial equations of degree one but is ruled by a set of polynomials of degree greater than one. From Takens' theorem (1980), we can determine the actual dimension d of any process using the relation $d \le (m - 1)/2$. Since the computed value of m for our signal is 13, the actual dimension of the signal-generating process is d ≤ 6, suggesting that a maximum of six nonlinear polynomial differential equations is needed to model the process dynamics. Since a nonlinear system may have multiple attractors, the dynamics of a nonlinear system may depend on its initial state. The direction of evolution of the process parameters in state space is represented by the trajectories, and the way the trajectories change with variation of the system parameters determines the robustness of the system against small perturbations of those parameters. If the trajectory is irregular but remains confined to a region of state space containing no stable limit cycle or fixed point, the system is said to lie on a strange attractor. A strange attractor is generated when any two arbitrarily close initial points diverge from each other after some number of iterations and then come close to each other again after further iterations, so that the trajectory remains confined within a certain region of state space. This repeated rapid divergence and subsequent re-approach of any two points within a confined space can be interpreted as meaning that a system with a strange attractor is locally unstable but globally stable. The phenomenon of a system possessing a strange attractor is better known as chaos. The presence of chaos in the Kp-index, as established by the 0-1 test, indicates that there is a strange attractor in its phase space. This means that the variation of the geomagnetic storm is not very robust against small local perturbations, but as a whole it remains stable. The chaos present in the variation of the geomagnetic storm is found to be deterministic and of low dimension. The presence of deterministic chaos reconfirms that the system shows sensitive dependence on initial conditions. The aperiodic variation of a trajectory generated by an "autonomous" low-dimensional nonlinear dynamical system is generally referred to as deterministic chaos, where the term "autonomous" signifies a system that has no inputs, neither deterministic nor noisy. The presence of this chaos is due to fluctuations in the initial states, which can cause significant deviations in the future.
The system is neither completely chaotic nor completely deterministic but lies between these two states, i.e. in deterministic chaos, the state in which the system acquires a fractal structure. As the system is not completely deterministic, it must be energy dissipative, and hence the system generating the geomagnetic storm is a nonlinear dissipative dynamical system (NDDS). Likewise, the absence of complete chaos suggests that there are nonlinear interactions between different temporal scales of the geomagnetic storm time series. These interactions enable the geomagnetic storm generating system to build and rebuild medium to large substructures from short-scale chaotic fragments. This particular state between chaos and determinacy is characterised by the presence of self-similar substructures called fractals. The presence of multifractality indicates that the variation in the strength of the geomagnetic storm is characterised by many clusters of self-similar structures, and the physical reason for the varying slope of τ(q) is the presence of nonlinear interactions between these structures or fractals. Attributes such as the multifractality and long-range positive correlation present in the Planetary K-index signal arise from both the wideness of its probability density function and its temporal structure, whereas positive persistence implies that the nonlinear interaction between the fractals is strong and that their evolution moves in tandem. The dominance of the scaling properties of small fluctuations over large fluctuations of the Kp-index signifies that small-scale fractal structures play the more important role in determining the evolution pattern of the geomagnetic storm dynamics. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
2021-07-27T00:04:57.934Z
2021-05-12T00:00:00.000
{ "year": 2021, "sha1": "9ca9f086e4b14250546e791c2fa0b54ec8fe8f9d", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1007/s42452-021-04622-4.pdf", "oa_status": "GOLD", "pdf_src": "SpringerNature", "pdf_hash": "226483e754d49a2fb6ebf9d2f54a9e96060fac73", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Mathematics" ] }
199634213
pes2o/s2orc
v3-fos-license
Genome-wide DNA methylation and gene expression patterns reflect genetic ancestry and environmental differences across the Indonesian archipelago Indonesia is the world’s fourth most populous country, host to striking levels of human diversity, regional patterns of admixture, and varying degrees of introgression from both Neanderthals and Denisovans. However, it has been largely excluded from the human genomics sequencing boom of the last decade. To serve as a benchmark dataset of molecular phenotypes across the region, we generated genome-wide CpG methylation and gene expression measurements in over 100 individuals from three locations that capture the major genomic and geographical axes of diversity across the Indonesian archipelago. Investigating between- and within-island differences, we find up to 10.55% of tested genes are differentially expressed between the islands of Sumba and New Guinea. Variation in gene expression is closely associated with DNA methylation, with expression levels of 9.80% of genes correlating with nearby promoter CpG methylation, and many of these genes being differentially expressed between islands. Genes identified in our differential expression and methylation analyses are enriched in pathways involved in immunity, highlighting Indonesia's tropical role as a source of infectious disease diversity and the strong selective pressures these diseases have exerted on humans. Finally, we identify robust within-island variation in DNA methylation and gene expression, likely driven by fine-scale environmental differences across sampling sites. Together, these results strongly suggest complex relationships between DNA methylation, transcription, archaic hominin introgression and immunity, all jointly shaped by the environment. This has implications for the application of genomic medicine, both in critically understudied Indonesia and globally, and will allow a better understanding of the interacting roles of genomic and environmental factors shaping molecular and complex phenotypes. Introduction Modern human genomics does not equitably represent the full breadth of humanity. While genome sequences for people of European descent now number a million or more, most of the world is deeply understudied [1]. This is particularly true of Indonesia [2], a country geographically as large as continental Europe and the world's fourth largest by population. Genomic diversity in Indonesia is strikingly different to other well-characterized East Asian populations, such as Han Chinese and Japanese, but this diversity is not captured in large global datasets like the 1000 Genomes Project [3] or the Simons Genome Diversity Project [4]. The first three Indonesian genome sequences were only reported in 2016 [5] with the first representative survey of diversity across the archipelago only appearing in 2019 [6]. This extreme lack of representation extends to molecular phenotypes. To our knowledge, only one genome-wide gene expression study has been published [7] from the region, focused exclusively on host-pathogen interactions with P. falciparum. There are no analyses of diversity in gene regulatory mechanisms in either Indonesia or, more broadly, Island Southeast Asia. This gap is especially incongruous because Indonesia is an epicenter of infectious disease diversity, ranging from well-known agents like malaria [8] to emerging diseases like zika virus [9]. 
The country faces substantial healthcare challenges, including the rise in prevalence of understudied tropical infectious diseases and the increasing impact of metabolic disorders among a growing middle class [10]. However, Indonesia also offers unique advantages for studying responses to these diseases and disorders, some of which are likely to have exerted strong evolutionary pressures on the immune system over thousands of years [11]. Because the country comprises a chain of islands that stretch for 50 degrees of longitude along the equator (wider than either the continental USA or mainland Europe), but span barely 15 degrees of latitude, environment conditions are broadly comparable in many key respects across Indonesia. In contrast, a complex population history means that its people differ greatly, forming a genomic cline of Asian ancestry in the west to Papuan ancestry in the east [12]. This change in ancestry is the most distinctive genomic signal observed in the region [13], and, since Papuans derive up to 5% of their genomes from Denisovans, also gives rise to an east-west gradient of archaic introgression [6]. Altogether, the unique conditions observed in Indonesia provide a framework for studying the effects of genome composition on gene expression in a heterogeneous environment. To provide a benchmark dataset of regional molecular phenotypes, here we report genomewide measurements of DNA methylation and gene expression for 116 individuals drawn from three population groups that capture the major genomic and geographical axes of diversity across Indonesia. The people of Mentawai, living on the barrier islands off Sumatra, are representative of the dominant Asian ancestry in western Indonesia [13]; the Korowai, hunter-gatherers from the highlands of western New Guinea island capture key aspects of regional Papuan ancestry [6]; and the inhabitants of Sumba in eastern Indonesia are, genetically, a near equal mixture of the two different ancestries [14]. However, it remains unclear whether, and to what extent, these differences in genetic ancestry correlate with variation in molecular phenotypes. By quantifying DNA methylation and gene expression levels across Indonesia for the first time, we identify the relative influences of genomic ancestry versus plasticity to local environmental conditions in driving regional molecular phenotypic patterns. Ethics statement The samples used in this study were collected by HS, JSL and an Indonesian team from the Eijkman Institute for Molecular Biology, Jakarta, Indonesia, with the assistance of Indonesian Public Health clinic staff. All collections followed protocols for the protection of human subjects established by institutional review boards at the Eijkman Institute (EIREC #90 and EIREC #126) and the University of Melbourne (Human Ethics Sub-Committee approval 1851639.1). All individuals gave written informed consent for participation in the study. Permission to conduct research in Indonesia was granted by the Indonesian Institute of Sciences and by the Ministry for Research, Technology and Higher Education. Data collection Whole blood was collected by trained phlebotomists from the Eijkman Institute and local community health centers from over 300 Indonesian men. Samples were collected across multiple villages in the three islands using EDTA blood tubes from either Vacuette or Intherma for DNA isolation, and Tempus Blood RNA Tubes (Applied Biosystems) for RNA isolation. 
Samples were collected in 2016 in the course of three distinct field trips: Korowai samples were collected in February, Mentawai samples in April, and Sumba samples in July. RNA extractions were performed according to the manufacturers' protocols after all collections had taken place and randomised with respect to village and island (S1 and S2 Tables). Quality and concentration of all extracted RNA samples were assessed with a Bioanalyzer 2100 (Agilent) and a Qubit device (Life Technologies), respectively. We selected 116 male samples for RNA sequencing and DNA methylation analysis primarily on the basis of their RIN (RNA Integrity Number), by focusing on villages with at least 10 samples with RIN � 6 ( Table 1). Given our past work on the island of Sumba [14], we included all samples from Sumba with RIN � 6, heedless of village. However, we occasionally observed differences between our RIN measurements and those performed by our sequencing provider, with the latter generally being lower. Out of 116 individuals, 24 (21%) had a final RIN measurement < 6. Further detail on all samples, including extraction and sequencing batches, is provided in S1 and S2 Tables. Library preparation was performed by Macrogen (South Korea), using 750 ng of RNA and the Globin-Zero Gold rRNA Removal Kit (Illumina) according to the manufacturer's instructions. Samples were sequenced using a 100-bp paired-end configuration on an Illumina HiSeq 2500 to an average depth of 30 million read pairs per individual, in three batches. All batches included at least one inter-batch control for downstream normalisation (S1 and S2 Tables). In parallel, we extracted whole blood DNA from all individuals included in the RNA sequencing data using Gentra Puregene for human whole blood kit (QIAGEN) and MagAttract HMW DNA kit (QIAGEN) according to the manufacturer's instructions. 1 μg of DNA from each sample was shipped to Macrogen, bisulfite-converted and hybridized to Illumina EPIC BeadChips according to the manufacturer's instructions. Samples were randomised with respect to village and island across two array batches, with three samples processed on both batches to control for technical variation (S1 Table). RNA sequencing data processing All RNA sequencing reads were examined with FastQC v. 0.11.5 [15]. Leading and trailing bases below a Phred score of 20 were removed using Trimmomatic v. 0.36 [16]. Reads were then aligned to the human genome (GRCh38 Ensembl release 90: August 2017) with STAR v. 2.5.3a [17] and a two-pass alignment mode; this resulted in a mean of~29 million uniquelymapped read pairs per sample. Next, we performed read quantification with featureCounts v. 1.5.3 [18] against a subset of GENCODE basic (release 27) annotations that included only transcripts with support levels 1-3, retaining a total of 58,391 transcripts across 29,614 genes. On average, we successfully assigned~15 million read pairs to each sample (S2 Table). Variant calling and ancestry estimates We applied GATK RNA-seq Best Practices [19][20][21] (https://software.broadinstitute.org/gatk /documentation/article.php?id = 3891) to the mapped RNA-seq data in order to produce a set of genotype variants from each sample and confirm their ancestry. We marked duplicate mapped reads with Picard (http://broadinstitute.github.io/picard) and recalibrated base quality scores against files provided in the GATK Resource Bundle. 
Variants were called by first producing per-sample raw genotype-likelihoods using HaplotypeCaller, and then joint genotyping all the per-sample gVCFs using GenotypeGVCFs [20,22]. This produced 431,808 variants, from which only biallelic SNPs with <1% missing genotypes were retained. This set was further LD pruned with PLINK v1.90 [23] using a sliding window approach (window size 100 SNPs, step size 10 SNPs, r 2 threshold 0.2); 180,715 variants passed LD pruning and were further used in principal component and Admixture analyses. All PCAs were performed in PLINK; admixture analyses were carried out using ADMIXTURE v1.3.0 [24] and setting K = 2, 3 or 5. Papuan and Asian ancestry proportions for each sample were estimated using ADMIXTURE results at K = 2, as in [25]. To explore the placement of our samples within a broader geographical context, we also merged our newly generated data with a previously generated genotyping dataset [13] spanning populations sampled across Island Southeast Asia, Papua and Polynesia; including additional samples from Mentawai, Sumba and multiple New Guinean groups. Original genotyping data (roughly 540,000 autosomal SNPs) were translated into hg38 genomic coordinates and merged with the unfiltered RNA-seq data call set. We removed A/T, G/C and triallelic variants from both datasets to avoid strand bias prior to merging, and applied a 5% missingness filter to the merged dataset. This produced 13,233 overlapping SNPs which were analysed by PCA in PLINK as above. We elected not to directly infer Denisovan introgression on this call set due to its non-random missingness relative to whole-genome sequencing data, and the high likelihood that differences in gene and exon length would impact our ability to identify introgression in an unbiased way across all expressed genes. Deconvolution of blood cell type proportions Because blood cell type composition can impact gene expression estimates in bulk RNA samples, we used DeconCell v. 0.1.023 [26] to estimate the proportion of CD8T, CD4T, NK, B cells, monocytes and granulocytes in each sample (S2 Table), and tested these for association with the first 10 PCs of both the methylation and expression datasets. Unfiltered read counts were normalised using the inbuilt Decon-cell command 'dCell.expProcessing', which performs TMM normalization, log 2 transformation of the counts, and then scale normalization for each gene. The proportion of each cell type was then predicted using the normalized data and the reference bulk dataset. In addition, we also tested the methylation-based approach from Houseman et al [27], as well as an additional RNA-seq based method, ABIS [28]. Overall, we observed high similarity between all three methods (S3 Table), especially between Decon-cell and the Houseman et al method, with Pearson's R for individual cell types ranging between 0.47 in CD8T cells to 0.81 in B cells. However, we found that the methylation-based approach yielded erratic variations in the fraction of different cell types, even between methylation replicates. We therefore decided to use DeconCell, which more closely mirrored the proportions of cell types found in healthy samples. Further details on the blood deconvolution are available as S1 Text. Differential expression analysis All statistical analyses were performed using R v. 3.5.2 [29]. 
We transformed read counts to log 2 -counts per million (CPM) using a prior count of 0.25 and removed genes with low expression levels by only keeping genes with log 2 CPM � 1 in at least half of the individuals from any island, resulting in a total of 13,031 genes retained for further analysis. To quantify the effect of technical batches, we included six replicate samples among our sequencing batches. As anticipated, PCA of uncorrected data suggested the presence of substantial sequencing batch effects in the data (S1 Fig). However, pairwise correlations between technical replicates were higher than between different individuals from the same village sequenced in the same batch (S2 Fig). We applied TMM normalisation [30] to the data, and removed high sample variability from the count data using the voom function [31] in limma v. 3.40.2 [32]. Differential expression testing was also performed using limma. To construct the linear model for testing, we used ANOVA to test for associations between all possible covariates and the first 10 principal components (PC) of the data. Technical covariates significantly associated with at least one PC (sequencing batch, RIN, age) were included in the differential expression testing model. Sampling sites were included at either the island or the village level, depending on the test. Comparisons between villages were limited to those with at least 15 individuals, to ensure sufficient power to detect differences. All individuals were included in comparisons between islands, and models were not hierarchically structured. Genes were called as differentially expressed (DEG) if the FDR-adjusted p value was below 0.01, regardless of the magnitude of the log 2 fold change, unless noted otherwise. Lists of DEGs were annotated using biomaRt v. 2.40.0 [33]. Gene set enrichment analyses for the DEGs on the island and village levels were performed using clusterProfiler v. 3.12.0 [34], with Gene Ontology and KEGG annotation drawn from the org.Hs.eg.db v. 3.9 database [35]. Additionally, we tested whether DEGs were enriched for genes known to have been introgressed from Denisovans into individuals of Papuan ancestry at high frequency using a hypergeometric test. Finally, to examine possible associations between known climatic variables and expression across sampling sites, we retrieved mean monthly precipitation and temperature data from WorldClim v. 2.0 [36] for the five main villages in our study at a resolution of 0.5 arcminutes (roughly 1 km 2 tiles). DNA methylation array data processing and analysis DNA methylation data were processed using minfi v. 1.30.0 [37]. The two arrays were combined using the 'combineArrays' function and preprocessed with the 'bgcorrect.illumina' function to correct for array background signal. Signal strength across all probes was evaluated using the 'detectionP' function and probes with signal p < 0.01 in >75% of samples were retained. To avoid potential spurious signals due to differences in probe hybridization affinity, we discarded 6,072 probes overlapping known SNPs segregating in any of the study populations based on previously published genotype data [6]. The final number of probes retained was 859,404. Subset-quantile Within Array Normalization (SWAN) was carried out using the 'preprocessSWAN' function [38]. Methylated and unmethylated signals were quantile normalized using lumi v. 2.36.0 [39]. As with the RNA sequencing, replicate samples were included to detect and correct for batch effects (S3 Fig). 
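As a brief illustration of the expression-filtering rule described at the start of this subsection (the study itself used R with limma and edgeR-style normalisation), the following Python sketch computes log2 counts-per-million with a simple prior count and keeps genes whose log2 CPM is at least 1 in at least half of the individuals from any island; the prior handling is simplified relative to edgeR and the variable names are hypothetical.

```python
import numpy as np

def filter_low_expression(counts, island_labels, prior=0.25, min_logcpm=1.0):
    """Keep genes with log2 CPM >= min_logcpm in at least half of the samples
    from any island. counts is a genes x samples array of raw read counts and
    island_labels assigns each sample (column) to an island."""
    counts = np.asarray(counts, dtype=float)
    island_labels = np.asarray(island_labels)
    lib_sizes = counts.sum(axis=0)
    # Simplified prior-count handling (edgeR's cpm() scales the prior by library size)
    logcpm = np.log2((counts + prior) / (lib_sizes + 2 * prior) * 1e6)

    keep = np.zeros(counts.shape[0], dtype=bool)
    for island in np.unique(island_labels):
        cols = island_labels == island
        n_required = np.ceil(cols.sum() / 2)
        keep |= (logcpm[:, cols] >= min_logcpm).sum(axis=1) >= n_required
    return keep
```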
The replicate samples exhibit a high correlation between batches (Spearman's ρ 0.969 for MPI-025 and 0.980 for SMB-ANK-029, S4 Fig). As above, we used limma to test for differential methylation between sampling sites. We included methylation array batch, age, and the estimated cell type proportions (derived from the RNA sequencing data) as covariates. Differentially methylated probes (DMPs) between all pairwise comparisons of the islands and villages were identified using contrast designs. Significant DMPs were selected based on an FDR-adjusted p value threshold of 0.01 and a log 2 fold change of 0.5 or greater. Enrichment tests for the DMPs were performed using missMethyl v. 1.18.0 [40], which accounts for differences in probe density associated with gene length that can otherwise bias results [41]; probes were annotated to genes according to Illumina's manifest for the EPIC array. Significantly enriched pathways were selected based on an FDR-adjusted p value of 0.01. In addition, we intersected DMPs with published Epigenome-wide Association Studies available on the EWAS catalogue (http://ewascatalog.org) on November 2019. Altogether, we tested over 100 traits in the catalogue measured in whole blood using the 'enricher' universal enrichment analysis tool in clusterProfiler with FDR correction for multiple tests. For each population comparison, we selected methylated CpG sites that have mean beta difference > |0.05| and adjusted p < 0.01 against the background methylated CpG sites. We further identified differentially methylated regions (DMRs) by annotating the CpG probes with the 'cpg.annotate' function of the R package DMRcate v. 3.9 [42], and by collapsing the probes to regions using the 'dmrcate' function. Individual probes with an FDRadjusted p value � 0.01 and significant DMRs were selected based on a region beta value of 0.5 or greater. Modeling gene expression and CpG methylation values as a function of ancestry proportions In addition to DE and DM testing between populations, we directly applied linear models of the form ([covariate adjusted gene expression or probe methylation level]~Papuan ancestry) to each gene or probe to directly assess gene expression and CpG methylation levels as a function of Papuan ancestry proportions (determined through ADMIXTURE analyses as described above). Similarly to other analyses, batch, age, and blood cell type, as well as RIN for gene expression data, were accounted for. We tested for the enrichment of the CpGs and genes identified here among the DMPs and DEGs identified in contrast analyses with Fisher's exact test. Principal Component Analysis of expression and methylation data (PCA) DNA methylation M-values and gene expression log 2 CPM values were adjusted to correct for batch effects and differences in blood cell type proportions between samples by fitting a linear model with the technical covariates used in the differential methylation and expression analysis. Residuals of this model were used in the PCAs in Fig 1. Variable CpG probes and genes were identified based on coefficients of variation between samples. PCA was performed using the 10 4 most variable probes and the 10 3 most variable genes from the methylation and expression datasets, respectively; PCAs of the entire data set before and after batch correction are available in S1 and S3 Figs. Identifying associations between DNA methylation regions and gene expression We used the R package MethylMix v. 
2.12.0 [43,44] to identify transcriptionally predictive methylation states by focusing on methylation changes that are associated with gene expression levels. As with the PCA analysis, DNA methylation M-values and gene expression log (CPM) values were adjusted to account for technical covariates and blood cell type proportions by fitting a linear model. Residuals of these linear models were used in the analysis. Batch corrected M-values and logCPM values were min-max normalized to range from 0 to 1. CpG probe methylation levels were matched to genes using the ClusterProbes function, which uses a complete linkage hierarchical clustering algorithm for all probes of a single gene to cluster the probes. To identify transcriptionally predictive DNA methylation events, MethylMix utilizes linear regression to detect negative correlations between methylation and gene expression levels. Matching DNA methylation and gene expression data from 116 individuals were used in the analysis, and a total of 10,420 genes with matching methylation and expression data were tested. As MethylMix does not output detailed summary statistics of the fitted linear models, we used linear regression to calculate the r 2 and p values for each significant CpG probe cluster and gene pair detected by MethylMix. False discovery rate adjusted p values were calculated using the 'p.adjust' function in base R. Differential DNA methylation and gene expression between Indonesian island populations To quantify the gene regulatory landscape in Indonesia, we generated DNA methylation (array) and gene expression (RNA sequencing) measurements from 116 whole blood samples of male individuals living on three islands in the Indonesian archipelago (Fig 1A). Our three sampling sites, Mentawai, Sumba, and New Guinea, represent distinct points along a well documented Asian/Papuan admixture cline [13]: the Korowai of New Guinea exhibit high Papuan ancestry; Sumbanese have intermediate degrees of Papuan ancestry; and the Mentawai have no Papuan ancestry, having been settled primarily by ancestral Austronesian speakers. Furthermore, Korowai individuals are likely to carry up to 5% of introgressed genomic sequence from archaic Denisovans, as repeatedly observed in other samples from the island of New Guinea [6,45]. Principal component analysis of genotype variants called from the RNA sequencing data shows clear clustering of samples driven by population origin (Fig 1B), demonstrating that the three populations are genetically distinguishable. Similarly, Admixture analyses at K = 3 and K = 5 (S5 Fig) confirm that the three islands represent distinct populations with very limited gene flow between them, alongside a lack of additional fine-scale geographic structure within either Sumba or Mentawai that could confound our analyses despite the inclusion of multiple villages in our sampling strategy. In addition, when analyzed together with 513 samples drawn from 20 diverse populations from the broader (Island) Southeast Asia and Papua regions our samples cluster as expected (S6 Fig). Inter-island differences are severely attenuated in PCAs of DNA methylation ( Fig 1C) and gene expression (Fig 1D), although they are still present. 
Differential DNA methylation and gene expression between Indonesian island populations

To quantify the gene regulatory landscape in Indonesia, we generated DNA methylation (array) and gene expression (RNA sequencing) measurements from 116 whole blood samples of male individuals living on three islands in the Indonesian archipelago (Fig 1A). Our three sampling sites, Mentawai, Sumba, and New Guinea, represent distinct points along a well-documented Asian/Papuan admixture cline [13]: the Korowai of New Guinea exhibit high Papuan ancestry; Sumbanese have intermediate degrees of Papuan ancestry; and the Mentawai have no Papuan ancestry, having been settled primarily by ancestral Austronesian speakers. Furthermore, Korowai individuals are likely to carry up to 5% of introgressed genomic sequence from archaic Denisovans, as repeatedly observed in other samples from the island of New Guinea [6,45]. Principal component analysis of genotype variants called from the RNA sequencing data shows clear clustering of samples driven by population origin (Fig 1B), demonstrating that the three populations are genetically distinguishable. Similarly, ADMIXTURE analyses at K = 3 and K = 5 (S5 Fig) confirm that the three islands represent distinct populations with very limited gene flow between them, alongside a lack of additional fine-scale geographic structure within either Sumba or Mentawai that could confound our analyses despite the inclusion of multiple villages in our sampling strategy. In addition, when analyzed together with 513 samples drawn from 20 diverse populations from the broader (Island) Southeast Asia and Papua regions, our samples cluster as expected (S6 Fig). Inter-island differences are severely attenuated in PCAs of DNA methylation (Fig 1C) and gene expression (Fig 1D), although they are still present. After correcting for known technical confounders, PC1 in the DNA methylation data separates the island of Sumba from both the Korowai (FDR-corrected Tukey's HSD p = 5.4×10⁻⁴) and Mentawai (p = 6.8×10⁻⁵); PC2 further differentiates Sumbanese and Mentawai (p = 2.6×10⁻³) and additionally separates Mentawai from Korowai (p = 1.9×10⁻⁶). In the gene expression data, Korowai is separated from Sumba (p = 9.1×10⁻⁴) by PC1, whereas PC2 separates Sumba from Mentawai (p = 2.4×10⁻⁴) and Mentawai from Korowai (p = 6.3×10⁻⁴).

We then tested for differences in DNA methylation and gene expression between the three islands, initially without considering the village structure in Sumba and Mentawai (Table 1 and S1 and S2 Tables). At an absolute log2(FC) threshold of 0.5 and an FDR-adjusted p value threshold of 0.01, we detected 26,262 (3.06% of all tested probes), 17,320 (2.02%) and 3,965 (0.46%) differentially methylated probes (DMPs) and 1,375 (10.55% of all tested genes), 1,003 (7.70%), and 328 (2.52%) differentially expressed genes (DEGs) between Sumba and the Korowai, Mentawai and the Korowai, and Sumba and Mentawai, respectively (Figs 2A and 2B). In addition, we identified 1,454, 1,168, and 279 differentially methylated regions across all three inter-island comparisons, respectively, when thresholding to a mean β difference of 0.05 across the region. A full summary of these results is available as S4 Table. We also directly modeled CpG methylation and gene expression levels as a function of the proportion of Papuan ancestry in each sample, identifying 9,305 CpGs and 2,025 genes with an adjusted p value < 0.01 after correcting for multiple testing (S5 and S6 Tables). These genes and CpGs associated with Papuan ancestry are enriched for DE genes (Fisher's exact p = 5.7×10⁻¹²⁵) and DMPs (p < 2.2×10⁻¹⁶) between islands, suggesting differences in Papuan ancestry levels may be directly driving some of the observed gene regulatory changes in our inter-island comparisons. In particular, when testing for the overrepresentation of these CpGs and genes among the DMPs and DEGs in each of the three pairwise comparisons separately, we find overlaps of 508/1,003 DEGs (50.65%, Fisher's exact test p = 2.4×10⁻¹⁴⁴) and 4,927 out of 26,262 DMPs (18.76%, p < 2.2×10⁻¹⁶) in the comparison between Mentawai (no Papuan ancestry) and the Korowai (100% Papuan), but only 298/1,375 DEGs (21.67%, p = 1.2×10⁻⁵³) and 1,875/17,320 DMPs (10.83%, p < 2.2×10⁻¹⁶) between Sumba (roughly 50% Papuan ancestry) and the Korowai.

There is substantial overlap in signals between either Sumba or Mentawai versus Korowai (Fig 2C and 2D). For instance, 44.95% of DEGs between Sumba and Korowai are also differentially expressed between Mentawai and Korowai; the same is true of 41.94% of DMPs between Sumba and Korowai. DEGs and DMPs between Sumba and Mentawai, however, have poor overlap with the other inter-island comparisons and are generally limited in number. This suggests that many of the signals we identify are driven by the Korowai data, and by some degree of homogeneity across Sumba and Mentawai. Indeed, comparisons involving Korowai routinely identify an order of magnitude more DEGs and DMPs.
Furthermore, we find substantial agreement in both the magnitude and direction of effect between DEGs and DMPs across both comparisons involving Korowai (Fig 2E and 2F; the generalized additive model of the form y ~ s(x) was calculated using mgcv with the shrinkage version of the cubic regression spline [46,47]; methylation deviance explained by the model = 64.6%, p < 2×10⁻¹⁶; expression deviance explained = 70.1%, p < 2×10⁻¹⁶). However, effect size agreement is far poorer when examining both comparisons featuring either Sumba or Mentawai, regardless of whether we focus on methylation or expression differences (S7 Fig).

Differentially expressed genes are enriched for immune function and Denisovan introgression

We tested for enrichment of DEGs and DMPs against Gene Ontology (GO [48]) and Kyoto Encyclopedia of Genes and Genomes (KEGG [49]) pathways to detect functional enrichment between island populations. Overlapping enriched GO categories and KEGG pathways (adjusted p < 0.05; full tables of results for all comparisons are provided in S7–S10 Tables) in comparisons of both Mentawai or Sumba versus the Korowai include functions related to the adaptive immune response, malaria response, and nervous system function. However, DEGs between Mentawai and Sumba were not enriched for either GO or KEGG terms. Similar testing for enrichment on DMPs shows various categories, which include terms mostly related to neurogenesis, the nervous system, and immunity, and which partly overlap with categories enriched in DEGs, although the biological interpretation of these results is not straightforward. Thus, to further refine them we intersected our lists of DMPs with published EWAS results available at the EWAS catalogue (http://ewascatalog.org; S11 Table). DMPs associated with a small number of terms are enriched in all three inter-island comparisons; these include immunity-associated terms such as HIV infection, as well as lifestyle terms such as smoking behaviour and alcohol consumption, but also less straightforward terms including age.

Finally, because the island of New Guinea has the highest levels of Denisovan introgression worldwide (up to 5% [6]), we asked whether any of the genes differentially expressed between the Korowai (high Papuan ancestry) and Mentawai (no Papuan ancestry), or the Korowai and Sumbanese (intermediate Papuan ancestry), fell within high-confidence introgressed Denisovan tracts, on the basis of our previous data [6]. A total of 235 DEGs (considering all comparisons) overlap high-confidence introgressed Denisovan haplotype blocks in New Guinea [6]. High-frequency introgressed genes in our DEGs include FAHD2B (introgressed at 65% frequency in New Guinea; DE between Sumba and Korowai (p = 0.004), and Mentawai and Korowai (p = 1.1×10⁻⁶)), and multiple genes related to immunity and antiviral response, such as CXCR6 (20% frequency in New Guinea [50]) and GBP1/3/4 (19% frequency in New Guinea [51,52]). Our ability to identify Denisovan-introgressed genes as differentially expressed depends on both the magnitude of the expression change between the groups being compared, and the introgressed allele's frequency within our sample. In turn, this means that the proportion of introgressed genes that we expect to be differentially expressed is difficult to predict a priori. Therefore, we examined the distribution of introgressed allele frequencies in New Guinea for all DEGs in our data, and asked whether these differ between our three inter-island comparisons.
If Denisovan introgression is contributing to expression differences between the three sampling sites, we expect that genes that are differentially expressed between the Korowai and the other two groups will have generally higher introgressed allele frequencies than genes that are DE between the Sumbanese and the Mentawai. Indeed, we observe no difference in allelic frequencies for genes that are DE between both Sumba and Korowai, and Mentawai and Korowai (t-test p = 0.902), but observe higher frequencies in DEGs between Sumba and Korowai, or Mentawai and Korowai, than between Sumba and Mentawai (p = 0.032 and 0.028, respectively), suggesting that Denisovan introgression may impact the expression levels of some genes.

Methylation changes are correlated with changes in gene expression in a subset of genes

To further explore the relationship between DNA methylation and gene expression, we asked how much of the variation we observe in gene expression levels can be correlated with variation in DNA methylation levels. We searched for regions of putatively functional DNA methylation by identifying instances of significant negative correlation between gene expression levels and cis-promoter methylation. We identified 1,282 probe clusters associated with 1,021 genes (9.80% of all genes with both methylation and expression data) where expression level was associated with nearby CpG methylation (Fig 3A and S12 Table). We compared the genes identified in this analysis with the DMPs and DEGs detected in the between-island comparisons, and find that 218 genes (17.16% of DEGs; hypergeometric p = 1.9×10⁻¹⁴) in the comparison between Korowai and Sumba, 193 genes (15.20%, p = 4.9×10⁻²²) between Korowai and Mentawai, and 37 genes (2.91%, p = 0.203) between Sumba and Mentawai have expression levels associated with significant methylation changes at nearby CpGs; these include genes like SIGLEC7 (Fig 3B), which is involved in antigen presentation and natural killer (NK) cell-dependent tumor immunosurveillance [53]. SIGLEC7 and other SIGLEC family genes are also potential immunotherapeutic targets against cancer [54]. There are five enriched KEGG pathways, all broadly involved in immune interactions (S13 Table), including natural killer cell-mediated cytotoxicity. Overall, these results confirm the association between DNA methylation and gene expression and suggest a possible role for differential DNA methylation in shaping the patterns of differential gene expression between these populations.

Inter-island differences are primarily driven by a subset of villages

While the three island populations differ substantially in terms of genetic composition (Figs 1 and S5), we have previously shown that there is a high degree of genetic similarity within islands [13]. Therefore, we may expect that intra-island differences in either DNA methylation or gene expression profiles, if they exist, are likely to reflect local environmental differences [55]. To test this hypothesis, we took advantage of the fact that we collected samples across multiple villages in both Sumba and Mentawai. PCA captured differences between villages at both the expression and methylation level (S8 Fig). For instance, PC1 of the DNA methylation data captures varying degrees of separation at both the intra- and inter-island level. Neither the two Sumba villages, Wunga and Anakalang, nor the two Mentawai villages, Taileleu and Madobag, are separated by the first PCs, confirming our previous observations of limited differentiation within islands.
Between islands, however, PC1 significantly separates multiple pairs of villages, chief amongst them the Korowai from all four other sites (Tukey's HSD-corrected p-values for all comparisons mentioned are available in S14 Table), and three out of four Sumba vs Mentawai village comparisons. There are similar, but again weaker, trends in the expression data: PC1 separates the Korowai from both Sumba villages, as well as the villages of Wunga (in Sumba) and Madobag (in Mentawai), whereas PC2 separates Taileleu (in Mentawai) from the Korowai, and from Anakalang (in Sumba).

We then repeated our differential expression and methylation analyses between villages. At a log2(FC) threshold of 0.5 and an FDR of 1%, we are able to recapitulate the main findings of our island-level analyses, although additional trends emerge (Figs 4 and S9). Detectable differences between villages in the same island are small, with only 62 DMPs and 55 DEGs between the two Mentawai villages of Madobag and Taileleu, and 23 DMPs and 1 DEG, IDO1 (a modulator of T-cell behavior and marker of immune activity [56]; p = 0.009, log2(FC) = −1.49), between the Sumbanese villages of Wunga and Anakalang, echoing their limited separation in PCA. Similarly, we find low numbers of DEGs and DMPs across all comparisons involving Sumba and Mentawai (Fig 4), again recapitulating the observations we made at the island level (Fig 2). Overall, there appears to be high concordance between genes identified as DE at the island and village level (S10 Fig), with a high degree of correlation between village- and island-level results, as expected (S15 Table).

However, when comparing villages between islands, we identified substantially more DMPs and DEGs between Taileleu and Korowai (14,231 and 1,143, respectively) than between Madobag and Korowai (9,787 and 484, respectively), although both Taileleu and Madobag are located in Mentawai and have very similar genetic backgrounds. Similarly, we identified more DMPs and DEGs between Wunga and Korowai (31,905 and 1,592, respectively) than between Anakalang and Korowai (26,317 and 843, respectively). To understand why we may observe these patterns, we focused on genes that exhibit discordant patterns between the villages on a single island. DEGs between Taileleu and Korowai, but not between Madobag and Korowai (Fig 4B), tend to have similar expression profiles in Madobag and Korowai, whereas DEGs between Wunga and Korowai but not between Anakalang and Korowai (Fig 4C) seem to be expressed at an intermediate level in Anakalang. These differences are not correlated with known technical confounders such as differences in RNA quality or in variability within villages (S11 Fig). Indeed, their presence in both the DNA methylation and RNA sequencing results argues against sample processing artifacts. In order to confirm that these patterns were not driven by differences in sample size, we randomly subsampled each village to 10 individuals and repeated DEG testing 10³ times. There are consistently more DEGs between Wunga and Korowai than between Anakalang and Korowai (t-test p < 10⁻³⁰), as well as between Taileleu and Korowai than between Madobag and Korowai (p < 10⁻³⁰).
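The subsampling check described in the last two sentences can be sketched roughly as follows. Here run_de() is an illustrative placeholder for the limma pipeline shown earlier (restricted to the sampled individuals) and is not a function from the study's code:

```r
set.seed(1)
n_iter <- 1000   # 10^3 repetitions, as in the text

deg_counts <- replicate(n_iter, {
  w <- sample(which(sample_info$village == "Wunga"),     10)
  a <- sample(which(sample_info$village == "Anakalang"), 10)
  k <- sample(which(sample_info$village == "Korowai"),   10)
  c(WungaVsKorowai     = nrow(run_de(expr, sample_info, c(w, k))),   # DEG counts
    AnakalangVsKorowai = nrow(run_de(expr, sample_info, c(a, k))))
})

# Compare the distributions of DEG counts across iterations
t.test(deg_counts["WungaVsKorowai", ], deg_counts["AnakalangVsKorowai", ])
```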
Given the genetic homogeneity we observe within islands (S5 and S8 Figs), we reasoned that these observations may be driven by interactions between genetics and differences in the fine-scale local environment at each sampling site, although a comparison of rainfall and mean monthly temperatures across all five sites did not support these factors as drivers (S12 Fig). While there are clear differences between islands across all climate variables we have considered, climate is generally homogeneous within islands, and thus is unlikely to be responsible for the trends we observe. On the whole, our results highlight the importance of detailed data collection and thorough sampling from regions spanning diverse genomic and environmental clines, if we are to elucidate gene-by-environment interactions.

Discussion

Although Island Southeast Asia accounts for nearly 6% of the world's population and contains substantial ethnic and genetic diversity [13], genomic characterisation of this region lags drastically behind other regions of the world. The first regional large-scale set of publicly available human whole genome sequences was published in 2019 [6]; to our knowledge, there is only one study of gene expression from the region, of patients with malaria from the northern tip of Sulawesi [7]. In contrast, our work represents the first characterization of gene expression and DNA methylation levels across self-reported healthy individuals from geographically and genetically distinct populations in Indonesia, and more broadly from Island Southeast Asia. We have surveyed three sites with genetically distinct populations, spanning the Asian/Papuan genetic cline that characterises human diversity in the region, and we also sampled multiple villages in two of the islands (Sumba and Mentawai). Our study design purposefully allows us to explore both genetic (primarily between islands) and environmental (both between and within islands) contributions to expression and methylation differences, a result that is further highlighted in our inter-village analysis, where we observe some small-scale village-specific effects (Fig 4). Indeed, while we find differentially expressed genes and differentially methylated CpGs in most location comparisons (Fig 2), the most numerous, most reproducible, and largest-effect changes were found when comparing either the Sumbanese or Mentawai with the Korowai.

Many of these results feature genes involved in immune function, suggesting a potentially adaptive response to local environmental pressures. For example, beyond consistent enrichment for immune-associated GO and KEGG terms, the top 20 strongest DEG signals between the Mentawai and the Korowai include genes involved in antigen presentation in both innate and adaptive immune cells (MARCO and SIGLEC7, respectively; MARCO p = 3.7×10⁻¹³; SIGLEC7 p = 1.1×10⁻¹²; these genes are also differentially expressed between Sumbanese and the Korowai (MARCO p = 1.1×10⁻⁹; SIGLEC7 p = 1.2×10⁻¹¹; S13 Fig)). Polymorphisms within MARCO, which is expressed on the surface of macrophages, have been repeatedly shown to associate with susceptibility to infection by Mycobacterium tuberculosis and Streptococcus pneumoniae in multiple populations worldwide [57–60]; some of these variants have subsequently been shown to have a direct impact on antigen binding [61].
Our MethylMix analyses identify differences in SIGLEC7 expression as being potentially driven, at least in part, by methylation differences in its promoter region (Fig 3B). Although we have generated a preliminary set of genotype calls from our RNA-sequencing data, in the absence of whole-genome-level results from our samples it is challenging to determine whether these signals are also associated with signatures of selection or are driven entirely by environmental differences; neither of these genes has been identified in previous scans of Denisovan introgression, and our current genotype calls do not have sufficient resolution to enable us to directly call introgression in these samples. However, both we and others have previously shown that introgressed Denisovan tracts on the island of New Guinea are enriched for immune genes [6,62], similar to the contributions of Neanderthals to non-African genomes [63,64]. Indeed, our data suggest that Denisovan introgression in New Guinea may be impacting gene expression levels in the Korowai.

More broadly, immune challenges have exerted some of the strongest selective forces on humans throughout our species' history [11]; transmissible diseases endemic in Indonesia range from malaria (both P. falciparum and P. vivax) [8] to infections by multiple helminth species and other understudied tropical diseases [2]. Tuberculosis remains a major health concern in the region, with the World Health Organisation reporting nearly half a million new cases in 2017 [65].

Others have sought to characterise the interplay between genetic and environmental contributions to either expression or methylation levels across limited geographic scales. A study of approximately 1,000 individuals drawn from a founder population in Quebec demonstrated that gene-by-environment interactions (specifically, with air pollution levels) drastically impacted measurements of gene expression in blood, overpowering the effects of genetic relatedness [66]. Equivalent high-resolution Indonesian data are unavailable, and our attempts to associate differences in expression or methylation across small geographic scales by using WorldClim data were inconclusive. Unfortunately, it remains difficult to characterize granular levels of regional heterogeneity in disease burden and infection type, yet our results suggest that the pressures shaping immune response in Indonesia vary at the local level.

A different study of DNA methylation across rainforest hunter-gatherer and farmer populations in Central Africa showed that methylation captures both population history and current lifestyle practices. However, these two factors impact non-overlapping sets of genes, with differences at immune genes associated with a group's present-day habitat as well as genomic signals of past positive selection [55]. We observe similar trends here; the Korowai occupy an ecological niche akin to that of African rainforest hunter-gatherers, whereas the inhabitants of Sumba and Mentawai are village-based agriculturalists. Sumba, in particular, is host to a network of traditional communities derived largely from pre-existing Papuans, who first arrived on the island ~50,000 years ago, and incoming Asian farming cultures that reached the island ~4,000 years ago [14]. Today, Sumba retains a low population density and little contact between villages, as reflected in its extensive linguistic diversity [67].
This has resulted in small, isolated populations of a few hundred to a few thousand individuals that can be identified genetically between villages roughly 10 km apart [14], making it a nearly unique study system for examining gene-by-environment interactions.

As we move further into the age of personalised and genomic medicine, understanding how genetics and other molecular phenotypes drive disease risk across diverse populations is of crucial importance to ensure benefits are equitably distributed. Already, there has been a dramatic expansion of genomic-based tests that are being deployed to identify the risk of disease. However, these tests are largely built using European cohorts and have proven difficult to translate to non-European populations [68–70]. Even within homogeneous populations, environmental factors can have marked effects on gene expression measurements, and on the interpretability of genomic-based tests of disease risk [71], highlighting a secondary risk of such biased European sampling: limiting not only the genomic diversity under study, but the environmental diversity as well, to general detriment. This study provides a valuable first step in the characterization of the processes shaping gene expression changes in Island Southeast Asia.
Cancer Screening Guidelines: A Rapid Review

Introduction

Cancer, as a major public health problem, is the leading cause of death globally. 2,3 Unfortunately, the burden of cancer is expected to increase due to population growth and aging, as well as lifestyle behaviors such as smoking, poor diet, and physical inactivity, and reproductive changes in women, especially in developing countries. Globally, lung cancer and breast cancer are known as the leading causes of cancer death in men and women, respectively. 2,3 To prevent the mortality and burden of cancers, governmental and scientific associations have tried to develop and update cancer screening guidelines worldwide. In this paper, the guidelines for the most important cancers are briefly reviewed.

Breast Cancer

Breast cancer is the most common cancer of women worldwide. Several guidelines have been developed for breast cancer screening, and mammography is a popular modality for breast screening. The American Cancer Society (ACS) has recommended that mammography begin annually at the age of 40 years for all women. In addition, Breast Self-Examination (BSE) (regularly (monthly) or irregularly, beginning in the early 20s) and Clinical Breast Examination (CBE) (preferably at least every 3 years for women in their 20s and 30s, and preferably annually for asymptomatic women aged ≥40 years) have been considered. 4,5 The United States Preventive Services Task Force (USPSTF) recommended biennial screening mammography for women aged 50 to 74 years; beginning mammography prior to age 50 years has been considered an individual option, especially for women with a parent, sibling, or child with breast cancer. In addition, the evidence on the balance of benefits and harms of screening mammography in women aged 75 years or older is insufficient. 5,6 The Canadian Task Force on Preventive Health Care (CTFPHC) recommended routine screening with mammography every 2 to 3 years for women aged 50-69 and 70-74 years, whereas routine mammography was not recommended for women aged 40-49 years; BSE and CBE are also not recommended routinely. 7 The American College of Physicians (ACP) recommended mammography in women aged 40-70 years, although the benefit of screening is higher in older women. Based on the United Kingdom National Health Service (UK-NHS) guideline, routine screening is not recommended in women aged 40-49 years, although digital mammography is more accurate than film mammography in women aged 40-49 years; in women aged 50-70 years, mammography screening every 3 years has been recommended.
Cervical Cancer

Based on the UK-NHS, routine screening before the age of 25 years is not recommended. 8,10 Cytology every 3 years and every 5 years has been recommended for women aged 25-49 years and 50-64 years, respectively. In women aged ≥65 years, it has been recommended to screen those who have not been screened since the age of 50 years or who have a history of abnormal test results; however, if all previous screening results were negative, screening can be discontinued. 8 The American College of Obstetricians and Gynecologists (ACOG) recommended beginning screening at the age of 21 years, independent of sexual history, with screening every 2 years from 21 to 29 years. In women aged ≥30 years, screening can be extended to every 2-3 years if they have had three consecutive negative screens, have no history of cervical intraepithelial neoplasia 2 or 3, are not immunocompromised, do not have HIV, and were not exposed to DES. 5,8,10

Colorectal Cancer

It has been reported that colorectal cancer is the third most commonly diagnosed cancer in males and the second in females worldwide. As mentioned above, screening begins at the age of 50 years in average-risk adults. 8,11 Flexible sigmoidoscopy (SIG), double-contrast barium enema (DCBE), and CT colonography (CTC) are performed every 5 years, while colonoscopy, the gold standard for colorectal cancer screening, is performed every 10 years. It has been recommended to perform gFOBT and FIT (iFOBT) annually; unfortunately, the interval for the stool DNA test is not clearly defined. 8,12 The UK-NHS recommended screening with FOBT every 2 years in adults aged 60-69 years; the Australian guideline also recommends FOBT, but for adults aged 50-74 years. 8,11 For those at increased risk based on family history but without a definable genetic syndrome, the ACS and the U.S. Multi-Society Task Force on Colorectal Cancer (USMTFCC) recommended screening with colonoscopy at the age of 40 years, or 10 years younger than the earliest diagnosis in the immediate family, repeated every 5 years. Also, in very-high-risk hereditary non-polyposis colorectal cancer (HNPCC, or Lynch syndrome) patients, screening with colonoscopy every 2 years beginning at the age of 20-25 years, and then yearly from the age of 40 years, has been recommended. 8,11 In the case of classic Familial Adenomatous Polyposis (FAP), it has been recommended that at-risk children be offered genetic testing at the ages of 10-12 years, with flexible sigmoidoscopy or colonoscopy every 12 months starting at the ages of 10-12 years. Elective colectomy, based on the number and histology of polyps, is usually done by the early 20s. In addition, upper endoscopy every 5 years, if there are no gastric or duodenal polyps, starting in the early 20s, has been recommended. 8,12

Prostate Cancer

It has been stated that prostate cancer is the second most frequently diagnosed cancer in men worldwide. The use of PSA testing leads to varying incidence rates worldwide. 2,20,21 Based on the ACP guideline, in men between the ages of 50 and 69 years, PSA testing is not offered unless the patient expresses a clear preference for screening. Also, PSA testing is not offered to average-risk men younger than 50 years or 70 years and older, or to men with a life expectancy of less than 10-15 years. 8 The European Association of Urology (EAU) stated that there is a lack of evidence to support or disregard screening with PSA testing for the early detection of prostate cancer. 8,22,23
Based on the UK-NHS and NCCN, PSA testing is offered to healthy men between 45 and 70 years. Retesting in 5 years has been recommended for men aged 45-49 years with PSA < 0.7 ng/mL, while for men aged 45-49 years with PSA > 0.7 ng/mL and those aged 50-59 years with PSA > 0.9 ng/mL, retesting is offered every 1-2 years. 8

Lung Cancer

As mentioned, lung cancer has been known as a leading cause of death in men. 2,3,24 Based on the USPSTF and AAFP guidelines, there is insufficient evidence to recommend for or against lung cancer screening. Also, the American College of Chest Physicians (ACCP) and the American Society of Clinical Oncology (ASCO) stated that routine screening for lung cancer with chest x-ray (CXR) and sputum cytology is not recommended. 26

Conclusion

Cancer screening guidelines can be important for the reduction of cancer deaths, earlier diagnosis, and better cancer management. However, it is important to consider their benefits and harms, direct and indirect costs, accessibility, and acceptability to physicians and the population.

The National Comprehensive Cancer Network (NCCN) recommended CBE every 1-3 years and breast awareness education in women aged 25-40 years with average risk, whereas an annual CBE and an annual mammogram have been recommended in women aged >40 years. MRI has not been recommended in average-risk patients.
Optical Redox Imaging of Treatment Responses to Nampt Inhibition and Combination Therapy in Triple-Negative Breast Cancer Cells

We evaluated the utility of optical redox imaging (ORI) to identify the therapeutic response of triple-negative breast cancers (TNBC) under various drug treatments. Cultured HCC1806 and MDA-MB-231 cells treated with FK866 (nicotinamide phosphoribosyltransferase (Nampt) inhibitor), FX11 (lactate dehydrogenase A inhibitor), paclitaxel, and their combinations were subjected to ORI, followed by imaging fluorescently labeled reactive oxygen species (ROS). Cell growth inhibition was measured by a cell viability assay. We found that both cell lines experienced significant NADH decrease and redox ratio (Fp/(NADH+Fp)) increase due to FK866 treatment; however, HCC1806 was much more responsive than MDA-MB-231. We further studied HCC1806 with the main findings: (i) nicotinamide riboside (NR) partially restored NADH in FK866-treated cells; (ii) FX11 induced an over 3-fold NADH increase in FK866 or FK866+NR pretreated cells; (iii) FK866 combined with paclitaxel caused synergistic increases in both Fp and the redox ratio; (iv) FK866 sensitized cells to paclitaxel treatments, which agrees with the redox changes detected by ORI; (v) Fp and the redox ratio positively correlated with cell growth inhibition; and (vi) Fp and NADH positively correlated with ROS level. Our study supports the utility of ORI for detecting the treatment responses of TNBC to Nampt inhibition and the sensitization effects on standard chemotherapeutics.

Introduction

Breast cancer is the most diagnosed cancer among women, with ~15% of breast cancer patients possessing a triple-negative breast cancer (TNBC) subtype, i.e., absence of estrogen and progesterone receptors (ER−, PR−) and lack of HER2 overexpression (HER2−) [1,2]. With current treatment options limited to surgery and systemic chemotherapy, TNBC has the worst prognosis among breast cancer molecular types (https://www.breastcancer.org/symptoms/types/molecular-subtypes, accessed on 21 May 2021). TNBC is also a highly heterogeneous group of breast cancers with diverse therapeutic responses to chemotherapy [3,4]. Sensitive and early biomarkers for response to chemotherapy are crucial for the determination of responders versus non-responders and optimization of cancer treatment strategies [5].

Metabolism has been at the center stage of cancer research in recent decades. On one hand, metabolic changes at the molecular level precede morphological/pathological changes and are expected to provide early biomarkers for treatment response. On the other hand, cancer metabolism provides new therapeutic targets that will potentially enhance treatment effects when combined with conventional chemotherapy. For example, there has been a renewed interest in nicotinamide adenine dinucleotide (NAD+) biology, and targeting enzymes involved in NAD+ metabolism has been proposed as a cancer therapy [6,7]. NAD+, an essential molecule for cellular metabolism, plays a central role in mitochondrial energy transduction and is synthesized by two major pathways: de novo and salvage pathways [8,9]. Nicotinamide phosphoribosyltransferase (Nampt), the key and rate-limiting enzyme of the salvage pathway of NAD+ biosynthesis, is crucial for the maintenance of intracellular NAD+ levels and regulation of NAD-dependent enzymes.
Data from clinical breast cancer patients and many breast cancer cell lines show overexpression of Nampt [10,11]. While deregulation of Nampt expression is related to initiation and progression of various human malignancies [12], Nampt inhibition has led to tumor growth attenuation in various cancers [6,13]. Furthermore, suppression of Nampt expression has been found to reduce the viability of breast cancer cells and increase their susceptibility to chemotherapy [12,14].

Optical redox imaging (ORI, or optical metabolic imaging in some literature) is a label-free metabolic imaging technique that may aid in the development of cancer biomarkers for treatment response. It detects the intrinsic fluorescence of oxidized flavoproteins (Fp, containing flavin adenine dinucleotide (FAD)) and reduced nicotinamide adenine dinucleotide (NADH) [15–17]. The optical redox ratio, Fp/NADH or its normalized form Fp/(NADH+Fp), is a surrogate marker of the NAD+/NADH or NAD+/(NADH+NAD+) ratios, respectively, and provides a quantitative measure of the mitochondrial redox state. It has been shown that Fp/NADH or Fp/(NADH+Fp) linearly correlates with the biochemically determined redox ratio NAD+/NADH or NAD+/(NADH+NAD+) [18–20]. ORI has wide applications in the study of bioenergetics, metabolism, and treatment response [21–23]. Employing ORI, we previously demonstrated frontline therapy CHOP-induced mitochondrial redox state alteration in non-Hodgkin's lymphoma xenografts [24]. We also reported that lonidamine-induced redox changes in melanoma were readily detected by ORI at both cellular and tissue levels within 45 min after treatment [25], in accordance with lonidamine's anti-tumor effect and metabolic changes detected by 31P-MRS in the same melanoma xenograft models [26–28]. Based on NADH and FAD intensity and lifetime measurements, multiphoton optical metabolic imaging has also been used to identify early therapeutic responses of various cancer cells and differentiate between drug-resistant and non-resistant models [29–32], and was able to resolve treatment response in tumor xenografts earlier than fluorodeoxyglucose positron emission tomography (FDG-PET) [29].

Herein, we examine the utility of ORI for detecting the therapeutic response to the Nampt inhibitor FK866, the common chemotherapeutic agent paclitaxel (Taxol), and their combinations, using two TNBC cell culture models that have different sensitivities to Nampt inhibition. We correlate the ORI readouts with an MTS (3-(4,5-dimethylthiazol-2-yl)-5-(3-carboxymethoxyphenyl)-2-(4-sulfophenyl)-2H-tetrazolium) cell viability assay, which acts as the endpoint for the treatment responses. We also investigate the correlation of ORI indices with the intracellular levels of reactive oxygen species (ROS).

Redox Responses to FK866 Treatment and NR Rescue Effects

FK866 is expected to decrease NAD+ and its protonated/reduced form, NADH. ORI was applied to detect the response to 48 h FK866 treatment in two TNBC breast cancer cell lines, HCC1806 and MDA-MB-231. Figure 1A shows typical redox images of HCC1806 cells under the control condition (0.1% DMSO) and 1 nM FK866 treatment. Treatment with various concentrations of FK866 ranging from 1 nM (the lowest concentration we tested) to 100 nM for 48 h generated similar effects on the redox indices (Fp, NADH, and the redox ratio) and significantly increased Fp signals by ~49%, decreased NADH signals by ~35%, and raised the redox ratio by ~40% (Figure 1B).
FK866-induced Fp increase coincided with NADH decrease, which likely reflects the conjugation of the two signals in the mitochondria. The phenomenon that 1 to 100 nM FK866 yielded the same redox effect indicates that, in this range, the NAD-salvage pathway has been completely inhibited at an FK866 concentration as low as 1 nM. This apparent saturation effect can be further understood by the fact that in many cell lines the IC50 of FK866 is below 1 nM [33], and many companies report an IC50 of 0.09 nM (e.g., https://www.selleckchem.com/products/apo866-fk866.html, accessed on 21 May 2021).

Nicotinamide riboside (NR) is a precursor of NAD+. NR is converted to nicotinamide mononucleotide (NMN) by an NR kinase (NRK), and NMN is then converted to NAD+ by NMN adenylyltransferase (NMNAT) [34]. By adding NR to cells pre-treated with FK866, which have diminished use of the NAD-salvage pathway, we expected to see an increase in NAD+ and thus NADH. We added NR (800 µM) to dishes pre-treated with 100 nM FK866 for 42 h. Six hours later, we found an 11.7% increase in NADH (p = 0.02) in HCC1806 cells compared to the NADH level of HCC1806 dishes treated with 100 nM FK866 for 48 h (Figure 2A). Though NR treatment significantly increased the NADH signal from FK866 pre-treated HCC1806 cells, the drug at a concentration of 800 µM did not fully restore NADH levels. Furthermore, the NR rescue effect was nearly equivalent across the various concentrations of FK866 tested (5-100 nM) (Figure S1). This result further supports that the NAD-salvage pathway has been completely inhibited at 1 nM of FK866.
For the MDA-MB-231 cells, 48 h treatment with 100 nM FK866 also resulted in a significant but lesser degree of NADH decrease and redox ratio increase, but no significant change in Fp (Figure 2B). This result is consistent with literature reports that the MDA-MB-231 cell line is relatively less sensitive to Nampt inhibition [11]. In contrast to HCC1806 cells, no effects of NR were found in MDA-MB-231 cells, which could also serve as a negative control. We focused on studying HCC1806 cells only in the following experiments.

Inhibiting Lactate Dehydrogenase A of FK866-Pretreated Cells Resulted in a Dramatic NADH Spike

It is generally understood that Nampt regulates NAD+ synthesis and NADH pool size. For Nampt inhibition with long-term (48 h) FK866 treatment, we expected the availability of NADH to be very low, if not depleted. Thus, we set out to investigate the NADH availability by adding FX11, a specific inhibitor of lactate dehydrogenase A (LDHA), to the HCC1806 cells that were pretreated with FK866 or FK866+NR. LDHA catalyzes the conversion of pyruvate to lactate, coupled with the oxidation of NADH to NAD+ in the cell. As expected, FX11 treatment of control dishes resulted in LDHA inhibition and a buildup of NADH (Figure 3, grey bars), which was also previously observed [35]. However, to our surprise, after suppression of NADH levels by 100 nM FK866 for 48 h, the HCC1806 cells still responded strongly to a 10-min 5 µM FX11 treatment, resulting in a dramatic increase in NADH, together with a significant decrease in Fp and the redox ratio (Figure 3, blue bars). Cells pre-treated with FK866+NR also showed a similar response to FX11 treatment (Figure 3, orange bars). The NADH increases for the control, FK866, and FK866+NR dishes were 1180, 884, and 848 units, corresponding to increases of 311%, 372%, and 332%, respectively. These data suggest that a significant amount of NADH remains available despite the inhibition of the NAD+ salvage pathway.
ORI-Detected Therapeutic Responses Correlated with Growth Inhibition

Paclitaxel is a chemotherapeutic agent for treating solid tumors, including breast tumors. We investigated paclitaxel treatment effects and whether a low concentration of FK866 may sensitize cells to paclitaxel treatment. Our ORI results illustrate that 1 nM paclitaxel (the reported IC50 for HCC1806 cells for 48 h treatment was in this range [36,37]) alone induced an increase in all redox indices (Figure 4A).
Separate 1 nM FK866 and 1 nM paclitaxel treatments induced a total 96% increase in Fp level and a summed change of 37% in the redox ratio. However, when cells were simultaneously treated with 1 nM FK866 and 1 nM paclitaxel, Fp levels increased by 173% and the redox ratio increased by 47%, implying a synergistic effect on the redox status. Since the FK866-induced NADH change opposed the direction of the paclitaxel-induced change, we observed a lesser increase in NADH from the combination compared to that by paclitaxel alone. (In Figure 4, control dishes for both graphs were treated with 0.2% DMSO; F stands for FK866 and T for paclitaxel/Taxol; stars above bars represent a significant difference from control, and stars above brackets a significant difference between treatment groups; ANOVA with Tukey's post-hoc test, * p < 0.05, *** p < 0.001, **** p < 0.0001.)

When we increased the paclitaxel concentration from 1 to 20 nM, the Fp level was 435% higher than the control, which was triple the Fp level with 1 nM paclitaxel treatment; the NADH level and the redox ratio were 73% and 50% higher than the control, respectively, corresponding to 39% and 32% higher than with 1 nM paclitaxel treatment, respectively (Figure 4B). The individual Fp percentage increases for 1 nM FK866 or 20 nM paclitaxel treatment summed to 464%, whereas the percentage increase in Fp for simultaneous treatment at these concentrations was 490%. Thus, the synergistic effect on Fp for the combination 1 nM FK866 + 20 nM paclitaxel treatment was modest and less prominent than that of 1 nM FK866 + 1 nM paclitaxel. The 42% NADH change due to the 1 nM FK866 + 20 nM paclitaxel combination treatment was significantly different in comparison with either 1 nM FK866 (−24%) or 20 nM paclitaxel (73%) alone. No synergy in the redox ratio was present for the 1 nM FK866 + 20 nM paclitaxel treatment, despite the fact that the largest redox ratio change relative to control was observed with this combination.

With 48-h treatments of FK866 and paclitaxel on HCC1806 cells at various concentrations, the MTS assay revealed that FK866 and paclitaxel combination treatments inhibited cell proliferation (Figure 5). Specifically, from Figure 5A, HCC1806 cells exhibited a trend of decreased proliferation with 48 h of 1 nM FK866 treatment and significantly decreased proliferation, by ~40%, with 100 nM treatment. As for paclitaxel treatment alone, only the 20 nM dose resulted in a modest but significant proliferation inhibition. Combinations of 1 nM FK866 with 5, 10, and 20 nM paclitaxel significantly reduced proliferation, although no synergistic reduction was observed for the combination treatment compared to the individual treatments. We further confirmed this result by using a lower seeding density of cells per well. As displayed in Figure 5B, 1 nM FK866 treatment for 48 h showed significant growth inhibition, as did the combination of 1 nM FK866 and 1 nM paclitaxel. However, 1 nM FK866 failed to show synergy with 1 nM paclitaxel treatment.
(In Figure 5, the x-axis labels use F for FK866 and T for paclitaxel, with the numbers following F or T indicating concentration in nM; stars above bars indicate a significant difference from DMSO (control) and stars above brackets a significant difference between treatment groups; means were normalized to the DMSO mean; panel A shows results at a seeding density of 15,000 cells/well (n = 3) and panel B at 5,000 cells/well (n = 4); ANOVA with Dunnett's post-hoc test, * p < 0.05, ** p < 0.01, *** p < 0.001, **** p < 0.0001.)

Comparing the results from ORI and MTS (Figures 4 and 5) indicates that the ORI-detected redox changes correspond to the MTS-detected proliferation inhibition of HCC1806 cells, and that a larger redox ratio change corresponds to larger inhibition of proliferation. However, ORI appears to be more sensitive in detecting both the single-treatment effect and the sensitization effect. In particular, at low drug concentrations, synergistic redox changes induced by these treatments were observed, whereas MTS did not detect such synergy.
We performed a linear regression analysis based on the data from Figures 4 and 5A (DMSO, F1, T1, T20, F1T1, F1T20) and found that both Fp and the redox ratio positively correlate with growth inhibition, whereas NADH does not (Figure 6). The significant linear correlation indicates that either Fp or the redox ratio can predict treatment responses to FK866, paclitaxel, and their combination in HCC1806 cells.
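As an illustration of this type of analysis, a linear fit of growth inhibition against each redox index takes only a few lines of R. The data frame below is a hypothetical stand-in for the per-group summary values (it does not reproduce the study's measurements):

```r
# Hypothetical per-treatment summary: one row per group, with mean redox
# indices (normalized to control) and fractional growth inhibition.
df <- data.frame(
  group       = c("DMSO", "F1", "T1", "T20", "F1T1", "F1T20"),
  fp          = c(1.0, 1.5, 2.0, 5.4, 2.7, 5.9),      # placeholder values
  redox_ratio = c(1.0, 1.2, 1.2, 1.5, 1.5, 1.6),      # placeholder values
  inhibition  = c(0.00, 0.15, 0.05, 0.20, 0.25, 0.35) # placeholder values
)

fit_fp    <- lm(inhibition ~ fp, data = df)
fit_ratio <- lm(inhibition ~ redox_ratio, data = df)

summary(fit_fp)$r.squared     # strength of the Fp association
coef(summary(fit_fp))[2, 4]   # p value for the slope
```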
Particularly, at a low concentration of drugs, synergistic redox changes induced by these treatments were observed, whereas MTS did not detect such synergy. We performed a linear regression analysis based on the data from Figure 4 and Figure 5A (DMSO, F1, T1, T20, F1T1, F1T20) and found both Fp and the redox ratio positively correlate with growth inhibition, whereas NADH does not ( Figure 6). The significant linear correlation indicates that either Fp or the redox ratio can predict treatment responses to FK866, paclitaxel, and their combination in HCC1806 cells. We previously found that the redox indices correlate with ROS levels [35]. We therefore also measured drug-induced intracellular ROS production. As shown by Figure 7A, FK866 treatment led to a modest increase in intracellular ROS level, which was insignificant at 1nM, yet ~27% significantly increased at 100 nM. In contrast, there was a ~73% ROS increase due to 1 nM paclitaxel treatment and a 94% ROS increase due to combination of 1 nM FK866 and 1 nM paclitaxel treatment compared to control. Additionally, there were We previously found that the redox indices correlate with ROS levels [35]. We therefore also measured drug-induced intracellular ROS production. As shown by Figure 7A, FK866 treatment led to a modest increase in intracellular ROS level, which was insignificant at 1nM, yet~27% significantly increased at 100 nM. In contrast, there was a~73% ROS increase due to 1 nM paclitaxel treatment and a 94% ROS increase due to combination of 1 nM FK866 and 1 nM paclitaxel treatment compared to control. Additionally, there were larger significant increases in ROS level due to 20 nM paclitaxel treatment (~273%) and combination of 1 nM FK866 and 20 nM paclitaxel treatment (~209%) ( Figure 7B). These findings are consistent with literature reports [11,[38][39][40][41]. However, the correlation between growth inhibition and ROS is not statistically significant ( Figure 7C). Note that 1 nM FK866 addition did not significantly increase ROS generation in the cells that were under 1 or 20 nM paclitaxel treatment. In comparison, ORI detected a significant redox difference under these conditions. Int. J. Mol. Sci. 2021, 22, x FOR PEER REVIEW 7 of 13 larger significant increases in ROS level due to 20 nM paclitaxel treatment (~273%) and combination of 1 nM FK866 and 20 nM paclitaxel treatment (~209%) ( Figure 7B). These findings are consistent with literature reports [11,[38][39][40][41]. However, the correlation between growth inhibition and ROS is not statistically significant ( Figure 7C). Note that 1 nM FK866 addition did not significantly increase ROS generation in the cells that were under 1 or 20 nM paclitaxel treatment. In comparison, ORI detected a significant redox difference under these conditions. We further performed a linear regression analysis to determine the correlation between the redox indices and ROS levels under these various treatments. As shown in Figure 8, both Fp and NADH positively correlate with ROS (p = 0.004 and 0.011, respectively) ( Figure 8A-B). Although higher redox ratio tends to correspond to higher ROS, the correlation has a borderline significance (p = 0.060). ORI Is Sensitive to the Metabolic Modulations and Detects Differential Responses to Nampt Inhibition between Two TNBC Cell Lines Optical redox imaging of NAD(H) redox status provides a measure of the mitochondrial redox status and has been found to be sensitive to the therapeutic effects of cancer drugs. 
Here, we employed ORI to investigate the effects of inhibition of NAD biosynthesis We further performed a linear regression analysis to determine the correlation between the redox indices and ROS levels under these various treatments. As shown in Figure 8, both Fp and NADH positively correlate with ROS (p = 0.004 and 0.011, respectively) ( Figure 8A,B). Although higher redox ratio tends to correspond to higher ROS, the correlation has a borderline significance (p = 0.060). Int. J. Mol. Sci. 2021, 22, x FOR PEER REVIEW 7 of 13 larger significant increases in ROS level due to 20 nM paclitaxel treatment (~273%) and combination of 1 nM FK866 and 20 nM paclitaxel treatment (~209%) ( Figure 7B). These findings are consistent with literature reports [11,[38][39][40][41]. However, the correlation between growth inhibition and ROS is not statistically significant ( Figure 7C). Note that 1 nM FK866 addition did not significantly increase ROS generation in the cells that were under 1 or 20 nM paclitaxel treatment. In comparison, ORI detected a significant redox difference under these conditions. We further performed a linear regression analysis to determine the correlation between the redox indices and ROS levels under these various treatments. As shown in Figure 8, both Fp and NADH positively correlate with ROS (p = 0.004 and 0.011, respectively) ( Figure 8A-B). Although higher redox ratio tends to correspond to higher ROS, the correlation has a borderline significance (p = 0.060). ORI Is Sensitive to the Metabolic Modulations and Detects Differential Responses to Nampt Inhibition between Two TNBC Cell Lines Optical redox imaging of NAD(H) redox status provides a measure of the mitochondrial redox status and has been found to be sensitive to the therapeutic effects of cancer drugs. Here, we employed ORI to investigate the effects of inhibition of NAD biosynthesis ORI Is Sensitive to the Metabolic Modulations and Detects Differential Responses to Nampt Inhibition between Two TNBC Cell Lines Optical redox imaging of NAD(H) redox status provides a measure of the mitochondrial redox status and has been found to be sensitive to the therapeutic effects of cancer drugs. Here, we employed ORI to investigate the effects of inhibition of NAD biosynthesis by treating TNBC cells with Nampt inhibitor, FK866. We observed significant redox changes in both HCC1806 and MDA-MB-231 cells. We found that NAD(H) redox status responded differentially in these two TNBC lines treated with 100 nM FK866 for 48 h, where NADH decreased by~40% and~13% in HCC1806 and MDA-MB-231 cells, respectively. Since several factors, including timing, the FK866 dose, and the 3-phosphoglycerate dehydrogenase (PHGDH) expression level can all affect NAD + depletion [11,42], NADH level should also be affected by these factors. The temporal NAD + depletion pattern shows that with 10 nM FK866 treatment, the NAD + level of MDA-MB-231 cells reached the lowest at 24 h then went up at 48 h [11], suggesting we could have observed lower NADH levels at 24 h instead of at 48 h. On the other hand, Nampt inhibition affects serine biosynthesis from glucose via PHGDH, and the PHGDH-high breast cancer cell lines (estrogen receptor absence, basal-like, such as HCC1806) are highly sensitive to Nampt inhibition compared to PHGDH-low cell lines (estrogen receptor absence, mesenchymal, such as MDA-MB-231) [11,42]. These factors together may explain a smaller redox change in MDA-MB-231 than in HCC1806 upon 48 h FK866 treatment with the same dose. 
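The two elementary computations used repeatedly in the results above — the synergy heuristic that compares a combined percentage change with the sum of the single-treatment changes, and the linear regressions of Figures 6 and 8 — can be sketched as follows. This is only an illustration: the 29% figure for 1 nM FK866 is inferred from the quoted sum (464% − 435%), and the regression inputs below are hypothetical placeholders rather than the measured values.

from scipy.stats import linregress

# Synergy heuristic for Fp under 1 nM FK866 + 20 nM paclitaxel (percent change vs. control).
fp_fk866_1nM = 464.0 - 435.0   # inferred from the quoted sum of the single treatments
fp_pac_20nM = 435.0
fp_combined = 490.0
synergy = fp_combined > fp_fk866_1nM + fp_pac_20nM
print("combined", fp_combined, "vs. sum of singles", fp_fk866_1nM + fp_pac_20nM, "-> synergy:", synergy)

# Linear regression of growth inhibition against a redox index across treatment groups
# (hypothetical paired values for DMSO, F1, T1, T20, F1T1, F1T20).
redox_change = [0.0, 10.0, 18.0, 50.0, 60.0, 75.0]
growth_inhibition = [0.0, 8.0, 5.0, 20.0, 30.0, 38.0]
fit = linregress(redox_change, growth_inhibition)
print(f"slope={fit.slope:.2f}, R^2={fit.rvalue**2:.2f}, p={fit.pvalue:.4f}")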
Biochemical assay analysis has shown that suppression of Nampt in breast cancer cells lowers NAD + , NADH, and NADPH levels, where NADH decrease is less than NAD + decrease [38,39,43]. By treating the two TNBC cell lines with a Nampt inhibitor, FK866, for 48 h, we found a significant decrease in NADH level, consistent with these reports. We also found an FK866-induced increase in Fp. Thus, both a decreased NADH and an increased Fp contribute to an increase in the redox ratio in the TNBC cells. Moreover, the FK866 in the range of 1 to 100 nM had an equivalent impact on the redox status of HCC1806 cells. This suggests that lower FK866 concentrations would have had impacts on the NAD(H) redox status of HCC1806 cells as well. It also warrants further testing with higher FK866 concentrations, since a concentration-dependent effect of FK866 on NAD + level has been reported for many types of cells [44]. By modulating the HCC1806 cells with NR, we observed the expected redox changes. NR increased NADH, although NADH was not fully restored to its original level. The NADH rescue effect by NAD + supplementation depends on several factors, including specific NAD + restoration agents and their concentrations, as well as FK866 concentration used for NAD + suppression [11]. It is known that TNBC cells are highly heterogeneous, resulting in diverse treatment outcomes. Therapeutic response biomarkers that are sensitive and that can stratify TNBC patients are highly desired. The observed differential redox responses to Nampt inhibition and NR restoration between the two TNBC lines suggest that ORI can be useful to improve classifications of TNBCs based on their redox responses to metabolic treatments. We also probed the NADH pool in HCC1806 cells by using FX11 to inhibit LDHA under the control or the pretreatment of FK866 or FK866+NR. Immediately after FX11 was added to the control or the FK866-treated HCC1806 cells (with or without NR rescue), we observed dramatic NADH increases, more than three times higher, and an over 50% decrease in redox ratios, compared to the levels before FX11 addition. Previously, we reported a~200% NADH increase in MDA-MB-231 cells induced by FX11 treatment [35]. These results demonstrate the significant role of LDHA in mediating the NAD(H) redox balance in TNBC cells. This also seems to suggest that separate subcellular pools of NAD + may be acted on by NR and FX11. It was reported that there is a mitochondrial-insensitive NAD + pool and that FK866 reduces the cytoplasmic but not the mitochondrial NAD + pool, which could be due to a delay in depletion of mitochondrial NAD + [44]. It was also suggested that the NAD + pool generated by Nampt exists as a separate pool to the NAD + pool for glycolysis [11]. ORI Detects Paclitaxel Treatment Response and the Sensitization Effect of FK866 on Paclitaxel We observed paclitaxel-induced dose-dependent redox changes, including increased NADH and Fp levels and the redox ratio, where both Fp and NADH increases correlated with drug-induced ROS production. The increase in NADH is likely the result of cells being apoptotic. It has been shown that when apoptosis starts, there is a significant increase in NADH signals, whereas H 2 O 2 -induced necrosis showed a decrease in NADH [45][46][47]. Lukina et al. reported paclitaxel-induced redox changes of 3D HeLa culture, where cells underwent a time-dependent increase in the optical redox ratio, starting from 6 h of exposure to paclitaxel [31]. 
However, their study observed no change in Fp intensity on any occasion, a decrease in NADH intensity in the responders (viable and altered morphology), and no NADH intensity change in non-responders (viable and unaltered morphology) within 24 h of treatment. It is unclear why our results of increased NADH and Fp levels after 48 h treatment differ from theirs. It could be due to different cancer cell types and/or treatment time. We observed that the combination of FK866 (1 nM) and paclitaxel (1 and 20 nM) resulted in synergistic redox changes in HCC1806 cells, corresponding to enhanced inhibition of cell growth detected by the MTS assay. A combination of Nampt inhibitors (including FK866) and paclitaxel has been shown to have an additive effect on decreasing cell viability and growth in pancreatic cancers [48]. Our results are consistent with this study. In addition, we found a strong positive linear correlation of Fp or the redox ratio with cell growth inhibition (Figure 6, R² > 0.7), indicating that ORI can detect a drug sensitization effect. In comparison, ROS had an insignificant correlation with cell growth inhibition (R² = 0.49, p = 0.12) and indicated neither the synergistic nor the additive effect of the FK866 and paclitaxel combination that were observed by ORI and MTS, respectively. As the reported IC50 for HCC1806 cells for 48 h treatment is in the 1 to 5 nM range [36,37], the paclitaxel-induced growth inhibitions in the range of 1 to 20 nM we observed were not quite as significant as would be expected for the low IC50, although the degree of the growth inhibition in our MTS readouts did increase with the drug's concentration (Figure 5A). The seeding density we chose was in the linear range of this cell line for the MTS assay. It is unclear why we still observed a high viability at 20 nM. Several possibilities, such as paclitaxel degradation in alkaline medium, drug purity, and cell line variations across laboratories, might account for this observation. However, since our purpose here was to investigate the correlation between the redox imaging measurements and the MTS readouts, the absolute accuracy of the paclitaxel concentration is not as important as it would be for the determination of IC50. In future studies, we can determine the IC50 under the experimental conditions. ORI has been used as a label-free imaging tool to study the drug response of cancer cells. It is known that metabolic changes precede the morphological manifestation of cell death and that early redox response can predict later apoptotic changes [31,32]. In the future, we may study more TNBC cell lines for their redox response profiles at various time points and correlate redox changes with various endpoints, e.g., growth inhibition, cell migration and invasion, and cell kill, to gain a more holistic metabolic analysis of treatment response.

Dihydroethidium (DHE) and all drugs were purchased from Sigma-Aldrich (MilliporeSigma, St. Louis, MO, USA), except FK866 and paclitaxel, which were purchased from LC Laboratories (Woburn, MA, USA). Nicotinamide riboside (NR) (AST-F20758, Neta Scientific Inc., Hainesport, NJ, USA) was reconstituted in deionized water at 25 mg/mL, aliquoted, and stored at −20 °C until use. All other drugs were first reconstituted in DMSO, aliquoted, and stored at either −80 or −20 °C until use.
Cell Culture and Drug Treatments For FK866-treated groups, FK866 was added post cell attachment to dishes (approximately 4 h after seeding) with final concentrations between 1 and 100 nM, and cells were treated for 48 h before imaging. If NR treatment occurred, NR (800 µM final) was added to cells pre-treated with FK866 for 42 h. FK866 and NR combination treatment lasted 6 h. For paclitaxel-treated groups, paclitaxel was added to dishes with medium red-orange in color to avoid degradation of the drug in alkaline conditions, at a final concentration of 1 and 20 nM. Treatment was 48 h. Acute FX11 treatment (5 µM final) was approximately 10 min. Images were taken with dishes under treatment, as opposed to the case of long-term treatment(s) where images were taken without drug presence.

Optical Redox Imaging of Live Cells Cells were seeded 50,000/1 mL onto 20 mm glass-bottom dishes (Cellvis, Cat #D35-20-1.5-N, Mountain View, CA, USA) and divided into treated and control groups. Approximately one hour before imaging, cells were rinsed twice with DPBS+ (Dulbecco's Phosphate-Buffered Saline with calcium and magnesium, Thermo Fisher Scientific, Waltham, MA, USA) and incubated with 1 mL Live Cell Imaging Solution (abbreviated as LCIS, Molecular Probes, Thermo Fisher Scientific, Waltham, MA, USA) supplemented with glucose (11 mM) and L-glutamine (2 mM). A Zeiss wide-field microscope (Axio Observer 7, White Plains, NY, USA) set at 37 °C was used for imaging. Using a 20× lens (NA = 0.8), signals were collected with an image resolution of 0.29 × 0.29 µm² through the following optical bandpass filters: NADH channel, excitation (Ex) 370–400 nm, emission (Em) 414–450 nm; Fp channel, Ex 450–488 nm, Em 500–530 nm; and DHE channel, Ex 540–570 nm, Em 580–610 nm. To avoid photo-bleaching, transmitted light was used to locate and focus on regions of interest. Three to five random, distinct fields of view per dish were imaged. Shading correction was done on the fly. All dishes were imaged without the presence of drug(s), except for the acute FX11 treatment. Intracellular ROS measurements were acquired by adding dihydroethidium (DHE, 2 µM final concentration) to dishes and incubating at 37 °C protected from light for 40 min. Dishes were then rinsed once with PBS+ and the liquid was replaced with 1 mL LCIS supplemented with 2 mM glutamine and 11.5 mM glucose for imaging.

Cell Proliferation Assay Cell proliferation was examined by MTS (3-(4,5-dimethylthiazol-2-yl)-5-(3-carboxymethoxyphenyl)-2-(4-sulfophenyl)-2H-tetrazolium) assay (CellTiter 96® Aqueous One Solution Cell Proliferation Assay, Promega, Madison, WI, USA). Briefly, HCC1806 cells were seeded at a density of 15,000 cells or 5000 cells per well in 96-well plates and incubated overnight with RPMI containing 10% FBS. The next day, the cells were treated with various concentrations of FK866 and paclitaxel or the combination of both drugs and incubated for 48 h. Thereafter, 20 µL of MTS was added to each well. The plate was incubated for 4 h and absorbance at 490 nm was measured using a plate reader (Enspire Multimode Plate Reader, model: 2300, Perkin Elmer, Washington, MA, USA). Eight technical replicate samples were prepared in each assay. The experiment was repeated 3 times.

Data Analysis and Statistics Each image file was split into its separate channels using ImageJ. A custom MATLAB® (Version 2019a, The MathWorks, Inc., Natick, MA, USA) program was used to quantify NADH, Fp, and ROS intensities.
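A minimal sketch of the kind of per-image quantification described in this and the following paragraph (the study itself used ImageJ and a custom MATLAB program): background subtraction, an SNR threshold, per-field mean intensities, and a pixel-wise redox ratio. The function names, the toy data, and the ratio convention Fp/(NADH + Fp) are assumptions made for illustration.

import numpy as np

def quantify_field(nadh_img, fp_img, nadh_bg, fp_bg, snr=7.5):
    """Background-subtract both channels, keep pixels above snr * background noise,
    and return mean NADH, mean Fp, and the mean pixel-wise ratio Fp/(NADH+Fp)."""
    nadh = nadh_img - nadh_bg.mean()
    fp = fp_img - fp_bg.mean()
    noise = max(nadh_bg.std(), fp_bg.std(), 1e-9)
    mask = (nadh > snr * noise) & (fp > snr * noise)
    ratio = fp[mask] / (nadh[mask] + fp[mask])
    return nadh[mask].mean(), fp[mask].mean(), ratio.mean()

# toy arrays standing in for one field of view of each channel and its background
rng = np.random.default_rng(0)
nadh_img = rng.normal(100.0, 5.0, (64, 64))
fp_img = rng.normal(80.0, 5.0, (64, 64))
nadh_bg = rng.normal(5.0, 1.0, (64, 64))
fp_bg = rng.normal(5.0, 1.0, (64, 64))
print(quantify_field(nadh_img, fp_img, nadh_bg, fp_bg))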
Redox ratio images were generated pixel-by-pixel from NADH and Fp images. The program analyzed the images through a series of steps including background removal and thresholding at a signal-to-noise ratio of 7.5, as described in detail previously [49], except that the polynomial surface fit of the background was no longer needed for removing the vignette effect, which was corrected on the fly. The group mean was obtained by first averaging fields of view per dish, then averaging across dishes. Bar graphs grouped by treatment are displayed as the means ± standard deviations (SD). To compare three or more groups, one-way ANOVA tests followed by post-hoc Tukey's or Dunnett's tests to correct for multiple comparisons were used via PRISM 9 (GraphPad Software, San Diego, CA, USA). To compare two groups, unpaired t-tests with unequal variance were used. Significant differences are displayed as: * p < 0.05, ** p < 0.01, *** p < 0.001, and **** p < 0.0001.

Conclusions The present study found that the optical redox imaging technique readily detects the therapeutic effects of both single treatments of FK866 and paclitaxel and their combinations on TNBC cells. Both Fp and the redox ratio correlated strongly and linearly with drug-induced growth inhibition detected by the MTS assay. The redox indices showed synergistic changes due to the combination treatment of FK866 and paclitaxel, while MTS analysis showed an additive effect. Both Fp and NADH were found to be positively correlated with drug-induced ROS levels. Drug-induced ROS levels reflected no synergistic or additive effects from the combination treatment, and only correlated with cell growth inhibition with a weak borderline significance. Additionally, ORI resolved differences in the treatment responses between two TNBC lines. These findings indicate that ORI is valuable for the identification of treatment responses to metabolic inhibitors targeting Nampt and the sensitization effects on standard chemotherapeutic drugs for TNBC. The findings also warrant further study by testing a panel of TNBC cells of diverse metabolism with other therapeutic agents to confirm the utility of ORI as a therapeutic response biomarker for chemotherapy of TNBC.

Data Availability Statement: The data presented in this study are available on request from the corresponding author.
Park City lectures on elliptic curves over function fields These are the notes from a course of five lectures at the 2009 Park City Math Institute. The focus is on elliptic curves over function fields over finite fields. In the first three lectures, we explain the main classical results (mainly due to Tate) on the Birch and Swinnerton-Dyer conjecture in this context and its connection to the Tate conjecture about divisors on surfaces. This is preceded by a"Lecture 0"on background material. In the remaining two lectures, we discuss more recent developments on elliptic curves of large rank and constructions of explicit points in high rank situations. Introduction These are the notes from a course of five lectures at the 2009 Park City Math Institute. The focus is on elliptic curves over function fields over finite fields. In the first three lectures, we explain the main classical results (mainly due to Tate) on the Birch and Swinnerton-Dyer conjecture in this context and its connection to the Tate conjecture about divisors on surfaces. This is preceded by a "Lecture 0" on background material. In the remaining two lectures, we discuss more recent developments on elliptic curves of large rank and constructions of explicit points in high rank situations. A great deal of this material generalizes naturally to the context of curves and Jacobians of any genus over function fields over arbitrary ground fields. These generalizations were discussed in a course of 12 lectures at the CRM in Barcelona in February, 2010, and will be written up as a companion to these notes, see [Ulm11]. Unfortunately, theorems on unbounded ranks over function fields are currently known only in the context of finite ground fields. Finally, we mention here that very interesting theorems of Gross-Zagier type exist also in the function field context. These would be the subject of another series of lectures and we will not say anything more about them in these notes. It is a pleasure to thank the organizers of the 2009 PCMI for the invitation to speak, the students for their interest, enthusiasm, and stimulating questions, and the "elder statesmen"-Bryan Birch, Dick Gross, John Tate, and Yuri Zarhinfor their remarks and encouragement. Thanks also to Keith Conrad for bringing the fascinating historical articles of Roquette [Roq06] to my attention. Last but not least, thanks are due as well to Lisa Berger, Tommy Occhipinti, Karl Rubin, Alice Silverberg, Yuri Zarhin, and an anonymous referee for their suggestions and T E Xnical advice. Background on curves and function fields This "Lecture 0" covers definitions and notations that are probably familiar to many readers and that were reviewed very quickly during the PCMI lectures. Readers are invited to skip it and refer back as necessary. Terminology Throughout, we use the language of schemes. This is necessary to be on firm ground when dealing with some of the more subtle aspects involving non-perfect ground fields and possibly non-reduced group schemes. However, the instances where we use any hard results from this theory are isolated and students should be able to follow readily the main lines of discussion, perhaps with the assistance of a friendly algebraic geometer. Throughout, a variety over a field F is a separated, reduced scheme of finite type over Spec F . A curve is a variety purely of dimension 1 and a surface is a variety purely of dimension 2. 
Function fields and curves Throughout, p will be a prime number and F_q will denote the field with q elements with q a power of p. We write C for a smooth, projective, and absolutely irreducible curve of genus g over F_q and we write K = F_q(C) for the function field of C over F_q. The most important example is when C = P^1, the projective line, in which case K = F_q(C) = F_q(t) is the field of rational functions in a variable t over F_q. We write v for a closed point of C, or equivalently for an equivalence class of valuations of K. For each such v we write O_(v) for the local ring at v (the ring of rational functions on C regular at v), m_v ⊂ O_(v) for the maximal ideal (those functions vanishing at v), and κ_v = O_(v)/m_v for the residue field at v. The extension κ_v/F_q is finite and we set deg(v) = [κ_v : F_q] and q_v = q^{deg(v)} so that κ_v ≅ F_{q_v}. For example, in the case where C = P^1, the "finite" places of C correspond bijectively to monic irreducible polynomials f ∈ F_q[t]. If v corresponds to f, then O_(v) is the set of ratios g/h where g, h ∈ F_q[t] and f does not divide h. The maximal ideal m_v consists of ratios g/h where f does divide g, and the degree of v is the degree of f as a polynomial in t. There is one more place of K, the "infinite" place v = ∞. The local ring consists of ratios g/h with g, h ∈ F_q[t] and deg(g) ≤ deg(h). The maximal ideal consists of ratios g/h where deg(g) < deg(h) and the degree of v = ∞ is 1. The finite and infinite places of P^1 give all closed points of P^1. We write K^sep for a separable closure of K and let G_K = Gal(K^sep/K). We write F̄_q for the algebraic closure of F_q in K^sep. For each place v of K we have the decomposition group D_v (defined only up to conjugacy), its normal subgroup the inertia group I_v ⊂ D_v, and Fr_v the (geometric) Frobenius at v, a canonical generator of the quotient D_v/I_v ≅ Gal(F̄_q/F_{q_v}) that acts as x ↦ x^{q_v^{-1}} on the residue field at a place w dividing v in a finite extension F ⊂ K^sep unramified over v.

Zeta functions Let X be a variety over the finite field F_q. Extending the notation of the previous section, if x is a closed point of X, we write κ_x for the residue field at x, q_x for its cardinality, and deg(x) for [κ_x : F_q]. We define the Z and ζ functions of X via Euler products:

Z(X, T) = ∏_x (1 − T^{deg(x)})^{−1}   and   ζ(X, s) = Z(X, q^{−s}) = ∏_x (1 − q_x^{−s})^{−1},

where the products are over the closed points of X. It is a standard exercise to show that

Z(X, T) = exp( ∑_{n ≥ 1} N_n T^n / n ),

where N_n is the number of F_{q^n}-valued points of X. It follows from a crude estimate for the number of F_{q^n} points of X that the Euler product defining ζ(X, s) converges in the half plane Re(s) > dim X. If X is smooth and projective, then it is known that Z(X, T) is a rational function of the form

Z(X, T) = ( P_1(T) P_3(T) ⋯ P_{2 dim X − 1}(T) ) / ( P_0(T) P_2(T) ⋯ P_{2 dim X}(T) ),

where P_0(T) = (1 − T), P_{2 dim X}(T) = (1 − q^{dim X} T), and for all 0 ≤ i ≤ 2 dim X, P_i(T) is a polynomial with integer coefficients and constant term 1. We denote the inverse roots of P_i by α_{ij}, so that

P_i(T) = ∏_j (1 − α_{ij} T).

The inverse roots α_{ij} of P_i(T) are algebraic integers that have absolute value q^{i/2} in every complex embedding. (We say that they are Weil numbers of size q^{i/2}.) It follows that ζ(X, s) has a meromorphic continuation to the whole s plane, with poles on the lines Re s ∈ {0, . . . , dim X} and zeroes on the lines Re s ∈ {1/2, . . . , dim X − 1/2}. This is the analogue of the Riemann hypothesis for ζ(X, s). It is also known that the set of inverse roots of P_i(T) (with multiplicities) is stable under α_{ij} ↦ q/α_{ij}.
Thus ζ(X , s) satisfies a functional equation when s is replaced by dim X − s. Thus ζ(C, s) has simple poles for s ∈ 2πi log q Z and s ∈ 1 + 2πi log q Z and its zeroes lie on the line Re s = 1/2. For a fascinating history of the early work on zeta functions and the Riemann hypothesis for curves over finite fields, see [Roq06] and parts I and II of that work. Cohomology Assume that X is a smooth projective variety over k = F q . We write X for X × Fq F q . Note that G k = Gal(F q /F q ) acts on X via the factor F q . Choose a prime ℓ = p. We have ℓ-adic cohomology groups H i (X , Q ℓ ) which are finite-dimensional Q ℓ -vector spaces and which vanish unless 0 ≤ i ≤ 2 dim X . Functoriality in X gives a continuous action of Gal(F q /F q ). Since the geometric Frobenius (Fr q (a) = a q −1 ) is a topological generator of Gal(F q /F q ), the characteristic polynomial of Fr q on H i (X , Q ℓ ) determines the eigenvalues of the action of Gal(F q /F q ); in fancier language, it determines the action up to semi-simplification. An important result (inspired by [Wei49] and proven in great generality in [SGA5]) says that the factors P i of Z(X , t) are characteristic polynomials of Frobenius: (4.1) P i (T ) = det(1 − T Fr q |H i (X , Q ℓ )). From this point of view, the functional equation and Riemann hypothesis for Z(X , T ) are statements about duality and purity. To discuss the connections, we need more notation. Let Z ℓ (1) = lim ← −n µ ℓ n (F q ) and Q ℓ (1) = Z ℓ (1) ⊗ Z ℓ Q ℓ , so that Q ℓ (1) is a one-dimensional Q ℓ -vector space on which Gal(F q /F q ) acts via the ℓ-adic cyclotomic character. More generally, for n > 0 set Q ℓ (n) = Q ℓ (1) ⊗n (n-th tensor power) and Q ℓ (−n) = Hom(Q ℓ (n), Q ℓ ), so that for all n, Q ℓ (n) is a one-dimensional Q ℓ -vector space on which Gal(F q /F q ) acts via the nth power of the ℓ-adic cyclotomic character. We have H 0 (X , Q ℓ ) ∼ = Q ℓ (with trivial Galois action) and H 2 dim X (X , Q ℓ ) ∼ = Q ℓ (dim X ). The functional equation follows from the fact that we have a canonical non-degenerate, Galois equivariant pairing Indeed, the non-degeneracy of this pairing implies that if α is an eigenvalue of Fr q on H i (X , Q ℓ ), then q dim X /α is an eigenvalue of Fr q on H 2 dim X −i (X , Q ℓ ). The Riemann hypothesis in this context is the statement that the eigenvalues of Fr q on H i (X , Q ℓ ) are algebraic integers with absolute value q i/2 in every complex embedding. See [SGA4 1 2 ] or [Mil80] for an overview ofétale cohomology and its connections with the Weil conjectures. Picard and Albanese properties We briefly review two (dual) universal properties of the Jacobian of a curve that we will need. See [Mil86b] for more details. We assume throughout that the curve C has an F q -rational point x, i.e., a closed point with residue field F q . If T is another connected variety over F q with an F q -rational point t, a divisorial correspondence between (C, x) and (T, t) is an invertible sheaf L on C × Fq T such that L| C×t and L| x×T are trivial. Two divisorial correspondences are equal when they are isomorphic as invertible sheaves. Note that the set of divisorial correspondences between (C, x) and (T, t) forms a group under tensor product and is thus a subgroup of Pic(C × T ). We write DivCorr((C, x), (T, t)) ⊂ Pic(C × T ) for this subgroup. One may think of a divisorial correspondence as giving a family of invertible sheaves on C: s → L| C×s . Let J = J C be the Jacobian of C and write 0 for its identity element. 
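As a concrete numerical illustration of these rationality and purity statements in the simplest interesting case (the curve, the prime, and the brute-force method below are illustrative choices, not taken from the text): for an elliptic curve E_0 over F_q the polynomial P_1 has the form P_1(T) = 1 − aT + qT² with a = q + 1 − N_1, and its inverse roots α, ᾱ (Weil numbers of size q^{1/2}) determine all the higher point counts via N_n = q^n + 1 − α^n − ᾱ^n.

# Check N_2 = p^2 + 1 - (alpha^2 + conj(alpha)^2) for the (hypothetical) curve
# E_0 : y^2 = x^3 + x + 3 over F_7, modelling F_49 as F_7[i] with i^2 = -1
# (legitimate because -1 is not a square mod 7).
p, A, B = 7, 1, 3   # the discriminant -16(4A^3 + 27B^2) is nonzero mod 7

def count_over_Fp():
    sq = {}
    for y in range(p):
        sq[y * y % p] = sq.get(y * y % p, 0) + 1
    return 1 + sum(sq.get((x**3 + A * x + B) % p, 0) for x in range(p))  # +1 for infinity

def mul(z, w):                       # multiplication in F_49 = {u + v*i}
    (a, b), (c, d) = z, w
    return ((a * c - b * d) % p, (a * d + b * c) % p)

def count_over_F49():
    F49 = [(u, v) for u in range(p) for v in range(p)]
    sq = {}
    for z in F49:
        s = mul(z, z)
        sq[s] = sq.get(s, 0) + 1
    total = 1                        # the point at infinity
    for x in F49:
        x3 = mul(mul(x, x), x)
        rhs = ((x3[0] + A * x[0] + B) % p, (x3[1] + A * x[1]) % p)  # x^3 + A*x + B
        total += sq.get(rhs, 0)
    return total

N1 = count_over_Fp()
a = p + 1 - N1                       # alpha + conj(alpha) = a, alpha * conj(alpha) = p
predicted_N2 = p**2 + 1 - (a**2 - 2 * p)
print(N1, a, count_over_F49(), predicted_N2)   # the last two numbers agree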
Then J is a g-dimensional abelian variety over F q and it carries the "universal divisorial correspondence with C." More precisely, there is a divisorial correspondence M between (C, x) and (J, 0) such that if S is another connected variety over F q with F q -rational point s and L is a divisorial correspondence between (C, x) and (S, s), then there is a unique morphism φ : S → J sending s to 0 such that L = φ * M. (Of course M depends on the choice of base point x, but we omit this from the notation.) It follows that there is a canonical morphism, the Abel-Jacobi morphism, AJ : C → J sending x to 0. Intuitively, this corresponds to the family of invertible sheaves parameterized by C that sends y ∈ C to O C (y − x). More precisely, let ∆ ⊂ C × C be the diagonal, let and let L = O C×C (D) which is a divisorial correspondence between (C, x) and itself. The universal property above then yields the morphism AJ : C → J. It is known that AJ is a closed immersion and that its image generates J as an algebraic group. The second universal property enjoyed by J (or rather by AJ) is the Albanese property: it is universal for maps to abelian varieties. More precisely, if A is an abelian variety and φ : C → A is a morphism sending x to 0, then there is a unique homomorphism of abelian varieties ψ : J → A such that φ = ψ • AJ. Combining the two universal properties gives a useful connection between correspondences and homomorphisms: Suppose C and D are curves over F q with rational points x ∈ C and y ∈ D. Then we have an isomorphism (5.1.1) DivCorr((C, x), (D, y)) ∼ = Hom(J C , J D ). Intuitively, given a divisorial correspondence on C × D, we get a family of invertible sheaves on D parameterized by C and thus a morphism C → J D . The Albanese property then gives a homomorphism J C → J D . We leave the precise version as an exercise, or see [Mil86b,6.3]. We will use this isomorphism later to understand the Néron-Severi group of a product of curves. The Tate module Let A be an abelian variety of dimension g over F q , for example the Jacobian of a curve of genus g. (See [Mil86a] for a brief introduction to abelian varieties and [Mum08] for a much more complete treatment.) Choose a prime ℓ = p. Let A[ℓ n ] be the set of F q points of A of order dividing ℓ n . It is a group isomorphic to (Z/ℓ n Z) 2g with a linear action of Gal(F q /F q ). We form the inverse limit where the transition maps are given by multiplication by ℓ. Let V ℓ A = T ℓ A ⊗ Z ℓ Q ℓ , a 2g-dimensional Q ℓ -vector space with a linear action of Gal(F q /F q ). It is often called the Tate module of A. According to Roquette, what we now call the Tate module seems to have first been used in print by Deuring [Deu40] as a substitute for homology in his work on correspondences on curves. It appears already in a letter of Hasse from 1935, see [Roq06,p. 36]. The following proposition is the modern interpretation of the connection between homology and torsion points. Proposition 5.2.1. Let A be an abelian variety over a field k and let ℓ be a prime not equal to the characteristic of k. Let V ℓ A be the Tate module of A and (V ℓ A) * its dual as a G k = Gal(k sep /k)-module. • There is a canonical isomorphism of G k -modules • If A is the Jacobian of a curve C over k, then For a proof of part 1, see [Mil86a,15.1] and for part 2, see [Mil86b,9.6]. Exercises 5.2.2. These exercises are meant to make the Proposition more plausible. 
(1) Show that if A(C) is a complex torus C^g/Λ, then the singular homology H_1(A(C), Q_ℓ) is canonically isomorphic to V_ℓ A(C). (Hint: Use the universal coefficient theorem to show that H_1(A(C), Z/ℓ^n Z) ≅ Λ/ℓ^n Λ.) (2) (Advanced) Let C be a smooth projective curve over an algebraically closed field k. Let ℓ be a prime not equal to the characteristic of k. Use geometric class field theory (as in [Ser88]) to show that unramified Galois covers C′ → C equipped with an isomorphism Gal(C′/C) ≅ Z/ℓZ are in bijection with elements of Hom(J_C[ℓ], Z/ℓZ). (Make a convention to deal with the trivial homomorphism.) This suggests that H^1(C, Z/ℓZ) "should be" Hom(J_C[ℓ], Z/ℓZ) and H_1(C, Z/ℓZ) "should be" J_C[ℓ]. The reason we only have "should be" rather than a theorem is that a non-trivial Galois cover C′ → C is never locally constant in the Zariski topology. This is a prime motivation for introducing the étale topology.

Tate's theorem on homomorphisms of abelian varieties As usual, let k be a finite field and let A and B be two abelian varieties over k. Choose a prime ℓ not equal to the characteristic of k and form the Tate modules V_ℓ A and V_ℓ B. Any homomorphism of abelian varieties φ : A → B induces a homomorphism of Tate modules φ_* : V_ℓ A → V_ℓ B and this homomorphism commutes with the action of G_k = Gal(k̄/k) on the Tate modules. We get an induced homomorphism

Hom(A, B) ⊗_Z Q_ℓ → Hom_{G_k}(V_ℓ A, V_ℓ B).

Tate's famous result [Tat66a] asserts that this is an isomorphism:

Theorem 6.1. The map φ ↦ φ_* induces an isomorphism of Q_ℓ-vector spaces:

Hom(A, B) ⊗_Z Q_ℓ ≅ Hom_{G_k}(V_ℓ A, V_ℓ B).

We also mention [Zar08] which gives a different proof and a strengthening with finite coefficients. We will use Tate's theorem in Theorem 12.1 of Lecture 2 to understand the divisors on a product of curves in terms of homomorphisms between their Jacobians.

Elliptic curves over function fields In this lecture we discuss the basic facts about elliptic curves over function fields over finite fields. We assume the reader has some familiarity with elliptic curves over global fields such as Q or number fields, as explained, e.g., in [Sil09], and we will focus on aspects specific to characteristic p. The lecture ends with statements of the main results known about the conjecture of Birch and Swinnerton-Dyer in this context.

Definitions We write k = F_q for the finite field of cardinality q and characteristic p and we let K be the function field of a smooth, projective, absolutely irreducible curve C over k. An elliptic curve over K is a smooth, projective, absolutely irreducible curve of genus 1 over K equipped with a K-rational point O that will serve as the origin of the group law. All the basic geometric facts, e.g., of [Sil09, Ch. III and App. A], continue to hold in the context of function fields. We review a few of them to establish notation, but will not enter into full details. Using the Riemann-Roch theorem, an elliptic curve E over K can always be presented as a projective plane cubic curve defined by a Weierstrass equation, i.e., by an equation of the form

(1.1.1) Y^2 Z + a_1 XYZ + a_3 YZ^2 = X^3 + a_2 X^2 Z + a_4 XZ^2 + a_6 Z^3,

where a_1, . . . , a_6 ∈ K. The origin O is the point at infinity [0 : 1 : 0]. We often give the equation in affine form:

(1.1.2) y^2 + a_1 xy + a_3 y = x^3 + a_2 x^2 + a_4 x + a_6,

where x = X/Z and y = Y/Z. The quantities b_2, . . . , b_8, c_4, c_6, ∆, j are defined by the usual formulas ([Sil09, III.1] or [Del75]). Since E is smooth, by the following exercise ∆ ≠ 0. Remark/Exercises 1.1.3. The word "smooth" in the definition of an elliptic curve means that the morphism E → Spec K is smooth.
Smoothness of a morphism can be tested via the Jacobian criterion (see, e.g., [Har77,III.10.4] or [Liu02,4.3.3]). Show that the projective plane cubic (1.1.1) is smooth if and only if ∆ = 0. Because the ground field K is not perfect, smoothness is strictly stronger than the requirement that E be regular, i.e., that its local rings be regular local rings (cf. [Liu02,4.2.2]). For example, show that the projective cubic defined by Y 2 Z = X 3 − tZ 3 over K = F p (t) with p = 2 or 3 is a regular scheme, but is not smooth over K. Definitions 1.1.4. Let E be an elliptic curve over K. ( Equivalently, E is constant if it can be defined by a Weierstrass cubic (1.1.1) where the a i ∈ k. (2) We say E is isotrivial if there exists a finite extension K ′ of K such that E becomes constant over K ′ . Note that a constant curve is isotrivial. Suppose that E is isotrivial, so that E becomes constant over a finite extension K ′ and let k ′ be the field of constants of K ′ (the algebraic closure of k in K ′ ). A priori, the definition of isotrivial says that there is an elliptic curve Show that we may take K ′ to have field of constants k and E 0 to be defined over k. Show also that we may take K ′ to be separable and of degree dividing 24 over K. Exercise 1.1.6. For any elliptic curve E over K, the functor on K-algebras L → Aut L (E × L) is represented by a group scheme Aut(E). (Concretely, this means there is a group scheme Aut(E) such that for any K-algebra L, Aut L (E × L) is Aut(E)(L), the group of L-valued points of Aut(E).) Show that Aut(E) is anétale group scheme. Equivalently, show that any element of Aut K (E) is defined over a separable extension of K. (This is closely related to the previous exercise.) Examples Let K = F p (t) with p > 3 and define elliptic curves Then E 1 ∼ = E 2 over K and both are constant, E 3 is isotrivial and non-constant, whereas E 4 is non-isotrivial. For more examples, let K = F p (t) (with p restricted as indicated) and define Then E 5 and E 6 are isotrivial and non-constant whereas E 7 , E 8 , and E 9 are nonisotrivial. Frobenius If X is a scheme of characteristic p, we define the absolute Frobenius morphism Fr X : X → X as usual: It is the identity on the underlying topological space and raises functions to the p-th power. When X = Spec K, Fr X is just the map of schemes induced by the ring homomorphism K → K, a → a p . Suppose as usual that K is a function field and let E be an elliptic curve over K. Define a new elliptic curve E (p) over K by the fiber product diagram: More concretely, if E is presented as a Weierstrass cubic as in equation (1.1.2), then E (p) is given by the equation with a i replaced by a p i . The universal property of the fiber product gives a canonical morphism Fr E/K , the relative Frobenius: In terms of Weierstrass equations for E and E (p) as above, it is just the map (x, y) → (x p , y p ). It is evident that Fr E/K is an isogeny, i.e., a surjective homomorphism of elliptic curves, and that its degree is p. We define V = V E/K to be the dual isogeny, so that V E/K • Fr E/K = [p], multiplication by p on E. Note that j(E (p) ) = j(E) p so that if E is non-isotrivial, E and E (p) are not isomorphic. Thus, using Frobenius and its iterates, we see that there are infinitely many non-isomorphic elliptic curves isogenous to any non-isotrivial E. This is in marked contrast to the situation over number fields (cf. [Fal86]). Lemma 2.1. Let E be an elliptic curve over K. 
Then j(E) is a p-th power in K if and only if there exists an elliptic curve E ′ over K such that E ∼ = E ′(p) . Proof. We sketch a fancy argument and pose as an exercise a more down-toearth proof. Obviously if there is an E ′ with E ∼ = E ′(p) , then j(E) = j(E ′(p) ) = j(E ′ ) p ∈ K p . Conversely, suppose j(E) ∈ K p and choose an elliptic curve E ′′ such that j(E ′′ ) p = j(E). It follows that E ′′(p) is isomorphic to E over a finite separable extension of K. In other words, E is the twist of E ′′(p) by a cocycle in H 1 (G K , Aut K sep (E ′′(p) )). But there is a canonical isomorphism Aut K sep (E ′′(p) ) ∼ = Aut K sep (E ′′ ) and twisting E ′′ by the corresponding element of Exercise 2.2. Use explicit equations, as in [Sil09, Appendix A], to prove the lemma. The Hasse invariant Let F be a field of characteristic p and E an elliptic curve over F . Let O E be the sheaf of regular functions on E and let Ω 1 E be the sheaf of Kähler differentials on E. The coherent cohomology group H 1 (E, O E ) is a one-dimensional F -vector space and is Serre dual to the space of invariant differentials H 0 (E, Ω 1 E ). Choose a non-zero differential ω ∈ H 0 (E, Ω 1 E ) and let η be the dual element of H 1 (E, O E ). The absolute Frobenius Fr E induces a (p-linear) homomorphism: Suppose E is given by a Weierstrass equation (1.1.2) and ω = dx/(2y+a 1 x+a 3 ). If p = 2, then A(E, ω) = a 1 . If p > 2, choose an equation with a 1 = a 3 = 0. Then A(E, ω) = the coefficient of x p−1 in (x 3 + a 2 x 2 + a 4 x + a 6 ) (p−1)/2 . These assertions follow from [KM85,12.4] where several other calculations of A are also presented. Recall that E/K is ordinary if the group of p-torsion points E(K)[p] = 0 and supersingular otherwise. It is known that E is supersingular if and only if A(E, ω) = 0 (e.g., [KM85,12.3.6 and 12.4]) and in this case j(E) ∈ F p 2 (e.g., [KM85, proof of 2.9.4]). (Alternatively, one may apply [Sil09,V.3.1] to E over K.) In particular, if E is supersingular, then it must also be isotrivial. Endomorphisms The classification of endomorphism rings in [Sil09, III.9] goes over verbatim to the function field case: End K (E) is either Z, an order in an imaginary quadratic number field, or an order in a quaternion algebra over Q ramified exactly at ∞ and p. The quaternionic case occurs if and only if E is supersingular, and the imaginary quadratic case occurs if and only if j(E) is in F p and E is not supersingular ([Sil09, V.3.1 and Exer. V.5.8]). In particular, if E is non-isotrivial, then End K (E) = End K (E) = Z. The Mordell-Weil-Lang-Néron theorem We write E(K) for the group of K-rational points of E and we call E(K) the Mordell-Weil group of E over K. Lang and Néron (independently) generalized the classical Mordell-Weil theorem to the function field context: Theorem 5.1. Assume that K = F q (C) is the function field of a curve over a finite field and let E be an elliptic curve over K. Then E(K) is a finitely generated abelian group. (The theorems of Lang and Néron apply much more generally to any abelian variety A over a field K that is finitely generated over its "constant field" k, but one has to take care of the "constant part" of A. See [Ulm11] for details.) We will not give a detailed proof of the MWLN theorem here, but will mention two strategies. One is to follow the method of proof of the Mordell-Weil (MW) theorem over a number field. Choose a prime number ℓ = p. By an argument very similar to that in [Sil09, Ch. 
VIII] one can show that E(K)/ℓE(K) is finite (the "weak Mordell-Weil theorem") by embedding it in a Selmer group and showing that the Selmer group is finite by using the two fundamental finiteness results of algebraic number theory (finiteness of the class group and finite generation of the unit group) applied to Dedekind domains in K. One can then introduce a theory of heights exactly as in [Sil09] and show that the MW theorem follows from the weak MW theorem and finiteness properties of heights. See the original paper of Lang and Néron [LN59] for the full details. A complete treatment in modern language has been given by Conrad [Con06]. One interesting twist in the function field setting comes if one takes ℓ = p above. It is still true that the Selmer group for p is finite, but one needs to use the local restrictions at all places; the maximal abelian extension of exponent p unramified outside a finite but non-empty set of places is not finite and so one needs some control on ramification at every place. See [Ulm91] for a detailed account of pdescent in characteristic p. A second strategy of proof, about which we will say more in Lecture 3, involves relating the Mordell-Weil group of E to the Néron-Severi group of a closely related surface E. In fact, finite generation of the Néron-Severi group (known as the "theorem of the base") is equivalent to the Lang-Néron theorem. A direct proof of the theorem of the base was given by Kleiman in [SGA6,XIII]. See also [Mil80,V.3.25]. The constant case It is worth pausing in the general development to look at the case of a constant curve E. Recall that K is the function field k(C) of the curve C over k = F q . Suppose E 0 is an elliptic curve over k and let E = E 0 × k K. Proposition 6.1. We have a canonical isomorphism where Mor k denotes morphisms of varieties over k (=morphisms of k-schemes). Under this isomorphism, E(K) tor corresponds to the subset of constant morphisms. Proof. By definition, E(K) is the set of K-morphisms By the universal property of the fiber product, these are in bijection with kmorphisms Spec K → E 0 . Since C is a smooth curve, any k-morphism Spec K → E 0 extends uniquely to a k-morphism C → E 0 . This establishes a map E(K) → Mor k (C, E 0 ). If η : Spec K → C denotes the canonical inclusion, composition with η (φ → φ • η) induces a map Mor k (C, E 0 ) → E(K) inverse to the map above. This establishes the desired bijection and this bijection is obviously compatible with the group structures. Since k is finite, it is clear that a constant morphism goes over to a torsion point. Conversely, if P ∈ E(K) is torsion, say of order n, then the image of the corresponding φ : C → E 0 must lie in the set of n-torsion points of E 0 , a discrete set, and this implies that φ is constant. Corollary 6.2. Let J C be the Jacobian of C. We have canonical isomorphisms Proof. The Albanese property of the Jacobian of C (Subsection 5.1 of Lecture 0) gives a surjective homomorphism This homomorphism sends non-constant (and therefore surjective) morphisms to non-constant (surjective) homomorphisms, so its kernel consists exactly of the constant morphisms. The second isomorphism in the statement of the corollary follows from the fact that Jacobians are self-dual. By Poincaré complete reducibility [Mil86a, 12.1], J C is isogenous to a product of simple abelian varieties. Suppose J C is isogenous to E m 0 × A and A admits no non-zero morphisms to E 0 . We say that "E 0 appears in J C with multiplicity m." 
Then it is clear from the corollary that E(K)/E(K) tor ∼ = End k (E 0 ) m and so the rank of E(K) is m, 2m, or 4m. Tate and Shafarevich used these ideas to exhibit isotrivial elliptic curves over F = F p (t) of arbitrarily large rank. Indeed, using Tate's theorem on isogenies of abelian varieties over finite fields (reviewed in Section 6 of Lecture 0) and a calculation of zeta functions in terms of Gauss sums, they were able to produce a hyperelliptic curve C over F p whose Jacobian is isogenous to E m 0 × A where E 0 is a supersingular elliptic curve and the multiplicity m is as large as desired. If K = F p (C), E is the constant curve E = E 0 × F , and E ′ is the twist of E by the quadratic extension K/F , then Rank E ′ (F ) = Rank E(K) and so E ′ (F ) has large rank by the analysis above. See the original article [TS67] for more details and a series of articles by Elkies (starting with [Elk94]) for a beautiful application to the construction of lattices with high packing densities. Torsion An immediate corollary of the MWLN theorem is that E(K) tor is finite. In fact, E(K) tor is isomorphic to a group of the form where m divides n and p does not divide m. (See for example [Sil09,Ch. 3].) One can also see using the theory of modular curves that every such group appears for a suitable K and E. In another direction, one can give uniform bounds on torsion that depend only on crude invariants of the field K. Indeed, in the constant case, E(K) tor ∼ = E 0 (F q ) which has order bounded by (q 1/2 + 1) 2 . In the isotrivial case, there is a finite extension K ′ with the same field of constants k = F q over which E becomes constant. Thus E(K) tor ⊂ E(L) tor again has cardinality bounded by (q 1/2 + 1) 2 . We now turn to the non-isotrivial case. Proposition 7.1. Assume that E is non-isotrivial and let g C be the genus of C. Then there is a finite (and effectively calculable) list of groups-depending only on g C and p-such that for any non-isotrivial elliptic curve E over K, E(K) tor appears on the list. Proof. (Sketch) First consider the prime-to-p torsion subgroup of E(K). It has the form G = Z/mZ × Z/nZ where m|n and p | m. There is a modular curve X(m, n), irreducible and defined over F p (µ m ), that is a coarse moduli space for elliptic curves with subgroups isomorphic to G. We get a morphism C → X(m, n) which is non-constant (because E is non-isotrivial) and therefore surjective. The Riemann-Hurwitz formula then implies that g C ≥ g X(m,n) . But the genus of X(m, n) goes to infinity with n. Indeed, g X(m,n) ≥ g X(1,n) and standard genus formulae ([Miy06, 4.2]) together with crude estimation show that the latter is bounded below by This shows that for a fixed value of g C , only finitely many groups G as above can appear as E(K) tor . The argument for p-torsion is similar, except that ones uses the Igusa curves Ig(p n ) (cf. [KM85,Ch. 12]). If E(K) has a point of order p n , we get a nonconstant morphism C → Ig(p n ) and the genus of Ig(p n ) is asymptotic to p 2n /48 [Igu68,p. 96]. This proposition seems to have been rediscovered repeatedly over the years. The first reference I know of is [Lev68]. Since the genus of a function field is an analogue of the discriminant (more precisely q 2g−2 is an analogue of the absolute value of the discriminant of a number field), the proposition is an analogue of bounding E(K) tor in terms of the discriminant of a number field K. 
One could ask for a strengthening where torsion is bounded by "gonality", i.e., by the smallest degree of a non-constant map C → P 1 . This would be an analogue of bounding E(K) tor in terms of the degree of a number field K, as in the theorems of Mazur, Kamienny, and Merel [Mer96]. This is indeed possible and can be proven by mimicking the proof of the proposition, replacing bounds on the genus of the modular curve with bounds on its gonality. See [Poo07] for the best results currently known on gonality of modular curves. Exercise 7.2. Compute the optimal list mentioned in the proposition for g = 0. (This is rather involved.) Note that the optimal list in fact depends on p. Indeed, Z/11Z is on the list if and only if p = 11. One can be very explicit about p-torsion: Proposition 7.3. Suppose that E is a non-isotrivial elliptic curve over K. Then E(K) has a point of order p if and only if j(E) ∈ K p and A(E, ω) is a (p − 1)st power in K × . Note that whether A(E, ω) is a (p − 1)st power is independent of the choice of the differential ω. Now suppose that P ∈ E(K) is a non-trivial p-torsion point. Then Fr(P ) is a non-trivial p-torsion point in E (p) (K) and so A(E, ω) is a (p − 1)st power in K. Let E ′ be the quotient of E by the cyclic subgroup generated by P : E ′ = E/ P . Since P is in the kernel of multiplication by p, we have a factorization of multiplication by p: Since E → E ′ isétale of degree p and [p] is inseparable of degree p 2 , we have that E ′ → E is purely inseparable of degree p. But an elliptic curve in characteristic p has a unique inseparable isogeny of degree p (namely the quotient by the unique connected subgroup of order p, the kernel of Frobenius) so we have an identification E = E ′(p) . By 2.1, j(E) ∈ K p . Conversely, suppose A(E, ω) is a (p − 1)st power and j(E) ∈ K p . Let E ′ be the elliptic curve such that E ′(p) ∼ = E. Given a differential ω on E, there is a differential ω ′ on E ′ such that A(E, ω) = A(E ′ , ω ′ ) p (as can be seen for example by using Weierstrass equations). It follows that A(E ′ , ω ′ ) is also a (p − 1)st power in K. Thus we have a non-trivial point of order p in E ′(p) (K) = E(K). Part of the proposition generalizes trivially by iteration: if E(K) has a point of order p n , then j(E) ∈ K p n . A full characterization of p n torsion seems harderthe condition that A(E, ω) be a (p − 1)st power is closely related to the equations defining the Igusa curve Ig(p) ([KM85, 12.8]), but we do not have such explicit equations for Ig(p n ) when n > 1. Local invariants Let E be an elliptic curve over K and let v be a place of K. A model (1.1.2) for E with coefficients in the valuation ring O (v) is said to be integral at v. The valuation of the discriminant ∆ of an integral model is a non-negative integer and so there are models where this valuation takes its minimum value. Such models are minimal integral models at v. Choose a model for E that is minimal integral at v: Let a i ∈ κ(v) be the reductions of the coefficients and let E v be the plane cubic (8.1) y 2 + a 1 xy + a 3 y = x 3 + a 2 x 2 + a 4 x + a 6 over the residue field κ v . It is not hard to check using Weierstrass equations that the isomorphism type of the reduced cubic (8.1) is independent of the choice of minimal model. If the discriminant of a minimal integral model at v has valuation zero, i.e., is a unit at v, then the reduced equation defines an elliptic curve over κ v . If the minimal valuation is positive, then the reduced curve is singular. 
We distinguish three cases according to the geometry of the reduced curve. (1) If E_v is a smooth cubic, we say E has good reduction at v. (2) If E_v is a nodal cubic, we say E has multiplicative reduction at v. If the tangent lines at the node are rational over κ(v) we say the reduction is split multiplicative and if they are rational only over a quadratic extension, we say the reduction is non-split multiplicative. (3) If E_v is a cuspidal cubic, we say E has additive reduction. Define an integer a_v as follows:

a_v = q_v + 1 − #E_v(κ_v) if E has good reduction at v,
a_v = 1 if E has split multiplicative reduction at v,
a_v = −1 if E has non-split multiplicative reduction at v,
a_v = 0 if E has additive reduction at v.

Exercise 8.4. To make this definition less ad hoc, note that in the good reduction case, the numerator of the ζ-function of the reduced curve is 1 − a_v q_v^{−s} + q_v^{1−2s}. Show that in the bad reduction cases, the ζ-function of the reduced curve is

(1 − a_v q_v^{−s}) / ( (1 − q_v^{−s})(1 − q_v^{1−s}) ).

In the good reduction case, the results about zeta functions and étale cohomology reviewed in Lecture 0, Sections 3 and 4 imply the "Hasse bound": |a_v| ≤ 2 q_v^{1/2}. There are two more refined invariants in the bad reduction cases: the Néron model and the conductor. The local exponent of the conductor at v, denoted n_v, is defined as

n_v = 0 if E has good reduction at v,   n_v = 1 if E has multiplicative reduction at v,   n_v = 2 + δ_v if E has additive reduction at v.

Here δ_v is a non-negative integer that is 0 when p > 3 and may be positive when p = 2 or 3. We refer to [Tat75] for a definition and an algorithm to compute δ_v. The conductor of E is the divisor n = ∑_v n_v · v. The Néron model will be discussed in Lecture 3 below. Exercise 8.6. Mimic [Sil09, Ch. VII] to define a filtration on the points of E over a completion K_v of K. Show that the prime-to-p part of E(K)_tor maps injectively into the special fiber of the Néron model of E at v. As in the classical case, this gives an excellent way to bound the prime-to-p part of E(K)_tor.

The L-function We define the L-function of E/K as an Euler product:

L(E, T) = ∏_{good v} (1 − a_v T^{deg(v)} + q_v T^{2 deg(v)})^{−1} ∏_{bad v} (1 − a_v T^{deg(v)})^{−1},

and L(E, s) = L(E, q^{−s}). (Here T is a formal indeterminate and s is a complex number. Unfortunately, there is no standard reasonable parallel of the notations Z and ζ to distinguish the function of T and the function of s.) Because of the Hasse bound on the size of a_v, the product converges absolutely in the region Re s > 3/2, and as we will see below, it has a meromorphic continuation to all s. When E is constant it is elementary to calculate L(E, s) in terms of the zeta-functions of E_0 and C. Exercise 9.2. Suppose that E = E_0 ×_k K. Write the ζ-functions of E_0 and C as rational functions:

Z(E_0, T) = ( (1 − αT)(1 − βT) ) / ( (1 − T)(1 − qT) ),   Z(C, T) = ( ∏_{j=1}^{2g_C} (1 − γ_j T) ) / ( (1 − T)(1 − qT) ).

Prove that L(E, T) = Z(C, αT) Z(C, βT). Thus L(E, s) is a rational function in q^{−s} of degree 4g_C − 4, it extends to a meromorphic function of s, and it satisfies a functional equation for s ↔ 2 − s. Its poles lie on the lines Re s = 1/2 and Re s = 3/2 and its zeroes lie on the line Re s = 1. Although the proofs are much less elementary, these facts extend to the non-constant case as well: Theorem 9.3. Suppose that E is a non-constant elliptic curve over K. Let n be the conductor of E. Then L(E, s) is a polynomial in q^{−s} of degree N = 4g_C − 4 + deg n, it satisfies a functional equation for s ↔ 2 − s, and its zeroes lie on the line Re s = 1. More precisely,

L(E, s) = ∏_{i=1}^{N} (1 − α_i q^{−s}),

where each α_i is an algebraic integer of absolute value q in every complex embedding. The theorem is a combination of results of Grothendieck, Deligne, and others. We will sketch a proof of it in Lecture 4. Note that in all cases L(E, s) is holomorphic at s = 1. In the non-constant case, its order of vanishing at s = 1 is bounded above by N and it equals N if and only if L(E, s) = (1 − q^{1−s})^N.
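To make the Euler product concrete, here is a small computation for a hypothetical curve E : y² = x³ + tx + 1 over K = F_5(t), restricted to the degree-one places v = (t − c) (so q_v = 5 and κ_v = F_5); places of higher degree and the place at infinity would be handled the same way over the larger residue fields. Both the curve and the restriction to these places are illustrative choices.

p = 5   # q_v at a degree-one place of F_5(t)

def num_points(a4, a6):
    """#E_v(F_p) for y^2 = x^3 + a4*x + a6, counting the point at infinity."""
    sq = {}
    for y in range(p):
        sq[y * y % p] = sq.get(y * y % p, 0) + 1
    return 1 + sum(sq.get((x**3 + a4 * x + a6) % p, 0) for x in range(p))

for c in range(p):                       # the place v = (t - c); reduction has a4 = c, a6 = 1
    if (-16 * (4 * c**3 + 27)) % p == 0:
        print(f"t = {c}: bad reduction (Delta = 0); a_v depends on the reduction type")
        continue
    a_v = p + 1 - num_points(c, 1)
    print(f"t = {c}: a_v = {a_v}, local factor (1 - {a_v}*T + 5*T^2)^-1, "
          f"Hasse bound ok: {abs(a_v) <= 2 * p**0.5}")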
The basic BSD conjecture

This remarkable conjecture connects the analytic behavior of the function L(E, s), constructed from local data, to the Mordell-Weil group, a global invariant. It asserts that

ord_{s=1} L(E, s) = Rank E(K).

The original conjecture was stated only for elliptic curves over Q [BSD65] but it is easily seen to make sense for abelian varieties over global fields. There is very strong evidence in favor of it, especially for elliptic curves over Q and abelian varieties over function fields. See [Gro10, Lecture 3, §4] for a summary of the best theoretical evidence in the number field case. We will discuss what is known for elliptic curves in the function field case later in this course. See Section 12 for statements of the main results and [Ulm11] for a discussion of the case of higher dimensional abelian varieties over function fields.

The Tate-Shafarevich group

We define the Tate-Shafarevich group of E over K as

Ш(E/K) = ker ( H^1(K, E) → ∏_v H^1(K_v, E) ).

Here the cohomology groups can be taken to be Galois cohomology groups: H^1(K, E) = H^1(Gal(K^sep/K), E(K^sep)) and similarly for H^1(K_v, E); or they can be taken as étale or flat cohomology groups of Spec K with coefficients in the sheaf associated to E. The flat cohomology definition is essential for proving finer results on p-torsion in Ш(E/K).

Exercise 11.1. Show that the group H^1(K, E) (and therefore Ш(E/K)) is torsion. Hint: Show that given a class c ∈ H^1(K, E), there is a finite Galois extension L/K such that c vanishes in H^1(L, E).

The refined BSD conjecture relates the leading coefficient of L(E, s) at s = 1 to invariants of E including heights, Tamagawa numbers, and the order of Ш(E/K). In particular, the conjecture that Ш(E/K) is finite is included in the refined BSD conjecture. We will not discuss that conjecture in these lectures, so we refer to [Gro10] and [Ulm11] for more details.

Statements of the main results

Much is known about the BSD conjecture over function fields. We start with general results.

Theorem 12.1. Let E be an elliptic curve over a function field K. Then we have:
(1) Rank E(K) ≤ ord_{s=1} L(E, s).
(2) The following are equivalent:
• Rank E(K) = ord_{s=1} L(E, s);
• Ш(E/K) is finite;
• for some prime ℓ (ℓ = p is allowed), the ℓ-primary part of Ш(E/K) is finite.
(3) If K′/K is a finite extension and if the BSD conjecture holds for E over K′, then it holds for E over K.

The theorem was proven by Tate [Tat66b] and Milne [Mil75] and we will sketch a proof in Lecture 3. When the equivalent conditions of Item 2 hold, it turns out that the refined BSD conjecture automatically follows. (This is also due to Tate and Milne and will be discussed in detail in [Ulm11].)

We now state several special cases where the conjecture is known to be true. As will be seen in the sequel, they all ultimately reduce either to Tate's theorem on isogenies of abelian varieties over finite fields (Theorem 6.1 of Lecture 0) or to a theorem of Artin and Swinnerton-Dyer on K3 surfaces [ASD73].

Theorem 12.2. If E is an isotrivial elliptic curve over a function field K, then the BSD conjecture holds for E.

Recall that a constant curve is also isotrivial.

To state the next result, we make an ad hoc definition. If E is an elliptic curve over K = F_q(t) we define the height h of E to be the smallest non-negative integer such that E can be defined by a Weierstrass equation (1.1.1) where the a_i are all polynomials and deg(a_i) ≤ h · i. For example, the curves E_1 and E_2 in Subsection 1.2 have height h = 0 and the other curves E_3, . . ., E_9 there all have height h = 1. See Section 4 of Lecture 3 below for a more general definition.

Theorem 12.3. Suppose that K = k(t) and that E is an elliptic curve over K of height h ≤ 2. Then the BSD conjecture holds for E.
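For a fixed polynomial model, the smallest h with deg(a_i) ≤ h · i for all i is simply the maximum over i of ⌈deg(a_i)/i⌉; the height of E is the minimum of this quantity over all such models, so a single model only gives an upper bound. A minimal sketch of this bookkeeping, with hypothetical coefficient degrees:

# Smallest h with deg(a_i) <= h*i for a GIVEN polynomial Weierstrass model over F_q(t).
# (The height of E is the minimum of this over all models; this is only an upper bound.)
import math

def height_of_model(deg_a):
    """deg_a maps the index i of a non-zero coefficient a_i to deg(a_i)."""
    h = 0
    for i, d in deg_a.items():
        h = max(h, math.ceil(d / i))
    return h

# Hypothetical example: y^2 = x^3 + a_4(t) x + a_6(t) with deg a_4 = 3, deg a_6 = 5.
print(height_of_model({4: 3, 6: 5}))   # prints 1, since 3 <= 1*4 and 5 <= 1*6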
Note that this case overlaps the preceding one since an elliptic curve over k(t) is constant if and only if its height is zero (cf. Proposition 4.1 in Lecture 3).

The following case is essentially due to Shioda [Shi86]. To state it, consider a polynomial f in three variables with coefficients in k which is the sum of exactly 4 non-zero monomials, say

f = ∑_{i=1}^{4} c_i x_1^{e_{i1}} x_2^{e_{i2}} x_3^{e_{i3}},

where the c_i ∈ k are non-zero. Set e_{i4} = 1 − ∑_{j=1}^{3} e_{ij} and let A be the 4 × 4 integer matrix A = (e_{ij}). If det A ≢ 0 (mod p), we say that f satisfies Shioda's condition. Note that the condition is independent of the order of the variables x_j.

Theorem 12.4. Suppose that K = k(t) and that E is an elliptic curve over K defined by a Weierstrass equation which, viewed as a polynomial f in the three variables t, x, y, is the sum of exactly 4 non-zero monomials and which satisfies Shioda's condition. Then the BSD conjecture holds for E.

For example, the theorem applies to the curves E_4, E_7, E_8, and E_9 of Subsection 1.2 over K = F_q(t) for any prime power q. It applies more generally to these curves when t is replaced by t^d for any d prime to p. Note that when d is large, the height of the curve is also large, and so we get cases of BSD not covered by Theorem 12.3.

Finally we state another more recent and ultimately much more flexible special case due to Lisa Berger [Ber08].

Theorem 12.5. Suppose that K = k(t) and that E is an elliptic curve over K. Suppose that E is birational to a plane curve of the form f(x) = t^d g(y), where f and g are rational functions of one variable and d is prime to p. Then the BSD conjecture holds for E.

Here one should clear denominators to interpret the equation f = t^d g (or work in a Zariski open subset of the plane). For example, if f(x) = x(x − 1) and g(y) = y^2/(1 − y) then we have the plane curve over K = k(t) defined by x(x − 1) = t^d y^2/(1 − y), which turns out to be birational to an elliptic curve in Weierstrass form.

The rest of the course

The remainder of these lectures will be devoted to sketching the proofs of most of the main results and applying them to construct elliptic curves of large rank over function fields. More precisely, in Lecture 2 we will review facts about surfaces and the Tate conjecture on divisors. This is a close relative of the BSD conjecture. In Lecture 3 we will explain the relationship between the BSD and Tate conjectures and use it to prove the part of Theorem 12.1 related to ℓ ≠ p as well as most of the other theorems stated in the previous section. In Lecture 4 we will recall a general result on vanishing of L-functions in towers and combine it with the results above to obtain many elliptic curves of arbitrarily large rank. In Lecture 5, we will give other applications of these ideas to ranks of elliptic curves and explicit points.

Motivation

Consider an elliptic curve E/K and suppose that K = k(t) and that we choose a Weierstrass equation for E. With a little more work (discussed in the next lecture), for any E over K = k(C) we can define a smooth projective surface E over k with a morphism π : E → C whose generic fiber is E. Obviously there will be a close connection between the arithmetic of the surface E and that of the curve E. Although the surface has higher dimension than E, it is defined over the finite field k and as a result we have better control over its arithmetic. Pursuing this line of inquiry leads to the main theorems stated at the end of the previous section. In this lecture, we discuss the relevant facts and conjectures about surfaces over finite fields. In the next lecture we will look carefully at the connections between the surface and the curve E and deduce the main classical theorems.
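Before turning to surfaces, here is a small computational illustration of Shioda's condition from Theorem 12.4 above. The exponent matrix A and its determinant mod p can be checked mechanically; the 4-nomial used below (the hypothetical equation y^2 = x^3 + t x + t, with (x_1, x_2, x_3) = (t, x, y)) is chosen only to show the bookkeeping and is not one of the curves from Subsection 1.2.

# Checking Shioda's condition: build A = (e_ij) with e_i4 = 1 - (e_i1 + e_i2 + e_i3)
# and test det A mod p.  The example exponent vectors are hypothetical.

def det_int(M):
    """Exact determinant of a small integer matrix by Laplace expansion along the first row."""
    n = len(M)
    if n == 1:
        return M[0][0]
    total = 0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        total += (-1) ** j * M[0][j] * det_int(minor)
    return total

def shioda_condition(exponents, p):
    """exponents: four triples (e_i1, e_i2, e_i3). True iff det A is non-zero mod p."""
    A = [list(e) + [1 - sum(e)] for e in exponents]
    return det_int(A) % p != 0

# Monomials y^2, x^3, t*x, t in the variables (t, x, y):
exps = [(0, 0, 2), (0, 3, 0), (1, 1, 0), (1, 0, 0)]
for p in (2, 3, 5):
    print(p, shioda_condition(exps, p))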
There are many excellent references for the general theory of surfaces, including [Bea96], [BHPV04], and [Bȃd01]. We generally refer to [Bȃd01] below since it works throughout over a field of arbitrary characteristic. Surfaces Let k = F q be a finite field of characteristic p. As always, by a surface over k we mean a purely 2-dimensional, separated, reduced scheme of finite type over k. Such a scheme is automatically quasi-projective and is projective if and only if it is complete [Bȃd01, 1.28]. Since k is perfect, a surface X is a regular scheme if and only if X → Spec k is a smooth morphism (e.g., [Liu02,4.3.3, Exer. 3.24]). We sloppily say that "X is smooth" if these conditions hold. Resolution of singularities is known for surfaces: For any surface X , there is a proper birational morphism X → X withX smooth. (We may even take this morphism to be a composition of normalizations and blow ups at closed points [Lip78]. See also [Art86] for a nice exposition.) Therefore, every surface is birational to a smooth projective surface. In the cases of interest to us, this can be made very explicit in an elementary manner. Throughout we assume that X is a smooth, projective, absolutely irreducible surface over k and we assume that X (k) is non-empty, i.e., X has a k-rational point. Divisors and the Néron-Severi group We give a lightning review of divisors and equivalence relations on divisors. See, for example, [Har77, V.1] for more details. Divisor classes In other words, the set of divisors is the free abelian group on the reduced, closed, codimension 1 subvarieties on X . If Z is a reduced, closed subvariety of X of codimension 1, there is an associated valuation ord Z : k(X ) × → Z that sends a rational function to its order of zero or pole along Z. A rational function f on X has a divisor: The group of divisors modulo those linear equivalent to zero is the divisor class group DivCl(X ). It is a fundamental invariant of X . The Picard group Let Pic(X ) be the Picard group of X , i.e., the group of isomorphism classes of invertible sheaves on X with group law given by the tensor product. There is a cohomological calculation of Pic(X ): . The map sending a divisor D to the invertible sheaf O X (D) induces an isomorphism DivCl(X )→ Pic(X ). The Néron-Severi group As usual, we write X for X × k k. We first introduce the notion of algebraic equivalence for divisors on X . Intuitively, two divisors D and D ′ are algebraically equivalent if they lie in a family parameterized by a connected variety (which we may take to be a smooth curve). More precisely, if T is a smooth curve over k and D ⊂ X × k T is a divisor that is flat over T , then we get a family of divisors on X parameterized by T : t ∈ T corresponds to X × {t} ∩ D. Two divisors D 1 and D 2 on X are algebraically equivalent if they lie in such a family, i.e., if there is a curve T and a divisor D as above and two points t 1 and t 2 ∈ T (k) such that D i = X × k {t i } ∩ D. (A priori, to ensure transitivity of this relation we should use chains of equivalences (see [Har77, Exer. V.1.7]) but see [Ful84,10.3.2] for an argument that shows the definition works as is.) Note that linear equivalence is algebraic equivalence where T is restricted to be P 1 ([Har77, Exer. V.1.7]) and so algebraic equivalence is weaker than linear equivalence. The group of divisors on X modulo those algebraically equivalent to zero is the Néron-Severi group NS(X ). A classical (and difficult) theorem, the "theorem of the base," says that NS(X ) is finitely generated. 
See [LN59] and [SGA6, XIII.5.1] for proofs and Lecture 3 below for more discussion. See also [Con06] for a modern discussion of the results in [LN59]. Since linear equivalence is weaker than algebraic equivalence, NS(X ) is a quotient of Pic(X ). We define NS(X ) to be the image of Div(X ) in NS(X ) or equivalently the image of Pic(X ) in NS(X ). Thus NS(X ) is again a finitely generated abelian group. As we will see, it is of arithmetical nature. Exercise 3.3.1. Let G k = Gal(k/k). Show that NS(X ) is the group of invariants NS(X ) G k . You will need to use that k is a finite field. The Picard scheme We define Pic 0 (X ) as the kernel of the surjection Pic(X ) → NS(X ). In order to understand this group better, we will introduce more structure on the Picard group. The main fact we need to know is that the group Pic 0 (X × k) is the set of points on an abelian variety and is therefore a divisible group. (I.e., for every class c ∈ Pic 0 (X × k) and every positive integer n, there is a class c ′ such that n c ′ = c.) Readers willing to accept this assertion can skip the rest of this section. The Picard group Pic(X ) is the set of k-points of a group scheme. More precisely, under our hypotheses on X there is a group scheme called the Picard scheme and denoted Pic X /k which is locally of finite type over k and represents the relative Picard functor. This means that if T → S = Spec k is a morphism of schemes and π T : X T := X × Spec k T → T is the base change then . Here the left hand side is the group of T -valued points of Pic X /k . See [Kle05] for a thorough and detailed overview of the Picard scheme, and in particular [ Kle05,9.4.8] for the proof that there is a scheme representing the relative Picard functor as above. We write Pic 0 X /k for the connected component of Pic X /k containing the identity. Under our hypotheses, Pic 0 X /k is a geometrically irreducible projective group scheme over k [ Kle05,9.5.3,9.5.4]. It may be non-reduced. (See examples in [Igu55] and [Ser58] and a full analysis of this phenomenon in [Mum66].) We let PicVar X /k = Pic 0 X /k red , the Picard variety of X over k, which is an abelian variety over k. If k ′ is a field extension of k, we have so that Pic 0 (X k ′ ) is the set of points of an abelian variety. By [Kle05,9.5.10], Pic 0 X /k (k) = Pic 0 (X ), in other words, the class of a divisor in Pic(X ) lies in Pic 0 (X ) if and only if the divisor is algebraically equivalent to 0. Intersection numbers and numerical equivalence There is an intersection pairing on the Néron-Severi group: which is bilinear and symmetric. If D and D ′ are divisors, we write D.D ′ for their intersection pairing. There are two approaches to defining the pairing. In the first approach, one shows that given two divisors, there are divisors in the same classes in NS(X ) (or even the same classes in Pic(X )) that meet transversally. Then the intersection number is literally the number of points of intersection. The work in this approach is to prove a moving lemma and then show that the resulting pairing is well defined. See [Har77, V.1] for the details. In the second approach, one uses coherent cohomology. If L is an invertible sheaf on X , let be the coherent Euler characteristic of L. Then define One checks that if C is a smooth irreducible curve on X , then C.D = deg O X (D)| C and that if C and C ′ are two distinct irreducible curves on X meeting transversally, then C.C ′ is the sum of local intersection multiplicities. See [Bea96, I.1-7] for details. 
(Nowhere is it used in this part of [Bea96] that the ground field is C.) Two divisors D and D ′ are said to be numerically equivalent if D.D ′′ = D ′ .D ′′ for all divisors D ′′ . If Num(X ) denotes the group of divisors in X up to numerical equivalence, then we have surjections and so Num(X ) is a finitely generated group. It is clear from the definition that Num(X ) is torsion-free and so we can insert NS(X )/tor (Néron-Severi modulo torsion) into this chain: Pic(X ) ։ NS(X ) ։ NS(X )/tor ։ Num(X ). Cycle classes and homological equivalence There is a general theory of cycle classes in ℓ-adic cohomology, see for example ]. In the case of divisors, things are much simpler and we can construct a cycle class map from the Kummer sequence. Indeed, consider the short exact sequence of sheaves on X for theétale topology: (The sheaves µ ℓ n and G m are perfectly reasonable sheaves in the Zariski topology on X , but the arrow in the right is not surjective in that context. We need to use theétale topology or a finer one.) Taking cohomology, we get a homomorphism Since Pic 0 (X ) is a divisible group, we have NS(X )/ℓ n = Pic(X )/ℓ n and so taking an inverse limit gives an injection Composing with the natural homomorphism NS(X ) → NS(X ) gives our cycle class map We declare two divisors to be (ℓ-)homologically equivalent if their classes in H 2 (X , Z ℓ (1)) are equal. (We will see below that this notion is independent of ℓ.) The group of divisors modulo homological equivalence will (temporarily) be denoted Homol(X ). It will turn out to be a finitely generated free abelian group. The intersection pairing on N S(X ) corresponds under the cycle class map to the cup product on cohomology. This means that a divisor that is homologically equivalent to zero is also numerically equivalent to zero. Thus we have a chain of surjections: Comparison of equivalence relations on divisors A theorem of Matsusaka [Mat57] asserts that the surjection and these groups are finitely generated, free abelian groups. Since NS(X ) is finitely generated, NS(X ) tor is finite. In all of the examples we will consider, NS(X ) is torsion free. (In fact, for an elliptic surface with a section, the surjection NS(X ) → Num(X ) is always an isomorphism, see [SS09, Theorem 6.5].) So to understand Pic(X ) we have only to consider the finitely generated free abelian group NS(X ) and the group Pic 0 (X ), which is (the set of points of) an abelian variety. Exercise 7.1. In the case of a surface X over the complex numbers, use the cohomology of the exponential sequence X → 0 to analyze the structure of Pic(X ). Examples 8.1. P 2 It is well known (e.g., [Har77, II.6.4]) that two curves on P 2 are linearly equivalent if and only if they have the same degree. It follows that Pic(P 2 ) = NS(P 2 ) ∼ = Z. Abelian varieties If X is an abelian variety (of any dimension g), then Pic 0 (X ) is the dual abelian variety and NS(X ) is a finitely generated free abelian group of rank between 1 and 4g 2 . See [Mum08] for details. Products of curves Suppose that C and D are smooth projective curves over k with k-rational points x ∈ C and y ∈ D. By definition (see Subsection 5.1 of Lecture 0), the group of divisorial correspondences between (C, x) and (D, y) is a subgroup of Pic(C × D) and it is clear that Moreover, as we saw in Lecture 0, is a discrete group. It follows that This last isomorphism will be important for a new approach to elliptic curves of high rank over function fields discussed in Lecture 5. 
Blow ups Let X be a smooth projective surface over k and let π : Y → X be the blow up of X at a closed point x ∈ X so that E = π −1 (x) is a rational curve on Y. Fibrations Let X be a smooth projective surface over k, C a smooth projective curve over k, and π : X → C a non-constant morphism. Assume that the induced extension of function fields k(C) ֒→ k(X ) is separable and k(C) is algebraically closed in k(X ). Then for every closed point y ∈ C, the fiber π −1 (y) is connected, and it is irreducible for almost all y. Write F for the class in NS(X ) of the fiber over a k-rational point y of C. (This exists because we assumed that X has a k-rational point.) We write F for the subgroup of NS(X ) generated by F . It is clear from the definition of NS(X ) that if y ′ is another closed point of C, then the class in NS(X ) of π −1 (y ′ ) is equal to (deg y ′ )F . Now suppose that z ∈ C is a closed point such that π −1 (z) is reducible, say where the Z i are the irreducible components of π −1 (z) and the n i are their multiplicities in the fiber. Then a consideration of intersection multiplicities (see for example [Sil94, III.8]) shows that for any integers m i , if and only if there is a rational number α such that m i = αn i for all i. More precisely, the intersection pairing restricted to the part of NS(X ) generated by the classes of the Z i is negative semi-definite, with a one-dimensional kernel spanned by integral divisors that are rational multiples of the whole fiber. It follows that the subgroup of NS(X )/ F generated by the classes of the Z i has rank f z − 1. It is free of this rank if the gcd of the multiplicities n i is 1. It also follows that if D is a divisor supported on a fiber of π and D ′ is another divisor supported on other fibers, then D = D ′ in NS(X )/ F if and only if D = D ′ = 0 in NS(X )/ F . Define L 2 NS(X ) to be the subgroup of NS(X ) generated by all components of all fibers of π over closed points of C. By the above, it is the direct sum of the F and the subgroups of NS(X )/ F generated by the components of the various fibers. Thus we obtain the following computation of the rank of L 2 NS(X ). Proposition 8.6.1. For a closed point y of C, let f y denote the number of irreducible components in the fiber π −1 (y). Then the rank of L 2 NS(X ) is If for all y the greatest common divisor of the multiplicities of the components in the fiber of π over y is 1, then L 2 NS(X ) is torsion-free. Tate's conjectures T 1 and T 2 Tate's conjecture T 1 for X (which we denote T 1 (X )) characterizes the image of the cycle class map: Conjecture 9.1 (T 1 (X )). For any prime ℓ = p, the cycle class map induces an isomorphism NS(X ) ⊗ Q ℓ → H 2 (X , Q ℓ (1)) G k We will see below that T 1 (X ) is equivalent to the apparently stronger integral statement that the cycle class induces an isomorphism We will also see that T 1 (X ) is independent of ℓ which is why we have omitted ℓ from the notation. Since G k is generated topologically by F r q , we have The injectivity of the cycle class map implies that and T 1 (X ) is the statement that these two dimensions are equal. The second Tate conjecture relates the zeta-function to divisors. Recall that ζ(X , s) denotes the zeta function of X , defined in Lecture 0, Section 3. Conjecture 9.2 (T 2 (X )). We have Rank NS(X ) = − ord s=1 ζ(X , s) Note that by the Riemann hypothesis, the poles of ζ(X , s) at s = 1 come from P 2 (X , q −s ). 
More precisely, using the cohomological formula (4.1) of Lecture 0 for P 2 , we have that the order of pole of ζ(X , s) at s = 1 is equal to the multiplicity of q as an eigenvalue of F r q on H 2 (X , Q ℓ ). Thus we have a string of inequalities Conjecture T 1 (X ) is that the first inequality is an equality and conjecture T 2 (X ) is that the leftmost and rightmost integers are equal. It follows trivially that T 2 (X ) implies T 1 (X ). Tate proved the reverse implication. Proposition 9.4. The conjectures T 1 (X ) and T 2 (X ) are equivalent. In particular, Proof. First note that the intersection pairing on NS(X ) is non-degenerate, so we get an isomorphism On the other hand, the cup product on H 2 (X , Q ℓ (1)) is also non-degenerate (by Poincaré duality), so we have If we use a superscript G k to denote invariants and a subscript G k to denote coinvariants, then we have a natural homomorphism which is an isomorphism if and only if the subspace of H 2 (X , Q ℓ (1)) where Fr q acts by 1 is equal to the whole of the generalized eigenspace for the eigenvalue 1. As we have seen above, this holds if and only if we have dim Q ℓ H 2 (X , Q ℓ ) F rq=q = − ord s=1 ζ(X , s). Now consider the diagram The lower right arrow is an isomorphism by elementary linear algebra. The maps h and h * are the cycle map and its transpose and they are isomorphisms if and only if T 1 (X ) holds. One checks that the diagram commutes ([Tat66b, p. 24] or [Mil75, Lemma 5.3]) and so T 1 (X ) implies that f is an isomorphism. Thus T 1 (X ) implies T 2 (X ). We remark that the equality of dim Q ℓ H 2 (X , Q ℓ ) F rq=q and − ord s=1 ζ(X , s) would follow from the semi-simplicity of F r q acting on H 2 (X , Q ℓ ) (or even from its semisimplicity on the F r q = q generalized eigenspace). This is a separate "standard" conjecture (see for example [Tat94]); it does not seem to imply T 1 (X ). T 1 and the Brauer group We define the (cohomological) Brauer group Br(X ) by Br(X ) = H 2 (X , G m ) = H 2 (X , O × X ) (with respect to theétale or finer topologies). Because X is a smooth proper surface over a finite field, the cohomological Brauer group is isomorphic to the usual Brauer group (defined in terms of Azumaya algebras) and it is known to be a torsion group. Similarly, define . This group is torsion but need not be finite. Taking the cohomology of the exact sequence as in Section 6, we have an exact sequence Taking G k -invariants and then the inverse limit over powers of ℓ, we obtain an exact is zero if and only if the ℓ-primary part of Br(X ) is finite. It follows that the ℓ part of the Brauer group is finite if and only if T 1 (X ) for ℓ holds if and only if the integral version of T 1 (X ) for ℓ holds. In particular, since T 1 (X ) is independent of ℓ, if Br(X )[ℓ ∞ ] is finite for one ℓ, then Br(X )[ℓ ∞ ] is finite for all ℓ = p. It is even true, although more difficult to prove, that T 1 (X ) is equivalent to the finiteness of Br(X ). Proof. We sketch the proof of the prime-to-p part of this assertion following [Tat66b] and refer to [Mil75] for the full proof. We already noted that the ℓ-primary part of Br(X ) is finite for one ℓ = p if and only if T 1 (X ) holds. 
To see that almost all ℓ-primary parts vanish, we consider the following diagram, which is an integral version of the diagram in the proof of Proposition 9.4: Here e is induced by the intersection form, h is the cycle class map, f is induced by the identity map of H 1 (X , Z ℓ (1)) and g * is the transpose of a map obtained by taking the direct limit over powers of ℓ of the first map in equation (10.1). We say that a homomorphism φ : A → B of Z ℓ -modules is a quasi-isomorphism if it has a finite kernel and cokernel. In this case, we define It is easy to check that if φ 3 = φ 2 φ 1 (composition) and if two of the maps φ 1 , φ 2 , φ 3 are quasi-isomorphisms, then so is the third and we have z(φ 3 ) = z(φ 2 )z(φ 1 ). In the diagram above, if we assume T 1 (X ), then h is an isomorphism. The map e is induced from the intersection pairing and is a quasi-isomorphism and z(e) is (the ℓ part of) the order of the torsion subgroup of NS(X ) divided by (the ℓ part of) discriminant of the intersection form. We saw above that under the assumption of T 1 (X ), the map f is a quasi-isomorphism and it turns out that z(f ) is essentially (the ℓ part of) the leading term of the zeta function ζ(X , s) at s = 1. In particular, under T 1 (X ), e, f , and h are isomorphisms for almost all ℓ. The same must therefore be true of g * . By taking G k -invariants and a direct limit over powers of ℓ in equation (10.1), one finds that z(g * ) is equal to the order of Br(X )[ℓ ∞ ] and so this group is trivial for almost all ℓ. This completes our sketch of the proof of the theorem. The sketch above has all the main ideas needed to prove that the prime-to-p part of the Artin-Tate conjecture on the leading coefficient of the zeta function at s = 1 follows from the Tate conjecture T 1 (X ). The p-part is formally similar although more delicate. To handle it, Milne replaces the group in the lower right of the diagram with the larger group Hom(H 2 (X , (Q p /Z p )(1)), Q p /Z p ). The z invariants of the maps to and from this group turn out to have more p-adic content that is related to the term q α (X ) in the Artin-Tate leading coefficient conjecture. We refer to [Mil75] for the full details and to [Ulm11] for a discussion of several related points, including the case p = 2 (excluded in Milne's article, but now provable due to improved p-adic cohomology) and higher dimensional abelian varieties. The descent property of T 1 IfX → X is the blow up of X at a closed point, then T 1 (X ) is equivalent to T 1 (X ). Indeed, under blowing up both the rank of NS(·) and the dimension of H 2 (·, Q ℓ (1)) G k increase by one. (See Example 8.5 above.) In fact: Proposition 11.1. T 1 (X ) is invariant under birational isomorphism. More generally, if X → Y is a dominant rational map, then T 1 (X ) implies T 1 (Y). Proof. We give simple proof of the case where X and Y are surfaces. See [Tat94] for the general case. First, we may assume X Y is a morphism. Indeed, letX → X be a blow up resolving the indeterminacy of X Y, i.e., so that the compositionX → X Y is a morphism. As we have seen above T 1 (X ) implies T 1 (X ) so we may replace X withX and show that T 1 (Y) holds. So now suppose that π : X → Y is a dominant morphism. Since the dimensions of X and Y are equal, π must be generically finite, say of degree d. But then the push forward and pull-back maps on cycles present NS(Y) ⊗ Q ℓ as a direct factor of N S(X )⊗Q ℓ ; they also present H 2 (Y, Q ℓ (1)) as a direct factor of H 2 (X , Q ℓ (1)). 
The cycle class maps and Galois actions are compatible with these decompositions and since by assumption Note that the dominant rational map X Y could be a ground field extension, or even a purely inseparable morphism. Tate's theorem on products In this section we sketch how T 1 for products of curves follows from Tate's theorem on endomorphisms of abelian varieties over finite fields. Theorem 12.1 (Tate). Let C and D be curves over k and set X = C × k D. Then T 1 (X ) holds. Proof. Extending k if necessary, we may assume that C and D both have rational points. Fix rational base points x and y (which we will mostly omit from the notation below). Recall from Subsection 8.4 that By the Künneth formula, Twisting by Q ℓ (1) and taking invariants, we have Under the cycle class map, the factor Z 2 of NS(X ) (corresponding to C × {y} and {x} × D) spans the factor Q ℓ 2 of H 2 (X , Q ℓ (1)) G k (corresponding to H 2 ⊗ H 0 and H 0 ⊗ H 2 in the Kunneth decomposition). Thus what we have to show is that the cycle class map induces an isomorphism Thus the needed isomorphism is and this is exactly the statement of Tate's theorem (Lecture 0, Theorem 6.1). This completes the proof of the theorem. (1) A variation of the argument above, using Picard and Albanese varieties, shows that T 1 for a product X × Y of varieties of any dimension follows from T 1 for the factors. (2) It is worth noting that Tate's conjecture T 1 (and the proof of it for products of curves) only characterizes the image of in ℓ-adic cohomology of NS(X ) ⊗ Z ℓ , not the image of NS(X ) itself. This should be contrasted with the Lefschetz (1, 1) theorem, which characterizes the image of NS(X ) in deRham cohomology when the ground field is C. Products of curves and DPC Assembling the various parts of this lecture gives the main result: Proposition 13.1. Let X be a smooth, projective surface over k. If there is a dominant rational map C × k D X from a product of curves to X , then the Tate conjectures T 1 (X ) and T 2 (X ) hold. Indeed, by Theorem 12.1, we have T 1 (C × D) and then by Proposition 11.1 we deduce T 1 (X ). By Proposition 9.4, T 2 (X ) follows as well. We say that "X is dominated by a product of curves (DPC)." The question of which varieties are dominated by products of curves has been studied by Schoen [Sch96]. In particular, over any field there are surfaces that are not dominated by products of curves. Nevertheless, as we will see below, the collection of DPC surfaces is sufficiently rich to give some striking results on the Birch and Swinnerton-Dyer conjecture. Elliptic curves and elliptic surfaces We keep our standard notations throughout this lecture: p is a prime, k = F q is the finite field of characteristic p with q elements, C is a smooth, projective, absolutely irreducible curve over k, K = k(C) is the function field of C, and E is an elliptic curve over K. Curves and surfaces In this section we will construct an elliptic surface E → C canonically associated to an elliptic curve E/K. More precisely, we give a constructive proof of the following result: Proposition 1.1. Given an elliptic curve E/K, there exists a surface E over k and a morphism π : E → C with the following properties: E is smooth, absolutely irreducible, and projective over k, π is surjective and relatively minimal, and the generic fiber of π is isomorphic to E. The surface E and the morphism π are uniquely determined up to isomorphism by these requirements. 
Here "the generic fiber of π" means E K , the fiber product: Relatively minimal" means that if E ′ is another smooth, absolutely irreducible, projective surface over k with a surjective morphism π ′ : E ′ → C, then any birational morphism E → E ′ commuting with π and π ′ is an isomorphism. Relative minimality is equivalent to the condition that there are no rational curves of self-intersection −1 in the fibers of π (i.e., to the non-existence of curves in fibers that can be blown down). Remarks 1.2. The requirements on E and π imply that π is flat and projective and that all geometric fibers of π are connected. These properties of π will be evident from the explicit construction below. It follows that π * O E ∼ = O C and more generally that π is "cohomologically flat in dimension zero," meaning that for every morphism T → C the base change Uniqueness in Proposition 1.1 follows from general results on minimal models, in particular [Lic68,Thm. 4.4]. See [Chi86] and [Liu02,9.3] for other expositions. We first give a detailed construction of a (possibly singular) "Weierstrass surface" W → C and then resolve singularities to obtain E → C. More precisely, the proposition follows from the following two results. Proposition 1.3. Given an elliptic curve E/K, there exists a surface W over k and a morphism π 0 : W → C with the following properties: W is normal, absolutely irreducible, and projective over k, π 0 is surjective, each of its fibers is isomorphic to an irreducible plane cubic, and its generic fiber is isomorphic to E. This proposition is elementary, but does not seem to be explained in detail in the literature, so we give a proof below. Proposition 1.4. There is an explicit sequence of blow ups (along closed points and curves in W) yielding a proper birational morphism σ : E → W where the surface E and the composed morphism π = π 0 • σ : E → W → C have the properties mentioned in Proposition 1.1. Proof of Proposition 1.3. Choose a Weierstrass equation for E: (1.5) y 2 + a 1 xy + a 3 y = x 3 + a 2 x 2 + a 4 x + a 6 where the a i are in K = k(C). Recall that we have defined the notion of a minimal integral model at a place v of K: the a i should be integral at v and the valuation at v of ∆ should be minimal subject to the integrality of the a i . Clearly, there is a non-empty Zariski open subset U ⊂ C such that for every closed point v ∈ U , the model (1.5) is a minimal integral model. Let W 1 be the closed subset of P 2 U := P 2 k × k U defined by the vanishing of (1.6) Y 2 Z + a 1 XY Z + a 3 Y Z 2 − X 3 + a 2 X 2 Z + a 4 XZ 2 + a 6 Z 3 where X, Y, Z are the standard homogeneous coordinates on P 2 k . Then W 1 is geometrically irreducible and there is an obvious projection π 1 : W 1 → U (the restriction to W 1 of the projection P 2 U → U ). The fiber of π 1 over a closed point v of U is the plane cubic over the residue field κ v at v. The generic point η of C lies in U and the fiber of π 1 at η is E/K. There are finitely many points in C \ U and we must extend the model W 1 → U over each of these points. Choose one of them, call it w, and choose a model of E that is integral and minimal at w. In other words, choose a model of E where the a ′ i ∈ K are integral at w and the valuation at w of the discriminant ∆ is minimal. The new coordinates are related to the old by a transformation (1.8) (x, y) = (u 2 x ′ + r, u 3 y ′ + su 2 x ′ + t) with u ∈ K × and r, s, t ∈ K. Let U ′ be a Zariski open subset of C containing w on which all of the a ′ i are integral and the model (1.7) is minimal. 
Let W ′ be the geometrically irreducible closed subset of P 1 U ′ defined by the vanishing of with its obvious projection π ′ : W ′ → U ′ . On the open set V = U ∩ U ′ , u is a unit and the change of coordinates (1.8), or rather its projective version defines an isomorphism between π −1 1 (V ) and π ′−1 (V ) compatible with the projections. Glueing W 1 and W ′ along this isomorphism yields a new surface W 2 equipped with a projection π 2 : W 2 → U 2 where U 2 = U ∪ U ′ . Note that U 2 is strictly larger than U . Moreover π 2 is surjective, its geometric fibers are irreducible projective plane cubics, and its generic fiber is E. We now iterate this construction finitely many times to extend the original model over all of C. We arrive at a surface W equipped with a proper, surjective morphism π : W → C whose geometric fibers are irreducible plane cubics and whose generic fiber is E. Since C is projective over k, so is W. Since W is obtained by glueing reduced, geometrically irreducible surfaces along open subsets, it is also reduced and geometrically irreducible. Since it has only isolated singular points, by Serre's criterion it is normal. This completes the proof of Proposition 1.3. Note that the closure in W of the identity element of E is a divisor on W which maps isomorphically to the base curve C. We write s 0 : C → W for the inverse morphism. This is the zero section of π 0 . In terms of the coordinates on W 1 used in the proof above, it is just the map t → ([0, 1, 0], t). Discussion of Proposition 1.4. The algorithm mentioned in the Proposition is the subject of Tate's famous paper [Tat75]. His article does not mention blowing up, but the steps of the algorithm nevertheless give the recipe for the blow ups needed. The actual process of blowing up is explained in detail in [Sil94, IV.9] so we will not give the details here. Rather, we explain why there is a simple algorithm, following [Con05]. First note that the surface W is reduced and irreducible and so has no embedded components. Also, it has isolated singularities. (They are contained in the set of singular points of fibers of π 0 .) By Serre's criterion, W is thus normal. Moreover, and this is the key point, its singularities are rational double points. (See [Art86] for the definition and basic properties of rational singularities and [Bȃd01, Chapters 3 and 4] for many more details. See [Con05, Section 8] for the fact that the singularities of a minimal Weierstrass model are rational.) This implies that the blow up of W at one of its singular points is again normal (so has isolated singularities) and again has at worst rational double points. An algorithm to desingularize is then simply to blow up at a singular point and iterate until the resulting surface is smooth. Given the explicit nature of the equations defining W, finding the singular points and carrying out the blow ups is straightforward. In fact, Tate's algorithm also calls for blowing up along certain curves. (This happens at steps 6 and 7.) This has the effect of dealing with several singular points at the same time, so is more efficient, but it is not essential to the success of the algorithm. This completes our discussion of Proposition 1.4. See below for a detailed example covering a case not treated explicitly in [Sil94]. Conrad's article [Con05] also gives a coordinate-free treatment of integral minimal models of elliptic curves. 
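The change of coordinates (1.8), x = u^2 x′ + r, y = u^3 y′ + s u^2 x′ + t, acts on the coefficients of a Weierstrass equation by the standard transformation formulas (cf. [Sil09]); a small helper to apply them, of the kind one uses when glueing the local models above or when stepping through Tate's algorithm, might look as follows. This is only a sketch: it works over Q with exact fractions for simplicity, but the same formulas make sense over any field once u is a unit.

# Action of (1.8) on Weierstrass coefficients (a1, a2, a3, a4, a6); standard formulas as in [Sil09].
from fractions import Fraction as F

def transform(a, r, s, t, u):
    """Return the coefficients (a1', a2', a3', a4', a6') of the model satisfied by (x', y')."""
    a1, a2, a3, a4, a6 = (F(x) for x in a)
    r, s, t, u = F(r), F(s), F(t), F(u)
    a1n = (a1 + 2 * s) / u
    a2n = (a2 - s * a1 + 3 * r - s * s) / u**2
    a3n = (a3 + r * a1 + 2 * t) / u**3
    a4n = (a4 - s * a3 + 2 * r * a2 - (t + r * s) * a1 + 3 * r * r - 2 * s * t) / u**4
    a6n = (a6 + r * a4 + r * r * a2 + r**3 - t * a3 - t * t - r * t * a1) / u**6
    return a1n, a2n, a3n, a4n, a6n

# Example: translating x by r = 1 on y^2 = x^3 gives y^2 = x^3 + 3x^2 + 3x + 1,
# i.e. (a1, ..., a6) = (0, 3, 0, 3, 1):
print(transform((0, 0, 0, 0, 0), r=1, s=0, t=0, u=1))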
It is worth remarking that Tate's algorithm and the possible structures of the bad fibers are essentially the same in characteristic p as in mixed characteristic. On the other hand, for non-perfect residue fields k of characteristic p ≤ 3, there are more possibilities for the bad fibers, in both equal and mixed characteristics-see [Szy04]. The zero section of W lifts uniquely to a section which we again denote by s 0 : C → E. The bundle ω and the height of E We construct an invertible sheaf on C as follows, using the notation of the proof of Proposition 1.1. Take the trivial invertible sheaf O U on U with its generating section 1 U . At each stage of the construction, extend this sheaf by glueing O U and O U ′ over U ∩ U ′ by identifying 1 U and u −1 1 U ′ where u is the function appearing in the change of coordinates (1.8). The resulting invertible sheaf ω has several other descriptions. For example, the sheaf of relative differentials Ω 1 E/C is invertible on the locus of E where π : E → C is smooth (in particular in a neighborhood of the zero section) and, more or less directly from the definition, ω can be identified with s * 0 (Ω 1 E/C ). Using relative duality theory, ω can also be identified with the inverse of R 1 π * O E . Finally, since W as only rational singularities, ω is also isomorphic to R 1 π 0 * O W One may identify the coefficients a i of the Weierstrass equation locally defining W with sections of ω i . Using this point of view, W can be identified with a closed subvariety of a certain P 2 -bundle over C. Namely, let V be the locally free O C module of rank three (where the exponents denote tensor powers). If PV denotes the projectivization of V over C, a P 2 bundle over C, then W is naturally the closed subset of PE defined locally by the vanishing of Weierstrass equations as in (1.6). Exercises 2.2. Verify the identifications and assertions in this section. In the case where C = P 1 , so K = k(t), check that ω = O P 1 (h) where h is the smallest positive integer such that E has a model (1.5) where the a i are in k[t] and deg a i ≤ hi. Exercises 2.3. Check that c 4 , c 6 , and ∆ define canonical sections of ω 4 , ω 6 , and ω 12 respectively, independent of the choice of equation for E. If p = 2 or 3, check that b 2 defines a canonical section of ω 2 and that c 4 = b 2 2 and c 6 = −b 3 2 . If p = 2, check that a 1 defines a canonical section of ω and that b 2 = a 2 1 . Note that since positive powers of ω have non-zero sections, the degree of ω is non-negative. Definition 2.4. The height of E, denoted h, is defined by h = deg(ω), the degree of ω as an invertible sheaf on C. Note that if E/K is constant (in the sense of Lecture 1) then the height of the corresponding E is 0. Examples The case when C = P 1 is particularly simple. First of all, one may choose a model (1.5) that is integral and minimal simultaneously at every finite v, i.e., for every v ∈ A 1 k . Indeed, start with any model and change coordinates so that the If w is a finite place where this model is not minimal, it is possible (because k[t] is a PID) to choose a change of coordinates where r, s, t, u ∈ k[t][1/w] and u a unit yielding a model that is minimal at w. Such a change of coordinates does not change the minimality at any other finite place. Thus after finitely many steps, we have a model integral and minimal at all finite places. (This argument would apply for any K and any Dedekind domain R ⊂ K which is a PID, yielding a model with the a i ∈ R that is minimal at all v ∈ Spec R.) 
As a very concrete example, consider the curve over F p (t) where p > 2 and d is not divisible by p. Since ∆ = 16t 2d (t d − 1) 2 , this model is integral and minimal at all non-zero finite places. It is also minimal at zero as one may see by noting that c 4 and c 6 are units at 0. At infinity, the change of coordinates Working with Tate's algorithm shows that E has I 2 reduction at the d-th roots of unity, I 2d reduction at t = 0, and either I * 2d or I 2d reduction at infinity depending on whether d is odd or even. Since the case of I n reduction is not treated explicitly in [Sil94], we give more details on the blow ups needed to resolve the singularity over t = 0. In terms of the coordinates on W 1 used in the proof of Proposition 1.4 we can consider the affine surface defined by which is an open neighborhood of the singularity at x = y = t = 0. If d = 1, then the tangent cone is the irreducible plane conic defined by x 2 + tx − y 2 = 0. The singular point thus blows up into a smooth rational curve and it is easy to check that the resulting surface is smooth in a neighborhood of the fiber t = 0. Now assume that d > 1. Then the tangent cone is the reducible conic x 2 − y 2 = 0 and so the singular point blows up into two rational curves meeting at one point. More precisely, the blow up is covered by three affine patches. In one of them, the surface upstairs is tx 3 1 + (t d + 1)x 2 1 + t d−1 x 1 − y 2 1 = 0 and the morphism is x = tx 1 , y = ty 1 . The exceptional divisor is the reducible curve t = x 2 1 − y 2 1 = 0 and the point of intersection of the components t = x 1 = y 1 = 0 is again a double point. Considering the other charts shows that there are no other singular points in a neighborhood of t = 0 and that the exceptional divisor meets the original fiber over t = 0 in two points. We now iterate this process d − 1 times, introducing two new components at each stage. After d−1 blow ups, the interesting part of our surface is given by At this last stage, blowing up introduces one more component meeting the two components introduced in the preceding step at one point each. The (interesting part of the) surface is now which is regular in a neighborhood of t = 0. Thus we see that the fiber over t = 0 in E is a chain of 2d rational curves, i.e., a fiber of type I 2d . The resolution of the singularities over points with t d = 1 is similar but simpler because only one blow up is required. At t = ∞, if d is even then the situation is very similar to that over t = 0 and the reduction is again of type I 2d . If d is odd, the reduction is of type I * 2d . We omit the details in this case since it is treated fully in [Sil94]. E and the classification of surfaces It is sometimes useful to know how E fits into the Enriques-Kodaira classification of surfaces. In this section only, we replace k with k and write E for what elsewhere is denoted E. Recall that the height of E is defined as h = deg ω. Proof. It is obvious that if E is constant, then ω ∼ = O C . Conversely, suppose ω ∼ = O C . Then the construction of π 0 : W → C in Proposition 1.3 yields an irreducible closed subset of P 2 C (because the P 2 -bundle PV in (2.1) is trivial): W ⊂ P 2 C = P 2 k × k C. Let σ : W → P 2 k be the restriction of the projection P 2 C → P 2 k . Then σ is not surjective (since most points in the line at infinity Z = 0 are not in the image) and so its image has dimension < 2. 
Considering the restriction of σ to a fiber of π 0 shows that the image of σ is in fact an elliptic curve E 0 and then it is obvious from dimension considerations that W = π −1 0 (π 0 (W)) = E 0 × C. It follows that E, the generic fiber of π 0 , is isomorphic to E 0 × Spec K, i.e., that E is constant. Now assume that h = 0. Then ∆ is a non-zero global section of the invertible sheaf ω 12 on C of degree 0. Thus ω 12 is trivial. It follows that there is a finite unramified cover of C over which ω becomes trivial and so by the first part, E becomes constant over a finite extension, i.e., E is isotrivial. Note that E being isotrivial does not imply that h = 0. Here we are using that E → C has a section and therefore no multiple fibers. The proof, which we omit, proceeds by considering R 1 π * O E and using relative duality. See for example [Bȃd01,7.15]. We now consider several cases: If 2g C − 2 + h > 0, then it follows from the Proposition that the dimension of H 0 (E, (Ω 2 ) ⊗ n ) grows linearly with n, so E has Kodaira dimension 1. If 2g C − 2 + h = 0, then the Kodaira dimension of E is zero and there are two possibilities: (1) g C = 1 and h = 0; or (2) g C = 0 and h = 2. In the first case, there is an unramified cover of C over which E becomes constant and so E is the quotient of a product of two elliptic curves. These surfaces are sometimes called "bi-elliptic." In the second case, If 2g C − 2 + h < 0, then the Kodaira dimension of E is −∞ and there are again two possibilities: (1) g C = 0 and h = 1, in which case E is a rational surface by Castelnuovo's criterion; or (2) g C = 0 and h = 0, in which case E is constant and E is a ruled surface E 0 × C = E 0 × P 1 . Points and divisors, Shioda-Tate If D is an irreducible curve on E, then its generic fiber is either empty or is a closed point of E. The former occurs if and only if D is supported in a fiber of π. In the latter case, the residue degree of D.E is equal to the generic degree of D → C. Extending by linearity, we get homomorphism Div(E) → Div(E) whose kernel consists of divisors supported in the fibers of π. There is a set-theoretic splitting of this homomorphism, induced by the map sending a closed point of E to its scheme-theoretic closure in E. However, this is not in general a group homomorphism. Let L 1 Div(E) be the subgroup of divisors D such that the degree of D.E is zero and let L 2 Div(E) be subgroup such that D.E = 0. We write L i Pic(E) and L i NS(E) (i = 1, 2) for the images of L i (E) in Pic(E) and NS(E) respectively. The Shioda-Tate theorem relates the Néron-Severi group of E to the Mordell-Weil group of E: This theorem seems to have been known to the ancients (Lang, Néron, Weil, ...) and was stated explicitly in [Tat66b] and in papers of Shioda. A detailed proof in a more general context is given in [Shi99]. Note however that in [Shi99] the ground field is assumed to be algebraically closed. See [Ulm11] for the small modifications needed to treat finite k. It is obvious that N S(E)/L 1 N S(E) is infinite cyclic. We saw in Example 8.6 of Lecture 2 that L 2 N S(E) is free abelian of rank 1 + v (f v − 1). So as a corollary of the theorem, we have the following rank formula, known as the Shioda-Tate formula: For more on the geometry of elliptic surfaces and elliptic curves over function fields, with an emphasis on rational and K3 surfaces, I recommend [SS09]. L-functions and Zeta-functions We are going to relate the L-function of E and the zeta function of E. 
We note that from the definition, Z(E, T) depends only on the underlying set of closed points of E and we may partition this set using the map π. We have

Z(E, T) = ∏_{y ∈ C} Z(π^{−1}(y), T^{deg y}).

For y such that π^{−1}(y) is a smooth elliptic curve, we know that

Z(π^{−1}(y), T) = (1 − a_y T + q_y T^2) / ((1 − T)(1 − q_y T)),

and the numerator here is the factor that enters into the definition of L(E, T). To complete the calculation, we need an analysis of the contribution of the bad fibers.

We consider the fiber π^{−1}(y) as a scheme of finite type over the residue field κ_y, the field of q_y elements. As such, it has irreducible components. Its "geometric components" are the components of the base change to the algebraic closure of κ_y; these are defined over some finite extension of κ_y. For certain reduction types (I_n, I*_n (n ≥ 0), IV and IV*) it may happen that all the geometric components are defined over κ_y, in which case we say the reduction is "split", or it may happen that some geometric components are only defined over a quadratic extension of κ_y, in which case we say the reduction is "non-split." This agrees with the standard usage in the case of I_n reduction and may be non-standard in the other cases.

Proposition 6.1. The zeta function of a singular fiber of π has the form

Z(π^{−1}(y), T) = (1 − aT) / ((1 − bT)(1 − q_y T)^f (1 + q_y T)^g),

where the integers a, b, f, and g are determined by the reduction type at y and are given in the following table:

                          a     b     f            g
split I_n                 0     0     n            0
non-split I_n, n odd     −1     1     (n + 1)/2    (n − 1)/2
non-split I_n, n even    −1     1     n/2 + 1      n/2 − 1

Exercise 6.2. Use an elementary point-counting argument to verify the proposition. In particular, check that the number of components of π^{−1}(y) that are rational over κ_y is f and that the order of the pole of Z(π^{−1}(y), T) at T = q_y^{−1} is f.

Using the Proposition and the definition of the L-function (in Lecture 1, equation (9.1)) we find that ζ(E, s) = ζ(C, s) ζ(C, s − 1) / L(E, s), up to elementary factors at the bad places involving a_v, b_v, f_v and g_v, the invariants defined in the Proposition at the place v. Using the Weil conjectures (see Section 3 of Lecture 0), we see that the orders of L(E, s) and ζ(E, s) at s = 1 are related as follows:

ord_{s=1} L(E, s) = −ord_{s=1} ζ(E, s) − 2 − ∑_v (f_v − 1).

Remark 6.5. This simple approach to evaluating the order of zero of the L-function does not yield the important fact that L(E, T) is a polynomial in T when E is non-constant, nor does it yield the Riemann hypothesis for L(E, T). For a slightly more sophisticated (and less explicit) comparison of ζ-functions and L-functions in a more general context, see [Gor79].

The Tate-Shafarevich and Brauer groups

The last relationship between E and E we need concerns the Tate-Shafarevich and Brauer groups.

Theorem 7.1. Suppose that E is an elliptic curve over K = k(C) and E → C is the associated elliptic surface as in Proposition 1.1. Then there is a canonical isomorphism Br(E) ≅ Ш(E/K).

The proof of this result, which is somewhat involved, is given in [Gro68, Section 4]. The main idea is simple enough: one computes Br(E) = H^2(E, G_m) using the morphism π : E → C and a spectral sequence. Using that the Brauer group of a smooth, complete curve over a finite field vanishes, one finds that the main term is H^1(C, R^1 π_* G_m). Since R^1 π_* G_m is the sheaf associated to the relative Picard group, it is closely related to the sheaf on C represented by the Néron model of E. This provides a connection with the Tate-Shafarevich group which leads to the theorem. See [Ulm11] for more details about this and the closely related connection between H^2(E, Z_ℓ(1))^{G_k} and the ℓ-Selmer group of E.

The main classical results

We are now in a position to prove the theorems of Section 12 of Lecture 1. For convenience, we restate Theorem 12.1 and a related result.

Theorem 8.1.
Suppose that E is an elliptic curve over K = k(C) and E → C is the associated elliptic surface as in Proposition 1.1.
(1) BSD holds for E if and only if T_2 holds for the surface E.
(2) Rank E(K) ≤ ord_{s=1} L(E, s).
(3) The following are equivalent:
• Rank E(K) = ord_{s=1} L(E, s);
• Ш(E/K) is finite;
• for any one prime number ℓ (ℓ = p is allowed), the ℓ-primary part Ш(E/K)[ℓ^∞] is finite.
(4) If K′/K is a finite extension and if the BSD conjecture holds for E over K′, then it holds for E over K.

Proof. Combining the comparison of L- and ζ-functions in Section 6 with the Shioda-Tate formula, we have

Rank E(K) − ord_{s=1} L(E, s) = Rank NS(E) + ord_{s=1} ζ(E, s).

Since BSD is the assertion that the left hand side is zero and T_2 is the assertion that the right hand side is zero, these conjectures are equivalent. By Theorem 9.3 of Lecture 2, the right hand side is ≤ 0 and therefore so is the left. This gives the inequality Rank E(K) ≤ ord_{s=1} L(E, s). The statements about Ш(E/K) follow from Theorem 7.1 (Ш(E/K) ≅ Br(E)), the equivalence of BSD and T_2(E), and Theorem 10.2 of Lecture 2. The last point follows from the equivalence of BSD and T_2(E) and Proposition 11.1 of Lecture 2.

Proofs of Theorems 12.2 and 12.3 of Lecture 1. Theorem 12.2 of Lecture 1 concerns isotrivial elliptic curves. By the last point of Theorem 8.1 above, it suffices to show that BSD holds for constant curves. But if E is constant, then E is a product of curves, so the Tate conjecture for E follows from Theorem 12.1 of Lecture 2. The first point of Theorem 8.1 above then gives BSD for E.

Theorem 12.3 of Lecture 1 concerns elliptic curves over k(t) of low height. By the discussion in Section 4, if E/k(t) has height ≤ 2 then E is a rational or K3 surface. (Strictly speaking, this is true only over a finite extension of k, but the last point of Theorem 8.1 allows us to make this extension without loss of generality.) But T_2(X) for a rational surface follows from Proposition 13.1 of Lecture 2. For E such that E is a K3 surface, Artin and Swinnerton-Dyer proved the finiteness of Ш(E/K) (and therefore BSD) in [ASD73].

Domination by a product of curves

Combining part 1 of Theorem 8.1 with Proposition 13.1 of Lecture 2, we have the following.

Theorem 9.1. Let E be an elliptic curve over K with associated surface E. If E is dominated by a product of curves, then BSD holds for E.

Theorem 12.4 ("four monomials") and Berger's theorem 11.1 are both corollaries of Theorem 9.1, as we will explain in the remainder of this lecture.

Four monomials

We recall Shioda's conditions. Suppose that f ∈ R = k[x_1, x_2, x_3] is the sum of exactly four non-zero monomials:

f = ∑_{i=1}^{4} c_i x_1^{e_{i1}} x_2^{e_{i2}} x_3^{e_{i3}},

where c_i ∈ k and the e_{ij} are non-negative integers. Let e_{i4} = 1 − ∑_{j=1}^{3} e_{ij} and form the 4 × 4 matrix A = (e_{ij}). Assuming that det(A) ≠ 0 (in Z), let δ be the smallest positive integer such that there is a 4 × 4 integer matrix B with AB = δ I_{4×4}. We say that f satisfies Shioda's 4-monomial condition if δ ≠ 0 in k, i.e., if p ∤ δ. The following exercise shows that this is equivalent to the definition in Lecture 1.

Exercise 10.2. Show that f satisfies Shioda's 4-monomial condition if and only if, after possibly replacing F_q by a finite extension, the system of monomial equations determined by A and the coefficients c_i has a solution with d_j ∈ F_q, j = 1, . . . , 4.

Proof of Theorem 12.4 of Lecture 1. Briefly, the hypotheses imply that the associated elliptic surface E → P^1 is dominated by a Fermat surface (of degree δ) and thus by a product of Fermat curves (of degree δ). Thus Theorem 9.1 implies that BSD holds for E. In more detail, note that E is birational to the affine surface V(f) ⊂ A^3_k. So it will suffice to show that V(f) is dominated by a product of curves.
To that end, it will be convenient to identify k[t, x, y] and R = k[x 1 , x 2 , x 3 ] by sending t → x 1 , x → x 2 and y → x 3 , so that f becomes Exercise 10.2 implies that, after extending k if necessary, we may change coordinates (x j → d j x j ) so that the coefficients c i are all 1. Then the matrix A defines rational a map φ from V (f ) to the Fermat surface of degree 1 Similarly, the matrix B defines a rational map ψ from the Fermat surface of degree δ The composition of these maps is the standard projection from F 2 δ to F 2 1 , namely y i → z δ i and so both maps are dominant. Finally, Shioda and Katsura [SK79] showed that F 2 δ is dominated by the product of Fermat curves F 1 δ ×F 1 δ . Thus, after extending k, E is dominated by a product of curves and Theorem 9.1 finishes the proof. As we will explain below, this Theorem can be combined with results on analytic ranks to give examples of elliptic curves over F p (t) with arbitrarily large Mordell-Weil rank. (In fact, similar ideas can be used to produce Jacobians of every dimension with large rank. For this, see [Ulm07] and also [Ulm11].) Unfortunately, Theorem 12.4 is very rigid-as one sees in the proof, varying the coefficients in the 4-nomial f does not vary the isomorphism class of E over F q and so we get only finitely many non-isomorphic elliptic curves over F p (t). Berger's construction, explained in the next subsection, was motivated by a desire to overcome this rigidity and give families of examples of curves where one knows the BSD conjecture. Berger's construction Berger gave a much more flexible construction of surfaces that are dominated by a product of curves in a tower. More precisely, we note that if E → P 1 is an elliptic surface and φ : P 1 → P 1 is the morphism with φ * (t) = u d (corresponding to the field extension k(u)/k(t) with u d = t), then it is not in general the case that the base changed surface is dominated by a product of curves. Berger's construction gives a rich class of curves for which DPC does hold in every layer of a tower of coverings. We restate Theorem 12.5 from Lecture 1 in a slightly different (but visibly equivalent) form. Theorem 11.1. Let E be an elliptic curve over K = k(t) and assume that there are rational functions f (x) and g(y) on P 1 k such that E is birational to the curve V (f (x) − tg(y)) ⊂ P 1 K × P 1 K . Then the BSD conjecture holds for E over the field k(u) = k(t 1/d ) for all d prime to p. Proof. Clearing denominators we may interpret f (x) − tg(y) as defining a hypersurface X in the affine space A 3 with coordinates x, y, and t and it is clear that the elliptic surface E → P 1 associated to E is birationally isomorphic to X . On the other hand, X is visibly birational to P 1 × P 1 since we may eliminate t. Thus X and E are dominated by a product of curves. This checks the case d = 1. For larger d, note that the elliptic surface E d → P 1 associated to E/k(u) is birational to the hypersurface X d in A 3 k defined by f (x) − u d g(y). Berger showed by a fundamental group argument, generalizing [Sch96], that X d is dominated by a product of curves, more precisely, by a product of covers of P 1 . (For her argument to be correct, π 1 should be replaced by the prime-to-p fundamental group π p ′ 1 throughout.) This was later made more explicit in [Ulm09a], where it was observed that X d is dominated by a product of two explicit covers of P 1 . More precisely, let C d and D d be the covers of P 1 k defined by z d = f (x) and w d = g(y). 
Then there is a rational map from C d × D d to the hypersurface X d , namely (x, z, y, w) → (x, y, u = z/w). This is clearly dominant and so X d and E are dominated by products of curves. Applying Theorem 9.1 finishes the proof. Note that there is a great deal of flexibility in the choice of data for Berger's construction. As an example, take f (x) = x(x − a)/(x − 1) and g(y) = y(y − 1) where a ∈ F q is a parameter. Then if a = 1, the curve f (x) = tg(y) in P 1 × P 1 has genus 1 and a rational point. A simple calculation shows that it is birational to the Weierstrass cubic y 2 + txy − ty = x 3 − tax 2 + t 2 ax. Theorem 11.1 implies that this curve satisfies the BSD conjecture over F q n (t 1/d ) for all n and all d prime to p. Varying q and a we get infinitely many curves for which BSD holds at every layer of a tower. We will give more examples and discuss further applications of the idea behind Berger's construction in Lectures 4 and 5. Unbounded ranks in towers In order to prove results on analytic ranks in towers, we need a more sophisticated approach to L-functions. In this lecture we explain Grothendieck's approach to L-functions over function fields and then use it and a new linear algebra lemma to find elliptic curves with unbounded analytic and algebraic ranks in towers of function fields. Galois representations As usual, we let K = k(C) be the function field of a curve over a finite field k and G K = Gal(K sep /K) its Galois group. As in Lecture 0, Section 2, we write D v , I v , and Fr v for the decomposition group, inertia group, and (geometric) Frobenius at a place v of K. We fix a prime ℓ = p and consider a representation on a finite-dimensional Q ℓ vector space. We make several standing assumptions about ρ. First, we always assume ρ is continuous and unramified away from a finite set of places of K. By a compactness argument (see [KS99,9.0.7]) , it is possible to define ρ over a finite extension L of Q ℓ , i.e., there is a representation ρ ′ : G K → GL n (L) isomorphic to ρ over Q ℓ . Nothing we say will depend on the field of definition of ρ and we will generally not distinguish between ρ and isomorphic representations defined over subfields of Q ℓ . We also always assume that ρ is pure of integral weight w, i.e., for all v where ρ is unramified, the eigenvalues of ρ(Fr v ) are Weil numbers of size q w/2 v . Finally, we sometimes assume that ρ is "symplectically self-dual of weight w." This means that on the space V where ρ acts, there is an G K -equivariant, alternating pairing with values in Q ℓ (−w). Conductors The Artin conductor of ρ is a divisor on C (a formal sum of places of K) and is a measure of its ramification. We write Cond(ρ) = n = v n v [v]. To define the local coefficients, fix a place v of K and let G i ⊂ I v be the higher ramification groups at v (in the lower numbering). Then define Here V Gi denotes the subspace of V invariant under G i . It is clear that n v = 0 if and only if ρ is unramified at v. If ρ is tamely ramified at v (i.e., G 1 acts trivially), then In general, the first term of the sum above is the tame conductor and the rest of the sum is the Swan conductor . We refer to [Mil80,V.2] and also [Ser77,§19] for an alternative definition and more discussion about the conductor, including the fact that the local coefficients n v are integers. L-functions Let us fix an isomorphism Q ℓ ∼ = C so that we may regard eigenvalues of Frobenius on ℓ-adic representations as complex numbers. 
Having done this, a representation (1.1.1) gives rise to an L-function, defined as an Euler product: and L(ρ, s) = L(ρ, q −s ). The product is over the places of K, the exponent I v denotes the subspace of elements invariant under the inertia group I v , and Fr v is a Frobenius element at v. Because of our assumption that ρ is pure of weight w, the product defining L(ρ, s) converges absolutely and defines a holomorphic function in the region Re s > w/2 + 1. is a twisted version of the zeta function of C. Compare with Exercise 9.2 of Lecture 1. Note that a representation factors through G K → G k if and only if it is trivial on G kK , so this exercise fills in the missing cases in the following theorem. Theorem 1.3.3. Suppose that ρ is a representation of G K (satisfying the standing hypotheses of Subsection 1.1) that contains no copies of the trivial representation when restricted to G kK . Then there is a canonically defined Q ℓ -vector space H(ρ) with continuous G k action such that The dimension of H(ρ) is deg(ρ)(2g C − 2) + deg n where n is the conductor of ρ. Proof. (Sketch) The representation ρ : G K → GL(V ) gives rise to a constructible sheaf F ρ on C. In outline: ρ is essentially the same thing as a lisse sheaf F U on the open subset j : U ֒→ C over which ρ is unramified. We defined F ρ as the push-forward j * F U . For each closed point v of C, the stalk of ρ at v is V Iv . Let H i (C, F ) be theétale cohomology groups of F . They are finite dimensional Q ℓ vector spaces and give continuous representations of G k . The Grothendieck-Lefschetz fixed point formula says that for each finite extension F q n of k ∼ = F q , we have On the left hand side, the sum is over points of C with values in F q n and the summand is the trace of the action of the Frobenius at x on the stalk of F at a geometric point over x. Multiplying both sides by T n /n, summing over n ≥ 1, and exponentiating, one finds that . Now H 0 (C, F ) and H 2 (C, F ) are isomorphic respectively to the invariants and coinvariants of V under G kK and so under our hypotheses on ρ, H i (C, F ) vanishes for i = 0, 2. Thus we have The dimension formula comes from an Euler characteristic formula proven by Raynaud and sometimes called the Grothendieck-Ogg-Shafarevich formula. It says Since H 0 and H 2 vanish, this gives the desired dimension formula. Obviously we have omitted many details. I recommend [Mil80, V.1 and V.2] as a compact and readable source for several of the key points, including passing from ℓ-torsion sheaves to ℓ-adic sheaves, the conductor, and the Grothendieck-Ogg-Shafarevich formula. See [Mil80,VI.13] for the Grothendieck-Lefschetz trace formula. Remark/Exercise 1.3.4. If we are willing to use a virtual representation of G k in place of a usual representation, then the Theorem has a more elegant restatement which avoids singling out representations that are trivial when restricted to G kK . State and prove this generalization. Exercise 1.3.5. Check that we have the Artin formalism formula: if F/K is a finite separable extension and ρ is a representation of G F , then L(ρ, s) = L(Ind GK GF ρ, s). Note that the left hand side is an Euler product on F with almost all factors of some degree, say N , whereas the right hand side is an Euler product on K, with almost all factors of degree N [F : K]. The equality can be taken to be an equality of Euler products, where that on the left is grouped according to the places of K. 
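For reference, the displayed formulas referred to in the last two subsections can be written out explicitly; the following is a reconstruction using the standard statements from the references cited there ([Mil80], [Ser77]), not a verbatim quotation of the lecture notes.

```latex
% Local conductor exponent at v, with G_i \subset I_v the ramification groups
% in the lower numbering (the i = 0 term is the tame part, the rest the Swan
% conductor):
n_v \;=\; \sum_{i \ge 0} \frac{1}{[G_0 : G_i]}\,\dim\bigl(V / V^{G_i}\bigr),
\qquad
n_v = \dim\bigl(V / V^{I_v}\bigr)\ \text{ if } \rho \text{ is tame at } v .

% The L-function as an Euler product, and its cohomological expression:
L(\rho, T) \;=\; \prod_{v} \det\bigl(1 - \rho(\mathrm{Fr}_v)\,T^{\deg v} \,\big|\, V^{I_v}\bigr)^{-1},
\qquad L(\rho, s) = L(\rho, q^{-s}),

L(\rho, T) \;=\; \prod_{i=0}^{2} \det\bigl(1 - \mathrm{Fr}_q\,T \,\big|\, H^i(\overline{C}, \mathcal{F}_\rho)\bigr)^{(-1)^{i+1}}
\;=\; \det\bigl(1 - \mathrm{Fr}_q\,T \,\big|\, H^1(\overline{C}, \mathcal{F}_\rho)\bigr)
\quad \text{if } H^0 = H^2 = 0 .

% Grothendieck--Ogg--Shafarevich, in the case used in Theorem 1.3.3:
\dim H(\rho) \;=\; \dim H^1(\overline{C}, \mathcal{F}_\rho)
\;=\; \deg(\rho)\,(2 g_C - 2) \;+\; \deg \mathfrak{n},
\qquad \mathfrak{n} = \mathrm{Cond}(\rho),
\quad \text{when } H^0 = H^2 = 0 .
```

In particular, when ρ restricted to G_{kK} contains no copies of the trivial representation, the H⁰ and H² factors disappear and H(ρ) = H¹(C̄, F_ρ), of the stated dimension.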
Functional equation and Riemann hypothesis (Symmetric because ρ is skew-symmetric and H = H 1 .) That there is such a pairing is not as straightforward as it looks, because we defined the sheaf F as a push forward j * F U where j : U ֒→ C is a non-empty open set over which ρ is unramified and F U is the lisse sheaf on U corresponding to ρ. It is well-known that j * identifies H 1 (C, F ) with the image of the "forget supports" map then induces a pairing on H 1 (C, F ) via the above identification. Poincaré duality shows that the pairing is non-degenerate and so H(ρ) is orthogonally self-dual of weight w + 1. The location of the zeroes is related to the eigenvalues of Frobenius on H(ρ) = H 1 (C, F ) and these are Weil numbers of size q w+1 by Deligne's purity theorem [Del80]. I recommend the Arizona Winter School 2000 lectures of Katz (published as [Kat01]) for a streamlined proof of Deligne's theorem in the generality needed here. The case of an elliptic curve Next, we apply the results of the previous section to elliptic curves. Throughout, E will be an elliptic curve over a function field K = k(C) over a finite field k of characteristic p. The Tate module We consider the Tate module of E. More precisely, fix a prime ℓ = p and let Let ρ E be the representation of G K on the dual vector space V * ℓ = Hom(V ℓ E, Q ℓ ) ∼ = H 1 (E, Q ℓ ). Then ρ E is two-dimensional and continuous and (by the criterion of Ogg-Néron-Shafarevich, see [ST68, Thm. 1]) it is unramified outside the (finite) set of places where E has bad reduction. At every place v of K where E has good reduction, we have This follows from the smooth base change theorem [Mil80,VI.4] and the cohomological description of the zeta function of the reduction, as in Section 4 of Lecture 0. Thus ρ is pure of weight w = 1. The Weil pairing induces an alternating, G k -equivariant pairing V ℓ E × V ℓ E → Q ℓ (−1) and so ρ is symplectically self-dual of weight 1. If E is constant, then ρ E factors through G K → G k and since G k is abelian, ρ E is the direct sum of two characters. More precisely, if E ∼ = E 0 × k K and 1 − aT + qT 2 = (1 − α 1 T )(1 − α 2 T ) is the numerator of the Z-function of E 0 , then ρ E is the sum of the two characters that send Fr v to α deg v i . If E is non-isotrivial, then ρ E restricted to G kK has no trivial subrepresentations. One way to see this is to use a slight generalization of the MWLN theorem, according to which E(kK) is finitely generated (when E is non-isotrivial). Thus its ℓ-power torsion is finite and this certainly precludes a trivial subrepresentation in ρ| G kK . In fact, by a theorem of Igusa [Igu59], ρ| G kK is contains an open subgroup of SL 2 (Z ℓ ) so is certainly irreducible, even absolutely irreducible. Exercise 2.1.1. Show that if E is isotrivial but not constant, then ρ E restricted to G kK has no trivial subrepresentation. Hint: E is a twist of a contant curve E ′ = E 0 × k K. Relate the action of G K on the Tate module of E to its action on that of E ′ and show that there exists an element σ ∈ G kK that acts on V ℓ E via a non-trivial automorphism of E. But a non-trivial automorphism has only finitely many fixed points. We can summarize this discussion as follows. Proposition 2.1.2. Let ρ be the action of G K on the Tate module V ℓ E of E. Then ρ is continuous, unramified outside a finite set of places of K, and is pure and symplectically self-dual of weight 1. If E is non-constant, then ρ| G kK has no trivial subrepresentations. 
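A concrete way to read the purity statement in Proposition 2.1.2 is through the local factors at places of good reduction; the display below is a standard restatement of the fact alluded to in the text (via the zeta function of the reduced curve), not a quotation of the missing equation.

```latex
\det\bigl(1 - \rho_E(\mathrm{Fr}_v)\,X \,\big|\, V \bigr)
\;=\; 1 - a_v X + q_v X^2,
\qquad a_v = q_v + 1 - \#E_v(k_v),
\qquad |\alpha_{1,v}| = |\alpha_{2,v}| = q_v^{1/2},
```

where E_v denotes the reduction of E at v, q_v = #k_v, and α_{1,v}, α_{2,v} are the inverse roots; the Weil bound |a_v| ≤ 2√q_v is exactly the statement that ρ_E is pure of weight 1 at v.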
The conductor of ρ E as defined in the previous section is equal to the conductor of E as mentioned in Section 8 of Lecture 1. This was proven by Ogg in [Ogg67]. The L-function Applying the results of the previous section, we get a very satisfactory analysis of the L-function of E. Since we know everything about the constant case by an elementary analysis (cf. exercise 9.2 of Lecture 1), we restrict to the non-constant case. Theorem 2.2.1. Let E be a non-constant elliptic curve over K = k(C) and let q be the cardinality of k. Let n be the conductor of E. Then L(E, s) is a polynomial in q −s of degree N = 4g C − 4 + deg(n). Its inverse roots are Weil numbers of size q and it satisfies a functional equation Combining the Theorem with Theorem 12.1, we obtain the following. The sign in the functional equation can be computed as a product of local factors. This can be seen via the connection with automorphic forms (a connection which is outside the scope of these lectures) or, because we are in the function field situation, directly via cohomological techniques. See [Lau84] for the latter. Large analytic ranks in towers 3.1. Statement of the theorem We give a general context in which one obtains large analytic ranks by passing to layers of a suitable tower of function fields. As usual, let p be a prime and q a power of p. Let K = F q (t), for each d not divisible by p, Suppose that E is an elliptic curve over K. Let n be the conductor of E and let . This is the conductor of E except that we have removed the tame part at t = 0 and t = ∞. Theorem 3.1.1. Let E be an elliptic curve over K and define n ′ as above. Suppose that deg n ′ is odd. Then the analytic rank of E over F d (and K d ) is unbounded as d varies. More precisely, there exists a constant c depending only on E such that if d has the form d = q n + 1, then and This theorem is proven in detail in [Ulm07,[2][3][4]. We will sketch the main lines of the argument below. A linear algebra lemma Our analytic rank results ultimately come from the following odd-looking result of linear algebra. Proposition 3.2.1. Let V be a finite-dimensional vector space with subspaces W i indexed by i ∈ Z/aZ such that V = ⊕ i∈Z/aZ W i . Let φ : V → V be an invertible linear transformation such that φ(W i ) = W i+1 for all i ∈ Z/aZ. Suppose that V admits a non-degenerate, φ-invariant symmetric bilinear form , . Suppose that a is even and , induces an isomorphism W a/2 ∼ = W * 0 (the dual vector space of W 0 ). Suppose also that N = dim W 0 is odd. Then the polynomial 1 − T a divides det(1 − φT |V ). We omit the proof of this proposition, since it is not hard and it appears in two forms in the literature already. Namely, embedded in [Ulm05, 7.1.11ff] is a matrix-language proof of the proposition, and a coordinate-free proof is given in [Ulm07,§2]. 3.3. Sketch of the proof of Theorem 3.1.1 For simplicity, we assume that E is non-isotrivial. (If p > 3 and E is isotrivial, then the theorem is vacuous because all of the local conductor exponents n v are even.) Let ρ be the representation of G K on V = H 1 (E, Q ℓ ) = (V ℓ E) * and let ρ d be the restriction of ρ to G F d . Then by Grothendieck's analysis, we have Here H(ρ d ) is an H 1 on the rational curve whose function field is F d = F q (u) = F q (t 1/d ). The projection formula in cohomology (a parallel of the Artin formalism 1.3.5) implies that where 1 denotes the trivial representation. 
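Before continuing the sketch, here is a small numerical illustration of Proposition 3.2.1 in the simplest case a = 2 and N = dim W_0 = 3. With the pairing between W_0 and W_1 normalized to the identity (a convenient illustrative normalization, not the general situation), φ-invariance forces the two blocks to satisfy A_1 = (A_0^T)^{-1}, and then det(A_1 A_0 − I) is a nonzero scalar multiple of det(A_0 − A_0^T), which vanishes because an antisymmetric matrix of odd size is singular. That is exactly why 1 − T² divides det(1 − φT).

```python
import numpy as np

rng = np.random.default_rng(0)
N = 3                                   # dim W_0 = dim W_1, odd (essential)

# Pick a random invertible integer block A0 : W_0 -> W_1.
while True:
    A0 = rng.integers(-5, 6, size=(N, N)).astype(float)
    if abs(np.linalg.det(A0)) > 0.5:
        break
A1 = np.linalg.inv(A0.T)                # forced by invariance of the pairing

# phi swaps the two blocks: phi(w0, w1) = (A1 w1, A0 w0).
Z = np.zeros((N, N))
phi = np.block([[Z, A1], [A0, Z]])

# Symmetric non-degenerate pairing putting W_1 in duality with W_0,
# and a check that it is phi-invariant.
B = np.block([[Z, np.eye(N)], [np.eye(N), Z]])
assert np.allclose(phi.T @ B @ phi, B)

# Proposition 3.2.1 predicts that 1 - T^2 divides det(1 - phi*T),
# i.e. that both +1 and -1 occur among the eigenvalues of phi.
eig = np.linalg.eigvals(phi)
assert min(abs(eig - 1.0)) < 1e-6 and min(abs(eig + 1.0)) < 1e-6
print(np.round(eig, 4))

# The mechanism: the return map M = A1 A0 on W_0 has det(M - I) = 0, since
# M - I = (A0^T)^{-1} (A0 - A0^T) and A0 - A0^T is antisymmetric of odd
# size.  (With N even this generally fails.)
M = A1 @ A0
print(np.linalg.det(M - np.eye(N)))     # numerically ~ 0
```

With this toy picture in mind, the sketch resumes by decomposing H(ρ_d) according to the characters of Gal(F_q(u)/F_q(t)).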
Since the cohomology H is computed on P 1 u (the P 1 with coordinate u, with scalars extended to F q ) and P 1 u → P 1 t is Galois with group µ d , we have where χ is a character of Gal(F q (u)/F q (t)) of order exactly d. Now the decomposition displayed above is not preserved by Frobenius. Indeed Fr q sends H(ρ ⊗ χ j ) to H(ρ ⊗ χ qj ). Thus we let o ⊂ Z/dZ denote an orbit for multiplication by q and we regroup: We write V o for the summand indexed by an orbit o ⊂ Z/dZ in the last display and a o for the cardinality of o. As we will see presently, the hypotheses of the theorem imply that Proposition 3.2.1 applies to most of the V o and for each one where it does, we get a zero of the L-function. Before we do that, there is one small technical point to take care of: The linear algebra proposition requires that V be literally self-dual (not self-dual with a weight) and it implies that 1 is an eigenvalue of φ on V . To get the eigenvalue q that we need, we should twist ρ by −1/2 (which is legitimate once we have fixed choice of square root of q) so that it has weight 0, apply the lemma, and twist back to get the desired zero. We leave the details of these points to the reader. Assuming we have made the twist just mentioned, we need to check which V o are self-dual. Since ρ is self-dual, Poincaré duality gives a non-degenerate pairing on H(ρ d ) which puts H(ρ⊗χ j ) in duality with H(ρ⊗χ −j ). Thus if d = q n +1 for some n > 0, then all of the orbits o will yield a self-dual V o . Possibly two of these orbits have odd order (those through 0 and d/2, which have order 1) and all of the other have a o even. Moreover, for the orbits of even order, setting W o,i = H(ρ ⊗ χ q i jo ) for some fixed j o ∈ o, we have The last point that we need is that W o,i should be odd-dimensional. The hypothesis on n ′ implies that for all characters χ j of sufficiently high order (depending only on E), the conductor of ρ ⊗ χ j is odd. The Grothendieck-Ogg-Shafarevich dimension formula (mentioned at the end of the proof of Theorem 1.3.3) then implies that for all orbits o consisting of characters of high order, H(ρ ⊗ χ jo ) has odd dimension. The linear algebra proposition 3.2.1 now implies that for d = q n + 1 and for most orbits o ⊂ Z/dZ, 1 is an eigenvalue of Fr q on V o (and q is an eigenvalue of Fr q on the corresponding factor of H(ρ d )). Since each of these orbits has size ≤ 2n, there is a constant c such that the number of "good" orbits is ≥ d/2n. Thus for a constant c depending only on E. To get the assertions over K d , note that in passing from F d to K d , each factor (1 − q ao T ao ) of L(E/F d , T ) becomes (1 − qT ) ao and so This completes our discussion of Theorem 3.1.1. We refer to [Ulm07, §2-4] for more details. Examples It is easy to see that the hypotheses in Theorem 3.1.1 are not very restrictive and that high analytic ranks are in a sense ubiquitous. The following rephrasing of the condition in the theorem should make this clear. Exercise 3.4.1. Prove that if p > 3 and E is an elliptic curve over K, then Theorem 3.1.1 guarantees that E has unbounded analytic rank in the tower F d if the number of geometric points of P 1 Fq over which E has multiplicative reduction is odd. Corollary 3.4.2. Let p be any prime number, K = F p (t), and let E be one of the curves E 7 , E 8 , or E 9 defined in Subsection 1.2 of Lecture 1. Then is unbounded as d varies through integers prime to p Proof. 
If p > 3, then one sees immediately by considering the discriminant and j-invariant that E has one finite, non-zero place of multiplicative reduction and is tame at 0 and ∞, thus it satisfies the hypotheses of Theorem 3.1.1. If p = 2 or 3, one checks using Tate's algorithm that E has good reduction at all finite non-zero places and is tame at zero, but the wild part of the conductor at ∞ is odd and so the theorem again applies. For another example, take the Legendre curve over F p (t), p > 2. It is tame at 0 and ∞ and has exactly one finite, non-zero place of multiplicative reduction. Examples via the four-monomial theorem Noting that the curves E 7 , E 8 , and E 9 are defined by equations involving exactly four monomials, we get a very nice result on algebraic ranks. Theorem 4.1.1. Let p be any prime number, K = F p (t), and let E be one of the curves E 7 , E 8 , or E 9 defined in Subsection 1.2 of Lecture 1. Then for all d prime to p and all powers q of p, the Birch and Swinnerton-Dyer conjecture holds for E over K d = F q (t 1/d ). Moreover, the rank of E(F p (t 1/d )) is unbounded as d varies. Proof. This follows immediately from Corollary 3.4.2 and Theorem 12.4 of Lecture 1 as soon as we note that E/K d is defined by an equation satisfying Shioda's conditions. Similar ideas can be used to show that for every prime p and every genus g > 0, there is an explicit hyperelliptic curve C over F p (t) such that the Jacobian of C satisfies BSD over F q (t 1/d ) for all q and d and has unbounded rank in the tower F p (t 1/d ). This is the main theorem of [Ulm07]. Examples via Berger's construction As we pointed out in Lecture 3, the Shioda 4-monomial construction is rigidvarying the coefficients does not lead to families that vary geometrically. Berger's thesis developed a new construction with parameters that leads to families of curves for which the BSD conjecture holds in a tower of fields. This together with the analytic ranks result 3.1.1 gives examples of families of elliptic curves with unbounded ranks. To make this concrete, we quote the first example with parameters from [Ber08] that, together with the analytic rank construction 3.1.1, gives rise to unbounded analytic and algebraic ranks. Theorem 4.2.1 (Berger). Let k = F q be a finite field of characteristic p and let a ∈ F q with a = 0, 1, 2. Let E be the elliptic curve over K = F q (t) defined by y 2 + a(t − 1)xy + a(t 2 − t)y = x 3 + (2a + 1)tx 2 + a(a + 2)t 2 x + a 2 t 3 . Then for all d prime to p the BSD conjecture holds for E over F q (t 1/d ). Moreover, for every q and a as above, the rank of E(F q (t 1/d )) is unbounded as d varies. Proof. This is an instance of Berger's construction (Theorem 11.1 of Lecture 3). Indeed, let f (x) = x(x − a)/(x − 1) and g(y) = y(y − a)/(y − 1). Then V (f − tg) ⊂ P 1 K × P 1 K is birational to E, which is a smooth elliptic curve for all a = 0, 1. Berger's Theorem 11.1 of Lecture 3 shows that E satisfies BSD over the fields F q (t 1/d ). Assume first that p > 3. One checks that ∆ is relatively prime to c 4 so that the zeroes of ∆ are places of multiplicative reduction. Since the discriminant (in t) of the quadratic factor a 2 t 2 − (2a 2 − 16a + 16)t + a 2 is −64(a − 1)(a − 2) 2 we see that there are three finite, non-zero geometric points of multiplicative reduction. Since p > 3, the reduction at 0 and ∞ is tame and so n ′ (defined as in Subsection 3.1 of Lecture 4) has degree 3. 
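The discriminant computation quoted in the proof is easy to check symbolically; the short sympy sketch below verifies only the stated factorization of the discriminant (in t) of the quadratic factor, which is nonzero precisely because a ≠ 1, 2.

```python
from sympy import symbols, factor, simplify

a = symbols('a')

# Discriminant (in t) of  a^2 t^2 - (2a^2 - 16a + 16) t + a^2 :
disc = (2*a**2 - 16*a + 16)**2 - 4*(a**2)*(a**2)

print(factor(disc))                                   # -64*(a - 1)*(a - 2)**2
assert simplify(disc + 64*(a - 1)*(a - 2)**2) == 0
```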
Thus by Theorem 3.1.1 of Lecture 4, E has unbounded analytic ranks in the tower F q (t 1/d ) and thus also unbounded algebraic ranks by the previous paragraph on BSD. If p = 2 or 3, one needs to use Tate's algorithm to compute n ′ , which again turns out to have degree 3. We leave the details of this computation as a pleasant exercise for the reader. LECTURE 5 More applications of products of curves In the last part of Lecture 4, we chose special curves E and used a domination C ×D E of the associated surface to deduce the Tate conjecture for E and thus the BSD conjecture for E. This yields an a priori equality of analytic and algebraic ranks. We then used other, cohomological, methods (namely the analytic ranks theorem) to compute the analytic rank. It turns out to be possible to use domination by a product of curves and geometry to prove directly results about algebraic ranks and explicit points. We sketch some of these applications in this lecture. More on Berger's construction Let k be a field (not necessarily finite), K = k(t), and K d = k(t 1/d ) = k(u). Recall that in Berger's construction we start with rational curves C = P 1 k and D = P 1 k and rational functions f (x) on C and g(y) on D. We get a curve in P 1 K × P 1 K defined by f (x) − tg(y) = 0 and we let E be the smooth proper model over K of this curve. (Some hypotheses are required for this to exist, but they are weaker than our standing hypotheses below.) The genus of E was computed by Berger in [Ber08, Theorem 3.1]. All the examples we consider will be of genus 1 and will have a K-rational point. We establish more notation to state a precise result. Let us assume for simplicity all the zeroes and poles of f and g are k-rational. Write As standing hypotheses, we assume that: (i) all the multiplicities a i , a ′ i ′ , b j , and b ′ j ′ are prime to the characteristic of k; and (ii) gcd(a 1 . . . , a k , a ′ 1 , . . . , a ′ k ′ ) = gcd(b 1 . . . , b ℓ , b ′ 1 , . . . , b ′ ℓ ′ ) = 1. Under these hypotheses, Berger computes that the genus of E is where δ(a, b) = (ab − a − b + gcd(a, b))/2. From now on we assume that we have chosen the data f and g so that E has genus 1. Two typical cases are where f and g are quadratic rational functions with simple zeroes and poles, or where f and g are cubic polynomials. There is always a K-rational point on E; for example, we may take a point where x and y are zeroes of f and g. Let E d → P 1 be the elliptic surface over k attached to E/K d . It is clear that E d is birational to the closed subset of P 1 k × P 1 k × P 1 k (with coordinates x, y, u) defined by the vanishing of f (x) − u d g(y). We saw in Section 11 of Lecture 3 that E is dominated by a product of curves and we would now like to make this more precise. Recall that we defined covers C d → C = P 1 and D d → D = P 1 by the equations z d = f (x) and w d = g(y). Note that there is an action of µ d , the d-th roots of unity, on C d and on D d . Proof. We have already noted that E d is birational to the zero set X of f (x)− u d g(y) in P 1 k × P 1 k × P 1 k . Define a rational map from C d × D d to X by sending (x, z, y, w) to (x, y, u = z/w). It is clear that this map factors through the quotient (C d × D d )/µ d . Since the map is generically of degree d, it induces a birational isomorphism between (C d × D d )/µ d and X . Thus (C d × D d )/µ d is birationally isomorphic to E d . 
In the next section we will explain how this birational isomorphism can be used to compute the Néron-Severi group of E d and the Mordell-Weil group E(K d ). A rank formula We keep the notation and hypotheses of the preceding subsection. Consider the base P 1 k , the one corresponding to K, with coordinate t. For each geometric point x of this P 1 k , let f x be the number of components in the fiber of E → P 1 over x. For almost all x, f x = 1 and its value at any point can be computed using Tate's algorithm. Define two constants c 1 and c 2 by the formulae and c 2 = (k − 1)(ℓ − 1) + (k ′ − 1)(ℓ ′ − 1). Here the sum is over geometric points of P 1 k except t = 0 and t = ∞ and k, k ′ , ℓ, and ℓ ′ are the numbers of distinct zeroes and poles of f and g (cf. equation (1.1)). Note that c 1 and c 2 depend only on the data defining E/K, not on d. Theorem 2.1. Suppose that k is algebraically closed and that d is relatively prime to all of the multiplicities a i , a ′ i ′ , b j , and b ′ j ′ and to the characteristic of k. Then we have Rank Here Hom(· · · ) µ d signifies the homomorphisms commuting with the actions of µ d on the two Jacobians induced by its action on the curves. Sketch of Proof. In brief, we use the birational isomorphism to compute the rank of the Néron-Severi group of E d and then use the Shioda-Tate formula to compute the rank of E(K d ). More precisely, we saw in Lecture 2, Subsection 8.4 that the Néron-Severi group of the product C d × D d is isomorphic to Z 2 × Hom(J C d , J D d ). It follows easily that the Néron-Severi group of the quotient (C d × D d )/µ d is isomorphic to Z 2 × Hom(J C d , J D d ) µ d . One then keeps careful track of the blow-ups needed to pass from (C d × D d )/µ d to E d . The effect of blow-ups on Néron-Severi is quite simple and was noted in Subsection 8.5 of Lecture 2. This is the main source of the term c 2 in the formula. Finally, one computes the rank of E(K d ) using the Shioda-Tate formula, as in Section 5 of Lecture 3. This step is the main source of the term c 1 d. The hypothesis that k is algebraically closed is not essential for any of the above, but it avoids rationality questions that would greatly complicate the formula. For full details on the proof of this theorem (in a more general context) see [Ulm09a, Section 6]. First examples One of the first examples is already quite interesting. We give a brief sketch and refer to [Ulm09a] for more details. With notation as in Section 1, we take f (x) = x(x − 1) and g(y) = y 2 /(1 − y). The genus formula (1.2) shows that E has genus 1. In fact, the change of coordinates x = −y/(x + t), y = −x/t brings it into the Weierstrass form y 2 + xy + ty = x 3 + tx 2 . We remark in passing that if the characteristic of k is not 2, E has multiplicative reduction at t = 1/16 and good reduction elsewhere away from 0 and ∞. Thus by the analytic rank result of Lecture 2, when k is finite, say k = F p and p > 3, we expect E to have unbounded analytic rank in the tower F p (t 1/d ). (In fact a more careful analysis gives the same conclusion for every p.) Now assume that k is algebraically closed. To compute the constant c 1 , one checks that (for k of any characteristic) E has exactly one irreducible component over each geometric point of P 1 k . Thus c 1 = 0. It is immediate from the definition that c 2 = 0. Thus our rank formula yields Next we note that there is an isomorphism φ : C d → D d sending (x, z) to (y = 1/x, w = 1/z). 
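Both computational claims made for this example are easy to confirm with sympy: the change of variables quoted above does carry f(x) = t g(y) to the stated Weierstrass cubic (up to the nonzero factor (X + t)^{-2}; capitalized Weierstrass coordinates are used below only to avoid a clash of variable names), and the map (x, z) ↦ (1/x, 1/z) does send the defining equation of C_d to that of D_d.

```python
from sympy import symbols, simplify

t, X, Y, x, y = symbols('t X Y x y')

f = x*(x - 1)              # f(x)
g = y**2/(1 - y)           # g(y)

# Change of variables from the text: x = -Y/(X + t), y = -X/t.
# Multiplying f(x) - t*g(y) by (X + t)^2 should give the Weierstrass relation.
lhs = (f.subs(x, -Y/(X + t)) - t*g.subs(y, -X/t)) * (X + t)**2
weierstrass = Y**2 + X*Y + t*Y - (X**3 + t*X**2)
assert simplify(lhs - weierstrass) == 0

# The map (x, z) |-> (y, w) = (1/x, 1/z): if z^d = f(x), then
# w^d = 1/f(x), which equals g(1/x), so the image lies on D_d.
assert simplify(g.subs(y, 1/x) * f - 1) == 0
```

In particular the isomorphism φ : C_d → D_d really is given by (x, z) ↦ (y, w) = (1/x, 1/z).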
This isomorphism anti-commutes with the µ d action: Let ζ d be a primitive d-th root of unity and write [ζ d ] for its action on curves or Jacobians. Using φ to identify C d and D d , our rank formula becomes Rank E(K d ) = Rank End(J C d ) anti−µ d where "End(· · · ) anti−µ d " denotes those endomorphisms anti-commuting with µ d in the sense above. Suppose that k has characteristic zero. Then a consideration of the (faithful) action of End(J C d ) on the differentials H 0 (J C d , Ω 1 ) shows that End(J C d ) anti−µ d = 0 for all d (see [Ulm09a,7.6]). We conclude that for k of characteristic zero, the rank of E(K d ) is zero for all d. Now assume that k has characteristic p (and is algebraically closed). If we take d of the form p f + 1 then we get many elements of End(J C d ) anti−µ d . Namely, we consider the Frobenius Fr p f and compute that • Fr p f . The same computation shows that Fr p f •[ζ i d ] anticommutes with µ d for all i. It turns out that there are two relations among these endomorphism in End(J C d ) if p > 2 and just one relation if p = 2 (see [Ulm09a,.10]). Thus we find that, for d of the special form d = p f + 1, The reader may enjoy checking that this is in exact agreement with what the analytic rank result (Theorem 3.1.1 of Lecture 4) predicts. Somewhat surprisingly, there are more values of d for which we get high ranks. A natural question is to identify all pairs (p, d) such that E(F p (t 1/d ) has "new" rank, i.e, points of infinite order not coming from smaller values of d. The exact set of pairs (p, d) for which we get high rank is mysterious. There are "systematic" cases (such as (p, p f + 1), as above, or (p, 2(p − 1))) and other cases that may be sporadic. This is the subject of ongoing research so we will not go into more detail, except to note that the example in Section 5 below is relevant to this question. Explicit points The main ingredients in the rank formula of Section 2 are the calculation of the Néron-Severi group of a product of curves in terms of homomorphisms of Jacobians and the Shioda-Tate formula. Tracing through the proof leads to a homomorphism For elements of Hom(J C d , J D d ) µ d where we can find an explicit representation in DivCorr(C d , D d ), the geometry of Berger's construction leads to explicit points in E(K d ). This applies notably to the endomorphisms Fr p f •[ζ i d ] appearing in the analysis of the first example above. Indeed, these endomorphisms are represented in DivCorr(C d , D d ) by the graphs of Frobenius composed with the automorphisms [ζ i d ] of C d . Tracing through the geometry leads to remarkable explicit expressions for points in E(K d ). The details of the calculation are presented in [Ulm09a, §8] so we will just state the results here, and only in the case p > 2. Then the points P i = P (ζ i d t 1/d ) for i = 0, . . . , d − 1 lie in E(K d ) and they generate a finite index subgroup of E(K d ), which has rank d − 2. The relations among them are that It is elementary to check that the points lie in E(K d ). To check their independence and the relations by elementary means, one may compute the height pairing on the lattice they generate. It turns out to be a scaling of the direct sum of two copies of the A * (d−2)/2 lattice. Since we know from the previous section that E(K d ) has rank d − 2, the explicit points generate a subgroup of finite index. As another check that they have finite index, we could compute the conductor of E-it turns out to have degree d + 2-and apply Corollary 2.2.2 of Lecture 4. 
All this is explained in detail in [Ulm09a,§8]. Another example We keep the notation and hypotheses of Sections 1 and 2. For another example, assume that k = F p with p > 2. Let f (x) = x/(x 2 − 1) and g(y) = y(y − 1). The curve f (x)−tg(y) = 0 has genus 1 and the change of coordinates x = (x ′ +t)/(x ′ −t), y = −y ′ /2tx ′ brings it into the Weierstrass form This curve, call it E, has multiplicative reduction of type I 1 at the places dividing t 2 + 4, good reduction at other finite, non-zero places, and tame reduction at t = 0 and t = ∞. We find that the constants c 1 and c 2 are both zero and that Rank E(F p (t 1/d )) = Rank Hom(J C d , J D d ) µ d . Recall that the curves C d and D d are defined by the equations Now let us assume that d has the form d = 2p f − 1 and consider the map φ • Fr p f : C d → D d . Then we find that shows that they are almost independent; more precisely, they generate a subgroup of rank d − 1. Thus we find (for d of the form d = 2p f − 1) that the rank of E(k(t 1/d )) is at least d − 1. The reader may find it a pleasant exercise to write down explicit points in this situation, along the lines of the discussion in Section 4 and [Ulm09a, §8]. Further developments There have been further developments in the area of rational points on curves and Jacobians over function fields. To close, we mention three of them. In the examples of Sections 3 and 5, the set of d that are "interesting," i.e., for which we get high rank over K d , depends very much on p, the characteristic of k. In his thesis (University of Arizona, 2010), Tommy Occhipinti gives, for every p, remarkable examples of elliptic curves E over F p (t) such that for all d prime to p we have Rank E(F p (t 1/d )) ≥ d. The curves come from Berger's construction where f and g are generic degree two rational functions. The rank inequality comes from the rank formula in Theorem 2.1 and the Honda-Tate theory of isogeny classes of abelian varieties over finite fields. In the opposite direction, the author and Zarhin have given examples of curves of every genus over C(t) such that their Jacobians have bounded rank in the tower of fields C(t 1/ℓ n ) where ℓ is a prime. See [UZ10]. Finally, after some encouragement by Dick Gross at PCMI, the author produced explicit points on the Legendre curve over the fields F p (µ d )(t 1/d ) where d has the form p f + 1 and proved in a completely elementary way that they give Mordell-Weil groups of unbounded rank. In fact, this construction is considerably easier than that of Tate and Shafarevich [TS67] and could have been found in the 1960s. See [Ulm09b]. It appears that this territory is rather fertile and that there is much still to be discovered about high ranks and explicit points on curves and Jacobians over function fields. Happy hunting!
2011-01-10T20:12:30.000Z
2011-01-10T00:00:00.000
{ "year": 2011, "sha1": "f68d52cbdec855475e606143ed6e565fb988bb0b", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "f68d52cbdec855475e606143ed6e565fb988bb0b", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
127266370
pes2o/s2orc
v3-fos-license
Purcell effect in GaN-based waveguiding structures We present an analysis of the dependence of the Purcell coefficient on frequency, wavevector and emitter position in nitride-based waveguide structures. We show that the spontaneous emission rate of an emitter placed in a one-dimensional slab waveguide is modified in both structures considered. Results for symmetric and asymmetric nitride-based waveguides are demonstrated for the case of TE polarization. Introduction The nitride material system offers several properties that are important for optoelectronic devices, such as a wide transparency range [1] and suitability for harsh environments. Various optical components based on the nitride material system have already been demonstrated, including directional couplers [2], all-optical modulators and waveguide gratings [3,4]. Gallium nitride (GaN) stands out within the nitride material system: it is a promising high-refractive-index material for integrated optics, allowing the fabrication of low-loss waveguides in the ultraviolet region as well as other components for optical circuits. Over the last decade, advances in fabrication methods have made it possible to produce nanoscale waveguide structures with high interface quality and different geometries, such as slab, ridge and rib waveguides, on various kinds of substrates [5,6]. The modification of the spontaneous emission rate of emitters coupled to different types of resonant modes has attracted the attention of researchers because of the prospects for high-efficiency compact light sources, single-photon generators, etc. The rate of spontaneous emission can be estimated by a number of methods. Here, the Purcell effect in layered waveguide structures is analyzed using the so-called S-quantization method. The S-quantization formalism allows one to calculate the spontaneous emission rate in an arbitrary layered dielectric structure for states both inside and outside the light cone [7]. This rigorous and self-consistent method is based on the analysis of the eigenvalues of the scattering matrix and requires neither the solution of integro-differential equations nor the application of perturbation theory. The aim of this paper is to calculate the modification of the spontaneous emission rate in known GaN-core waveguide structures by the S-quantization method and to compare the results with analytical estimates. Results and discussion We consider the well-known slab waveguide model. As model structures, we chose a slab GaN waveguide layer (n = 2.33) of thickness 400 nm between two different sets of semi-infinite claddings: (i) two semi-infinite AlN claddings (n = 2.075) and (ii) a semi-infinite sapphire (Al2O3, n = 1.76) substrate with air on top. Schemes of the two waveguide structures are shown in figure 1. As is well known, both the cladding and core materials of the first structure (GaN and AlN) are birefringent and are described by ordinary and extraordinary refractive indices. In this paper we restrict consideration to transverse electric (TE) waveguide modes, which are described by the ordinary refractive index only, for simplicity. The scattering matrix S relates the incident and outgoing waves at the boundaries of the quantization box; in the S-quantization formalism the field is quantized by equating the eigenvalues of the S matrix to unity, s_{1,2} = 1.
In the waveguide regime, when the in-plane wavevector component lies outside the light cone, equation (3) becomes 1 ± sqrt(T12 T21) = 0, where T12 and T21 are components of the transfer matrix through the quantization box; the "+" sign corresponds to the symmetric eigenvector and the "-" sign to the antisymmetric eigenvector. Equations (4) are easily converted to the well-known form of the dispersion equations for even and odd waveguide modes. The modal Purcell factor can be obtained from Fermi's golden rule and reduces to the ratio of the squared dipole matrix elements in the waveguide case and in the homogeneous-medium case, respectively. The patterns of the modal Purcell factor for TE polarization are shown in figure 2. Figure 2(a) shows the distribution of the modal Purcell coefficient over energy and the wavevector component kx when the dipole is placed at the centre of the GaN layer and oriented along the Oy axis. It can be seen that in the region where the Purcell factor is substantially greater than zero (inside the light cone) local maxima and minima of the Purcell factor alternate. The maxima correspond to Fabry-Perot modes of the slab. At the edge of the light cone (between the light cones of the core and cladding materials), two branches of the even waveguide TE modes under consideration are visible. Figure 2(b) shows the case of a dipole emitter placed 150 nm from the centre of the structure. In comparison with (a) there are more waveguide modes, both even and odd, because in this case the emitter is placed in a region where the electric field of the odd modes is nonzero. The maximum value of the modal Purcell factor in this structure is close to 1 and corresponds to the fundamental mode (TE0) with the dipole positioned at the centre of the waveguide (where the magnitude of the electric field is also maximal). It is known that the difference between the core and cladding refractive indices determines the confinement of the field in the waveguide. To compare the values of the modal Purcell factors, we also considered the well-known waveguide system consisting of a GaN core layer with sapphire and air claddings. Figure 3 shows the pattern of the modal Purcell factor as a function of the wavevector component kx and energy when the dipole is placed at the centre of the waveguide layer. Since the second structure is an asymmetric waveguide, there are three light-cone curves, one corresponding to each layer. Three even waveguide modes are visible in area 3. Areas 1 and 2 are the regions within the light cones of the air and Al2O3 layers, where, as in the previous case, the alternation of Fabry-Perot modes appears. In both structures considered, the maximum value of the modal Purcell factor corresponding to the waveguide modes does not exceed 2. This can be explained by the parameters of the structures: the difference between the core and cladding refractive indices is not large, in agreement with the analytical estimate of the maximum modal Purcell factor given in [7], which involves the difference of the squared core and cladding refractive indices, n_core^2 - n_clad^2, and a numerical factor of about 1.3.
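To make the dispersion relations for the symmetric structure concrete, the sketch below finds the guided TE modes of the GaN/AlN slab numerically. It uses the textbook transverse-resonance condition for a symmetric slab rather than the paper's own equations (3)-(4), and the free-space wavelength of 400 nm is an illustrative assumption, not a value taken from the paper; only the refractive indices and the 400 nm core thickness are quoted from the text.

```python
import numpy as np
from scipy.optimize import brentq

n1, n2 = 2.33, 2.075        # ordinary refractive indices of GaN core / AlN cladding
d = 400e-9                  # core thickness (m), as quoted in the text
lam = 400e-9                # free-space wavelength (illustrative assumption)
k0 = 2*np.pi/lam

def resonance(beta, m):
    """Transverse-resonance condition for the TE_m mode of a symmetric slab:
    kappa*d - 2*arctan(gamma/kappa) - m*pi = 0, with kappa the transverse
    wavenumber in the core and gamma the decay constant in the cladding."""
    kappa = np.sqrt((n1*k0)**2 - beta**2)
    gamma = np.sqrt(beta**2 - (n2*k0)**2)
    return kappa*d - 2*np.arctan2(gamma, kappa) - m*np.pi

# Guided modes satisfy n2*k0 < beta < n1*k0; even m gives even (symmetric)
# field profiles, odd m gives odd (antisymmetric) ones.
m, eps = 0, 1e-6
while True:
    lo, hi = n2*k0*(1 + eps), n1*k0*(1 - eps)
    if resonance(lo, m) * resonance(hi, m) > 0:
        break                              # no further guided TE modes
    beta = brentq(resonance, lo, hi, args=(m,))
    parity = "even" if m % 2 == 0 else "odd"
    print(f"TE{m} ({parity}): effective index n_eff = {beta/k0:.4f}")
    m += 1
```

The number of guided TE modes found this way grows with the core thickness and with the index contrast n1^2 - n2^2, consistent with the modest mode count and modest modal Purcell factors discussed above.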
2019-04-23T13:21:36.733Z
2018-12-01T00:00:00.000
{ "year": 2018, "sha1": "1f1ab616d9e8ce0941d1297ff773c3fbd6f04dae", "oa_license": null, "oa_url": "https://doi.org/10.1088/1742-6596/1124/5/051055", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "a29823f100ec13b5cdf68a5c5a899999c3526ba5", "s2fieldsofstudy": [ "Physics", "Engineering" ], "extfieldsofstudy": [ "Physics", "Materials Science" ] }
5584780
pes2o/s2orc
v3-fos-license
The structural de-correlation time: A robust statistical measure of convergence of biomolecular simulations Although atomistic simulations of proteins and other biological systems are approaching microsecond timescales, the quality of trajectories has remained difficult to assess. Such assessment is critical not only for establishing the relevance of any individual simulation but also in the extremely active field of developing computational methods. Here we map the trajectory assessment problem onto a simple statistical calculation of the ``effective sample size'' - i.e., the number of statistically independent configurations. The mapping is achieved by asking the question, ``How much time must elapse between snapshots included in a sample for that sample to exhibit the statistical properties expected for independent and identically distributed configurations?'' The resulting ``structural de-correlation time'' is robustly calculated using exact properties deduced from our previously developed ``structural histograms,'' without any fitting parameters. We show the method is equally and directly applicable to toy models, peptides, and a 72-residue protein model. Variants of our approach can readily be applied to a wide range of physical and chemical systems. What does convergence mean? The answer is not simply of abstract interest, since many aspects of the biomolecular simulation field depend on it. When parameterizing potential functions, it is essential to know whether inaccuracies are attributable to the potential, rather than under-sampling. In the extremely active area of methods development for equilibrium sampling, it is necessary to demonstrate that a novel approach is better than its predecessors, in the sense that it equilibrates the relative populations of different conformers in less CPU time [1]. And in the important area of free energy calculations, under-sampling can result in both systematic error and poor precision. To rephrase the basic question, given a simulation trajectory (an ordered set of correlated configurations), what characteristics should be observed if convergence has been achieved? The obvious, if tautological, answer is that all states should have been visited with the correct relative probabilities, as governed by a Boltzmann factor (implicitly, free energy) in most cases of physical interest. Yet given the omnipresence of statistical error, it has long been accepted that such idealizations are of limited value. The more pertinent questions have therefore been taken to be: Does the trajectory give reliable estimates for quantities of interest? What is the statistical uncertainty in these estimates [2,3,4,5]? In other words, convergence is relative, and in principle, it is rarely meaningful to describe a simulation as not converged, in an absolute sense. (An exception is when a priori information indicates the trajectory has failed to visit certain states.) Accepting the relativity of convergence points directly to the importance of computing statistical uncertainty. The reliability of ensemble averages typically has been gauged in the context of basic statistical theory, by noting that statistical errors decrease with the square-root of the number of independent samples. The number of independent samples N A pertinent to the uncertainty in a quantity A, in turn, has been judged by comparing the trajectory length to the timescale of A's correlation with itself-A's autocorrelation time [6,3,5]. 
Thus, a trajectory of length t sim with an autocorrelation time for A of τ A can be said to provide an estimate for A with relative precision of roughly (1/N A ) ∼ (2τ A /t sim ). However, the estimation of correlation times can be an uncertain business, as good measurements of correlation functions require a lot of data [7,8]. Furthermore, different quantities typically have different correlation times. Other assessment approaches have therefore been proposed, such as the er-godic measure [9,10], analysis of principle components [11] and, more recently, structural histograms [12]. Although these approaches are applicable to quite complex systems without invoking correlation functions, they attempt only to give an overall sense of convergence rather than quantifying precision. Flyvbjerg and Petersen provided perhaps the most satisfying approach to quantifying the precision of an estimate in any particular quantity without relying on correlation functions [4]. The sophisticated block averaging scheme they present gauges whether correlation effects have been removed by considering a range of block sizes. The reasoning underlying their analysis is that, once the block length is longer than any correlation time(s), the estimated precision (statistical uncertainty) will no longer depend on block size. Our approach generalizes the logic implicit in the Flyvbjerg-Petersen analysis by developing an overall structural de-correlation time which can be estimated, simply and robustly, in biomolecular and other systems. The key to our method is to view a simulation as sampling an underlying distribution (typically a Boltzmann factor) of the configuration space, from which all equilibrium quantities follow. Our approach builds implicitly on the multi-basin picture proposed by Frauenfelder and coworkers [13,14], in which conformational equilibration requires equilibrating the relative populations of the various conformational substates. On the basis of the configuration-space distribution, we can define the general effective sample size N and the associated (de-)correlation time τ dec associated with a particular trajectory. Specifically, τ dec is the minimum time that must elapse between configurations for them to become fully decorrelated (i.e., with respect to any quantity). Here, fully decorrelated has a very specific meaning, which leads to testable hypotheses: a set of fully decorrelated configurations will exhibit the statistics of an independently and identically distributed (i.i.d.) sample of the governing Boltzmann-factor distribution. Below, we detail the tests we use to compute τ dec , which build on our recently proposed structural histogram analysis [12]; see also [15]. The key point is that the expected i.i.d. statistics must apply to any assay of a decorrelated sample. The contribution of the present paper is to recognize this, and then to describe an assay directly probing the configurationspace distribution for which analytic results are easily obtained for any system, assuming an i.i.d. sample. Procedurally, then, we simply apply our assay to increasing values hypothesized for τ dec . When the value is too small, the correlations lead to anomalous statistics (fluctuations), but once the assayed fluctuations match the analytic i.i.d. predictions, the de-correlation time τ dec has been reached. Hence, there is no fitting of any kind. 
Importantly, by a suitable use of our "structural histograms" [12], which directly describe configuration-space distributions, we can map a system of any complexity to an exactly soluble model. In practical terms, our analysis readily computes the configurational/structural decorrelation time τ dec (and hence the number of independent samples N ) for a long trajectory many times the length of τ dec . In turn, this provides a means for estimating statistical uncertainties in observables of interest, such as relative populations. Of equal importance, our analysis can reveal when a trajectory is dominated by statistical error, i.e., when the simulation time t sim ∼ τ dec . We note, however, that our analysis remains subject to the intrinsic limitation pertinent to all methods which aim to judge the quality of conformational sampling-of not knowing about parts of configuration space never visited by the trajectory being analyzed. In contrast to most existing quantitative approaches, which attempt to assess convergence of a single quantity, our general approach enables the generation of ensembles of known statistical properties. These ensembles in turn can then be used for many purposes beyond ensemble averaging, such as docking, or developing a better understanding of native protein ensembles. In the remainder of the paper, we describe the theory behind our assay, and then successfully apply it to a wide range of systems. We first consider a two-state Poisson process for illustrative purposes, followed by molecular systems: di-leucine peptide (2 residues; 50 atoms), Met-enkephalin (5 residues; 75 atoms), and a coarse-grained model of the N-terminal domain of calmodulin (72 united residues). For all the molecules, we test that our calculation for τ dec is insensitive to details of the computation. Theory Imagine that we are handed a "perfect sample" of configurations of a proteinperfect, we are told, because it is made up of configurations that are fully independent of one another. How could we test this assertion? The key is to note that, for any arbitrarily defined partitioning of the sample of N configurations into S subsets (or bins), subsamples of these N configurations obey very simple statistics. In particular, the expected variance in the population of a bin, as estimated from many subsamples, depends only on the population of the bin and size n of the subsample, as long as N >> n. Of course, a sample generated by a typical simulation is not made up of independent configurations. But since we know how the variance of subsamples should behave for an ideal sample of independent configurations, we are able to determine how much simulation time must elapse before configurations may be considered independent. We call this time the structural decorrelation time, τ dec . Below, we show how to partition the trajectory into structurally defined subsets for this purpose, and how to extract τ dec . There is some precedence for using the populations of structurally defined bins as a measure of convergence [12]. Smith et al considered the number of structural clusters as a function of time as a way to evaluate the breadth and convergence of conformational sampling, and found this to be a much more sensitive indicator of sampling than other commonly used measures [16]. Simmerling and coworkers went one step further, and compared the populations of the clusters as sampled by different simulations [15]. 
Here, we go another step, by noting that the statistics of populations of structurally defined bins provide a unique insight into the quality of the sample. Our analysis of a simulation trajectory proceeds in two steps, both described in Sec. 4: (I) A structural histogram is constructed. The histogram is a unique classification (a binning, not a clustering) of the trajectory based upon a set of reference structures, which are selected at random from the trajectory. The histogram so constructed defines a discrete probability distribution, P (S), indexed by the set of reference structures S. (II) We consider different subsamples of the trajectory, defined by a fixed interval of simulation time t. A particular "t subsample" of size n is formed by pulling n frames in sequence separated by a time t (see Fig. 1). When t gets large enough, it is as if we are sampling randomly from P (S). The smallest such t we identify as the structural decorrelation time, τ dec , as explained below. Structural Histogram A "structural histogram" is a one-dimensional population analysis of a trajectory based on a partitioning (classification) of configuration space. Such classifications are simple to perform based on proximity of the sampled configurations to a set of reference structures taken from the trajectory itself [12]. The structural histogram will form the basis of the decorrelation time analysis. It defines a distribution, which is then used to answer the question, "How much time must elapse between frames before we are sampling randomly from this distribution?" Details are given in Sec. 4. Does the the equilibration of a structural histogram reflect the equilibration of the underlying conformational substates (CS)? Certainly, several CS's will be lumped together into the same bin, while others may be split between one or more bins. But clearly, equilibration of the overlying histogram bins requires equilibration of the underlying CS's. We will present evidence that this is indeed the case in Sec. 2. Furthermore, since the configuration space distribution (and the statistical error associated with our computational estimate thereof) controls all ensemble averages, it determines the precision with which these averages are calculated. We will show that the convergence of a structural histogram is very sensitive to configuration space sampling errors. Statistical analysis of P (S) and the decorrelation time τ dec In this section we define an observable, σ 2 obs (t), which depends very sensitively on the equilibration of the bins of a structural histogram as a function of simulation time t. Importantly, σ 2 obs (t) can be exactly calculated for a histogram of fully decorrelated structures. Plotting σ 2 obs (t) as a function of t, we identify the time at which the observed value equals that for fully decorrelated structures as the structural decorrelation time. Given a trajectory of N frames, we build a uniform histogram of S bins P (S), using the procedure described in Sec. 4. By construction, the likelihood that a randomly selected frame belongs to bin 'i' of P is simply 1/S. Now imagine for a moment that the trajectory was generated by an algorithm which produced structures that are completely independent of one another. Given a subsample of n frames of this correlationless trajectory, the expected number of structures in the subsample belonging to a particular bin is simply n/S, regardless of the "time" separation of the frames. 
As the trajectory does not consist of independent structures, the statistics of subsamples depend on how the subsamples are selected. For example, a subsample of frames close together in time are more likely to belong to the same bin, as compared to a subsample of frames which span a longer time. Frames that are close together (in simulation time) are more likely to be in similiar conformational substates, while frames separated by a time which is long compared to the typical inter-state transition times are effectively independent. The difference between these two types of subsamples-highly correlated vs. fully independent-is reflected in the variance among a set of subsampled bin populations (see Fig. 1). Denoting the population of bin i obseved in subsample k as m k i , the fractional population f k i is defined as where overbars denote averaging over subsamples: Since here we are considering only uniform probability histograms, f i is the same for every i: The expected variance of bin populations when allocating N fully independent structures to S bins is calculated in introductory probability texts under the rubric of "sampling without replacement [?]." The variance in fractional occupancy of each bin of this (hypergeometric) distribution depends only on the total number of independent structures N , the size n of the subsamples used to "poll" the distribution, and the fraction f of the structures which are contained in each bin: But can we use this exact result to infer something about the correlations that are present in a typical trajectory? Following the intuition that frames close together in time are correlated, while frames far apart are independent, we compute the variance in Eq. 1 for different sets of subsamples, which are distinguished by a fixed time t between subsampled frames (Fig. 1). We expect that averaging over subsamples that consist of frames close together in time will lead to a variance which is higher than that expected from an ideal sample (Eq. 2). As t increases, the variance should decrease as the frames in each subsample become less correlated. Beyond some t (provided the trajectory is long enough), the subsampled frames will be independent, and the computed variance will be that expected from an i.i.d. sample. In practice, we turn this intuition into a (normalized) observable σ 2 obs (f ; n, t) in the following way: By construction, σ 2 obs (f ; n, t) = 1 for samples consisting of independent frames. (iv) Repeat (ii) and (iii) for increasing t, until the subsamples span a number of frames on the order of the trajectory length. Plotting σ 2 obs (f ; n, t) as a function of t, we identify the subsampling interval t at which the variance first equals the theoretical prediction as the structural decorrelation time, τ dec . For frames which are separated by at least τ dec , it is as if they were drawn independently from the distribution defined by P (S). The effective sample size N is then the number of frames T in the trajectory divided by τ dec . Statistical uncertainty on thermodynamic averages is proportional to N −1/2 . But does τ dec correspond to a physically meaningful timescale? Below, we show that the answer to this question is affirmative, and that, for a given trajectory, the same τ dec is computed, regardless of the histogram. Indeed, τ dec does not depend on whether it is calculated based on a uniform or a nonuniform histogram. 
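For reference, the quantities referred to above as Eqs. (1)-(3) can be written out as follows; this is a reconstruction from the definitions given in the text (the "sampling without replacement", i.e. hypergeometric, variance), not a verbatim quotation of the original equations. In outline, the procedure is: fix n and a spacing t; form the t-subsamples and record the fractional bin populations; compute their variance and normalize by the ideal value; then repeat for increasing t, as in step (iv) above.

```latex
% fractional population of bin i in the k-th t-subsample of size n:
f_i^k \;=\; \frac{m_i^k}{n},
\qquad
\overline{f_i} \;=\; \frac{1}{K}\sum_{k=1}^{K} f_i^k \;=\; f \;=\; \frac{1}{S}
\quad\text{(uniform histogram)} .

% observed variance for spacing t (averaged over the S bins), and the ideal
% value for N independent structures sampled without replacement:
\sigma^2(f; n, t) \;=\; \frac{1}{S}\sum_{i=1}^{S}\,\overline{\bigl(f_i^k - f\bigr)^2},
\qquad
\sigma^2_{\mathrm{ideal}}(f; n) \;=\; \frac{f(1-f)}{n}\,\frac{N-n}{N-1},

% normalized observable:
\sigma^2_{\mathrm{obs}}(f; n, t) \;=\; \frac{\sigma^2(f; n, t)}{\sigma^2_{\mathrm{ideal}}(f; n)}
\;\;\longrightarrow\;\; 1 \quad \text{for } t \gtrsim \tau_{\mathrm{dec}} .
```

Here K denotes the number of t-subsamples that fit in the trajectory.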
Results

In the previous section, we introduced an observable, σ²_obs(f; n, t), and argued that it ought to be sensitive to the conformational convergence of a molecular simulation. However, we need to ask whether the results of the analysis reflect physical processes present in the simulation. After all, it may be that good sampling of a structural histogram is not indicative of good sampling of the conformation space. Our strategy is to first test the analysis on some models with simple, known convergence behavior. We then turn our attention to more complex systems, which sample multiple conformational substates on several different timescales.

Poisson process

Perhaps the simplest nontrivial model we can imagine has two states, with rare transitions between them. If we specify that the likelihood of a transition in a unit interval of time is a small constant κ < 1 (Poisson process), then the average lifetime of each state is simply 1/κ. Transitions are instantaneous, so that a "trajectory" of this model is simply a record of which state (later, histogram bin) was occupied at each timestep. Our decorrelation analysis is designed to answer the question, "Given that the model is in a particular state, how much time must elapse before there is an equal probability to be in either state?" Figure 2 shows the results of the analysis for several different values of κ. The horizontal axis measures the time between subsampled frames. Frames that are close together are likely to be in the same state, which results in a variance higher than that expected from an uncorrelated sample of the two states. As the time between subsampled frames increases, the variance decreases, until reaching the value predicted for independent samples, where it stays. The inset demonstrates that the time at which the variance first reaches the theoretical value is easily read off when the data are plotted on a log-log scale. In all three cases, this value correlates well with the (built-in) transition time 1/κ. It is noteworthy that, in each case, we actually must wait a bit longer than 1/κ before the subsampled elements are uncorrelated. This likely reflects the additional waiting time necessary for the Poisson trajectory to have equal likelihood of being in either state. As t gets larger, the number of subsamples which "fit" into a trajectory decreases, and therefore σ²_obs(t) is averaged over fewer subsamples. This results in some noise in the measured value of σ²_obs(t), which gets more pronounced with increasing t. To quantify this behavior, we added an 80% confidence interval to the theoretical prediction, indicated by the error bars in the inset of Fig. 2. Given an n and t, the number of subsamples is fixed. The error bars indicate the range where 80% of variance estimates fall, based on this fixed number of (hypothesized) independent samples from the hypergeometric distribution defined by P(S).
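The two-state Poisson model is simple enough to generate directly, which makes it a convenient correctness check for the analysis. The sketch below is an assumed, minimal implementation: the state flips with probability κ at every timestep, so the mean state lifetime is 1/κ, and the resulting label sequence can be fed straight into the variance analysis as a two-bin histogram.

```python
import numpy as np

def two_state_trajectory(kappa, n_steps, seed=None):
    """Two-state Poisson model: the state flips with probability kappa at each
    timestep, giving a mean state lifetime of 1/kappa. The returned array of
    0/1 labels acts directly as a two-bin structural histogram."""
    rng = np.random.default_rng(seed)
    flips = rng.random(n_steps) < kappa       # True wherever a transition occurs
    return np.cumsum(flips) % 2               # parity of the flip count = occupied state

# e.g. kappa = 1e-3 builds in a 1000-step lifetime, so the measured
# decorrelation time should come out somewhat larger than 1000 steps.
labels = two_state_trajectory(kappa=1e-3, n_steps=1_000_000, seed=0)
```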
Leucine dipeptide

Our approach readily obtains the physical timescale governing conformational equilibration in molecular systems. Implicitly solvated leucine dipeptide (ACE-Leu2-NME), having fifty atoms, is an ideal test system because a thorough sampling of conformation space is possible by brute force simulation. The degrees of freedom that distinguish the major conformations are the φ and ψ dihedrals of the backbone, though side-chain degrees of freedom complicate the landscape by introducing many locally stable conformations within the major Ramachandran basins. It is therefore intermediate in complexity between a "toy model" and larger peptides. Two independent trajectories of 1 µsec each were analyzed; the simulation details have been reported elsewhere [17]. For each trajectory, 9 independent histograms consisting of 10 bins of uniform probability were built as described in Sec. 1.1. For each histogram, σ²_obs(n, t) (Eq. 3) was computed for n = 2, 4, 10. We then averaged σ²_obs(n, t) over the 9 independent histograms separately for each n and each trajectory; these averaged signals are plotted in Fig. 3. When the subsamples consist of frames separated by short times t, the subsamples are made of highly correlated frames. This leads to an observed variance greater than that expected for a sample of independent snapshots, as calculated for each n from Eq. 2 and shown as a thick black horizontal line. σ²_obs(n, t) then decreases monotonically with time, until it matches the theoretical prediction for decorrelated snapshots at about 900 psec. The agreement between the computed and theoretical variance (with no fitting parameters) indicates that the subsampled frames are behaving as if they were sampled at random from the structural histogram. We therefore identify τ_dec = 900 psec, giving an effective sample size of just over 1,100 frames. Does the decorrelation time correspond to a physical timescale? First, we note that τ_dec is independent of the subsample size n, as shown in Fig. 3. Second, we note that the decorrelation times agree between the two independent trajectories. This is expected, since the trajectories are quite long for this small molecule, and therefore should be very well-sampled. Finally, the decorrelation time is consistent with the typical transition time between the α and β basins of the Ramachandran map, which is on the order of 400 psec in this model. As in the Poisson process, τ_dec is a bit longer than the α → β transition time. How would the data look if we had a much shorter trajectory, of the order of τ_dec? This is also answered in Fig. 3, where we have analyzed a dileucine trajectory of only 1 nsec in length. Frames were saved every 10 fsec, so that this trajectory had the same total number of frames as each of the 1 µsec trajectories. The results are striking: not only does σ²_obs(n, t) fail to attain the value for independent sampling, but the values appear to connect smoothly (apart from some noise) with the data from the longer trajectories. (We stress that the 1 nsec trajectory was generated and analyzed independently of both 1 µsec trajectories; it is not simply the first nsec of either.) In the event that we had only the 1 nsec trajectory, we could state unequivocally that it is poorly converged, since it fails to attain the theoretical prediction for a well-converged trajectory. We also investigated whether the decorrelation time depends on the number of reference structures used to build the structural histogram. As shown in Fig. 4, τ_dec is the same, whether we use a histogram of 10 bins or 50 bins. (Fig. 3 used 10 bins.) It is interesting that the data are somewhat smoothed by dividing up the sampled space among more reference structures. While this seems to argue for increasing the number of reference structures, it should be remembered that increasing the number of references by a factor of 5 increases the computational cost of the analysis by the same factor, while τ_dec is robustly estimated based on a histogram containing 10 bins.
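As a small worked check of the sample-size bookkeeping quoted above (the numbers are those stated in the text; the arithmetic itself is trivial):

```python
trajectory_length_ps = 1_000_000      # one 1 microsecond dileucine trajectory
tau_dec_ps = 900                      # decorrelation time read off Fig. 3
n_eff = trajectory_length_ps / tau_dec_ps    # ~1111, i.e. "just over 1,100" effective frames
relative_error_scale = n_eff ** -0.5         # uncertainties scale as N**(-1/2), here ~3%
```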
Calmodulin

We next considered a previously developed united-residue model of the N-terminal domain of calmodulin [18]. In the "double native" Gō potential used, both the apo (Ca2+-free) [19] and holo (Ca2+-bound) [20] structures are stabilized, so that occasional transitions are observed between the two states. In contrast with the dileucine model just discussed, our coarse-grained calmodulin simulation has available a much larger conformation space. The apo-holo transition represents a motion entailing 4.6 Å RMSD, and involves a collective rearrangement of helices. In addition to apo-holo transitions, the trajectories include partial unfolding events, which do not lend themselves to an interpretation as transitions between well-defined states. In light of these different processes, it will be interesting to see how our analysis fares. Two independent trajectories were analyzed, each 5.5 × 10⁷ Monte Carlo sweeps (MCS) in length. Each trajectory was begun in the apo configuration, and approximately 40 transition events were observed in each. For both trajectories, the analysis was averaged over 4 independent histograms, each with 10 bins of uniform probability. The results of the analysis are shown in Fig. 5. It is interesting that the decorrelation time estimated from Fig. 5 is about a factor of 2 shorter than the average waiting time between α → β transitions. This is perhaps due to the noisier signal (as compared to the previous cases), which is in turn due to the small number of transition events observed: about 40 in each trajectory, compared to about 2.5 × 10³ events in the dileucine trajectories. Alternatively, it may be that there are other, longer-timescale processes, such as partial unfolding and refolding events, which must be sampled before convergence is attained. In either case, our analysis yields a robust estimate of the decorrelation time, regardless of the underlying processes. The conclusion we draw from these data is that one should only interpret the decorrelation analysis as "logarithmically accurate" (up to a factor of ∼2) when the data are noisy.

Met-enkephalin

In the previous examples, we considered models which admit a description in terms of two dominant states with occasional transitions between them. Here, we study the highly flexible pentapeptide met-enkephalin (NH3+-Tyr-[Gly]2-Phe-Met-COO−), which does not lend itself to such a simple description. Our aim is to see how our convergence analysis will perform in this case, where multiple conformations are interconverting on many different timescales. Despite the lack of a simple description in terms of a few, well-defined states connected by occasional transitions, our decorrelation analysis yields an unambiguous signal of the decorrelation time for this system. The data (Fig. 6) indicate that 4 or 5 nsec must elapse between frames before they can be considered statistically independent, which in turn implies that each of our 1 µsec trajectories has an effective sample size of 200 or 250 frames. We stress that this is learned from a "blind" analysis, without any knowledge of the underlying free energy surface.

Discussion

We have developed a new tool for assessing the quality of molecular simulation trajectories, quantifying "structural correlation", the tendency for snapshots which are close together in simulation time to be similar.
The analysis first computes a structural decorrelation time, which answers the question, "How much simulation time must elapse before the sampled structures display the statistics of an i.i.d. sample?" This in turn implies an effective sample size, N, which is the number of frames in the trajectory that are statistically independent, in the sense that they may be thought of as independent and identically distributed. In several model systems, for which the timescale needed to decorrelate snapshots was known in advance, we have shown that the decorrelation analysis is consistent with the "built-in" timescale. We have also shown that the results are not sensitive to the details of the structural histogram or to the subsampling scheme used to analyze the resulting timeseries. There are no adjustable parameters. Finally, we have demonstrated a calculation of an effective sample size for a system which cannot be approximately described in terms of a small number of well-defined states and a few dominant timescales. This is critically important, since the important states of a system are generally not known in advance. Our method may be applied in a straightforward way to discontinuous trajectories, which consist of several independent pieces [21]. The analysis would be carried forward just as for a continuous trajectory. In this case, a few subsamples will be corrupted by the fact that they span the boundaries between the independent pieces. The error introduced will be minimal, provided that the correlation time is shorter than the length of each independent piece. The analysis is also readily applicable to exchange-type simulations, in which configurations are swapped between different simulations running in parallel. For a ladder of M replicas, one would perform the analysis on each of the M continuous trajectories obtained by following each replica as it wanders up and down the ladder. If the ladder is well-mixed, then all of the trajectories should have the same decorrelation time. And if the exchange simulation is more efficient than a standard simulation, then each replica will have a shorter decorrelation time than a "standard" simulation. This last observation is particularly important, in light of the fact that exchange simulations have become the method of choice for state-of-the-art simulation. There is a growing sense in the modeling and simulation community of the need to standardize measures of the quality of simulation results [22,23]. Our method, designed specifically to address the statistical quality of an ensemble of structures, should be useful in this context.

Histogram Construction

Previously, we presented an algorithm which generated a histogram based on clustering the trajectory with a fixed cutoff radius [12], resulting in bins of varying probability. Here, we present a slightly modified procedure, which partitions the trajectory into bins of uniform probability, by allowing the cutoff radius to vary. For a particular continuous trajectory of N frames, the following steps are performed:

(i) A bin probability, or fractional occupancy f, is defined.
(ii) A structure S1 is picked at random from the trajectory.
(iii) Compute the distance, using an appropriate metric, from S1 to all remaining frames in the trajectory.
(iv) Order the frames according to the distance, and set aside the first f × N frames, noting that they have been classified with reference structure S1.
Note also the "radius" r1 of the bin, i.e., the distance to the farthest structure classified with S1.
(v) Repeat (ii)-(iv) until every structure in the trajectory is classified. (A minimal code sketch of this procedure is given after the figure captions below.)

Calmodulin

We analyzed two coarse-grained simulations of the N-terminal domain of calmodulin. Full details and analysis of the model have been published previously [18]; here we briefly recount only the most relevant details. The model is a one-bead-per-residue model of 72 residues (residues 4-75 in PDB structure 1cfd), linked together as a freely jointed chain. Conformations corresponding to both the apo (PDB ID 1cfd) and holo (PDB ID 1cll) crystal structures [20,19] are stabilized by Gō interactions [24]. Since both the apo and holo forms are stable, transitions are observed between these two states, occurring on average about once every 5 × 10⁴ Monte Carlo sweeps (MCS).

Met-enkephalin

We analyzed two independent 1 µsec trajectories and a single 1 nsec trajectory, each started from the PDB structure 1plw, model 1. The potential energy was described by the OPLSaa potential [25], with solvation treated implicitly by the GB/SA method [26]. The equations of motion were integrated stochastically, using the discretized Langevin equation implemented in Tinker v. 4.2.2, with a friction constant of 5 psec⁻¹ and the temperature set to 298 K [27]. A total of 10⁶ evenly spaced configurations were stored for each trajectory.

Figure 1: A trajectory can be subsampled in many ways, corresponding to different subsample sizes n and intervals t. In the top figure, the pink highlighted frames belong to an n = 3, t = 2 subsample, the blue frames to another subsample of the same type. The bottom figure shows two n = 2, t = 3 subsamples. The frame index (simulation time) is labelled by j.

Figure 6: Convergence data for two independent 1 µsec met-enkephalin trajectories, distinguished by solid and dashed lines, for subsample sizes n = 4 (red) and n = 10 (green). Error bars indicate 80% confidence intervals for uncorrelated subsamples of size n = 4.
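As a compact illustration of the uniform-probability binning procedure enumerated above, the following sketch classifies a trajectory into bins of fractional occupancy f. The distance metric (e.g., best-fit RMSD) is deliberately left as a user-supplied callable, and the handling of the final, possibly smaller bin is an assumption; neither detail is specified by the procedure itself.

```python
import numpy as np

def uniform_histogram(coords, f, metric, seed=None):
    """Partition a trajectory into bins of (approximately) uniform probability f.

    coords : sequence of frames (anything the metric can compare).
    metric : callable metric(frame_a, frame_b) -> distance.
    Returns per-frame bin labels, the indices of the random reference
    structures, and the "radius" of each bin.
    """
    rng = np.random.default_rng(seed)
    N = len(coords)
    per_bin = max(1, int(round(f * N)))
    unassigned = list(range(N))
    labels = np.full(N, -1, dtype=int)
    references, radii = [], []
    while unassigned:
        ref = unassigned[rng.integers(len(unassigned))]                    # step (ii)
        dists = np.array([metric(coords[ref], coords[j]) for j in unassigned])  # step (iii)
        order = np.argsort(dists)                                          # step (iv)
        chosen = [unassigned[k] for k in order[:per_bin]]                  # closest f*N frames
        for j in chosen:
            labels[j] = len(references)
        radii.append(float(dists[order[:per_bin][-1]]))                    # bin radius r_i
        references.append(ref)
        unassigned = [j for j in unassigned if labels[j] == -1]            # step (v)
    return labels, references, radii
```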
Extraneural metastases in anaplastic ependymoma

Ependymomas are rare glial neoplasms that seldom metastasize outside the central nervous system. We present a case of anaplastic ependymoma with extraneural metastases, with a review of the literature. A ten-year-old male child presented with anaplastic ependymoma of the choroid plexus and was treated with craniospinal radiotherapy in 1998. He had an intracranial recurrence in 2004, confirmed by biopsy, and was given adjuvant chemotherapy in the form of PCV. Ten months after completion of chemotherapy, he developed an extracranial scalp metastasis and was treated with palliative local radiation therapy to the scalp metastasis and systemic chemotherapy with oral etoposide. The scalp metastasis completely disappeared and his ataxia improved. After five cycles of chemotherapy, the patient had progression of disease in the form of scalp and cervical lymph node metastases, confirmed by fine needle aspiration cytology, biopsy and immunohistochemistry. He was given salvage chemotherapy (carboplatin + ifosfamide + etoposide) at three-weekly intervals. He had a partial response and was still on chemotherapy as of May 2007.

INTRODUCTION

Ependymomas are rare glial neoplasms. They comprise 5% of all intracranial tumors in adults and 10% in children. [1] Ependymomas usually arise intracranially, within an infratentorial location or the supratentorial brain, and less commonly from the spinal cord. They mainly relapse at the primary site and in the spinal cord, but rarely metastasize outside the central nervous system. Extraneural metastasis occurs mainly in the lung/pleura, liver and lymph nodes. [2][3][4][5] Here we report a case of anaplastic ependymoma metastasizing to a cervical lymph node and the scalp.

CASE REPORT

A ten-year-old male child presented with complaints of progressive loss of vision, headache and vomiting since March 1997. Contrast-enhanced computed tomography (CECT) of the brain showed a mass in the choroid plexus of the right occipital horn of the lateral ventricle, and he underwent subtotal resection in April 1997. Total surgical excision was abandoned due to intraoperative bleeding. Histopathological examination of the resected tissue showed a tumor with morphology consistent with anaplastic ependymoma. The patient was advised postoperative radiotherapy but did not report for it. In July 1998, he again complained of headache and vomiting; a repeat CECT of the head showed a large calcified residual mass surrounded by a cystic area in the right occipital horn of the lateral ventricle. He was re-explored with a right parieto-occipital craniotomy and total excision of the choroid plexus mass was done. Histopathological examination revealed the diagnosis of anaplastic clear cell ependymoma. Postoperatively he had no neurological deficit. The patient was treated with craniospinal irradiation with a dose of 36 Gy in 18 fractions to the whole brain, followed by a boost of 18 Gy in 12 fractions, for a total of 54 Gy in 30 fractions to the primary tumour, along with 30 Gy in 20 fractions to the whole of the spinal cord. Radiation was completed in September 1998 and the next five-year period was uneventful. He again complained of weakness in the right upper and lower limbs in February 2004. A CECT scan showed a recurrent mass in the right parieto-occipital area. Subtotal excision of the mass was done and histopathology revealed anaplastic ependymoma without clear cell areas [Figure 1]. On immunohistochemistry the tumor showed diffuse, strongly positive immunostaining for glial fibrillary acidic protein (GFAP). After surgery, power in the right upper and lower limbs improved.
Adjuvant PCV chemotherapy was planned (procarbazine 60 mg/m² days 8-21, lomustine 100 mg/m² day 1, vincristine 1.4 mg/m² days 1 and 29) at six-weekly intervals.

Patient with metastatic ependymoma [Figure 3].

CECT of the head showed a small enhancing mass lesion in the right parieto-occipital area with perifocal edema, suggestive of local recurrence. The patient was given palliative radiotherapy of 8 Gy in one fraction to the scalp swelling, with single-agent chemotherapy (oral etoposide 50 mg/m² days 1-21) every 4 weeks. The scalp nodule completely disappeared, with improvement in ataxia and gait. After five cycles of chemotherapy, the patient had disease progression in August 2006 with multiple nodules on the right side of the scalp, the largest measuring 1.2 × 1.2 cm², hard and fixed. It was treated again with 8 Gy in a single fraction. He subsequently developed right upper deep cervical lymphadenopathy and underwent excision biopsy. Histopathological examination confirmed the diagnosis of anaplastic ependymoma metastatic to a cervical lymph node. On immunohistochemistry the tumor was positive for EMA, vimentin and GFAP [Figure 4]. In view of progression of disease during chemotherapy, palliative chemotherapy using ifosfamide (1.8 g/m² days 1-5) plus etoposide (100 mg/m² days 1-5), alternating with carboplatin (AUC 5) plus vincristine (1.4 mg/m²), at three-weekly intervals was started, and the patient has had a partial response without any new lesion for the last nine months, until May 2007.

DISCUSSION

Ependymoma mainly relapses at the primary site and sometimes in the spinal cord within the central nervous system. It rarely metastasizes outside the central nervous system as extraneural metastasis. Newton et al. [5] reviewed 81 ependymomas at Memorial Sloan-Kettering Cancer Center between 1956 and 1989. Only five (6.2%) had extraneural metastasis, and the time from diagnosis to development of extraneural metastasis was 0-288 months. In 4/5 cases primary tumor progression was present at the time of metastasis. The sites of metastasis were lung and thoracic lymph nodes (2), pleura and peritoneum (2), and liver (1). Dunst et al. [2] reported a case of ependymoma of the left occipital area in which metastasis in cervical lymph nodes appeared, along with recurrence at the primary site and multiple metastases to the spinal canal, four and a half years after surgical resection and craniospinal radiotherapy. Graff et al. [4] reported a case of spinal ependymoma with extraneural metastasis to the pleura, liver, and nodes in the abdomen and thorax. Rousseau et al. [6] reported on 80 children with ependymoma, with five-year actuarial survival and event-free survival of 56% and 38%, respectively. There were 34 relapses at 3-72 months after diagnosis. Twenty patients had local failure at the primary site while 14 had leptomeningeal dissemination, without any extraneural metastasis. McLaughlin et al. [7] reported 41 patients who received postoperative craniospinal radiotherapy between 1966 and 1989. Local recurrence was noticed in 51% at six years, while 10-year overall and relapse-free survival was 51% and 46%, respectively. There was no metastasis outside the central nervous system. The tumor site was the only factor that influenced survival (P=0.0004). Timmerman et al. [8] reported that the primary tumor site is the predominant site of failure, with one case of extracranial metastasis to the spinal cord and no extraneural metastasis, and therefore suggested intensifying local treatment.
The rate of metastasis to and from the central nervous system is low due to the unique properties of the brain and the tumour's interaction with the blood-brain barrier, microglia, matrix proteins, cytokines and growth factors. [9] Because of the rarity of extraneural metastasis to the scalp and cervical lymph nodes from intracranial ependymoma, and the lack of any consensus on treatment strategy, the patient was treated with local radiation and salvage chemotherapy to stabilize the disease, with the possibility of improving survival.
Assessing Urban Water Management Sustainability of a Megacity: Case Study of Seoul, South Korea

Many cities are facing various water-related challenges caused by rapid urbanization and climate change. Moreover, a megacity may pose a greater risk due to its scale and complexity for coping with impending challenges. Infrastructure and governance also differ by the level of development of a city, which indicates that the analysis of Integrated Water Resources Management (IWRM) and water governance is site-specific. We examined the status of IWRM of Seoul by using the City Blueprint® Approach, which consists of three different frameworks: (1) the Trends and Pressures Framework (TPF), (2) the City Blueprint Framework (CBF) and (3) the water Governance Capacity Framework (GCF). The TPF summarizes the main social, environmental and financial pressures that may impede water management. The CBF assesses IWRM of the urban water cycle. Finally, the GCF identifies key barriers and opportunities to develop governance capacity. The results indicate that nutrient recovery from wastewater, stormwater separation, and operation cost recovery of water and sanitation services are priority areas for Seoul. Furthermore, the local sense of urgency, behavioral internalization, consumer willingness to pay, and financial continuation are identified as barriers limiting Seoul's governance capacity. We also examined and compared the results with other megacities, to learn from their experiences and plans to cope with the challenges in large cities.

Introduction

Globally, more than half of the world's population resides in urban areas, and this figure is projected to increase to 66% by 2050 [1]. Cities are important engines of innovation and wealth creation, as well as sources of improved efficiencies for the use of materials and energy [2]. On the other hand, primarily due to the concentration of people in a relatively small area, cities also act as centres of intense resource consumption and pollution [3,4]. Rapid urbanization along with the effects of climate change creates multiple challenges regarding water quality, water scarcity, and flooding, resulting in high vulnerability and, sometimes, unforeseen consequences [5]. These risks are amplified in cities that lack the necessary infrastructure and/or institutional arrangements with the adaptive capacity to cope with these challenges [6,7]. A sustainable city thus requires appropriate and efficient management and control of a large variety of issues, notably the availability of sufficient clean freshwater and the protection against flooding as a prerequisite for the health, economic development and social well-being of their inhabitants [7].
Water management and water governance challenges are often more prominent in larger cities [6]. Twenty-five million people, 50% of the population of the Republic of Korea, reside in the metropolitan area of Seoul, which is amongst the largest urban regions in the world [8]. The city of Seoul has undergone extensive growth over the past half-century and has grown into a prosperous metropolis. The city's growth has been accompanied by the development and adoption of advanced water technologies and water policies. However, continuous efforts are necessary to improve Seoul's water management to cope with pressures that constantly change and may be aggravated by climate change, aging infrastructure, and evolving social demands. Moreover, due to its complex geomorphology [9] and a high spatiotemporal variability in hydro-climatic conditions, water management in Korea has always been challenging [10].

The City Blueprint® Approach (CBA) has been developed to assess the sustainability of Integrated Water Resources Management (IWRM) in a municipality [11,12]. The CBA consists of three assessment frameworks: (1) the Trends and Pressures Framework (TPF), which summarizes the principal social, environmental and financial pressures that impede water management, (2) the City Blueprint Framework (CBF), which provides an overview of the performance of IWRM, and (3) the water Governance Capacity Framework (GCF), which identifies key barriers and opportunities in urban water governance (Figure 1). The CBF has been used extensively since its development for rapid baseline assessments in about 70 cities around the globe. This allows for a comparison with other cities and facilitates city-to-city learning on strategic planning, exchange of knowledge, experiences, and best practices [13]. Results for 45 municipalities and regions in 27 different countries have been published [14], and a recent update with references to publications and presentations for 70 municipalities and regions in 37 countries is available as an E-Brochure (European Commission: Brussels, Belgium) on the EIP (European Innovation Partnerships) Water website [15].

The aim of this study is to identify barriers, enablers, and city-to-city learning opportunities to improve Seoul's water management and resilience. Also, regarding the scale of Seoul, and given that scale matters for tackling water management challenges, we compare CBF results with other megacities that were examined in earlier studies [14,15]. This comparative study will allow Seoul to learn from other well-managed cities and improve on weaknesses that were identified through this assessment.
Study Area

Korea is located in the northeastern part of the Asian continent, and Seoul, the capital of Korea, is in the northwestern part of the country. Seoul has been the capital of the country for more than 600 years, since the foundation of the Joseon Dynasty in 1394. The geographical area of Seoul has expanded throughout history with the increasing population, and has shown explosive growth since the end of the Korean War in the early 1950s [16]. The total area of Seoul is 605.2 km², with a population of 10,112,070 as of 2017 [17]. The population of Seoul increased from 1.7 million in 1950 to 10 million in 1992, with an average growth rate of 278,583/year. Since then, the population has stabilized at around 10 million due to elevated housing prices and the government policy of controlling urban sprawl by constructing satellite cities and towns around Seoul [16]. Currently, the population density is around 17,200/km², which has been sustained for a decade. However, this number is still 70% greater than the average population density of the 34 other megacities worldwide (10,100/km²). Although the population density within the administrative boundary of Seoul has stabilized since the early 1990s, the population of the Seoul metropolitan area, which includes several large satellite cities, keeps increasing and is expected to grow further in coming decades.

Seoul has four distinct seasons, with the average temperature varying from −1 °C during the winter season (from December to February) to 25 °C during the summer season (from June to August). About 65% of the annual rainfall is concentrated in summer due to the monsoon. While the highly variable hydro-climatic conditions have already posed many water-related challenges in Seoul, climate change effects are also apparent in precipitation and air temperature records [18]. While the mean annual rainfall before the 1950s was around 1230 mm, it has now increased to around 1400 mm. The frequency and intensity of torrential rainfall in summer also increased, resulting in a greater intra-annual rainfall variability [18]. The mean air temperature increased from 10.4 °C in 1909 to 13.4 °C in 2014, by an overall rate of 0.0238 °C/year, which is higher than global trends of 0.0066-0.0189 °C/year [16,19]. This trend has become more significant since the 1950s due to rapid urbanization and, correspondingly, the urban heat island (UHI) effect [16,20]. Since the UHI effect increases energy consumption, health problems (e.g., heat strokes), and surface water quality deterioration, a rapid increase in air temperature poses a serious IWRM challenge [21].

City Blueprint Approach

In order to assess the trends and pressures, IWRM and the governance capacities of Seoul, we applied the CBA (Figure 1). Detailed information about the data sources, the calculations and examples is provided in three questionnaires available on the EIP Water website [15].
Trends and Pressures Framework (TPF)

Each city has its own unique social, financial and environmental background. As such, a city's performance regarding urban water management should be carefully assessed based on the context that has shaped the current state of infrastructure and governance for urban water management. The TPF aims to provide a concise understanding of these contextual trends and pressures that affect water management of a city [13]. It is evaluated with 12 indicators, which are divided over social, environmental, and financial categories (Table 1). Each indicator is scaled from 0 to 4 points, where a higher score indicates stronger pressure or concern. Note that many of these indicators are evaluated based on the ranking of the city among all countries; thus a specific score does not necessarily imply its absolute pressure state [13,15]. After each indicator is scored, these scores are classified into five categories: no concern (0-0.5), little concern (0.5-1.5), medium concern (1.5-2.5), concern (2.5-3.5), and great concern (3.5-4). More detailed descriptions of the indicators, data requirements and sample calculations, as well as a critical discussion of its limitations, can be found elsewhere [13,14].

City Blueprint Framework (CBF)

The CBF comprises 25 indicators divided over seven broad categories: (I) water quality, (II) solid waste, (III) basic water services, (IV) wastewater treatment, (V) infrastructure, (VI) climate robustness and (VII) governance (Table 1) [5,6]. The indicators are scored on a scale between 0 (very poor) and 10 (excellent). The geometric average of these 25 indicators is the Blue City Index (BCI) [6,7]. As the CBF was developed as the first framework of the CBA [11], the applications of this methodology in many municipalities and regions have been published [14,15]. Details regarding data, calculation of each index, and scaling methods are described in Koop and Van Leeuwen [13] and on the EIP Water website [15]. For the assessment of Seoul, most of the data were collected from public sources. For indicators that require self-assessment, the relevant materials and data were collected by interviewing experts in the Seoul Water Institute and the Seoul Metropolitan Government.

Governance Capacity Framework (GCF)

The sustainability of any resource management regime depends on the institutional capacity that enables adaptive management that can cope with external shocks and pressures [23,24]. The GCF was developed as the third framework of the CBA to assess the governance capacity of a city that allows or limits its sustainable management of water [12,23]. The GCF aims to identify the key enabling or limiting governance conditions regarding five main urban water challenges that are relevant to urbanization and climate change. These challenges include (1) water scarcity, (2) flood risk, (3) wastewater treatment, (4) solid waste treatment, and (5) UHI [23]. For each challenge, the GCF assesses nine governance conditions, each of which includes three indicators. Each indicator is evaluated by a Likert-scale scoring method which ranges from 'very encouraging' (++) to 'very limiting' (−−) (Table 1). Since its development, the GCF has been successfully operationalized in several cities including Amsterdam, Quito, Ahmedabad, and New York City [22,25-27]. More details on the methodology are reported in Koop et al. [23].
The GCF indicator scoring was done through two steps: (1) preliminary scoring based on an analysis of policy documents and scientific literature, and (2) confirmatory scoring based on qualitative semi-structured interviews and surveys with experts to obtain additional details on the governance of each water challenge. The respondents were categorized as government personnel and academic scholars. Ten respondents were carefully selected based on their relevance to each of the five water challenges. Several respondents from the government sector had professional experience in multiple categories (e.g., flood risk and wastewater treatment). In those cases the interviewees were allowed to respond to multiple water challenges, resulting in at least two to three responses for each water challenge (Table 1).

Trends and Pressures of Seoul

All TPF indicators of Seoul ranged from no concern (0-0.5) to medium concern (1.5-2.5), except heat risk, for which the indicator score was 2.72 (concern). The indicators categorized as medium concern included education rate, political instability, water scarcity, and economic pressure, with respective scores of 1.70, 1.92, 1.67 and 2.12. The arithmetic mean of all indicators, i.e., the Trends and Pressures Index (TPI), was 0.90, which is rather low and comparable to cities in the Netherlands and Sweden [15]. Among the 11 Asian cities analyzed with the CBF, Singapore, with a TPI of 1.0, and Taipei, with a TPI of 1.4, were most comparable to Seoul. However, the other eight Asian cities (with TPIs of 1.9-2.6) face greater concerns, generally due to social pressure from high urbanization rates, environmental pressure from water scarcity, flooding, and heat risk, and financial pressure from low GDPs [28]. A full overview of TPI scores for 70 municipalities and regions, including 11 Asian cities, is provided in the most recent version of the E-Brochure [15].

City Blueprint of Seoul

The CBF presents a snapshot, i.e., the current performance of a city regarding IWRM. The geometric mean of all 25 CBF indicators, i.e., the Blue City Index, for Seoul is 7.3 (Figure 2). Based on a hierarchical clustering analysis of CBF indicator scores of 45 municipalities, Koop and Van Leeuwen [14] identified five different levels of sustainability of IWRM in cities worldwide: (1) cities lacking basic water services (BCI 0-2), (2) wasteful cities (BCI 2-4), (3) water efficient cities (BCI 4-6), (4) resource efficient and adaptive cities (BCI 6-8), and (5) water-wise cities (BCI 8-10). According to this categorization, Seoul is classified as a 'resource efficient and adaptive city.' Moreover, among the 70 cities assessed so far, Seoul has one of the highest BCI scores. However, our analysis reveals that there are also opportunities for improvement. The specific areas where improvement can be made are represented by relatively low indicator scores. Since many of the indicators obtained a full score of 10, we arbitrarily regarded any score less than six as the criterion for selecting areas for further improvement. Indicators that scored lower than six included nutrient recovery, average age of the sewer network, operation cost recovery, and stormwater separation (Figure 2a).
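The aggregate indices and concern classes used above can be written out explicitly. The following is a minimal sketch of the stated definitions only (arithmetic mean of the 12 TPF scores, geometric mean of the 25 CBF scores, and the published concern ranges); it does not reproduce any corrections or rounding rules applied in the official CBA questionnaires, and the handling of scores falling exactly on a class boundary is an assumption.

```python
import numpy as np

def concern_category(tpf_score):
    """Concern class of a single TPF indicator score (0-4 scale)."""
    for upper, label in [(0.5, "no concern"), (1.5, "little concern"),
                         (2.5, "medium concern"), (3.5, "concern"),
                         (4.0, "great concern")]:
        if tpf_score <= upper:
            return label
    raise ValueError("TPF indicator scores lie on the 0-4 scale")

def trends_and_pressures_index(tpf_scores):
    """TPI: arithmetic mean of the 12 TPF indicator scores."""
    return float(np.mean(tpf_scores))

def blue_city_index(cbf_scores):
    """BCI: geometric mean of the 25 CBF indicator scores (0-10 scale).
    A single zero-scoring indicator drives the geometric mean to zero, which is
    why low-scoring indicators weigh heavily on the overall index."""
    scores = np.asarray(cbf_scores, dtype=float)
    return float(scores.prod() ** (1.0 / scores.size))

# e.g. concern_category(2.72) -> "concern", matching Seoul's heat-risk indicator
```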
Among the seven broad categories, wastewater treatment (IV) and infrastructure (V) have average scores of 6.4 and 4.0, respectively (Figure 2b). In particular, infrastructure includes three indicators, i.e., stormwater separation, average age of sewer, and operating cost recovery, where improvements can be made. In other words, infrastructure improvement is thought to be an effective measure to enhance IWRM in Seoul.

The Water Governance Capacity of Seoul

Table 2 shows the results of the GCF analysis for five urban water challenges in Seoul, whereas Figure 3 summarizes the average of each indicator score for all five challenges. According to our analysis, four indicators, i.e., indicator 1.2 local sense of urgency, indicator 1.3 behavioral internalization, indicator 8.2 consumer willingness to pay, and indicator 8.3 financial continuation, were found to be limiting (Table 3). Furthermore, the governance capacity for water scarcity and UHI was relatively low, with a few indicators that limited the governance capacity. Specifically, indicators 1.2 and 1.3 were found to limit the capacity to govern the challenges of water scarcity, wastewater treatment, and UHI. In addition, indicators 8.2 and 8.3 limited the capacity to govern flood risk and solid waste treatment. Water scarcity was the only challenge with five limiting governance indicators, i.e., indicators 1.2, 1.3, 6.3, 7.1, and 7.3.

Local sense of urgency. To what extent do actors have a sense of urgency, resulting in widely supported awareness, actions, and policies that address the water challenge? The perception regarding this indicator varied considerably between the stakeholders. A few experts and NGOs have recognized the uncertain threats from climate change and urbanization, and express their increasing concerns for the future. However, most of the general public does not feel this urgency about these water-related challenges.

Behavioral internalization. To what extent do local communities and stakeholders try to understand, react, anticipate and change their behavior in order to contribute to solutions regarding the water challenge? Although actions to improve urban water-related resilience (e.g., separate collection, green roofs, green space) exist, measures are only taken under external pressure, including restraints and economic incentives.

Consumer willingness to pay. How is expenditure regarding the water challenge perceived by all relevant stakeholders (i.e., is there trust that the money is well spent)? Differences in awareness of the urgency of water challenges in communities determine the willingness to pay for measures. In general, rates of cost recovery in each neighborhood of the city are lower than the actual costs, even when funds are provided by the national or local governments, leading the neighborhood to maintain the status quo.
Financial continuation. To what extent do financial arrangements secure long-term, robust policy implementation, continuation, and risk reduction? To deal with future water challenges, long-term strategies have been planned in a ten-year cycle. However, since financial resource allocations support and maintain the status quo, there is a lack of resources for the prevention of unpredictable future risks. Furthermore, some water challenges that seem relatively minor issues for the communities do not receive sufficient financial resources for research and improvement.

Comparison with Other Cities

A full overview of BCI scores for 70 municipalities and regions, including 11 Asian cities, is provided in Figure 4. Cities with BCIs higher than Seoul are Singapore and some cities in the Netherlands (e.g., Amsterdam and Groningen) and Sweden (e.g., Helsingborg, Malmo, Kristianstad, and Stockholm). As the major purpose of the CBA is city-to-city learning, i.e., improving implementation capacities of cities and regions by sharing best practices [15], these cities can be prime candidates for benchmarking. However, except for Singapore (with a population of 5.7 million in 2018), the scales of the other cities are much smaller than Seoul. The city with the largest population among these cities is Amsterdam, with a population of 850,000, which is less than 10% of that of Seoul (or less than 4% of the metropolitan area of Seoul). Also, all cities with BCIs lower than Seoul but higher than 6.0 are still not comparable to Seoul by scale. As many urban water management policies and plans are constrained by the scale of a city, e.g., large-scale replacement of sewer networks, we chose to limit the comparative analysis to megacities of a comparable size, i.e., Istanbul, London, and New York City (NYC). These are megacities with approximately 8-15 million inhabitants. The comparison of the four megacities is shown in Figure 5. The BCIs are highest for Seoul (7.3), followed by London (5.3), New York City (NYC) (4.8), and Istanbul (3.5). The common category with a high score is basic water services (Figure 5; indicators 7-9). In contrast, wastewater treatment (indicators 10-13) and infrastructure (indicators 14-17) showed high variability among these cities. More specifically, in the wastewater treatment category, London and NYC showed better performance for nutrient recovery (indicator 10) compared to Seoul.
In the category of infrastructure, NYC showed a higher indicator score than Seoul for stormwater separation. The type of sewer system depends upon the history of infrastructure installation in a city, and typically younger drainage systems are better separated into stormwater and sewage systems. Thus, Istanbul shows high indicator scores for both the average age of the sewer and stormwater separation. However, in NYC, the score for stormwater separation is relatively higher than that for the average age of the sewer system, which is exactly opposite to Seoul. This implies that there are opportunities to learn from NYC if Seoul is to improve its sewer system by expanding the portion of separate stormwater systems. Operation cost recovery is an indicator for which Seoul scores lower than London.
Nutrient Recovery Nutrient recovery is one of the indicators which offers clear opportunities for improving Seoul's IWRM as it was shown as the only weakness in the category of wastewater treatment (Figure 1).Nutrient recovery is necessary in Korea for several reasons: (1) phosphorus is nonrenewable and a limited resource [31]; (2) Korea entirely depends on the imports of phosphorus; (3) phosphorus removal from wastewater will significantly contribute to the reduction of eutrophication of surface The Challenges for Seoul Analyzing water management and water governance of a megacity, especially when its infrastructure has been developed over several decades, provides unique insights that may not be identified in smaller and younger cities. Due to its large scale and past propensity to build centralized infrastructures driven by economic efficiency, a full-scale replacement or renovation of these infrastructures to cope with changing conditions may not be economically nor technically feasible in the short-term.Thus, finding an innovative way to increase resilience in cities may be required and offers learning opportunities to other megacities, especially those on rapid development trajectories. South Korea is recognized for its fast, intense economic development and industrialization.This has been accompanied by rapid urbanization along with an extensive installation of urban infrastructure in Seoul [29,30].Due to the urgent need to provide essential water services for the rapidly growing population in the city, however, past water policies have focused on the expansion of water infrastructure in a quantitative manner without deliberation for long-term sustainable water management in the urban environment.This urban development process has been successful as reflected by the CBA analysis, which categorized Seoul as a 'resource efficient and adaptive city' with a BCI score of 7.3.However, climate change and aging water infrastructure act as drivers for further change and new emerging challenges, which call for further and continuous adaptation and improvement in infrastructure, policies, and practices of urban water management and governance [7,12].Improving resilience is the big challenge.Based on our analysis, we provide some suggestions for priorities to improve IWRM in Seoul, which could potentially transform Seoul from a 'resource efficient and adaptive city' to a 'water-wise city' in the near future. 
Nutrient Recovery

Nutrient recovery is one of the indicators which offers clear opportunities for improving Seoul's IWRM, as it was shown to be the only weakness in the category of wastewater treatment (Figure 1). Nutrient recovery is necessary in Korea for several reasons: (1) phosphorus is a nonrenewable and limited resource [31]; (2) Korea depends entirely on imports of phosphorus; (3) phosphorus removal from wastewater will significantly contribute to the reduction of eutrophication of surface waters. Thus, the introduction of technology for recovering nutrients from wastewater is an effective option for coping with diminishing resources, while simultaneously reducing eutrophication and improving surface water quality. However, nutrient recovery from wastewater treatment is a rather recent technology which still needs further improvement to become economically feasible and to meet regulations for the quality of recovered materials in many countries. London and NYC recover phosphorus by producing biosolids. In the UK, 3-4 million tons of biosolids, which is about 75% of sewage sludge production, are applied annually to agricultural land [32]. Also, NYC produces approximately 1200 tons of biosolids every day. In 1988, the US Federal government banned the ocean disposal of biosolids, and NYC needed to find alternative uses for this material. The NYC Department of Environmental Protection implemented a program to beneficially use most of the biosolids to fertilize crops and improve soil conditions for plant growth [33].

However, biosolids also contain chemical contaminants such as heavy metals and persistent organic chemicals, which limit the use of biosolids in many countries. This is one of the reasons that biosolids are not yet actively used in Seoul. Similar problems were observed in Amsterdam, and the produced struvite (MgNH4PO4·6H2O) can now be applied as fertilizer in parks and sports fields, preventing contaminants from entering the food chain [34,35]. Currently, in Korea, nutrient recovery from wastewater treatment plants (WWTPs) is done only by recycling through earthworm rearing and composting. The national legislation allows the use of these composts for landscaping of gardens, parks, etc., and not for edible and feed production purposes, as is the case in Amsterdam [34,35]. In 2014, the Seoul Metropolitan Waterworks Research Institute succeeded in developing a device for recovering phosphorus from sewage, but it has not yet been applied on a commercial scale, mainly due to concerns about its economic feasibility.

The recovery of nutrients is still not common for WWTPs, even though recovery technologies are available. Countries like the Netherlands, Denmark, and Germany upgraded their plants recently. Although there are many economic, technical, and legislative issues to overcome for recovering nutrients from WWTPs, the limited availability of phosphorus, which is essential for food production, imposes a potential future geopolitical risk. Currently, the economic viability and safety issues of using the recovered materials are the major barriers that hinder the active introduction of nutrient recovery facilities in Seoul. However, given that several cities, including Amsterdam, have already installed the technology successfully [34,35], a stronger willingness of the government to achieve long-term IWRM will be the most critical factor in enabling the new technology to become economically viable.
Operation Cost Recovery

Water infrastructure is the most expensive infrastructure in cities [12], which means that securing an adequate long-term financial condition is necessary for its effective maintenance and improvement [36]. However, as the indicator of operation cost recovery in Seoul shows, it may be a major obstacle hindering large-scale improvement of the sewer systems [37].

Among the other megacities, London is the only city with a higher score than Seoul. The primary reason for London's higher operation cost recovery is the privatization of its water sector since 1989. Although London's water and sewerage charges are set by the regulator 'Ofwat', a non-ministerial government department that protects the interests of consumers, the charges should ensure the profitability of water companies. Total annual charges for drinking water and sewerage in London were 3.98 USD/m3 in 2013 [38]. In contrast, the total annual charge for water services in Seoul is extremely low, at 0.53 USD/m3 in 2013 [39,40], which puts the cost realization rates for drinking water and sewerage at 89% and 67%, respectively. According to an OECD (Organisation for Economic Co-operation and Development) survey, the average water price of 114 cities in OECD countries was 3.84 USD/m3, and the water price of Korea is the lowest among the OECD countries [38]. The low water pricing in Korea hinders secure reinvestment of resources for introducing new water infrastructure and improving old water facilities, such as those seen in other indicators. This may pose a serious threat to long-term water security and supply stability in Seoul, which may result in a decline in service levels. Given that climate change tends to increase the vulnerability of water infrastructure in various ways, securing enough recovery of operation costs will be an essential option to provide water services sustainably to Seoul citizens. A political discussion and decision on a sufficient water price is one of the key components for improving urban resilience to unexpected future risks [37].

Sewer Systems

During the 1970s and 80s, the majority of the sewer system in Seoul was installed with combined sewers. Since 2000, the installation of separate sewer systems has been given priority. However, as the sewer maintenance project is limited to only small redeveloping areas, it is unlikely to increase the portion of the separated sewer system in the short term unless there is a substantial change in local water policy. Also, adverse effects, such as land subsidence, from the aging of pipe systems are escalating [16]. Thus, along with expanding separate sewer systems, giving higher priority to the replacement of old pipes may reduce water-related risks significantly. The relevance of infrastructure maintenance is high, as observed by both the OECD [7] and UN-Habitat [6].

Combined sewer systems are common in many cities, such as Seoul, London, and NYC. However, due to a continued increase in the impermeable surface area that increases stormwater runoff, and higher peak precipitation due to climate change, there is a high likelihood of combined sewer overflow (CSO) [41]. NYC is also concerned about CSOs, as 60-70% of its sewers are combined systems. Since it is a daunting task to change the existing infrastructure in such large cities, London and NYC have tried to implement various alternative ways to deal with CSOs. These include the construction of sewer tunnels or CSO retention tanks, upgrades in key WWTPs, and the development of green infrastructure [42].
Seoul and NYC are trying to increase the portion of separate sewer systems, but it will take a long time due to the scale of construction. Also, only a partial retrofitting of the whole system may result in misconnections between the different drainage types, which can cause undesired effects on sewer management and water quality. In this situation, implementing an alternative way to deal with combined sewers may serve as a solution for large cities.

Implications from the Governance Capacity Analysis

Even if weaknesses of specific sectors of a city's water management can be identified (e.g., by the CBF), a city may not be able to cope with the challenges that threaten the sustainable provision of water services if there is a barrier in the governance for disseminating core information and promoting future actions for fixing existing weaknesses. Our analysis of the governance capacity indicated that awareness and financial viability are the weakest governance conditions in Seoul. Specifically, there is a low local sense of urgency and behavioral internalization for the challenges of water scarcity, wastewater treatment, and urban heat islands (UHI). Furthermore, there is a low consumer willingness to pay and financial continuation for the challenges of flood risk and UHI.

There are various efforts in education, promoting the engagement of local stakeholders, and using media for dispersing information on the water challenges that citizens may face in the near future. In spite of these efforts, however, raising the local sense of urgency and behavioral internalization is a difficult task. Ironically, the most effective boost in awareness can be achieved if local communities are frequently exposed to water threats and have experienced inconveniences with water services. As one of the cities with well-equipped water infrastructure, Seoul has overcome various urban water problems that were the norm in the past. As unprecedented changes are expected, as indicated by our CBF analysis, it is important to disseminate existing and newly obtained information to the public as a governance condition for adequately managing urban resilience. Resilience is not a static property of a system but requires constant adaptation and transformation [43]. Keeping the status quo will not ensure the sustainability of the IWRM of Seoul in the coming decades, as pressures are likely to aggravate. This will require strong governance for raising consumer willingness to pay and financial continuation. Without such a sense of urgency, as water infrastructures currently perform well, efforts for enhancing financial capacity and continuity may meet political objection. Therefore, local stakeholders must perceive that the resilience of the current water infrastructure in Seoul may be adequate to respond to the pressures of the past, but not to the pressures of the future. Therefore, in addition to mitigation, it is essential to develop adequate adaptive responses as a means of moderating damages or realizing opportunities associated with climate change [12,44].
Conclusions

We examined the current status of the IWRM of Seoul using the City Blueprint Approach in order to explore options for the improvement of water management and water governance in Seoul. This study also illustrates the challenges faced in IWRM in megacities, especially those with well-established water infrastructure. Our analysis revealed that there are several options to achieve better IWRM in Seoul. These include nutrient recovery, stormwater separation, and operation cost recovery, for which vigorous investments with high priority may provide strong opportunities for improvement.

When compared to other megacities, stormwater separation is a common weakness. Because of the large scale of these cities, their urban environment is complicated by the intertwined structure of multiple infrastructures, reducing the ability to manage infrastructure adaptively. While it is imperative to increase the portion of separated drainage systems in a city, especially to reduce the risk from combined sewer overflows, the experiences of other cities show that there are also other sustainable options, such as expanding green infrastructure. When implemented with a long-term view on the sustainability of IWRM in cities, these options can also enhance other sectors, such as increasing the green area ratio and reducing impervious surface areas, thereby improving, for example, recreation, attractiveness, and the livability of the city.

Resource recovery is another indicator that can improve Seoul's IWRM, especially regarding the depletion of phosphorus reserves in the near future [14]. Because they are at an early stage of establishment, nutrient recovery technologies are still not in common use. Low recovery rates, economic feasibility, and the limited applicability of end-products under current regulations are barriers that make cities reluctant to deploy the technology on a commercial scale. Nonetheless, megacities, where large-scale WWTPs are typically in operation, are the right places, since the large flow rate of wastewater can improve the phosphorus recovery rate with high economic efficiency and profitability and, at the same time, improve surface water quality. Given its necessity in the coming decades, and the benefits that a city gains, including securing depleting resources and the relative efficiency advantages of mega-scale operation, Seoul can benefit from embarking on a path to promote the development and installation of nutrient recovery technologies.

Several potential barriers that may retard the efforts for improving the sustainability of urban water management, namely the local sense of urgency, behavioral internalization, consumer willingness to pay, and financial continuation, were also identified through the water GCF. This finding is especially important for cities that rely heavily upon the current system with a false sense of security. A resilient city requires adequate preparedness for the occurrence of future threats, as well as an adaptive capacity to cope with continuously changing pressures [14,44,45].
Many cities are facing similar water-related challenges. Although Seoul gained a high overall BCI score, this does not necessarily ensure the sustainability of IWRM in the future. Water-related risks constantly evolve due to non-stationary conditions resulting from urban dynamics and climate change, and this requires timely adaptation as well as the transformation of a city to respond to the changing environment [14,44,45]. The challenges of megacities may be proportional to their scale, but large cities are also known as centers of innovation. Finding sustainable and prompt solutions can be stimulated by sharing the experiences and knowledge of multiple cities that are trying to cope with these challenges [15].

Figure 1. Overview of the City Blueprint Approach, which consists of three complementary diagnostic frameworks to assess urban water cycle management and governance [12-14].

Figure 2. The City Blueprint of Seoul (a) based on 25 indicator scores and (b) the average scores of the seven categories. The Blue City Index, the geometric mean of all 25 indicators, is 7.3.

Figure 3. Result of the Governance Capacity Framework (GCF) of Seoul. The 27 indicators are organized clockwise around the spider web from most limiting (−−) to most encouraging (++) for the overall governance capacity.

Figure 4. Results of the City Blueprint analysis of 70 municipalities and regions in 37 different countries. The Blue City Index, the geometric mean of the 25 indicators of the City Blueprint, has been calculated according to Koop and Van Leeuwen [13-15]. The BCIs of Seoul, New York City, London, and Istanbul are highlighted.

Figure 5. Comparison of 25 indicator scores between Seoul, New York City, London, and Istanbul.
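The Blue City Index referred to in these captions is simply the geometric mean of the 25 indicator scores. As a minimal illustration (the study does not publish an implementation, and the scores below are made up, not Seoul's actual indicator values), the calculation can be sketched in Python:

```python
import math

def blue_city_index(indicator_scores):
    """Geometric mean of the 25 City Blueprint indicator scores (each 0-10).

    This sketch assumes strictly positive scores; a score of 0 would make the
    geometric mean 0, and handling of such cases is not described here.
    """
    if len(indicator_scores) != 25:
        raise ValueError("The BCI is defined over 25 indicator scores.")
    if any(s <= 0 for s in indicator_scores):
        raise ValueError("This sketch requires strictly positive scores.")
    log_sum = sum(math.log(s) for s in indicator_scores)
    return math.exp(log_sum / len(indicator_scores))

# Hypothetical scores, for illustration only.
example_scores = [7.0, 8.0, 6.5, 9.0, 7.5] * 5
print(round(blue_city_index(example_scores), 1))
```

Because a geometric mean is pulled down by low individual values more strongly than an arithmetic mean, a single weak indicator such as nutrient recovery has a noticeable effect on the overall index.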
Overview of the assessment frameworks:
- Trends and Pressures Framework (TPF) [15].
- Data: public data or data provided by the water and wastewater utilities and cities, based on a questionnaire [15]. Scores: 0 (low performance) to 10 (high performance). Overall score: Blue City Index (BCI), the geometric mean of 25 indicators.
- Governance Capacity Framework (GCF). Goal: baseline assessment of the governance capacity of a city. Framework: five challenges: (1) water scarcity, (2) flood risk, (3) wastewater treatment, (4) solid waste treatment, and (5) UHI; in each water challenge, 27 indicators are divided over nine broad categories. Data: policy documents, scientific literature, and interviews (total interviewees: 10; academia: 5, practitioners or civil servants: 5; 2-3 interviewees for each challenge). Scores: 'very encouraging (++)' to 'very limiting (−−)'.

Table 2. Results of the water governance capacity analysis of Seoul.

Table 3. Overview of the four most limiting governance indicators (including 1.2 Local sense of urgency).
Exploring parent-reported barriers to supporting their child's health behaviors: a cross-sectional study

Background: Parents can influence the health behaviors of their children by engaging in supportive behaviors (e.g., playing outside with their child, limiting recreational screen time). How, and the extent to which, parents engage in supportive behaviors may be influenced by perceived barriers. The purpose of this study is to explore whether the frequency, and types, of barriers to providing parental support are dependent on the type of child health behavior being supported (i.e., physical activity, recreational screen time reduction, healthy eating, and sleep).

Methods: Study participants were 1140 Ontario parents with at least one child under the age of 18 who completed a Computer Assisted Telephone Interview (CATI) survey about parental support behaviors. Open-ended responses about perceived barriers to parental support were coded, and aggregated to meta-categories adopted from the social-ecological model (i.e., individual child, individual parent, interpersonal, environmental). Friedman rank sum tests were used to assess differences across child behaviors. Wilcoxon rank sum tests with Bonferroni adjustments were used as a post hoc test for significant Friedman results.

Results: There were more barriers reported for supporting physical activity than for any other child behavior (ps < .01, As ≥ .53). Parents reported more parent level and environmental level barriers to supporting child physical activity versus other behaviors (ps < .001, As ≥ .55), child level barriers were more frequently reported for supporting healthy eating and sleep (ps < .001, As ≥ .57), and interpersonal barriers were more frequently reported for supporting recreational screen time reduction (ps < .001, As ≥ .52). Overall, parents reported more child and parent level barriers versus interpersonal and environmental barriers to supporting child health.

Conclusions: Parents experience a variety of barriers to supporting their children's health behaviors. Differences in types of barriers across child health behaviors emerged; however, some frequently reported barriers (e.g., child preferences) were common across behaviors. Interventions promoting parental support should consider strategies that can accommodate parents' busy schedules, and relate to activities that children find enjoyable. Creating supportive environments that help facilitate support behaviors, while minimizing parent level barriers, may be of particular benefit. Future research should explore the impact of barriers on parental support behaviors, and effective strategies for overcoming common barriers.

Background

The physical, mental, social and emotional health of Canadian children and youth is influenced by their participation in several health behaviors. Participating in physical activity [1-3], eating foods that provide a balanced diet rich in fruits and vegetables, and low in sugar-sweetened beverages [4-6], getting enough good quality sleep [7-10], and limiting sedentary recreational screen time [11,12], all contribute to desirable health outcomes by protecting children against the development of chronic diseases. Concurrently increasing child participation in these distinct behaviors is crucial for improving child health at the population level.
Parents play an important role in their children's health, in particular, by undertaking parental support behaviors that can influence the extent to which their children engage in health behaviors [13,14]. For physical activity, for example, this may extend to providing transportation to places where children can be active, participating in physical activity with their children, or discussing the benefits of being active [15][16][17][18]. Similarly, setting and enforcing rules about child screen time, and modelling good screen time habits, have been associated with reductions in child screen time [19]. Eating meals together as a family, ensuring healthy foods are easily accessible, and restricting TV-viewing during meals are supportive behaviors and practices associated with child healthy eating [20,21]. Whereas having a bed time routine and positive parent-child interactions before bed can help to improve child sleep [22]. Supporting each of these child health behaviors is important; however, it is clear that the types of support behaviors (i.e., facilitative, restricting, encouraging) and strategies for support can vary substantially depending on the child behavior being supported, and thus, the desired outcome. Developing effective interventions to promote parental support behaviors requires an understanding of the barriers to providing that support. A review of interventions targeting parents to improve child weight-related behaviors concluded that identifying barriers parents experience in the early stages of an intervention was a common feature across effective interventions [23]. By identifying barriers to providing support, strategies to overcome these barriers can be developed proactively. A handful of studies and reviews have investigated barriers to parental support for child physical activity, screen time reduction, and/or healthy eating [24][25][26][27][28][29][30]. The literature search undertaken for this study yielded no articles that have focused on barriers to supporting child sleep. Results from previous work have identified commonly reported barriers to supporting physical activity as lack of time, safety concerns, child preferences, and weather [24-27, 29, 30]. Common barriers to screen time reduction were child preference for screens, child social networks and communication, parent challenges setting rules and enforcing them due to conflict, weather, parent time, and their own role modelling [26,[28][29][30]. For healthy eating, child preferences for eating, parent lack of time, cost, parental presence and influence of other carers, and their own role modelling were common barriers [25,27,29,30]. When seeking to understand barriers, it can be useful to group similar types of barriers together using principles from the social-ecological model for health promotion [31]. This model distinguishes influences on behavior change as resulting from intrapersonal (i.e., individual child or parent characteristics), interpersonal (i.e., influences from social networks like friends and family), and broader environmental factors (i.e., community and external influences). This model has been applied previously to understand barriers to supporting child health behaviors [24,25,27,29], with more barriers typically being identified at the child and parent individual levels [29]. This model will be applied in this study to compare and contrast types of barriers across child health behaviors. 
Given the breadth of supportive behaviors across distinct child health behaviors, it is reasonable to expect that parents experience diverse barriers to supporting these different child health behaviors. Uncovering both these unique barriers, as well as where similar barriers overlap, is especially relevant as we consider that many current population-level child health programs (e.g., the Healthy Kids Community Challenge [HKCC] in Ontario, Canada; Obesity Prevention and Lifestyle [OPAL] in Australia) as well as new behavioral guidelines (i.e., Canadian 24-Hour Movement Guidelines for Children and Youth [32]) target multiple child health behaviors, and corresponding parental support behaviors, concurrently. Comparing similarities and differences in types of barriers across supporting different child health behaviors may allow for the development of efficient program strategies that can help parents overcome both unique and common barriers to supporting their children. Exploring which child behaviors parents perceive as having the most barriers to support will provide an indication of the areas where parents require the most assistance from program planners and public health. Building on the literature, the current study aimed to explore whether the number of barriers to providing parental support for child physical activity, recreational screen time reduction, healthy eating, and/or sleep differed by the child health behavior being supported. A secondary objective of this study was to explore the types of barriers parents reported for supporting these individual child health behaviors and whether the types of barriers differed by child health behavior. This study adds to the literature by assessing barriers to supporting different child health behaviors in the same parents, including barriers to supporting child sleep. Study design This was a cross-sectional study with a sample of parents living in Ontario, Canada. Procedure This survey was conducted for baseline data collection of a provincial evaluation for a program targeting child physical activity and healthy eating through Ontario communities [33]. This study represents secondary analysis of this baseline survey which focused on parental support behaviors, including barriers to parental support, and parent-reported child health behaviors. Once recruited over the phone, participants provided consent to participate, and responded to the child demographic questions. They were then randomized to complete a child behavior module of questions (i.e., physical activity, screen time, healthy eating, or sleep). Due to the length of the survey, participants were asked to complete at least two behavioral modules; however, they could complete additional optional modules if they agreed. Participants who did not respond to all four behavioral modules of questions were excluded from the analysis. Although parents or guardians (referred to as parents going forward) of children ages less than 1 year were eligible to participate in the survey, they did not complete the behavioral modules and thus were excluded from this analysis. Before concluding the survey, participants were asked to provide demographic information. Data collection for this survey was completed entirely over the phone. This study was approved by the Public Health Ontario Ethics Review Board. Participants Data were collected by a hired research services vendor, between February and March 2015, using Computer Assisted Telephone Interviewing. 
A random sample of publicly available phone numbers, including both landlines and cellular phones, was drawn, and sampled participants were recruited by phone. Eligible participants were parents with at least one child less than 18 years of age living in their household.

Measures

Demographics. Parents reported their own gender, employment status, household income, marital status, and education. Parents also reported the age and gender of their child with the next birthday, and the number of children living in their household.

Barriers to parental support. Barriers to parental support were asked with one open-ended item for each of the child health behaviors being investigated. For child physical activity, healthy eating, and/or sleep, participants were asked, "Can you please tell me what, if anything, makes helping your child be (physically active/eat healthy foods/get enough sleep) difficult?" For child screen time, participants were asked, "Can you please tell me what, if anything, makes helping your child lower his/her screen time difficult?" An open-ended question was thought to be appropriate to allow unique and broad barriers to be reported; however, common coding categories of potential barriers to parental support were pre-identified before data collection using the literature. Interviewers were instructed to quantitatively indicate responses into the appropriate pre-identified coding category if it emerged. Barriers beyond the pre-identified categories were indicated as "other" and the open-ended response was transcribed verbatim.

Data analysis

Data analysis was completed using Microsoft Excel 2010 and R version 3.2.3. As the data are non-parametric (i.e., counts), normality was not assessed. Random regression imputation was used to account for missing income data [34]. Specifically, parent age, marital status, employment status, immigration status, and community were used in a prediction model to estimate income values where data were missing.

Demographics. Fisher exact tests for categorical demographic variables, and a Mann-Whitney test for the continuous demographic variable, were used to assess whether there were significant demographic differences between the full participant sample and the sub-sample included in this study.

Coding barriers to parental support. Two coders independently coded 25% of the open-ended "other" barrier responses for each behavior in order to deduce further coding categories, and to code any responses that fit into pre-identified coding categories but were not pre-coded by the interviewers. Overlap of 25% was thought to be conservative and appropriate as the codes reached saturation. The coders then clarified any disagreements regarding the new coding categories that emerged inductively and assigned each code a clarified definition that was added to the codebook for each child behavior. The coders then coded an additional 25% of the open-ended responses for each child behavior category with the finalized codebook. Cohen's kappa was calculated to establish inter-rater reliability, with > .80 being deemed acceptable agreement. Since the inter-rater reliability exceeded the acceptable agreement (Cohen's kappas all > .80), a single coder completed the remainder of the coding. In addition to the barriers coding categories established prior to data collection, several new codes emerged for each child behavior.
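The random regression imputation of missing income values described in the Data analysis paragraph above is not spelled out further in the text. As a rough, hypothetical sketch of one common implementation (the original analysis was run in R and Excel, so this Python version, its column names, and the numeric treatment of income are illustrative assumptions only, and predictors are assumed to be complete):

```python
import numpy as np
import pandas as pd

def random_regression_impute(df, target, predictors, seed=0):
    """Impute missing values of `target` from a linear model on `predictors`,
    adding a randomly resampled residual so imputed values keep realistic spread."""
    rng = np.random.default_rng(seed)
    # One-hot encode categorical predictors; numeric predictors pass through.
    X = pd.get_dummies(df[predictors], drop_first=True).astype(float)
    observed = df[target].notna()
    # Fit ordinary least squares on complete cases.
    X_obs = np.column_stack([np.ones(observed.sum()), X[observed].to_numpy()])
    y_obs = df.loc[observed, target].to_numpy(dtype=float)
    beta, *_ = np.linalg.lstsq(X_obs, y_obs, rcond=None)
    residuals = y_obs - X_obs @ beta
    # Predict the missing cases and add a resampled residual to each.
    X_mis = np.column_stack([np.ones((~observed).sum()), X[~observed].to_numpy()])
    imputed = X_mis @ beta + rng.choice(residuals, size=(~observed).sum(), replace=True)
    out = df.copy()
    out.loc[~observed, target] = imputed
    return out

# Hypothetical usage with assumed column names:
# df_imputed = random_regression_impute(
#     df, "income",
#     ["parent_age", "marital_status", "employment", "immigration", "community"])
```

Adding a randomly drawn residual, rather than imputing the bare regression prediction, avoids artificially shrinking the variance of the imputed income values.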
In line with previous work, the social-ecological framework [31] was adopted to organize the coding categories by barriers at the individual level (both parent and child), interpersonal level barriers, and environmental barriers.

Differences between child health behaviors. Friedman rank sum tests were used to assess differences in the number of reported barriers across child behaviors, and the differences in types (meta-categories) of barriers reported across child behaviors. The frequencies and types of reported barriers represent how often parents perceive these barriers, not the relative strength or magnitude of a barrier. Wilcoxon rank sum tests with a Bonferroni adjustment were used as a post hoc test for significant Friedman test results. With the Bonferroni adjustment, all effects were reported at a 0.0125 level of significance. Effect sizes were calculated using Vargha and Delaney's paired A statistic, which denotes the probability that a randomly chosen score for one behavior would be greater than a randomly chosen score for the comparator [35,36]. This effect size does not require parametric assumptions, allows for multiple group comparisons, and can be interpreted as a percent chance of difference, with 0.5 (i.e., 50%) indicating equality between groups [35,36].
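The analyses were run in R; purely as an illustration of the workflow just described (an omnibus Friedman test, Bonferroni-adjusted Wilcoxon post hoc comparisons, and an A statistic computed directly from its definition), a Python sketch on made-up barrier counts might look like the following. The counts, labels, and exact variant of the paired A statistic are assumptions for the example, not data or code from the study.

```python
import numpy as np
from scipy import stats

def vargha_delaney_a(x, y):
    """Probability that a randomly chosen score from x exceeds one from y,
    counting ties as half; 0.5 indicates no difference."""
    x, y = np.asarray(x), np.asarray(y)
    greater = (x[:, None] > y[None, :]).sum()
    ties = (x[:, None] == y[None, :]).sum()
    return (greater + 0.5 * ties) / (len(x) * len(y))

# counts[behavior] = number of barriers each parent reported for that behavior
# (same parents in every column; values are invented for illustration).
counts = {
    "physical_activity": [2, 3, 1, 4, 2, 3],
    "screen_time":       [1, 2, 1, 3, 2, 2],
    "healthy_eating":    [2, 1, 1, 2, 3, 1],
    "sleep":             [1, 1, 0, 2, 2, 1],
}

# Omnibus Friedman test across the four repeated measures.
chi2, p = stats.friedmanchisquare(*counts.values())
print(f"Friedman chi2 = {chi2:.2f}, p = {p:.4f}")

# Post hoc pairwise Wilcoxon signed-rank tests at the Bonferroni-adjusted alpha.
behaviors = list(counts)
alpha = 0.05 / 4  # 0.0125, as in the paper
for i in range(len(behaviors)):
    for j in range(i + 1, len(behaviors)):
        a, b = behaviors[i], behaviors[j]
        stat, p_pair = stats.wilcoxon(counts[a], counts[b], zero_method="zsplit")
        A = vargha_delaney_a(counts[a], counts[b])
        print(f"{a} vs {b}: V = {stat:.1f}, p = {p_pair:.4f}, A = {A:.2f}, "
              f"significant = {p_pair < alpha}")
```

Because A is a simple probability of superiority, values near 0.5 (as in several of the comparisons reported below) indicate only small differences even when p-values are significant.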
Results

Participants. Of the 3206 participants that responded to the survey, 1140 (35.6%) responded to all four behavioral modules and were therefore included in these analyses. Using the American Association for Public Opinion Research Outcome Rate Calculator [37], this study achieved a response rate¹ of 6.6% and a cooperation rate² of 11.9%. Fisher exact tests for categorical demographic variables revealed that participants who completed all behavioral modules (i.e., the subsample) did not significantly differ from the remaining sample by their reported income, marital status, education, gender, gender of child, or number of children in the household (ps > .05). The subsample was significantly different from the remaining sample by their language spoken at home and time since immigration to Canada (ps < .05). A Mann-Whitney test for child age revealed that parents in the subsample had children who were significantly older than in the remaining sample (p < .05; mean subsample = 9.1 years vs. mean remaining = 8.7 years). Demographic information for the subsample included in the analyses vs. the remaining participant sample is available in Table 1 (note: the subsample is parents who responded to all four behavioral modules and were subsequently included in the analyses).

Barriers coding. Descriptive statistics for the number of barriers to parental support reported by each parent by child health behavior are presented in Table 2. In general, the majority of parents reported at least one barrier to supporting each child health behavior, with less than a third reporting no barriers. The number of coding categories for barriers to supporting each child health behavior were 30, 36, 25, and 32 for child physical activity, recreational screen time, healthy eating, and sleep, respectively. Descriptive statistics for the number of parent-reported barriers by child health behavior, by both coding categories and meta-categories from the social-ecological model, are available in Table 3.

Differences between child health behaviors being supported

Number of parent-reported barriers by child health behavior. The number of parent-reported barriers to parental support was significantly different when considering the different child health behaviors being supported (χ2(3) = 22.31, p < .001). Post hoc tests revealed that there were significantly more barriers reported for supporting child physical activity compared to supporting recreational screen time reduction (V = 99967, p < .001, A = .53), healthy eating (V = 101830, p = .002, A = .54), and sleep (V = 107130, p < .001, A = .55). There were no other differences between behaviors (ps > .0125, .48 ≤ As ≤ .52).

Types of parent-reported barriers by child health behavior. Child individual level barriers were significantly different by child health behavior being supported (χ2(3) = 151.46, p < .001). Post hoc tests revealed that, compared to physical activity, there were significantly more child barriers reported to supporting both child healthy eating (V = 29316, p < .001, A = .59) and sleep (V = 35396, p < .001, A = .59). Similarly, compared to recreational screen time, there were significantly more child barriers reported to supporting both child healthy eating (V = 42202, p < .001, A = .58) and sleep (V = 36318, p < .001, A = .57). There were no other differences between behaviors (ps > .0125, .50 ≤ As ≤ .52). Environmental barriers were significantly different by child health behavior being supported (χ2(3) = 253.47, p < .001). Post hoc tests revealed that environmental barriers were reported significantly more frequently for supporting child physical activity compared to recreational screen time reduction (V = 28027, p < .001, A = .57), healthy eating (V = 29118, p < .001, A = .59), and sleep (V = 31330, p < .001, A = .58). In addition, compared to healthy eating, parents reported significantly more environmental barriers to supporting both child recreational screen time reduction (V = 6278.5, p < .001, A = .52) and sleep (V = 1925, p < .001, A = .51). There were no other differences between groups (p > .0125, A = .51).

Discussion

The results for both the number of parent-reported barriers to engaging in parental support activities, as well as the types of barriers reported, yielded several important and interesting findings. Though effect sizes were small, parents reported significantly more barriers to supporting their child's physical activity than any other child health behavior. Considering types of barriers reported in the context of the social-ecological model, significant differences emerged by child behavior. At the individual level, parents reported more child barriers to supporting child healthy eating and sleep than either physical activity participation or recreational screen time reduction. Conversely, more parent level individual barriers were reported for supporting child physical activity participation than any of the other child health behaviors. More parent level barriers emerged for supporting both healthy eating and recreational screen time reduction than for supporting sleep, with small effect sizes. When considering barriers related to the interpersonal influence of others, parents reported significantly more barriers to supporting child recreational screen time reduction than any other child behavior, with healthy eating and sleep support barriers being reported more often than physical activity support barriers, all with small effect sizes. Similar to the parent level barriers, parents reported significantly more environmental type barriers to supporting child physical activity participation than any other child health behavior.
More environmental barriers were also reported for supporting recreational screen time reduction and sleep than for supporting healthy eating, with small effect sizes.

For child physical activity, parents reported experiencing more barriers in general, and more parent and environmental level barriers specifically, than for supporting any other child behavior. The difference in the number of barriers being reported may be understood by comparing the types of barriers, and the types of support behaviors, involved with supporting physical activity versus the other behaviors. Many types of support for child physical activity involve direct and active participation from parents (e.g., playing outside with them), whereas for other child behaviors parental support may be more restrictive (e.g., setting rules for screens and bed time) or efficient (e.g., making dinner for the whole family at once, purchasing healthier options). As such, physical activity support behaviors may be more effortful and time-consuming for parents in addition to their everyday tasks, presenting more barriers. Common parent level (e.g., time/busy) and environmental (e.g., weather) barriers to supporting child physical activity that were identified in this study are also common in the literature [24-27]. The prevalence of these reported barriers suggests that they may be barriers that parents have difficulty overcoming. This makes sense, given that parent work commitments (e.g., "shift work taking away all my family time") and the unpredictable nature of the weather (e.g., "freezing temperatures") might realistically be, or be perceived to be, outside of parents' control. Interestingly, time as a barrier may be due to competing demands on parents, or it may reflect parents valuing other activities for their children above physical activity, such as homework [24]. Strategies that can help parents support their children's physical activity within limited time constraints (e.g., getting children to assist with active chores, letting them walk/ride a bike to school, registering them for active after-school programs in the community), and in a variety of weather scenarios (e.g., winter, rain), may be particularly beneficial for improving parental support for child physical activity.

Table 2. Number of barriers to support reported by each participant by child health behavior (columns: reported barriers per participant (#); barriers to supporting child physical activity, %(n); child screen time reduction, %(n); child healthy eating, %(n); child sleep, %(n)).

Parents reported experiencing more interpersonal barriers to supporting child recreational screen time reduction than any other type of behavior, although the effect size was small. This is not surprising given that screens are commonly used by children as a means of communicating with their social networks [26,28], and are increasingly being integrated into society [29]. When observing common barriers (of any type) to supporting recreational screen time, at the parent level a lack of parental control (e.g., "he/she is so strong willed" or "he/she does not listen") and a lack of alternate activities (e.g., "finding something else that he/she can do herself") were prominent, whereas at the child level child preferences (e.g., "he/she is obsessed", "they enjoy screen time") were common, aligning with previous studies [26,28,29].
Since children both enjoy screen time and use it for peer communication, parents may be met with resistance when trying to support screen time reduction [28]. This may lead parents to feel a lack of control over their child's behavior. However, finding alternate activities that children enjoy as much as screen time, that provide other opportunities for peer interaction, and that do not require active investment from parents and thereby manifest in other barriers (e.g., time/busy), may be challenging. Child level barriers to supporting healthy eating and sleep were reported more often than for physical activity or recreational screen time. Interestingly, child level barriers for healthy eating and sleep accounted for the two most reported categories of barriers at any socialecological level for any behavior. Taken together, it is evident that not only are more child level barriers being reported for healthy eating and sleep in comparison to other child behaviors, but that many parents are experiencing these types of barriers. For healthy eating, the vast majority (75%) of these barriers were accounted for by child preferences (e.g., "does not like certain fruits or veggies" or "the taste and texture of some foods"), whereas for sleep the majority was split between child preferences (35%; e.g., "doesn't want to go to sleep") and child schedule (24%; e.g., "evening activities" or "if he/she had a nap during the day"). Similar to recreational screen time, child preferences as a barrier to supporting healthy eating is well documented in the literature [25,27,29]. Assisting parents in developing the food skills and confidence to prepare foods that are both healthy and that children enjoy is a strategy worthy of further investigation. Although literature relating to supporting child sleep was not found, the barrier of child preferences emerged also for sleep indicating that strategies that make bed time more enjoyable for children may be effective. It is possible that these potentially useful strategies (i.e., find alternate activities more enjoyable than screen time, make foods healthy and enjoyable, and make bed time fun), which shift the type of parental support behaviors from more restrictive behaviors, to facilitative and participatory (similar to physical activity), might amplify other parent level barriers such as parent time, busy schedules, and cost. This raises an important consideration about achieving efficiencies in feasible ways. That is, what strategies make sense to combat common barriers? Will these ultimately be feasible for parents to implement in the face of new or evolving barriers? The burden that the aforementioned strategies for support place on parents is especially relevant considering that parents already report more barriers at the parent and child individual levels than interpersonal and environmental levels [29]. As such, alleviating child level barriers with strategies that may manifest in additional parent level barriers to support, might not be the most effective approach for reducing the overall number of barriers, and ultimately increasing parental support and child health. Although parental support strategies should make suggestions to parents about how they can effectively support their child within their given constraints, perhaps strategies focusing on creating supportive environments, thereby making it easier and more feasible for parents to engage in support behaviors, might be a more effective use of resources. This study is not without limitations. 
These analyses were conducted in a subsample of parents that decided to optionally complete two additional behavioral modules of a study. It is possible that this subsample represents a subpopulation of parents that are interested in child health behaviors. In this survey, parents were asked separately about whether they experienced barriers to accessing sports and recreation facilities. As such, some parents might have under-reported limited access to goods and services barriers for supporting child physical activity (felt they had already reported), or over-reported those barriers (already primed to consider them). Lastly, this study analyzed barriers to parental support considering the number of barriers that parents' reported, taking a greater number of barriers to mean that it was more often experienced by parents. However, we did not collect data regarding the magnitude of importance of each barrier, and therefore lack information about how impactful these barriers are to inhibiting parental support. Future research should extend these findings by seeking to understand which of the most commonly reported barriers identified here are most debilitating to parental support, and for what reasons. In doing so, it can be determined which specific barriers are both prevalent and challenging for parents to overcome, and thus make meaningful targets for interventions promoting parental support. Despite these limitations, this study presents several strengths. Specifically, this study extends qualitative work on barriers to parental support by quantifying which types of barriers are reported most frequently by parents across unique child behaviors. Using repeated measures to assess barriers to support for four distinct child behaviors in the same sample of parents is novel and allowed for complex comparison and interpretation between behaviors. Lastly, there is a lack of literature on barriers to parental support for child sleep. Given the growth of research on the importance of child sleep to health, acknowledged in the literature as well as by the inclusion of sleep in the most recent Canadian 24-Hour Movement Guidelines for Children and Youth [32], understanding how parents can support their child in getting enough sleep is a health promotion priority. This study can be considered a first step in beginning to understand which types of barriers to parental support might exist for sleep. Conclusions Parents reported different numbers of barriers to supporting child physical activity, recreational screen time, healthy eating, or sleep, with physical activity having marginally more reported barriers. Parents also reported experiencing different types of barriers when supporting these distinct child health behaviors. Overall, parental support strategies that help parents overcome the constraints of a busy schedule for supporting physical activity, and child level barriers such as child preferences are needed. Importantly, these strategies cannot undermine the fact that asking parents to do more, might not be feasible. Further research investigating which barriers are most challenging for parents to overcome, and why, and the effectiveness of strategies to overcome these common and challenging barriers, is needed. Endnotes 1 Number of people who participated, divided by the total number of eligible people in the total sample 2 Number of people who participated, divided by the number of eligible people with whom contact was made
Microneedle Array Technique for the Longitudinal Extraction of Interstitial Fluid without Hair Removal

Interstitial fluid (ISF) bathes the cells and tissues and is in constant exchange with blood. As an exchange medium for waste, nutrients, exosomes, and signaling molecules, ISF is recognized as a plentiful source of biomolecules. Many basic and pre-clinical small animal studies could benefit from an inexpensive and efficient technique that allows for the in vivo extraction of ISF for the subsequent quantification of molecules in the interstitial space. We have previously reported on a minimally invasive technique for the extraction of ISF using a 3D-printed microneedle array (MA) platform for comprehensive biomedical applications. Previously, hairless animal models were utilized, and euthanasia was performed immediately following the procedure. Here, we demonstrate the technique in Sprague Dawley rats, without the need for hair removal, over multiple extractions and weeks. As an example of this technique, we report simultaneous quantification of the heavy metals Copper (Cu), Lead (Pb), Lithium (Li), and Nickel (Ni) within the ISF, compared with whole blood. These results demonstrate the MA technique's applicability to a broader range of species and studies and the reuse of animals, leading to a reduction in the number of animals needed to successfully complete ISF extraction experiments.

Introduction

Pre-clinical development of novel diagnostics and therapies requires not only a concrete understanding of circulating biomarkers, but also an understanding of how circulating biomarkers compare with tissue-level expression. Serum, plasma, and urine are commonly utilized biofluids; however, interstitial fluid (ISF) is also recognized as a plentiful source of biomolecules [1-5]. ISF bathes the cells and tissues, is in constant exchange with blood, and acts as an exchange medium for waste, nutrients, exosomes, and signaling molecules [5,6]. Biomolecules quantified in ISF often have comparable levels to those found in serum, plasma, and whole blood [2,6,7], suggesting that ISF sampling could replace blood collection. However, unique biomolecules have also been identified in ISF compared to serum and plasma [2,3], which could result in novel biomarker identification. Sampling ISF for basic and pre-clinical animal studies shows promise, with numerous examples of ISF monitoring of specific molecules [8-12]. However, there remains a need for an inexpensive and facile technique that extracts ISF in vivo for general analysis. A simple ISF extraction technique could supplement or replace blood collection in a variety of time course studies.

Techniques for sampling ISF have evolved rapidly [2,6,13]. We previously reported on a minimally invasive technique for the extraction of ISF using a 3D-printed MA platform [1-3,14]. While this technique enables a variety of ISF analysis approaches [1-3], hairless animal models have been used, and euthanasia was performed following the procedure. Here, we demonstrate the technique in a widely used rat model, namely Sprague Dawley rats, without the need for hair removal, over multiple extractions and weeks. As an example of this technique, we report the simultaneous quantification of the heavy metals Copper (Cu), Lead (Pb), Lithium (Li), and Nickel (Ni) within the ISF, compared with whole blood. These developments allow for the use of a broader range of species and studies and the reuse of animals, leading to a reduction in the number of animals needed to complete ISF extraction experiments.

As an example, chronic exposure to heavy metals (HMs) is associated with many detrimental health effects, including cardiovascular disease, cancer, reproductive problems, kidney disease, and liver damage [15-19]. HM contamination in soil and water costs trillions of dollars annually to the U.S. and global economies in remediation and health costs [20,21]. As an example, Ni is a heavy metal that has been implicated in numerous medical conditions, including cancer, lung fibrosis, contact dermatitis, asthma, and cardiovascular disease [15,16]. Jewelry is commonly made from Ni, and prolonged contact with the skin can lead to Ni ions being absorbed through the skin. This leads to allergic effects in some individuals. To date, the authors are only aware of one study that examined heavy metal concentrations in ISF. Bonde et al. [22] used a suction-blister microneedle technique to extract ISF from 12 women with a nickel allergy (ISF successfully extracted from 10 subjects; 83% success rate), compared with individuals with no known Ni allergy. Atomic Absorption Spectroscopy (AAS) was then used to quantify the Ni in the ISF. Their results suggest that the Ni concentrations in the individuals with nickel allergy were significantly lower than in the controls. The authors suggested that an interesting question, warranting further study, is whether the Ni differences are due to possible differences in cellular uptake. However, the authors also suggested that the suction blister microneedle technique of ISF extraction may lead to escape of serum components through the microvasculature. Additionally, the suction blister technique inherently relies on localized trauma, in the form of a blister, which likely causes inflammation, separation of the dermal layers, and molecular changes within the ISF [2,23,24].

To demonstrate the applicability of our minimally invasive MA technique to longitudinal studies in haired animals, we simultaneously quantified the baseline Cu, Li, Ni, and Pb concentrations in the MA-extracted ISF of Sprague Dawley rats with ad lib access to tap water over 8 weeks, using inductively coupled plasma-mass spectrometry (ICP-MS).
These developments allow for the use of a broader range of species and studies and the reuse of animals, leading to a reduction in the number of animals needed to complete ISF extraction experiments. As an example, chronic exposure to heavy metals (HMs) is associated with many detrimental health effects, including cardiovascular disease, cancer, reproductive problems, kidney disease, and liver damage [15][16][17][18][19]. HM contamination in soil and water costs trillions of dollars annually to the U.S. and global economies in remediation and health costs [20,21]. As an example, Ni is a heavy metal that has been implicated in numerous medical conditions, including cancer, lung fibrosis, contact dermatitis, asthma, and cardiovascular disease [15,16]. Jewelry is commonly made from Ni, and prolonged contact with the skin can lead to Ni ions being absorbed through the skin. This leads to allergic effects in some individuals. To date, the authors are only aware of one study that examined the heavy metal concentrations in ISF. Bonde et al. [22] used a suction-blister microneedle technique to extract ISF from 12 women with a nickel allergy (ISF successfully extracted from 10 subjects; 83% success rate), compared with individuals with no known Ni allergy. Atomic Absorption Spectroscopy (AAS) was then used to quantify the Ni in the ISF. Their results suggest that the Ni concentration in the individuals with nickel allergy were significantly lower than the controls. The authors suggested that an interesting question, warranting further study, is whether the Ni differences are due to possible differences in cellular uptake. However, the authors also suggested that the suction blister microneedle technique of ISF extraction may also lead to escape of serum components through the microvasculature. Additionally, the suction blister technique inherently relies on localized trauma, in the form of a blister, which likely causes inflammation, separation of the dermal layers, and molecular changes within the ISF [2,23,24]. To demonstrate the applicability of our minimally invasive MA technique to longitudinal studies in haired animals, we simultaneously quantified the baseline Cu, Li, Ni, and Pb concentrations in the MA-extracted ISF of Sprague Dawley rats with ad lib access to tab water over 8 weeks, using inductively coupled plasma-Mass spectrometry (ICP-MS). Materials and Methods The animal care and use program of The University of New Mexico (UNM) is accredited by AAALAC International, and it approved all experiments (#19-200827-HSC). A total of three, 7-10-week-old, CD ® hairless, Crl:CD-Prss8hr, rats (2 female, 1 male) (Charles River Laboratories, Wilmington, MA, USA) and six, 5-6-week-old, Sprague Dawley, Crl:SD, rats (3 female, 3 male) (Charles River Laboratories, Wilmington, MA, USA) were used. Animals were anesthetized with 2.0 % Isoflurane using a nose cone. The Sprague Dawley rats were used for longitudinal studies to determine the baseline Cu, Pb, Li, and Ni concentrations in ISF, compared with whole blood. Formal power calculations to prespecify sample size were not possible due to the preliminary nature of this study. Ultra-fine Nano PEN needles (BD, Franklin Lakes, NJ, USA) were placed into MA holders [1,14] (Figure 1) and attached to calibrated pipet capillary tubes (Drummond Scientific Co., Broomall, PA, USA). The array assembly [1,14] was pressed onto the abdominal dermal tissue of the rats until a sufficient volume of ISF was collected. 
ISF, collected from the six Sprague Dawley animals in the longitudinal study, was transferred into microcentrifuge tubes containing 20 µL of HPLC-grade nitric acid for Inductively Coupled Plasma-Mass Spectrometry (ICP-MS). The six animals in the longitudinal study were removed from anesthesia following the microneedle applications. Rats were monitored during recovery, returned to their cage, and allowed to recover for 6 days under daily monitoring. On day 7, the above MA procedure was repeated. This process of ISF extraction, 6-day recovery, and ISF extraction was repeated for 8 weeks. All animals had a terminal cardiac puncture under heavy anesthesia at the conclusion of each experiment.

Samples were transferred into digestion tubes (15 mL), and the sample containers were rinsed with 200 µL of Ultra High Purity (UHP)-grade nitric acid (HNO3). Samples were then digested at 95 °C using a small digestion block for about 15 min. Samples were then cooled and brought to a final volume of 10 mL with 18-megaohm water to match the matrix of the calibration standards. The sample tubes were then transferred into SeaFast or PrepFast autosampler racks for analysis using the PerkinElmer NexION300D ICP/MS. The ICP/MS was optimized using a tuning solution for a wide range of masses. Both systems (SeaFast and ICP/MS) were conditioned twice using 2% HNO3 (UHP grade). The ICP/MS was calibrated using a blank and four calibration standards, ranging from 1.25-10.0 µg/L. Two calibration verification quality control samples (Initial Calibration Blank Verification "ICBV" and Initial Calibration Verification "ICV") were analyzed after the calibration standards to verify their accuracy.
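As a hypothetical illustration of the external calibration just described (a blank plus four standards, with periodic verification samples), the back-calculation of concentrations from instrument counts can be sketched in Python as below. The intensity values, acceptance window, and dilution factor are invented for the example; the paper does not report raw intensities.

```python
import numpy as np

# External calibration: blank plus four standards (concentrations in ug/L).
# Intensity counts are made up for illustration; real values come from the ICP-MS.
std_conc = np.array([0.0, 1.25, 2.5, 5.0, 10.0])
std_counts = np.array([120.0, 15300.0, 30100.0, 60800.0, 121500.0])

# Least-squares calibration line: counts = slope * concentration + intercept.
slope, intercept = np.polyfit(std_conc, std_counts, 1)

def counts_to_conc(counts, dilution_factor=1.0):
    """Back-calculate concentration from instrument counts, applying the
    dilution from bringing a digested sample to its final volume."""
    return (np.asarray(counts) - intercept) / slope * dilution_factor

# Continuing calibration verification: a mid-range standard re-measured every
# 20 samples should fall within an acceptance window (e.g., +/-10%).
ccv_measured = counts_to_conc(61200.0)
print(f"CCV recovery: {100 * ccv_measured / 5.0:.1f} %")

# Example sample: assuming 10 uL of ISF brought to 10 mL (dilution factor 1000).
print(f"Sample concentration: {counts_to_conc(1650.0, dilution_factor=1000):.1f} ug/L")
```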
Samples were analyzed, and a Continuing Calibration Verification (CCV) quality control sample was analyzed after every 20 samples to validate instrument and calibration stability. Data were verified, validated, exported, and reported via an Excel file. The data analysis was performed using Excel and Python.

Results

We successfully extracted ISF, using our MAs (Figure 1), from all CD® Hairless and unshaven Sprague Dawley rats, with average ISF extraction rates per application of 1.26 ± 1.00 µL/min and 0.81 ± 0.83 µL/min, respectively (Figure 2A). As expected, extraction rates were higher for the hairless animals; however, extraction rates were sufficient for the collection of up to 10 µL of ISF in under 30 min from the unshaven Sprague Dawley rats in the longitudinal study. There were no significant changes in mean fluid volume or extraction rates in the Sprague Dawley animals from week to week over the eight-week longitudinal study. A single application is defined as one MA insertion. Multiple MA insertions were performed per animal. No special preparation, such as shaving, was used prior to extraction. Table 1 shows the characteristics of these extractions. ISF was collected in 89.5% and 63.9% of MA applications in CD® Hairless and Sprague Dawley rats, respectively, and we were successful in collecting ISF from 100% of all hairless and unshaven haired rats. Additionally, no adverse events, such as lethargy or changes in appetite, water consumption, or physical appearance, were observed for any of the 6 animals in the longitudinal study. No weight loss was evident, and all 6 animals had growth curves consistent with the supplier (Figure 2B).

The six Sprague Dawley animals (unexposed) had ad lib access to tap water without any additional heavy metals added. For reference, the Environmental Protection Agency (EPA) sets the maximum contaminant level (MCL) of Cu, Li, Ni, and Pb in drinking water at 1.3 ppm, 0 ppm, 0.2 ppm, and 0.006 ppm, respectively [26,27]. ISF was extracted every 7 days for 8 weeks. At the time of each extraction, blood was also collected through a tail snip. The ISF and whole blood samples had all four heavy metals simultaneously quantified using ICP-MS. Figure 3 shows the unexposed blood vs. ISF concentrations for Cu, Pb, Li, and Ni. We found no significant difference between the ISF and blood concentrations of Cu (0.005 ± 0.034 and −0.001 ± 0.025 ppm, respectively), Li (0.046 ± 0.034 and 0.046 ± 0.025 ppm, respectively), Ni (0.122 ± 0.069 and 0.111 ± 0.108 ppm, respectively), or Pb (0.005 ± 0.070 and −0.001 ± 0.032 ppm, respectively). There were no significant changes in heavy metal concentrations from week one through to eight in either ISF or blood.
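The text does not state which statistical test underlies the "no significant difference" comparison. As one plausible, hypothetical way to run such a paired ISF-versus-blood comparison in Python (the language the authors report using for analysis), with invented values rather than the study's data:

```python
import numpy as np
from scipy import stats

# Paired ISF and blood measurements for one metal (ppm); values are made up
# for illustration -- the real data are per animal and week, as in Figure 3.
isf   = np.array([0.11, 0.13, 0.10, 0.12, 0.14, 0.12, 0.11, 0.13])
blood = np.array([0.10, 0.12, 0.11, 0.11, 0.13, 0.12, 0.10, 0.12])

# Paired t-test on matched ISF/blood samples from the same collection time.
t, p = stats.ttest_rel(isf, blood)
print(f"paired t = {t:.2f}, p = {p:.3f}")

# A non-parametric alternative if normality is doubtful.
w, p_w = stats.wilcoxon(isf, blood)
print(f"Wilcoxon W = {w:.1f}, p = {p_w:.3f}")
```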
The similarity between the ISF and blood concentrations of each of the heavy metals suggests that MA-extracted ISF may be a useful surrogate for blood in clinical applications and could be useful as a fluid for minimally invasive remote monitoring of chronic heavy metal exposures in the field. To our knowledge, this is the first report on the simultaneous quantification of multiple heavy metals in ISF in vivo.
Discussion
This expansion of our MA ISF technique into unshaven haired animals is safe over multiple repeated procedures and over multiple weeks. This allows for the re-use of animals and a reduction in the number of animals needed for ISF extraction experiments. This development has also demonstrated applicability in haired animals without the need for shaving. Eliminating the need for shaving reduces the time under anesthesia, total experiment time, and animal stress. This also opens the door for a much broader spectrum of ISF studies in possibly a much broader range of haired species. All previous publications on ISF extractions have described hairless or shaved animal models and/or alternate extraction techniques, such as the suction-blister technique. These techniques require either the formation of a blister, shaving, negative pressure, or the application of depilatory agents [23,24,28]. Each of these can have unwanted consequences for the concentrations of biomolecules in the ISF due to tissue injury and inflammation. We have established an MA design with greater spacing between microneedles and potential space in between contact sites, which allows for similar extraction rates in both hairless and haired animals, without the need for hair removal (1.26 ± 1.00 µL/min and 0.81 ± 0.83 µL/min, respectively). Although variability was evident in the ISF extraction rates, this variability was very similar between the two strains of rats. Additionally, individual animal and device factors may contribute to this variability. Skin thickness was not investigated in this study, as it requires biopsy or necropsy. We have previously investigated the anatomical positioning of the extractions, as well as different tip designs of the MA [14]. Future experiments to further define which variables contribute to the extraction of ISF will increase the applicability of this method. Additionally, we found that MA-extracted ISF may be a useful surrogate for blood for minimally invasive remote monitoring of chronic heavy metal exposures in the field. We measured Copper, Lead, Lithium, and Nickel in the ISF and blood of six Sprague Dawley rats and found that concentrations in ISF and blood were not significantly different: Cu (0.005 ± 0.034 and −0.001 ± 0.025 ppm), Li (0.046 ± 0.034 and 0.046 ± 0.025 ppm), Ni (0.122 ± 0.069 and 0.111 ± 0.108 ppm), or Pb (0.005 ± 0.070 and −0.001 ± 0.032 ppm), respectively. Future studies investigating ISF concentrations of other heavy metals such as arsenic, cadmium, and uranium may shed more light on the distribution of heavy metals within the ISF, their toxicity, and methods for remotely monitoring subjects with chronic heavy metal exposures, such as those living near abandoned uranium mines.
Funding: The authors disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: Baca and Taylor received infrastructure support from NIH CTSC (grant number ULITR00449). Baca, Bolt, and Taylor received research support from NIH grant R03ES031724. Bolt was supported in part from P20GM130422. Baca also received support from NIH Grant KL2TR001448. Zhu was supported in part by NIH (grant number 1 P20 GM13042201A1, 2 P20GM109089, 1 P20 GM 121176). Institutional Review Board Statement: The study was conducted in accordance with the Declaration of Helsinki and approved by the animal care and use program of The University of New Mexico (UNM), which is accredited by AAALAC International, and they approved all experiments (#19-200827-HSC). Data Availability Statement: The data presented in this study are available on request from the corresponding author. Conflicts of Interest: The authors declare no conflict of interest.
2022-06-04T07:09:13.540Z
2022-06-01T00:00:00.000
{ "year": 2022, "sha1": "1054156be446fde45b4794992e329fdefd5cf262", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2409-9279/5/3/46/pdf?version=1654646625", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "fe404fc07bbc241db4f7dbd23367eae809ff246a", "s2fieldsofstudy": [ "Engineering", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
267041419
pes2o/s2orc
v3-fos-license
An electronic nose can identify humans by the smell of their ear
Abstract
Terrestrial mammals identify conspecifics by body odor. Dogs can also identify humans by body odor, and in some instances, humans can identify other humans by body odor as well. Despite the potential for a powerful biometric tool, smell has not been systematically used for this purpose. A question arising in the application of smell to biometrics is which bodily odor source should we measure. Breath is an obvious candidate, but the associated humidity can challenge many sensing devices. The armpit is also a candidate source, but it is often doused in cosmetics. Here, we test the hypothesis that the ear may provide an effective source for odor-based biometrics. The inside of the ear has relatively constant humidity, cosmetics are not typically applied inside the ear, and critically, ears contain cerumen, a potent source of volatiles. We used an electronic nose to identify 12 individuals within and across days, using samples from the armpit, lower back, and ear. In an identification setting where chance was 8.33% (1 of 12), we found that we could identify a person by the smell of their ear within a day at up to ~87% accuracy (~10 of 12, binomial P < 10−5), and across days at up to ~22% accuracy (~3 of 12, binomial P < 0.012). We conclude that humans can indeed be identified from the smell of their ear, but the results did not imply a consistent advantage over other bodily odor sources.
Introduction
Biometric identification has grown to encompass a diverse repertoire of methods. Older methods such as fingerprinting and retinal scanning have been augmented with newer tools such as facial, gait, and voice recognition. Whereas visual and auditory source information is applied extensively in biometrics (Andrijchuk et al. 2005; Takayuki 2019), olfactory information is not. This technological lack is in sharp contrast to animal behavior. Terrestrial mammals can effectively identify conspecifics by body odor (Brennan and Kendrick 2006), suggesting that human body odor may provide an added measure for biometrics. Humans, in turn, exhibit mostly rudimentary capabilities in identifying conspecifics by smell. They can identify themselves at just above chance levels (Russell 1976; Hold and Schleidt 1977; Olsson et al. 2006), with women outperforming men (Schleidt et al. 1981). Humans can also identify their own offspring in multiple-alternative forced-choice tests (Porter and Moore 1981; Kaitz et al. 1987; Weisfeld et al. 2003), and can identify more distant kin in 2-alternative forced-choice tests (Porter et al. 1986). However, only about a third of humans can identify their regular sexual partners by body odor (Hold and Schleidt 1977; Schleidt et al. 1981). Curiously, this ability to identify partners by body odor is 2-fold better in women who experience repeated spontaneous unexplained pregnancy loss, a pattern that echoes the rodent Bruce effect (Rozenkrantz et al. 2020). Finally, humans can also identify their non-romantic friends by body odor (Olsson et al. 2006), and, in fact, may initiate friendships in part based on similarity in body odor (Ravreby et al.
2022).In the majority of the above instances, however, testing was in the form of two-or multiple-alternative forced-choice tests rather than biometric-type identification out of a large pool of candidates.Does this rudimentary performance imply that biometric-quality information is unavailable in human body odor, or rather that humans are simply not fully tuned to this information?Several lines of evidence point to the latter.First, in contrast to humans, dogs are in fact highly capable of identifying humans by smell (Hepper 1988;Pinc et al. 2011;Marchal et al. 2016;Woidtke et al. 2018;Filetti et al. 2019), even when the odor was collected from across body regions (Schoon and De Bruin 1994).Further evidence for individual olfactory fingerprints is available from gas-chromatography mass-spectrometry (GCMS) studies of human body odor (Bernier et al. 1999;Haze et al. 2001;Curran et al. 2005aCurran et al. , 2005bCurran et al. , 2007)).A particularly large study on axillary sweat, saliva, and urine samples from 197 adults found that members of the same family had more similar GCMS fingerprints to one another than to members of other families.Moreover, they found significant similarity in GCMS fingerprints per individual when comparing repeat samples of the same individual versus other participants (Penn et al. 2007).In a separate study, this approach enabled individual identification of 10 humans by the smell of their hand alone using GCMS (Curran et al. 2010).GCMS may also provide information beyond identification alone, such as body odor fingerprints associated with differing emotional states (Smeets et al. 2020). Whereas GCMS may allow for human olfactory fingerprinting, it is not a practical tool, primarily due to its cost, size, and complexity of operation.A practical alternative is in the sensing platform typically referred to as an electronic nose (eNose), namely an array of chemical sensors with various sensitivities that generate odorant-specific patterns (Persaud and Dodd 1982;Hodgins 1995;Schaller et al. 1998;James et al. 2005;Röck et al. 2008;Wilson and Baietto 2009;Karakaya et al. 2020;Khatib and Haick et al. 2022).Although there are many flavors of eNose, we know of only 3 pilot efforts to identify humans using these devices.An initial effort used cotton swabs to sample body odor from the armpits of 2 participants and later measured these samples offline with a lab-constructed eNose containing 5 different metal oxide sensors (Wongchoosuk et al. 2009).The authors reported successful discrimination between the 2 participants in a PCA plot.In an ensuing effort by the same group, the authors switched to online rather than offline sampling and reported successful discrimination between 4 individuals from armpit odor with 95% confidence (Wongchoosuk et al. 2011).Finally, a third effort entailed a wearable eNose containing 6 functionalized carbon nanotube sensors measuring from the armpit, and it was worn by 8 participants for 63 min each (Zheng et al. 2019).Using all the data, the authors reached 91.67% identification accuracy.Notably, all these previous efforts did not tackle new data obtained on a day different from the training data.In sum, the overall data on human identification by eNose is very limited. 
Two primary questions when setting out to identify humans by eNose are which eNose to use, and which bodily odor source to sample.This study relates to the latter.An obvious candidate for sampling is breath.Given that breath is a transfer media between the inside of the body and the outside world and is subject to metabolic bodily processes, it is potentially an ideal candidate.Indeed, breath is the primarily targeted media in various eNose-based disease detection efforts (Dragonieri et al. 2007(Dragonieri et al. , 2017;;Fens et al. 2009;De Vries et al. 2015;Scarlata et al. 2015;Wilson, 2015;Nardi Agmon et al. 2022).However, breath is also dramatically impacted by transient events, such as diet, and contains very high and variable humidity.Humidity is the primary enemy of many eNose platforms (James et al. 2005;Kashwan and Bhuyan, 2005;Röck et al. 2008) because water adsorbed on the sensing surface increases the resistance of the sensing layers and blocks the reaction site, causing gas sensor response drift (Abdullah et al. 2022).A second obvious candidate is armpit, which was indeed the target of the above-noted eNose efforts.The armpit, however, is often doused in cosmetics.Although such cosmetics may themselves provide relevant information (Allen and Roberts 2015), they may nevertheless stand in the way of individual fingerprinting.By contrast, in this study, we propose an alternative: the inside of the ear.The ear is easily accessible, it is not particularly humid (nor does its humidity fluctuate with the respiratory cycle), and the inside of the ear is not typically subjected to cosmetics (in contrast to the outside back of the ear).Most critically, the inside ear contains cerumen, a known source of body volatiles (Preti and Leyden 2010;Harker et al. 2014;Shokry et al. 2017).Moreover, an initial GCMS study identified 12 cerumen volatiles that can discriminate between East Asians and non-Asians (Prokop-Prigge et al. 2014), and a second GCMS study used cerumen to discriminate between African, Caucasian, and Asian descent participants (Prokop-Prigge et al. 2015).These ethnic specificities likely reflect differing overall quantities/intensity of cerumen, a pattern that has been linked to the ABCC11 gene (Preti and Leyden 2010;Harker et al. 2014).This, combined with the existence of volatile disease markers in cerumen (Barbosa et al. 2019), together suggests that this media may be a promising candidate for individual identification by smell. Here, we will address this hypothesis using an AirSense PEN3 eNose.This commercial device consists of a gas sampling unit and a sensor array which is composed of 10 thermo-regulated metal oxide sensors.Each sensor has a unique coating, making it sensitive to a particular set of chemical compounds.When a compound interacts with the sensor, the resulting oxygen exchange causes decreased electrical conductivity.These changes are seen in a 10-channel time series from which we can extract meaningful information about the odor.We used this device to sample 12 individuals once a day for 5 days.Each sampling session contained 3 samples from each of 3 body regions: the armpit, ear, and lower back.We asked 2 questions: can we identify participants from the odor of their ear, and does ear outperform armpit and lower back. 
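For orientation, the resulting data set can be pictured as a small array sketch (illustrative only; the variable names and ordering are assumptions, not the study's actual code), with 12 participants × 5 days × 3 body regions × 3 repeats × 20 combined sensor endpoints:

```python
import numpy as np

n_participants, n_days, n_regions, n_repeats, n_sensors = 12, 5, 3, 3, 20
regions = ["ear", "armpit", "lower_back"]

# One endpoint value per sensor per sample: the two 10-sensor eNoses are
# concatenated into a single 20-value feature vector, as described above.
endpoints = np.zeros((n_participants, n_days, n_regions, n_repeats, n_sensors))

# Flatten to a (samples x features) matrix plus participant labels for a classifier.
X = endpoints.reshape(-1, n_sensors)
participant_labels = np.repeat(np.arange(n_participants),
                               n_days * n_regions * n_repeats)
print(X.shape, participant_labels.shape)  # (540, 20) (540,)
```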
Participants Twelve healthy adults were selected for this study with no other exclusion criteria (8F, 4M, mean age 30.5 ± 7.8 years).The 12 individuals came to the lab for 5 consecutive days (Sunday to Thursday) where their ear, armpit, and lower back were sampled by 2 eNoses 3 times each on each day.All participants provided informed written consent to procedures approved by the Weizmann Institute of Science Institutional Review Board, in compliance with the declaration of Helsinky for Medical Research involving human subjects.Participants were paid 50 NIS (~14 Euro) per day, leading to a total of 250 NIS (~70 Euro) paid in full on the last day. eNose setup We used 2 PEN3 eNoses (AIRSENSE Analytics GmbH, Schwerin, Germany).The PEN3 consists of a gas sampling unit and a sensor array.The sensor array is made of 10 different thermo-regulated metal oxide sensors held in a stainless-steel chamber (volume: 1.8 mL, temperature: 110°C).Each sensor is coated in a unique material that makes it sensitive to different sets of chemical compounds.The sensitivities of each sensor can be seen in Table 1 (From the manufacturer, recreated in Zheng and Wang 2006;Wu et al. 2017).When a compound interacts with the surface of the sensor, the oxygen exchange that occurs causes a change in electrical conductivity.This conductivity is the unit of measurement displayed in the time series produced by the device.We used the PEN3 with its native software (Winmuster). Two eNoses were used simultaneously in this experiment.Both eNoses used the following parameters: Measurement time = 50 s, Flush time = 40 s, Zero-point trim time = 10 s. eNose 1 was given a Chamber flow and Initial injection flow of 400 mL/min, and eNose 2 was given a Chamber flow and Initial injection flow of 600 mL/min.Both eNoses were connected via USB to 2 separate computers.To run a measurement on both eNoses, the experimenter pressed "play" on both Winmuster interfaces simultaneously.This would start an entire eNose cycle, which for this experiment consisted of a 40 s Flush phase, 10 s Baseline phase, and a 50 s Measurement phase, for a total of 100 s per measurement.Each measurement from each eNose was then named and saved to its respective computer. Sampling cup and tip We developed a device that combined both eNose sampling tubes into one Teflon probe that was placed at the opening of the ear canal in a steady and controlled manner (Fig. 1).The device contained a sampling cup that was 3D-printed in a shape that covers the entire ear.This cup allowed for an ear headspace unaffected by room airflow, blocked possible cosmetic VOCs from behind the ear, and held the sampling tip at a fixed location in front of the ear canal.The sampling tip was machined from Teflon to assure minimal odor contamination.The 6 mm diameter tip contained an inner septum such that both eNose flow paths remained independent to the tip.The tip was machined as a perforated ball, with 3 1-mm perforations for each flow path.We prepared an independent cup for each participant and a total of 20 tips for the experiment.In each session, we used a separate tip for each body region, and tips were washed in boiling water and air-pressure dried between uses. 
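A minimal configuration sketch of the acquisition settings above (the class and field names are illustrative assumptions; only the timing and flow values are taken from the text) is:

```python
from dataclasses import dataclass

@dataclass
class PenCycle:
    """Timing and flow settings for one PEN3 measurement cycle."""
    flush_s: int = 40       # Flush phase
    baseline_s: int = 10    # Zero-point trim / baseline phase
    measure_s: int = 50     # Measurement phase
    flow_ml_min: int = 400  # Chamber and initial injection flow

    @property
    def total_s(self) -> int:
        return self.flush_s + self.baseline_s + self.measure_s

enose_1 = PenCycle(flow_ml_min=400)
enose_2 = PenCycle(flow_ml_min=600)
print(enose_1.total_s, enose_2.total_s)  # 100 s per measurement for both devices
```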
Control odorant To later account for potential eNose drift, we collected 2 samples of an identical odorant at the beginning of each session.The 2 samples differed in that one accumulated jar headspace for 12 h before sampling and the other was fresh.This control odorant contained 10 components present in general body odor (de Lacy Costello et al. 2014;Jha et al. 2015;Prokop-Prigge et al. 2016) Procedures Participants were instructed not to wear deodorant or perfume and were provided with identical body soap, shampoo, and conditioner to use throughout the study period such that any cosmetics-associated odors would be constant across the cohort.Each day before the first participant arrived (Supplementary Table 1), the eNoses were turned on and sent through a "clean cycle."A "clean cycle" refers to running the eNoses through a full 100 s eNose cycle but with the sampling tubes not attached to anything.This is done to clean any residual odor held in the eNose tubes.The eNoses were run on these "clean cycles" as many times as necessary until the measurement showed that all sensors were steady at 1 G0/G.During this time, the fresh control odor was prepared (~5-10 min before sampling it) before each participant.Once clean, the first Baseline 0 was taken.This baseline was done by running the eNoses through another cycle and saving this result.Then the overnight control was sampled followed by the fresh control, using the sampling protocol for eNose vials through a septum cap.The eNose was run through another clean cycle, followed by taking Baseline 1 of the open room air with the sampling cup and tip attached to the eNose tubes. Then the participant's first body region (either right ear, right armpit, or lower back) (Supplementary Table 2) was sampled 3 times in a row, followed by a clean cycle and taking Baseline 2 with a new tip and the same sampling cup attached.The second body part was measured 3 times with this new tip, followed by another clean cycle and Baseline 3 with another new tip.Finally, the last body part was measured 3 times, and the eNose was run through clean cycles until the next participant came (Fig. 2). The right ear was sampled by asking the participant to hold the sampling cup and tip up to their ear and ensure the tip did not touch any part of their ear.The right armpit was sampled by adjusting the tip as far out of the cup as possible and helping the participant hold the unit on their armpit.The lower back was sampled with the tip in the same location as the armpit, and then asking the participant to lift their shirt while the experimenter held the unit in place on their back.For each body part, the sampling unit was placed against the body part 5 s before the end of the Baseline Trim phase in the eNose cycle. Data analysis All analyses were conducted using MATLAB software (Mathworks, USA).We used the endpoints of each sensor to condense the 10-channel × 50 s time series for one eNose to a vector of 1 × 10 endpoints.We did this for each of the 2 eNoses, then combined the data from both sets of 10 sensors to create a hybrid dataset of all 20 sensors together yielding 1 × 20 endpoints per sample.Initial analysis was conducted using the previously selected Fine K-nearest neighbors (Fine KNN) classifier (Ravreby et al. 
2022). Additional exploratory testing was conducted by entering Day-1 data for each body region into MATLAB's Classification Learner Toolbox with 36-fold cross-validation (leave-one-out test), where classification was attempted by 24 classifiers. For each body region, we first identified the best classifier. The a-priori selected Fine KNN classifier was indeed best for ear data, and the Linear Discriminant model was best for armpit and lower back data. For each comparison, we then trained and tested the model on both leave-one-out (36-fold cross validation) and 3-fold cross validation (train on 66.7%, test on 33.3%) for 500 iterations. In a leave-one-out validation test, the model was trained on 35 samples and tested on the one remaining sample. In a 3-fold cross validation test, the model was trained on 2 samples per person and tested on the third sample per person. After reviewing the initial results, we proceeded to only use the stricter 3-fold cross validation test. We ran the classification 500 times and took the mean value as the true accuracy of the model.
Accounting for drift
To account for drift, we took regular eNose measurements of the above-described control odorant. An individual control odorant, made fresh prior to each participant, was sampled before measuring each participant. To generate drift-corrected values, all ear, armpit, and lower back data from each participant on any given day were then divided by the matched fresh control odorant endpoint measurements from the corresponding day. This means that all sensor endpoint data of Ear Day 1, Armpit Day 1, and Back Day 1 for Participant 1 were divided by all sensor endpoint data of Fresh Control Odor Day 1 Participant 1, and so on.
Permuting data and statistical testing
To determine whether the classification accuracy for each body part was statistically significant, we shuffled the class labels, ran the shuffled data through the trained classifiers, and used the resulting accuracy values from 500 iterations to build a null distribution. This was done for both the 3-fold cross validation test and the 36-fold cross validation test. The accuracy values of the shuffled distribution were then compared to the median accuracy from the distribution of real accuracies to obtain the p statistic. The binomial p statistic was then calculated with the following formula: P = [n!/(r!(n − r)!)] × p^r × (1 − p)^(n − r), where the probability (p) of a correct outcome by chance is 0.0833, n is 12 for the 12 individuals, and the number correct (r) is the number out of 12 that yielded the mean accuracy value on the test for which we are computing the statistic.
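A hedged sketch of this analysis pipeline follows (the study itself used MATLAB's Fine KNN and Classification Learner; scikit-learn stands in here, the data are synthetic, and only 100 label shuffles are run rather than the 500 used in the study): endpoints are drift-corrected by dividing by the matched fresh control, classified with a k = 2 nearest-neighbour model under 3-fold cross validation, compared against a label-shuffled null, and evaluated with the binomial term just stated.

```python
import numpy as np
from math import comb
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(0)

# Synthetic stand-in for one day of ear data: 12 people x 3 samples x 20 endpoints.
n_people, n_reps, n_sensors = 12, 3, 20
raw = rng.normal(1.0, 0.05, size=(n_people, n_reps, n_sensors))
fresh_control = rng.normal(1.0, 0.05, size=(n_people, 1, n_sensors))

# Drift correction: divide every sample by that participant's fresh control odor.
X = (raw / fresh_control).reshape(-1, n_sensors)
y = np.repeat(np.arange(n_people), n_reps)

# k = 2 nearest-neighbour classifier, 3-fold cross validation
# (train on 2 samples per person, test on the remaining one).
knn = KNeighborsClassifier(n_neighbors=2)
cv = StratifiedKFold(n_splits=3, shuffle=True, random_state=0)
real_acc = cross_val_score(knn, X, y, cv=cv).mean()

# Label-shuffled null distribution for a permutation p-value.
null_acc = np.array([cross_val_score(knn, X, rng.permutation(y), cv=cv).mean()
                     for _ in range(100)])
perm_p = (np.sum(null_acc >= real_acc) + 1) / (null_acc.size + 1)

# Binomial term from the formula above: probability of exactly r of n correct
# when chance is p = 1/12 (r = 10 corresponds to ~87% of 12).
def binomial_term(r, n=12, p=1 / 12):
    return comb(n, r) * p**r * (1 - p) ** (n - r)

print(f"accuracy {real_acc:.2f}, permutation p {perm_p:.3f}, "
      f"binomial term for r=10: {binomial_term(10):.1e}")
```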
Results We tested real-time sampling of body odors for personal identification using eNose technology.Twelve participants (8F, 4M) had their right ear, right armpit, and lower back sampled 3 times each per session, for 5 consecutive days.Two control odors (fresh and overnight) were sampled before each participant, and a baseline measurement was taken between each body part and control odors.There are 2 primary approaches to treating eNose sensor signals: one is to use the entire timecourse, considering its full shape, and the other is to use only the point at which the sensor reaches equilibrium.In the current instance, the latter reduces the 10-sensor 50-s series to a 1 × 10 vector.We explored both approaches, yet to maintain a manageable manuscript extent, we report only on the latter, which produced superior performance in this particular case.Moreover, we simultaneously sampled using 2 technically identical eNoses, each sampling at a slightly different airflow rate (400 mL/min and 600 mL/min).We acknowledge that there are various possible paths to combining these data traces, yet this manuscript is focused not on optimizing eNose methodology, but rather on the question of whether humans can be identified by the smell of their ear, and, therefore, we limit our report to the simple combination of both devices, that is, we treat them as one eNose with 20 sensors, as this approach yielded slightly better results than each eNose alone. We note that all the raw data of this manuscript is available for download in Supplementary Data Set 1, allowing for alternative investigations of the data. Individuals can be identified from samples within a day We sought to find if, within one day of sampling, the 12 participants could be classified from one another accurately.The data were split into individual days where each day contained three samples per body part per person (36 ear samples per day, 36 armpit samples per day, 36 lower back samples per day).Given drift in eNose signal (see estimate of the drift in Supplementary Fig. 1), we conducted analyses twice: once without and once with correction for drift by calibrating to the prepared control odorant.In a previous study using this same eNose to measure body odor, we found that the KNN classifier provided the strongest outcome (Ravreby et al. 2022), so we, therefore, applied this same classifier here.The chance probability for classifying an individual in this test is 8.33%.We observed that with k = 2 neighbors, using a leave-one-out test, the Fine KNN classifier identified individuals from the smell of their ear within a single day with an average accuracy of 67.2% without drift-correction (difference from chance, binomial P < 10 −5 ) (Supplementary Fig. 2A) and 87.8% with correction (difference from chance, binomial P < 10 −5 ) (Fig. 3A).Using the stricter 3-fold cross validation, identification was 50.9% for uncorrected (difference from chance, binomial P < 10 −5 ) (Supplementary Fig. 2B) and 69% for drift-corrected data (difference from chance, binomial P < 10 −5 ) (Fig. 3B).Using the same classifier for armpit, with the leave-one-out test, we observe 58.9% for uncorrected (difference from chance, binomial P < 10 −5 ) and 77.2% for driftcorrected data (difference from chance, binomial P < 10 −5 ) (Fig. 3A).Using the stricter 3-fold cross validation, identification was 45.5% for uncorrected (difference from chance, binomial P < 0.0002) and 63.3% for drift-corrected data (difference from chance, binomial P < 10 −5 ) (Fig. 
3B).Finally, using the same KNN classifier for lower back data, with the leave-one-out test, we observe 59.4% for uncorrected (difference from chance, binomial P < 10 −5 ) and 75.6% for driftcorrected data (difference from chance, binomial P < 10 −5 ) (Fig. 3A).Using the stricter 3-fold cross validation, identification was 40.7% for uncorrected (difference from chance, binomial P < 0.001) and 62.4% for drift-corrected data (difference from chance, binomial P < 10 −5 ) (Fig. 3B).To directly compare these results, we conducted a one-way analysis of variance (ANOVA) on the 3-fold validated data, with a condition of body region.When using the data not corrected for drift, we observe a significant effect (F = 770.9,P < 10 −5 ). Here, however, post-hoc tests revealed a different order of effectiveness: lower back was significantly better than armpit (P < 10 −5 , Cohen's d = 0.2855), yet ear was significantly better than both (ear vs lower back: P < 10 −5 , Cohen's d = 1.784, ear vs armpit: P < 10 −5 , Cohen's d = 2.438).In sum, using the previously applied KNN classifier, within-day classification was better than chance using any body region, both with and without correction for drift.When correcting for drift, the best results were obtained from ear data.Consistent with our hypothesis, the final above analysis using our a-priori selected classifier (Ravreby et al. 2022) implied an advantage for sampling from the ear.To gauge the strength of this, we tested whether we could negate this ear advantage by identifying an optimal classifier for each body region.We tested the 24 classifiers available in the MATLAB Classification Learner Toolbox.We found that the Linear Discriminant model provided the best results for both armpit and lower back data.Using a 36-fold cross-validation test on drift-corrected data, the average identification accuracy within 1 day for armpit was 73.9% (difference from chance, binomial P < 10 −5 ), and lower back was 77.2% (difference from chance, binomial P < 10 −5 ) (Fig. 3C).On a 3-fold crossvalidation test, the accuracy for armpit was 79.1% (difference from chance, binomial P < 10 −5 ) and lower back was 83% (difference from chance, binomial P < 10 −5 ) (Fig. 3D).We now again conducted a one-way ANOVA with a condition of body region, this time using the best classifier for each region.We found a significant effect of body region (F = 1667, P < 10 −5 on leave one out, and F = 1273, P < 10 −5 on 3-fold).Post-hoc tests on the 3-fold condition revealed that armpit was better than ear (P< 10 −5 , Cohen's d = 1.784), and lower back was better than both (lower back vs armpit: P < 10 −5 , Cohen's d = 0.8061, lower back vs ear: P < 10 −5 , Cohen's d = 3.722).Using the best classifier we could find for each body region, the lower back was now better than the ear for classification. With the results of the optimal classifiers in hand, we sought to better evaluate the difference from chance.We compared the median value of 500 iterations of 3-fold classification on real data with the median value of 500 iterations of classification on shuffled data using both the worst-performing day and the best-performing day for each body region respectively.For both uncorrected and drift-corrected data, and on best and worst-performing days for the ear, armpit, and lower back, this test (the Bernoulli probability) yielded a permutation P-value = 0.002 (Fig. 4, Supplementary Fig. 
3). In conclusion, even on the worst sampling day, the ear, armpit, and lower back all perform significantly better than chance, with or without drift correction.
Fig. 3. Individuals can be identified across samples within a day. A) Mean classification accuracy of individuals using the drift-corrected odor from ear, armpit, and lower back when trained and tested within single days using the Fine KNN classifier in a leave-one-out test for all body regions. Drift correction is done by dividing each sample by its matched fresh control odor. The overall average accuracy with standard error bars for ear, armpit, and back is shown as an insert in each panel. The chance level prediction accuracy is shown with the dashed black line at 8.3% accuracy. B) Within-day classification accuracy under the same conditions but for the 3-fold cross validation case plotted with standard deviation. C) Mean within-day classification accuracy of individuals using the drift-corrected odor from ear, armpit, and lower back trained with each body region's best-performing respective classifier in a leave-one-out test for all body regions. D) Within-day classification accuracy under the same conditions but for the 3-fold cross validation test plotted with standard deviation.
Individuals can be identified using samples accumulated over days
We next sought to determine whether we can identify people not only within 1 day but using multiple accumulated days. We applied the same analysis with the same optimal classifiers for each region and trained and tested on consecutively aggregated days (e.g. 2 days of data, 3 days, 4 days, and 5 days). Specifically, we conducted both the leave-one-out and the leave-one-sample-per-participant-out tests on each of these sets. For the leave-one-sample-per-participant-out test, this meant that as more data were added to the train/test set, we adjusted the folds in the cross validation to maintain the same level of stringency. For example, on 2 days of data, the training set took 5 samples per person and tested on one sample per person, yet on 4 days of data, the training set took 11 samples per person and tested on one sample per person. The results from the leave-one-sample-per-participant-out test are presented here. We compared the median accuracy of 500 iterations of classification on real data to 500 iterations of classification on shuffled data. For the ear, armpit, and lower back, training and testing on 5 days yielded a permutation P-value of 0.002 for both uncorrected and drift-corrected data (Supplementary Fig. 4C-E; Fig. 5C-E). Using uncorrected data, a two-way ANOVA with factors of body region and number of days used to train and test the model revealed a significant effect of body region (F(2, 5988) = 25,570, P < 10 −5 , ear = 0.379 ± 0.018, armpit = 0.527 ± 0.022, lower back = 0.445 ± 0.023), and a significant effect of number of days used to train and test the model (F(3, 5988) = 7109, P < 10 −5 , days 1-2 = 0.515 ± 0.027, days 1-3 = 0.446 ± 0.02, days 1-4 = 0.428 ± 0.018, days 1-5 = 0.414 ± 0.014). There was also a significant interaction between the body region and the number of days used to train and test the model (F(6, 5988) = 1879, P < 10 −5 ). A post-hoc Tukey's test revealed that lower back performed better than ear (P < 10 −5 , d = 3.262), and armpit performed better than both (armpit vs ear: P < 10 −5 , d = 7.51, armpit vs lower back: P < 10 −5 , d = 3.696) (Supplementary Fig.
4A and B).Repeating this ANOVA with drift-corrected data revealed a significant effect of body region (F(2, 5988) = 49,280, P < 10 −5 , ear = 0.708 ± 0.014, armpit = 0.576 ± 0.021, lower back = 0.524 ± 0.021), and a significant effect of number of days used to train and test the model (F(3, 5988) = 52,097, P < 10 −5 , days 1-2 = 0.756 ± 0.024, days 1-3 = 0.612 ± 0.017, days 1-4 = 0.545 ± 0.017, days 1-5 = 0.497 ± 0.013).There was also a significant interaction between the body region and the number of days used to train and test the model (F(6, 7488) = 6910, P < 10 −5 ).A post-hoc Tukey's test revealed that armpit performed better than lower back (P < 10 −5 , d = 2.465), and ear performed better than both (ear vs lower back: P < 10 −5 , d = 10.13,ear vs armpit: P < 10 −5 , d = 7.372) (Fig. 5A and B). To judge the impact of added days of data on overall classification accuracy, we conducted a one-way ANOVA for each body region on the accuracy values from 500 iterations of a leave-one-sample-per-participant-out test using each region's respective classifier.When using uncorrected data, the classification accuracy initially decreased then slightly recovered for the ear and armpit as more days were added to the data set (ear F = 2574, armpit F = 755.9).With the lower back data, the accuracy continuously decreased as more days were added (F = 6869.7)(Supplementary Fig. 4A and B Individuals can be identified across days We sought to conduct an even stricter test where we asked: if we train a model on a group of days, can it identify a person on a new day of data?In this test, the model was trained and validated with leave-one-sample-per-participant-out cross validation.Importantly, a full day of left out data was used as an unseen test set to evaluate the performance of the trained model.As far as we know, this was not previously achieved in human identification by eNose.We took the last case, where the classifiers were trained and validated on the first 4 days of samples and tested on the unseen fifth day of samples, and compared the true accuracy value to a distribution of shuffled null data (Fig. 6B-D).Using uncorrected data, for the ear, training on days 1, 2, 3, and 4 and testing on day 5 yielded 22.2% accuracy (permutation P value = 0.012) (Fig. 6A).This same test yielded 44.4% accuracy for the armpit (permutation P value = 0.002), and 16.7% accuracy for the lower back (permutation P value = 0.046).These results show that for all body regions, after training on 4 days of samples, we can classify new, unseen data at above chance levels.Unlike in the within-day tests, in the across-days comparison, correcting for drift using the control odor in fact lowered rather than improved performance (Supplementary Fig. 5). Discussion In this study, we sought to determine whether we could use standard eNose technology to identify people by the smell of their ear.We also tested two other body regions, armpit, and lower back, for their efficacy in personal identification from odor.We found that sampling any of the 3 body regions can distinguish between 12 individuals significantly above chance.When we learned and tested on data from the same day, performance was remarkably good for all body regions.Ear was indeed slightly better than armpit and back when we used our a-priori selected classifier (Fig. 3A and B), but this advantage was negated when we optimized classifiers for each body region, resulting in lower back outperforming ear and armpit (Fig. 
3C and D).In turn, the ear advantage reemerged as more data was added.In other words, if we combine data across days to form a virtual day of extensive testing, the ear stands out (Fig. 5A and B).This provides some support for our hypothesis of higher stability in the earbased odor source-increasing data set size reduced noise in ear more than in armpit and back.However, in contrast to the biometric-quality within-day data, performance across days was relatively poor.After 4 days of training on ear data, testing on a separate fifth and new day of data pushed performance down from ~87% to ~22% accuracy, a result still well above chance, but not biometric.In turn, armpit outperformed ear and lower back on the cross-day novel data test, achieving 44.4% accuracy.To nevertheless place these underwhelming results in a more positive light, previous efforts to identify humans by smell with an eNose could only identify when using all the available data (Wongchoosuk et al. 2009(Wongchoosuk et al. , 2011;;Zheng et al. 2019).As far as we know, identification of newly acquired data (as in Fig. 6) was not previously achieved in an effort to identify humans by smell. Although significant, this level of performance remains insufficient for a biometric tool.Why did cross-day performance deteriorate from within-day performance to this extent?Our study was conducted in a naturalistic setting.We did not collect samples from the body and submit them to testing in a controlled environment (e.g.temperature-and humiditycontrolled testing vials), but rather conducted real-time sampling in 50 s or less.As a result, the overall extent of odor, and hence eNose signal, was very low in this experiment.Moreover, the experiment was conducted in a typical room fluctuating temperature and humidity conditions that significantly impact eNose readings.Finally, even under the most controlled (non-naturalistic) conditions, there is still drift in both actual body odor and in eNose sensor performance (Supplementary Fig. 1).We made efforts to remove this drift from the signals by dividing all sample endpoints by their matched fresh control sample endpoints.This yielded a significant increase in the accumulated days' classification accuracies, improving the last case of training and testing on all 5 days for the ear from 37.9% to 70.8% on a leave-onesample-per-participant-out test.Similar increases in accuracies were also seen for the other body regions.Nevertheless, performance remained relatively poor across days (we note that we stumbled across a drift-correction method that provided for better cross-day results; specifically, dividing all data by the ear day-1 baseline.However, because we have no rationale to justify this approach, we present it as an anecdote in Supplementary Fig. 6, hinting at potential levels of improvement in performance that may be attainable with a better understanding of these signals).This level of performance may be improved in the future by better computational tools (Kermani et al. 1999;Lötsch et al. 
2019) (we reiterate that the entire raw data set is available for download such that members of the community may test alternative approaches), and by better eNose hardware optimized for the very low level of VOCs involved in body odor sampling (Jha and Yadava 2011;Sabri and Alfred 2018).Taken together, we conclude that what we provide here is an affirmative proof of concept: humans can be identified by the smell in their ear, and that sampling ear odor may provide some practical advantages over other body parts. Fig. 1 . Fig. 1.The sampling cup and tip.A) The sampling cup alone.B) The sampling tip alone.C) The cup and tip combined.D) The apparatus applied to the ear with both eNose hoses attached. Fig. 2 . Fig. 2. Flow diagram of the experimental protocol for each participant. P ).A Tukey's post-hoc test on ear data revealed that the accuracy decreases when training/testing the model on 3 days versus 2 days (P < 10 −5 , d = 5.338), then rises when training/testing the model on 4 days versus 3 days (P < 10 −5 , d = 0.555) and 5 days versus 4 days (P < 10 −5 , d = 0.55).Similar to the ear, Tukey's test on the armpit showed that accuracy decreases when training/testing the model on 3 days versus 2 days (P < 10 −5 , d = 2.287), then rises for 4 days versus 3 days (P < 10 −5 , d = 0.932).However, training/testing the model on 5 days versus 4 days slightly lowers the accuracy (P < 10 −5 , d = 0.414).Contrary to the ear and armpit, Tukey's test showed that accuracy on the lower back decreases consistently when training/testing the model on 3 days versus 2 days (P < 10 −5 , d = 2.116), on 4 days versus 3 days (P < 10 −5 , d = 3.85), and on 5 days versus 4 days (P < 10 −5 , d = 2.94).When using drift-corrected data, the classification accuracy decreased for all body regions as more days were added to the data set (ear F = 6814, armpit F = 12,630, lower back F = 37,800) (Fig.5A and B).A Tukey's post-hoc test on ear data revealed that the accuracy decreases when training/testing the model on 3 days versus 2 days (P < 10 −5 , d = 4.613), on 4 days versus 3 days (P < 10 −5 , d = 2.113), and on 5 days versus 4 days Fig. 4 . Fig. 4. Within single day performance was significantly better than chance.A-C) Distributions of accuracy values on 500 iterations of training/testing with the best respective classifiers on drift-corrected real data on the right and shuffled data on the left for ear, armpit, and lower back, respectively, from the best-performing day.Median accuracy values are displayed in dashed lines.D-F) Distributions of accuracy values under the same conditions but for each body region's worst-performing day. Fig. 5 . Fig. 5.An ear advantage emerges as data is accumulated.A) Mean across accumulated days classification accuracy of individuals using the driftcorrected odor from ear, armpit, and lower back trained and tested on multiple days with each body region's best-performing respective classifier in a leave-one-out test for all body regions.Standard deviation bars shown.B) Across accumulated days classification accuracy under the same conditions but for the leave-one-sample-per-participant-out cross validation test.C-E) Distributions of accuracy values on 500 iterations of training/testing with a leave-one-value-per-participant-out test on accumulated Days 1, 2, 3, 4, and 5 with the best respective classifiers on real data on the right and shuffled data on the left for ear, armpit, and lower back, respectively.Median accuracy values are displayed in dashed lines. Fig. 6 . Fig. 
6.Individuals can be identified across days.A) Across-days classification accuracy of individuals using uncorrected odor from ear, armpit, and lower back trained on the first 4 days and tested on the fifth day using each body region's best-performing respective classifier for all body regions.B-D) Distribution of accuracy values on 500 iterations of training on Days 1, 2, 3, and 4 and testing on Day 5 shuffled data with the best respective classifiers.The true accuracy value from real data is shown by the darker dashed line and the median accuracy of the shuffled data is shown by the lighter dashed line.
2024-01-20T06:17:05.134Z
2024-01-18T00:00:00.000
{ "year": 2024, "sha1": "3abb42b43d038d2432d3afe8c9e467dde913d1a1", "oa_license": "CCBY", "oa_url": "https://academic.oup.com/chemse/advance-article-pdf/doi/10.1093/chemse/bjad053/56215703/bjad053.pdf", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "b76d87d2e256e9466e7945d0eedcf4323cf7d636", "s2fieldsofstudy": [ "Engineering", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
237948703
pes2o/s2orc
v3-fos-license
The Effectiveness of Chitosan and Snail Seromucous as Anti Tuberculosis Drugs BACKGROUND: Tuberculosis (TB) disease is an infection caused by Mycobacterium tuberculosis (MTB) and is transmitted through sputum droplets of sufferers or suspect TB in the air. Chitosan has been widely used in the biomedical and pharmaceutical fields because it is a biocompatible, biodegradable, non-toxic, antimicrobial, and hydrating agent with positive effects on wound healing. Seromucous of snail has anti-tumor bioactivity and is nontoxic to lymphocyte cells, and can even stimulate lymphocyte proliferation. Seromucous of snail as glycoprotein containing carbohydrates; α-1 globulin-oromucoid fraction; glycans, peptides, glycopeptides, and chondroitin sulfate. AIM: This study was to determine the effectiveness of snail seromucous and chitosan as anti TB drugs (ATD) in vitro. METHODS: The research method is based on an experimental laboratory. MTB isolates in this research from sputum samples of patients suspected of TB in Surakarta Regional General Hospital. The stages of the study were performed MTB culture and identification, management sampling, and drug susceptibility testing. RESULTS: The research results showed chitosan 5%; a combination of chitosan 9% and snail seromucous 50% (ratio 1:1) is a microbicide against MTB TB patient isolates. Snail seromucous was ineffective as a microbicide against MTB TB patients. CONCLUSION: The effectiveness as a bactericide against MTB, chitosan, and its combination with snail seromucous has the potential to be an ATD alternative. Edited by: Slavica Hristomanova-Mitkovska Citation: Harti AS, Sutanto Y, Putriningrum R, Umarianti T, Windyastuti E, Irdianty MS. The Effectiveness of Chitosan and Snail Seromucous as Anti Tuberculosis Drugs. Open Access Maced J Med Sci. 2021 Feb 05; 9(A):510-514. https://doi.org/10.3889/oamjms.2021.6466 Introduction Tuberculosis (TB) disease is an infection caused by Mycobacterium tuberculosis (MTB) and is transmitted through sputum droplets of sufferers or suspect TB in the air. TB treatment lasts quite a long time, namely, at least 6 months of treatment which results in the emergence of germ resistance so that TB treatment is not successful because patients drop out of treatment or undergo treatment irregularly resulting in Multi Drugs Resistance TB (MDR-TB). Chitosan or β-(1.4) -2 amino-2deoxy D-glucopyranose is a polysaccharide compound that can be obtained through the process of deacetylation of chitin compounds that are found in shrimp shells, crab shells. Chitosan synthesis uses samples of shrimp shells or crab shells, through the process of deacetylation with 60% NaOH at 60-100°C; deproteinization with 3.5% NaOH, decalcification with HCl 2N, and color removal with acetone and 2% NaOCl [1]. Chitosan has been widely used in the biomedical and pharmaceutical fields because it is a biocompatible, biodegradable, non-toxic, antimicrobial, and hydrating agent with positive effects on wound healing. Seromucous of snail has anti-tumor bioactivity and is non-toxic to lymphocyte cells, and can even stimulate lymphocyte proliferation. Seromucous of snail as glycoprotein containing carbohydrates; α-1 globulinoromucoid fraction; glycans, peptides, glycopeptides, and chondroitin sulfate. Chondroitin sulfate can function as an immunomodulator and immunosuppressant [2]. 
The content of Glycosaminoglycans, heparin, heparin sulfate, chondroitin sulfate, dermatan sulfate, and hyaluronic acid in snail seromucous can function as stabilizer cofactors and/or coreceptors for growth factors, cytokines, chemokines; enzyme activity regulator; molecular labeling in response to cellular damage in the process of wound healing, infection, tumorigenesis; targets for virulence factors of bacteria, viruses, parasites; as well as the immune system [3]. The antimicrobial activities of peptides isolated from the hemolymph of the molluscan garden snail Helix lucorum, which exhibited inhibition effects against Staphylococcus aureus, Staphylococcus epidermidis, and Escherichia coli. The achasin protein in the Achatina fulica Ferussac snail has important biological functions, including as a bacterial enzyme protein binding receptor [4]. Seromucous of snail 100% and 5% snail mucus cream preparation have an effective effect on accelerating the healing duration of second degree (A) burns. The combination of 100% snail mucus and 1.5% chitosan = 1:2 gave the optimum wound healing rate in the in vivo test. There is a synergistic effect of chitosan and seromucous of snail against S. aureus in vitro [5]. The diagnosis of TB can be performed based on clinical symptoms, chest X-ray, microscopic examination of smear sputum, smear culture on culture media as well as the sensitivity test of MTB isolates to anti TB drugs (ATD) and Drug Susceptibility Testing (DST). Until now there has been no research related to the effectiveness of snail chitosan and seromucous of snail as an alternative to ATD so that research related to this needs to be done. The purpose of this research was to assess the effectiveness of chitosan and seromucous of snail as ATD in vitro. The essence of the research results is expected to be the potential of chitosan and seromucous of snail as an alternative to ATD. Materials and Methods This study and laboratory examinations were conducted and performed at Surakarta General Center Hospital from January to March 2020. All strains were isolated from culture-positive MTB cases. The TB diagnostic criteria were based on the Ministry of Health of the Republic of Indonesia (2014) and the corresponding WHO guidelines [6]. Clinical specimens including sputum were collected from patients with suspected TB of Surakarta General Center Hospital. The screening test was performed by microscopic examination of Ziehl Nelson staining and Molecular Quick Test -Genexpert instrument by following under relevance guidelines. The positive MTB isolates were subjected to cultivation with Lowenstein-Jensen medium (HiMedia, M162 product). All the MTB isolates were validated by both the growth test on p-nitrobenzoic acid (PNB) and MPT-64 antigen detection kit. Non-TB mycobacteria were excluded. DST was performed using the MTB system. The colonies of MTB were swept from the agar plates and suspended in sterile saline containing 0.2% Tween and glass beads. After vortexing for at least 30 s to break up organisms clumps, the bacterial suspension was stayed for 15 min at room temperature to allow any remaining clumps to settle to the precipitations, and the supernatant was adjusted to then a suspension with a turbidity of 0.5-1.0 Mc. Farland standard using a nephelometer. The dilution suspension was performed to 10 -3 and 10 -5 . Stock solutions and working solutions were prepared. 
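As a rough sketch of the dilution arithmetic behind these suspensions (the CFU/mL assumed for a 0.5 McFarland suspension is a common rule of thumb, not a value measured in this study), the inoculum delivered per tube with the 100 µl volume used in the next step can be estimated as follows:

```python
# Illustrative estimate of the inoculum delivered to each Lowenstein-Jensen tube.
# Assumption: ~1.5e8 CFU/mL for a 0.5 McFarland bacterial suspension.
mcfarland_cfu_per_ml = 1.5e8
inoculum_volume_ml = 0.1  # 100 microlitres inoculated per tube

for dilution in (1e-3, 1e-5):
    diluted_cfu_per_ml = mcfarland_cfu_per_ml * dilution
    cfu_per_tube = diluted_cfu_per_ml * inoculum_volume_ml
    print(f"dilution {dilution:g}: ~{cfu_per_tube:,.0f} CFU per tube")
```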
A 100 µl aliquot of the subsequent suspension was inoculated to each tube of Lowenstein-Jensen medium that contained freeze-dried seromucous, chitosan, and ATD including SIRE. The tube cultures were incubated at 37°C for 28-42 days. All steps were performed by trained and specialized persons in a biosafety cabinet following the relevant guidelines [7]. The study results were analyzed using Statistical Package for the Social Sciences (SPSS) version 20.0. Results Snail seromucous was collected from 10-50 snails: the end of the shell was opened, an electric shock of 5-10 volts was applied for 30-60 s, and the liquid that came out was collected in a flask. Next, the liquid was centrifuged at 3500 rpm for 10 min to obtain the hemolymph fluid, or seromucous. Chitosan used in this research was dissolved in 5% acetic acid solution [8]. Discussion Table 1 shows that chitosan 5% and a mixture of chitosan 9% and seromucous of snail 50% (ratio 1:1) are microbicidal against MTB isolates from TB patients. All MTB isolates of suspected TB patients were resistant to a single preparation of 100% seromucous of snail and 2% chitosan compared to ATD. Meanwhile, RIF was the most effective ATD compared with the other types tested, namely SM, INH, and EB. The ineffectiveness of the snail seromucous as ATD in vitro is due to the physicochemical properties of the preparation, namely the solubility or polarity of the bioactive compound, which is not able to penetrate the MTB cell permeability barrier, so that the dosage of the bioactive compound of snail seromucous used was not optimal as a bactericide. Furthermore, the variation in antibacterial activity is influenced by differences in the resistance level of the inoculum strain associated with resistance gene expression. Chitosan is a β-(1,4)-2-amino-2-deoxy-D-glucopyranose compound, a product of chitin deacetylation. Chitosan has been widely utilized in the biomedical and pharmaceutical fields because it is biodegradable, non-toxic, non-immunogenic, and biocompatible with animal body tissues. The effectiveness of chitosan as an antimicrobial is related to the role of the Chito-Oligosaccharide (COS) compound, a group of complex glycan-binding compounds containing 1,4-b-glucosamine, derived from chitosan by deacetylation of chitin. The antimicrobial activity of COS is highly dependent on the degree of deacetylation and polymerization and on the types of bacteria and fungi. COS is a potential "alternative antibiotic" that is effective without causing residue. The uniqueness of chitosan is that it is polycationic, so that it can suppress the growth rate of diarrheagenic E. coli in vitro [9]. The seromucous of snail contains various types of achasin proteins that have important biological functions, including acting as receptors for binding bacterial enzymes. The bactericidal and/or bacteriostatic effectiveness of snail mucus against isolates of Staphylococcus sp., Streptococcus sp., and Pseudomonas sp. showed varied results [10]. A 100% concentration of snail seromucous is capable of being bactericidal against S. aureus, Candida albicans, and Pseudomonas aeruginosa [11]. The results of the Minimal Inhibition Concentration test of meat protein extract from seven different types of snails showed varied results because they were influenced by the ecological conditions of the snails [12].
Seromucous of snail is antibacterial against Streptococcus mutans; E. coli and inhibits the growth of Methicillin-Resistant S. aureus [13]. The 100% snail mucus concentration is capable of being bacteriostatic against the growth of S. aureus and Salmonella typhosa [14]. Some of the protein lectins are known to be contained in snails, namely, selectin, galectin, C-type lectin, and fibrinogenrelated protein which are secreted by snails that plays a role within the pathogen agglutination process [15]. Based on these results, chitosan and its mixture with seromucous snail are potential candidates for ATD. The content of bioactive compounds in chitosan and seromucous snail can stimulate the function of cellular immunity, namely, lymphocyte proliferation and the production of reactive oxygen intermediated macrophages. The results of the characterization of the snail seromucous protein profile using the SDS-PAGE method showed that have been 3 protein subunits, namely the range of 55-72 kDa because the achasin sulfate group that acted as an antimicrobial and 1 specific protein subunit 43 kDa as adhesive protein. The 100% seromucous of snail and 5% snail slime cream showed optimum effectiveness against lymphocyte proliferation in vitro [16]. The research being carried out is freeze-drying seromucous of snail preparation. The factor causing the ineffectiveness of a microbicidal agent is additionally influenced by the physiological characteristics of MTB isolates which have specific characteristics compared to the physiology of other bacterial cells, namely the presence of mycolic acid in the cell walls which acts as a virulence factor for MTB cells. The effectiveness of a bactericidal or bacteriostatic drug against MTB isolates is often influenced by the physiology of bacterial cells, namely genetic factors associated with the extent of resistance or virulence of cells to an agent or the mutation process caused by mutagenic agents, physically and chemically, from environmental factors. In MTB cells have mycolic acid (trehalose dimycolate) which plays an important role in pathogenic interactions with the host. Mycolic acid has a similar function with lipopolysaccharide in Gram-negative bacterial cells. Mycolic acid affects the function of macrophages, namely inhibiting the fusion of macrophages in host cells against pathogens. The presence of mycolic acid in MTB plays a crucial role within the level of germ resistance to host immune cells and drugs. Each type of ATD contains a different mechanism of action so that it affects the effectiveness of bactericide as ATD. The mechanism of action of SM interferes with the translation process by binding to 16 s rRNA in protein synthesis. INH inhibits the synthesis of mycolic acid so INH is that the best ATD for the treatment and prevention of TB. INH resistant strains often appear with a frequency of around 90% and therefore the resistance is caused by mutations in one in every of the catalase-peroxidase (KatG), inhA, or ahpC genes. INH in cells will turn active in an oxidized form because of activation of the enzyme KatG which is encoded by the KatG gene. The KatG gene encodes the enzyme KatG which activates INH as a prodrug in order that mutations within the gene cause the enzyme to become inactive. Other mechanisms of resistance to INH and ethionamide (ETH) are changes in the expression of drug activators, redox changes, drug inactivation, and efflux pump activation [17]. 
RIF acts as a bactericide by inhibiting nucleic acid synthesis: it binds to the β subunit of RNA polymerase during RNA transcription. Resistance of MTB to RIF reaches 95% and arises from mutation of the rpoB gene, which encodes the β subunit of RNA polymerase, a key component of the transcription process. Transcription is inhibited because RIF binds specifically to this β subunit. EB is a bactericide that interferes with carbohydrate metabolism and cell wall biosynthesis. Pyrazinamide (PZA), a structural analog of nicotinamide, is bactericidal because pyrazinoic acid is produced by PZAase enzyme activity under acidic conditions. PZA resistance occurs because mutations in the pncA gene cause loss of PZAase activity, so the mechanism of action of PZA becomes inactive once the drug enters the MTB cell. Resistance to EB arises from mutations in the embB gene, which encodes arabinosyl transferase, so that the biosynthesis or polymerization of cell wall arabinan, a component of arabinogalactan, is inhibited and decaprenol phosphoarabinose lipid carriers accumulate [18]. The level of resistance of MTB isolates to ATD can also reflect the physiological properties of the cells: MTB-resistant strains differ from region to region because of independent mutations in more than one ATD-coding gene and/or in genes encoding the enzymes that activate ATD precursors. Mutations arising in protein synthesis, in the transcriptional and/or translational processes that control the expression of these genes, result in changes in the structure of the target protein or in the enzymatic activity needed to activate ATD compounds so that they function as bactericides. The immune response plays an important role in MTB infection. Of the many people infected with Mycobacteria, 90% do not develop TB. Macrophages in the host play a vital role in the immune system, namely the phagocytosis of cellular antigens. Bacteria in the lung are phagocytosed by alveolar macrophages. MTB inside macrophages can alter its environment by inhibiting the acidification step of phagosome maturation, so that phagosome maturation is halted. As a result, phagosomes are unable to fuse with lysosomes, MTB cannot be destroyed, and the bacterium continues to replicate within macrophages [19]. TB treatment with ATD has, to date, generally used the correct ATD; however, many strains of MTB are resistant to two or more ATD and are called MDR-MTB strains. The prevalence of MDR-TB and extensively drug-resistant TB is higher in recurrent TB treatment than in initial TB cases, and the variation in the level of resistance of TB bacilli to ATD is influenced by age, sex, and region [20]. MDR-TB therapy has side effects, and there is a correlation between cure rates and resistance to ATD, so a psychosocial management approach is needed for MDR-TB patients, together with attention to the bacterial profile related to antibiotic resistance and TB treatment.
This is often due to the patient's ignorance of the disease: poor patient compliance, administration of monotherapy or ineffective drug regimens, inadequate doses, poor instructions, low medication regularity, poor patient motivation, irregular drug supply, poor bioavailability, and poor drug quality all contribute to the occurrence of secondary drug resistance. Resistance encourages the use of other, more toxic alternative medicines, namely ETH, aminosalicylic acid, cycloserine, capreomycin, ciprofloxacin, or ofloxacin. The emergence of MTB strains resistant to two or more ATD makes the failure rate of TB therapy high. Conclusion Chitosan 5% and a combination of chitosan 9% and snail seromucous 50% (ratio 1:1) can act as microbicides against MTB isolates from TB patients. Snail seromucous alone was ineffective as a microbicide against MTB from TB patients. Given their effectiveness as bactericides against MTB, chitosan and its mixture with snail seromucous have the potential to be ATD alternatives. Further research is required on the optimum dosage formulation and on the synergistic effect of seromucous and chitosan preparations with other ATD.
Science Goals and Objectives for the Dragonfly Titan Rotorcraft Relocatable Lander NASA’s Dragonfly mission will send a rotorcraft lander to the surface of Titan in the mid-2030s. Dragonfly's science themes include investigation of Titan’s prebiotic chemistry, habitability, and potential chemical biosignatures from both water-based “life as we know it” (as might occur in the interior mantle ocean, potential cryovolcanic flows, and/or impact melt deposits) and potential “life, but not as we know it” that might use liquid hydrocarbons as a solvent (within Titan’s lakes, seas, and/or aquifers). Consideration of both of these solvents simultaneously led to our initial landing site in Titan’s equatorial dunes and interdunes to sample organic sediments and water ice, respectively. Ultimately, Dragonfly's traverse target is the 80 km diameter Selk Crater, at 7° N, where we seek previously liquid water that has mixed with surface organics. Our science goals include determining how far prebiotic chemistry has progressed on Titan and what molecules and elements might be available for such chemistry. We will also determine the role of Titan’s tropical deserts in the global methane cycle. We will investigate the processes and processing rates that modify Titan’s surface geology and constrain how and where organics and liquid water can mix on and within Titan. Importantly, we will search for chemical biosignatures indicative of past or extant biological processes. As such, Dragonfly, along with Perseverance, is the first NASA mission to explicitly incorporate the search for signs of life into its mission goals since the Viking landers in 1976. Introduction One of the most important discoveries from the last 20 years of planetary exploration has been the astrobiological potential of icy moons. Many of these moons harbor liquidwater reservoirs beneath their crusts, comprising a new class of solar system body: ocean worlds (e.g., Nimmo & Pappalardo 2016). If biology requires carbon, water, and energy, then ocean worlds may offer the solar system's best chance for life beyond Earth (Chyba & Hand 2005;Lazcano & Hand 2012;McKay 2016;Lunine 2017;Hendrix et al. 2019;Hand et al. 2020). Titan is unique among ocean worlds in that carbon, water, and energy interact on the surface ( Figure 1). Complex, potentially tholin-like organic compounds cover most of the surface (Janssen et al. 2016). Crustal ice (Coustenis 1997;Griffith et al. 2003) can be melted by impacts (Artemieva & Lunine 2003), and liquid water from the subsurface ocean may erupt in cryovolcanic flows (Lopes et al. 2013). Solar (photolytic) and chemical energy could power biochemistry (Raulin et al. 2010). Titan's profusion of organic riches, especially when exposed to transient liquid water, has created potentially habitable environments, the remnants of which are available on Titan's surface today. Dragonfly will explore some of these environments to address fundamental questions regarding prebiotic chemistry, habitability, and the search for biosignatures. The vehicle is a single half-ton X8 octocopter. We think of it as a rotorcraft relocatable lander: we spend nearly all of our time on the ground doing science and uplinking data, only flying for around half an hour to a new landing site once every 2 Titan days (32 Earth days). Dragonflyʼs targeted landing site is near Titan's equator, 700 km north of Huygens, in the Shangri-La sand sea and within traverse distance of Selk impact crater ). 
Our prime mission will take place during northern hemisphere winter. We include a brief description of Dragonflyʼs science payload in Table 1, and see also Lorenz et al. (2018b) for further description of the Dragonfly mission concept. While Lorenz et al. (2018b) described the mission implementation, here we provide a complementary focus on the science goals and objectives of the mission. Section 2 describes prebiotic chemistry. Section 3 addresses habitability in both liquid water and liquid hydrocarbon solvents, including mission goals related to the methane cycle, surface geology, and geophysics. We discuss Dragonflyʼs search for chemical biosignatures in Section 4. We then specify Dragonflyʼs landing site and traverse strategy as it relates to science in Section 5, before concluding in Section 6. Prebiotic Chemistry Titan is a carbon-and nitrogen-rich natural laboratory for prebiotic chemistry. No Earth-based experiment can reproduce the long time periods over which appropriate conditions existed prior to the formation of terrestrial life. However, we can investigate analogous conditions on Titan, thereby empirically constraining the degree of molecular complexity that can be achieved with prebiotic chemistry. Titan's dense atmosphere of nitrogen and methane supports rich organic photochemistry. Radiolysis and ultraviolet (UV) photolysis dissociate atmospheric components to produce a suite of carbon-hydrogen-nitrogen (C x H y N z ) compounds. Voyager observed these products in Titan's atmosphere ; Kunde et al. 1981;Maguire et al. 1981), and Cassini-Huygens and Earth-based observations have shown organics to be diverse and bounteous in the atmosphere and on the surface (Niemann et al. 2005;Cordiner et al. 2015Cordiner et al. , 2018Janssen et al. 2016;Hörst 2017;Lai et al. 2017;Thelen et al. 2019Thelen et al. , 2020Nixon et al. 2020). Surface detections have been limited to the relatively simple species CH 4 , C 2 H 2 , CO 2 , C 2 N 2 , C 6 H 6 , C 2 H 6 , and HC 3 N ( Barnes et al. 2005;Niemann et al. 2005Niemann et al. , 2010Brown et al. 2008;McCord et al. 2008;Clark et al. 2010;Mastrogiuseppe et al. 2014). However, Cassini observed the production of more exotic species in the upper atmosphere, including propane, butane, and polycyclic aromatic hydrocarbons (PAHs; Waite et al. 2007;Cui et al. 2009;Magee et al. 2009;Dinelli et al. 2013;López-Puertas et al. 2013). Even larger and more complex atmospheric molecules, with molecular weights of thousands of Daltons (Da), have been detected but not resolved (Coates et al. 2007(Coates et al. , 2009Waite et al. 2007) due to instrument limitations. These atmospherically produced organics coalesce into haze particles that then settle out to cover much of Titan's water-ice bedrock (Rodriguez et al. 2006; Barnes et al. 2007a;Soderblom et al. 2007;Le Mouélic et al. 2008;Janssen et al. 2009Janssen et al. , 2016Hayne et al. 2014;Neish et al. 2015;Lopes et al. 2019). But at what point does organic chemistry become prebiotic chemistry? It has long been hypothesized that Earth's prebiotic chemistry could have been initiated via atmospheric synthesis (Miller & Urey 1959;Trainer 2013;Rapf & Vaida 2016). Organic haze, like that on Titan today, may have played an integral role in the development of Earth's earliest biosphere (Trainer et al. 2004(Trainer et al. , 2006Arney et al. 2016). On Earth, the transition from organic to prebiotic chemistry may have occurred as photochemically generated organics mixed into the primitive water ocean. 
On Titan, surface liquid water, e.g., impact melt, could potentially play the same astrobiological role as Earth's early oceans, providing an environment for the organic haze products that accumulate on Titan's surface to progress toward more complex molecules, possibly with biological potential (Neish et al. 2010, 2018). Laboratory experiments show that Titan haze analogs produce biological molecules, such as amino acids, when mixed with liquid water (Neish et al. 2008, 2009, 2010; Ramírez et al. 2010; Poch et al. 2012; Cleaves et al. 2014). [Figure 1. Dragonfly will image from the surface to provide context for sampling and measurements, as well as in flight to identify sites of interest at a variety of locations. (Left) Huygens image of Titan's surface; cobbles are 10-15 cm across and may be water ice (Tomasko et al. 2005; Keller et al. 2008; Karkoschka & Schröder 2016a). (Right) Huygens aerial view of terrain akin to the diverse equatorial landscapes that Dragonfly will traverse and image at higher resolution.] Reactions occurring on Titan in transient liquid-water environments provide a natural experiment in the transition from organic to prebiotic to biological chemistry, perhaps paralleling the transition on early Earth. Titan has been conducting such experiments over millions of years; Dragonfly is designed to collect the results. Goal A: Prebiotic Chemistry Science question: What chemical components and energy-producing chemical pathways exist on Titan that could drive prebiotic chemistry?-Complex CxHyNz molecules, the main ingredients for prebiotic chemistry, are abundant on Titan's surface, where they have the potential for further chemical evolution when dissolved in a solvent (Neish et al. 2010; Rahm et al. 2016). However, it is not known how far organic synthesis has progressed in complexity and whether abiotic processes have produced chemical gradients that might be utilized by organisms. Designed primarily as an atmospheric probe, Huygens could not conduct a full inventory of Titan's surface organics. Identification of a full suite of organics, including those most relevant to biology, requires a dedicated surface mission. Dragonfly will document the complexity of Titan's surface organics to assess the extent of prebiotic chemistry in a carbon-rich environment. Science goal A: determine the inventory of prebiotically relevant organic and inorganic molecules and reactions on Titan. Elemental availability-Dragonfly will sample Titan's organic sediments to determine the abundance and distribution of carbon, hydrogen, nitrogen, oxygen, and possibly phosphorus and sulfur (known collectively as CHNOPS). Known biological processes preferentially use these specific elements, but not all CHNOPS-bearing species are biochemically useful (Bains & Seager 2012). Identifying the relative abundance and oxidation states of precursors like hydrogen cyanide (HCN), hydrogen sulfide (H2S), and formaldehyde (CH2O; Miller 1957; Oró & Kimball 1961; Orgel 2004) will reveal how CHNOPS might build functional molecules on Titan. The elements C, H, and N are known to exist on Titan's surface, but the case for chemically accessible O, P, and S is less clear. A small amount of oxygen is incorporated into organics in the upper atmosphere (Lutz et al. 1983; Samuelson et al. 1983; Coustenis et al. 1998; Baines et al. 2005; de Kok et al. 2007; Hörst et al. 2012), ultimately derived from exogenic water arriving at Titan from Saturn's E-ring (Hörst et al. 2008).
However, a larger amount of oxygen could easily be incorporated into Titan's surface organics through reactions with liquid water to produce a range of biomolecules (Neish et al. 2008, 2009, 2010; Poch et al. 2012; Cleaves et al. 2014). Such reactions are likely to occur in melt produced by impacts and could potentially occur in cryovolcanic flows, although impacts produce higher-temperature melt that would persist for longer durations than individual flows (Neish et al. 2018). P and S are present at most at the part-per-billion level in Titan's stratosphere but are expected based on relative abundance in the Saturn system (Nixon et al. 2013) and could be delivered to the surface through other means (Fortes et al. 2007; Pasek et al. 2011). Building blocks-Dragonfly will also look for biologically relevant compounds-amino acids, lipids, and sugars-and their precursors. Terrestrial biology uses amino acids to build proteins that form cell structures and catalyze reactions. They range in size from 75 Da (glycine; C2H5NO2) to more than 204 Da (tryptophan; C11H12N2O2). Amino acids can form abiotically and have been detected in meteorites (Kvenvolden et al. 1970), on comets (Elsila et al. 2009), and in interstellar space (Kuan et al. 2003). They are also produced with ease in Titan analog laboratory experiments (Neish et al. 2010; Poch et al. 2012; Cleaves et al. 2014). [Table 1, payload descriptions (excerpt): ... (Trainer et al. 2018). DraGNS, the Dragonfly Gamma-Ray and Neutron Spectrometer: using a pulsed neutron generator, DraGNS interrogates Titan within 2 m of the lander to measure bulk elemental composition in the shallow subsurface, particularly C, H, N, O, Na, Mg, P, S, Cl, and K (Parsons et al. 2018; Peplowski et al. 2021). DraGMet, the Dragonfly Geophysics and Meteorology package: an extensive set of instruments measuring 11 distinct properties: atmospheric temperature, pressure, wind speed and direction, methane humidity, hydrogen partial pressure, crustal seismicity, electric field, surface dielectric properties, surface temperature, and ambient sound (Lorenz et al. 2018a; Panning et al. 2020).] Dragonfly will identify the abundance,
However, transient habitable environments or opportunistic biota might alternatively derive metabolic energy from chemistry alone, similar to chemosynthesis-driven systems found near seafloor hydrothermal vents on Earth (Corliss et al. 1979;Kelley et al. 2005). Potential hydrocarbonbased life might derive energy chemically from hydrogenation of hydrocarbons on Titan's surface (McKay 2016). For example, acetylene (C 2 H 2 ) could react with hydrogen gas (0.1% of Titan's atmosphere) to generate 334 kJ mole −1 (McKay 2016; Table 2). We will constrain the potential energy that could be derived from this and other chemical reactions common enough to be biologically useful by inventorying abundant compounds. In addition to the hydrogenation of acetylene, Table 2 lists other examples of reactions thought to be common or biologically useful on Titan's surface. To be most useful, target molecules will occur in abundance; so, based on Titan's upper atmospheric chemistry, they will likely be relatively simple (20-100 Da). Sampling Targets for Goal A Dragonfly will measure concentrations of chemical constituents in organic dune sands (α) and material with a water-ice component (β, γ, or δ; Table 3). Comparison of these materials can show how far prebiotic chemistry can progress in different environments. For example, mixing of organics with transient liquid water on Titan's surface could advance chemistry by offering a solvent in which chemical reactions can occur, increasing reaction rates, and allowing for incorporation of oxygen into Titan's organic inventory (Neish et al. 2008(Neish et al. , 2009(Neish et al. , 2018. Habitability An environment's habitability depends on conditions including the abundance and distribution of chemical nutrients, as well as potential solvents. Titan's surface hosts two possible liquid solvents: water and methane. Although the surface temperature is 93.6 ± 0.2 K (Fulchignoni et al. 2005;Lebreton et al. 2009;Jennings et al. 2019), transient oases of liquid water have existed, for example, in melt generated in impact events (Artemieva & Lunine 2003;Section 5). On Titan, methane acts both as a source of carbon and as a solvent. The equatorial regions provide ample organic solids (e.g., dune sands). And, although rainfall is infrequent, the regolith may serve as a liquid-methane reservoir in the near subsurface (Zarnecki et al. 2005;Lorenz et al. 2006a;Hayes et al. 2008;Niemann et al. 2010;Lorenz 2014;Turtle et al. 2018;Faulk et al. 2019). The persistence of liquid methane, the rate of introduction of organic solids and their concentration by evaporation, and the prospects for mixing of organics with liquid water at, near, or below the surface determine the timescales and abundances with which biochemistry can operate, thus defining Titan's habitability. Goal B: Methane Cycle Science question: What methane sources and sinks exist in Titan's equatorial regions, and what are the implications for methane transport?-By influencing the spatial and temporal availability of organic material and hydrocarbon solvents, Titan's methane cycle drives its potential as a habitable world. Cycles operate on two timescales: a faster, closed hydrologic cycle of methane precipitation, transport, and evaporation (Mitchell & Lora 2016) and a slower, open cycle of methane production, loss, and potential resupply from the interior (Tobie et al. 2006). Both affect the capacity for prebiotic chemistry. 
Dragonfly will constrain Titan's methane cycle and conditions for habitability by recording atmospheric and shallow subsurface conditions in the equatorial regions to tie to global atmospheric circulation and the history of atmospheric methane. Science goal B: determine the role of Titan's tropical atmosphere and shallow subsurface reservoirs in the global methane cycle. Dragonfly meteorological measurements will be used to derive constraints on Titan's equatorial moisture budget (Mitchell 2008;Mitchell & Lora 2016) to anchor models of the global hydrologic cycle. Despite periodic rainfall Surface-atmosphere exchange-Evaporation is controlled by wind speed, surface texture, the humidity gradient between surface and atmosphere, and atmospheric mixing, which is a function of atmospheric stability. Volatile exchange leaves a signature in the diurnal variations of near-surface temperature and humidity. To assess the potential evaporation as a function of time and location, Dragonfly will monitor methane humidity, temperature, and atmospheric winds at the surface at each site. We will use this information, along with knowledge of surface moisture and porosity (science objective B2), to model and constrain actual evaporation rates (Gloesener et al. 2016;Martínez et al. 2016;Savijärvi et al. 2016;Farris et al. 2018). Moisture transport-To constrain local atmospheric moisture transport in the boundary layer, Dragonfly will measure methane humidity and winds. By monitoring the diurnal methane cycle and meteorology at multiple locations over the mission, which will take place within a single Titan season, we will provide constraints on regional variations in humidity and transport. While Huygens provided a snapshot of local atmospheric conditions, Dragonfly will put these measurements and groundbased observations (Lora & Ádámkovics 2017) in the context of Titan's diurnal and longer-term cycles, as well as regional weather variations. These constraints will then be used to evaluate atmospheric circulation modeling of the global methane cycle Atmospheric stability and precipitation-Rainstorms are infrequent at low latitudes on Titan and are expected to be clustered near equinox (∼5 yr after Dragonflyʼs expected arrival), so Lora et al. (2019) showed that Dragonfly is unlikely to experience rain (Lorenz 2000;Tokano et al. 2006;Schaller et al. 2009;Turtle et al. 2011b;Mitchell 2012;Mitchell & Lora 2016;Newman et al. 2016). Nevertheless, Dragonfly will acquire vertical atmospheric profiles of methane content and temperature, allowing us to constrain atmospheric (in)stability, characterize the cloud-base altitude and energy available for convection, and assess the likelihood and intensity of future precipitation events (Hueso & Sánchez-Lavega 2006;Barth & Rafkin 2007). Science Objective B2: Abundance of Stored Liquid Methane Dragonfly measurements will determine if Titan's near subsurface can act as a liquid reservoir for hydrocarbons in the tropics (within 26°of the equator), where precipitation is infrequent and surface liquids evaporate quickly (Mitchell 2008;Turtle et al. 2011b;Barnes et al. 2013). Huygens' detection of methane and ethane moisture in the regolith at 10°S latitude (Niemann et al. 2010) is consistent with the presence of porous water ice and organic materials, as also suggested by Cassini data (Elachi et al. 2005;Janssen et al. 2016). For infiltration to be effective, the upper crust must have a high porosity. 
On Earth, the void space between sand grains inside dunes acts as a water reservoir, retaining humidity levels high enough to support microbial communities beneath the surface, even in hyperarid deserts (Louge et al. 2013). [Table note a: A global water ocean, cooled from above, would drain solute-rich liquid to the ocean, leaving surface ice relatively pure; the surface presence of solutes in ice implies deposition of liquid above the crust so that solutes can be frozen in.] Impacts have been shown to increase the near-surface porosity within and surrounding their resulting craters via fracturing and dilatancy (Pilkington & Grieve 1992; Alejano & Alonso 2005; Collins 2014), with the largest impacts creating higher porosity to greater depths (Soderblom et al. 2015). Dragonfly will constrain near-subsurface liquid content and porosity by measuring the electric permittivity (complex dielectric constant ε) and thermal diffusivity (κ, in m² s⁻¹) of the surface. Electric and thermal responses of the surface are functions of the amount of pore space and the bulk composition. Analysis of the thermal diffusivity at the Huygens GCMS inlet confirmed that the surface was damp (Lorenz et al. 2006a), and changes in the dielectric response (Hamelin et al. 2016) showed evidence of surface devolatilization minutes after Huygens landed. Although interpretation of a single physical property is not unique, Dragonfly can resolve this ambiguity by using bulk elemental compositions from gamma-ray spectroscopy. Thus, we will be able to detect whether near-subsurface liquids are present and map their spatial distribution from site to site, determining how dry Titan's equatorial region is and its role in the global methane cycle. Science Objective B3: History of Titan's Atmospheric Methane Dragonfly will test different hypotheses to address outstanding questions regarding the formation and evolution of Titan's atmosphere. Despite the abundant hydrocarbons on Titan's surface, the total inventory is lower than the amount predicted if current photolytic processes have operated throughout Titan's history (Yung et al. 1984; Lorenz & Lunine 1996; Lorenz et al. 2008c). Furthermore, Cassini isotopic measurements are consistent with primordial methane, implying replenishment from the interior within the last few hundred megayears (Wilson & Atreya 2004; Lavvas et al. 2008; Mandt et al. 2012; Nixon et al. 2012). While outgassing (e.g., due to methane clathrate destabilization; Tobie et al. 2006) could have supported a long-term methane cycle, extended periods without methane might also have been possible. Such scenarios affect the carbon supply and duration of availability for prebiotic chemistry. Dragonfly will measure Ar and Ne isotopic distributions to constrain how much outgassing has occurred in Titan's history. These elements are poorly soluble in potential subsurface clathrate reservoirs and are therefore the most likely markers of outgassing. Huygens' brief mission detected more radiogenic Ar in Titan's atmosphere than expected (Niemann et al. 2010), evidence of chemical interactions with the rocky core and interior ocean. Huygens also made a tentative detection of 22Ne (Niemann et al. 2010), which, if confirmed, would suggest the release of 20-30× Titan's atmospheric mass over its history (Tobie et al. 2012). The abundance of 22Ne and its isotopologs traces the evolution of Titan's atmosphere (Glein 2015, 2017).
Dragonfly will also aim to measure or place improved upper limits on the relative Xe, Kr, and Ar abundances to test the hypothesis that a significant amount of Titan's volatiles, including CH 4 , could be trapped in clathrates in the interior . Measurement of significant depletion among 132 Xe, 84 Kr, and 36 Ar compared to carbonaceous chondrites and comets, possible sources of Titan's volatiles (Néri et al. 2020), would strongly indicate retention of volatiles in clathrates rather than early outgassing and subsequent atmospheric loss as on terrestrial planets. Sampling Targets for Goal B Dragonfly will acquire meteorological measurements at the surface in multiple geologic settings to determine evaporation potential and atmospheric humidity in Titan's arid equatorial region in northern winter. We will fly vertical atmospheric profiles extending from the surface up to 3.5 km altitude to encompass the planetary boundary layer detected by Huygens on its descent , sampling temperature, pressure, and methane abundance at 20 m intervals to resolve the altitude of the boundary layer ( Figure 2). For objective B3, DraMS will acquire atmospheric samples. Redistribution of material governs Titan's habitability by regulating the local availability of organic solids. Winds gather organic sand into vast dune fields (Lorenz et al. 2006b;Barnes et al. 2008;Radebaugh et al. 2008;Le Gall et al. 2011;Rodriguez et al. 2014), but the mechanisms that manufacture the organic sand particles are not understood . In addition, wind models of net transport have not been directly verified (Tokano 2010;Radebaugh 2013;Malaska et al. 2016). Rounded cobbles suggest fluvial transport (Figure 1; Tomasko et al. 2005;Le Gall et al. 2010), but it is not known how far they were transported (Burr et al. 2006) or from what source material they might have eroded ( Table 3). Understanding the provenance of surface clasts and transport rates as context for sampled materials will address these unknowns. Dragonfly will constrain the dominant transport processes, active transport rates, and sources of material in the equatorial region to understand how local availability of organic material controls Titan's habitability. Science goal C: determine the rates of processes modifying Titan's surface and rates of material transport. Science Objective C1: Determine Conditions for Aeolian Transport Dragonfly will study aeolian transport via passive and active experiments to quantify the importance of winds in mobilizing organic material on Titan's surface. Individual sand grains on Titan are expected to be 300-700 μm in diameter, and the estimated threshold wind speed for saltation is ∼1 m s −1 (Burr et al. 2006(Burr et al. , 2015Lorenz & Zimbelman 2014); however, both values are based on theoretical studies and laboratory analogs, not empirical Titan data. Passive measurements-Measuring wind speed at the surface and correlating with images of surface changes (e.g., translation of ripples) will determine the conditions necessary for saltation, similar to experiments conducted by Curiosity on Mars (Bridges et al. 2017). Simultaneous measurement of wind speed and direction will determine the net sand flux vector; sand transport only occurs above the saltation threshold and scales as v 3 , so the highest wind speeds are of greatest significance. (Imaging before and after landing can provide context regarding the disturbance of surface material by Dragonfly, expected to be minor, as shown in Lorenz et al. 2018b.) 
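The transport scaling just described (no motion below the saltation threshold, roughly cubic growth with wind speed above it) lends itself to a simple relative flux proxy for interpreting wind time series. The sketch below is purely illustrative and is not mission software; the 1 m s⁻¹ threshold is the theoretical estimate quoted above, and the wind speeds are hypothetical sample values.

```python
# Illustrative sketch: a threshold-gated, cubic sand-flux proxy for wind data.
# Assumptions: saltation threshold ~1 m/s (theoretical estimate cited in the
# text); wind-speed samples below are hypothetical.
import numpy as np

def sand_flux_proxy(wind_speed, threshold=1.0):
    """Relative sand-flux proxy: zero below the saltation threshold, ~v**3 above it."""
    v = np.asarray(wind_speed, dtype=float)
    return np.where(v > threshold, v**3, 0.0)

winds = np.array([0.4, 0.8, 1.1, 1.6, 0.9, 2.0])   # hypothetical surface winds (m/s)
flux = sand_flux_proxy(winds)
print(flux)                    # per-sample proxy values; only gusts above threshold contribute
print(flux.mean())             # time-averaged relative transport for the monitoring interval
```

Because of the cubic dependence, the time-averaged proxy is dominated by the few strongest gusts, which is why simultaneous wind and surface-change imaging at the highest observed speeds is the most diagnostic combination.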
Active measurements-Dragonfly will also conduct controlled saltation experiments while on the surface by monitoring the response of surface particles while spinning one or more rotors at different settings to generate known wind conditions to measure threshold wind speeds for saltation. Science Objective C2: Determine the Transport Mode and History of Clastic Materials Dragonfly will also investigate the role of other modes of material transport by constraining the local provenance of sampled materials. For example, interdune areas on Earth vary: many are covered in gravel derived from bedrock, but others consist of evaporite cement cobbles. Cassini evidence hints that Titan's interdunes also vary (Barnes et al. 2008;Radebaugh et al. 2008;Bonnefoy et al. 2016). In addition, at Selk Crater, rocks from deeper layers may be exposed by mass wasting of the crater walls, potentially affording a unique opportunity to dive into Titan's geologic past. We will measure the size, frequency, and spatial distribution of cobbles (defined as >6.4 cm) and larger rocks from surface panoramas. High-resolution images will be used to characterize the size distribution (quantified by the Inclusive Graphic Standard Deviation metric; e.g., Blott & Pye 2001), shape, and roundedness of grains within individual clasts. Grain shape primarily relates to lithology, as well as physical erosion and weathering; rounding primarily indicates greater transport distance and/or duration. Clast size distributions are also an indicator of transport velocity and vigor, thereby constraining the depositional regime of the grains (i.e., fluvial, alluvial, aeolian, and mass wasting; e.g., Krumbein & Sloss 1951;Ibbeken 1983;Yingst et al. 2007Yingst et al. , 2008. Thus, by combining these data with material transport rates on Titan, we can constrain the provenance of transported sediments (Collins 2005;Perron et al. 2006). This strategy is derived from those employed by in situ exploration on Mars. For example, textural analyses conducted for Martian soils with microscopic images from Spirit, Opportunity, and Curiosity (e.g., McGlynn et al. 2011;Cousin et al. 2017) have facilitated the identification of transport processes and postulation of formation models. Science Objective C3: Determine the Geologic Context of Sampled Materials Cassini and Huygens revealed a wide range of aeolian and fluvial processes at work on Titan's surface (Tomasko et al. 2005;Lorenz et al. 2006bLorenz et al. , 2008bBarnes et al. 2007bBarnes et al. , 2008Barnes et al. , 2015Jaumann et al. 2008;Keller et al. 2008;Radebaugh et al. 2008Radebaugh et al. , 2018Le Gall et al. 2010;Burr et al. 2013;Radebaugh 2013;Lorenz 2014;Birch et al. 2016;Bonnefoy et al. 2016;Cartwright & Burr 2017). Dragonfly will place sampled material into compositional context with local geology by documenting the morphology, color and texture, and characteristics of geologic features and surface materials ( Table 3), comparing clast color to that of observed landforms and surface materials to constrain the provenance of samples. Huygens' spectrometer indicated color variability of the surface at visible wavelengths (Karkoschka & Schröder 2016a), sufficient for visible color imaging to recognize commonalities between rocks, sediments, and outcrops. Imaging of fluorescence (Hodyss et al. 2004;Lorenz et al. 2017) under UV illumination will reveal certain organics. We can thereby extrapolate the compositions of sampled material to features around the rotorcraft lander. 
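For the clast-size statistics described under objective C2, the sorting metric can be computed directly from image-derived grain diameters. The following is a minimal sketch of a Folk-and-Ward-style inclusive graphic standard deviation (the formulation popularized in Blott & Pye 2001); it is an illustration rather than the mission's image-analysis pipeline, and the grain diameters are hypothetical.

```python
# Illustrative sketch: inclusive graphic standard deviation (sorting) from
# grain diameters measured in images. Diameters below are hypothetical.
import numpy as np

def inclusive_graphic_std(diameters_mm):
    """Sorting (sigma_I) in phi units: (phi84 - phi16)/4 + (phi95 - phi5)/6.6."""
    phi = -np.log2(np.asarray(diameters_mm, dtype=float))   # phi = -log2(d in mm)
    p5, p16, p84, p95 = np.percentile(phi, [5, 16, 84, 95])
    return (p84 - p16) / 4.0 + (p95 - p5) / 6.6

grains_mm = [0.3, 0.5, 0.7, 1.2, 2.5, 4.0, 6.5, 9.0]   # hypothetical clast/grain diameters
print(round(inclusive_graphic_std(grains_mm), 2))       # larger values = more poorly sorted
```

Poorly sorted populations (large sigma_I) are more consistent with mass wasting or alluvial deposition, while well-sorted populations point toward sustained aeolian or fluvial winnowing, which is how such a statistic would feed the provenance arguments above.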
Sampling Targets for Goal C These objectives constrain the modes and rates of transport and thus accumulation of surface materials, thereby providing crucial context for all compositional measurements. Surface imaging by Dragonflyʼs cameras (DragonCam Lorenz et al. 2018b) will be acquired at each science sampling site. Dragonflyʼs mobility is key for this investigation, affording the opportunity to explore a variety of geologically diverse locations. Goal D: Mixing Water and Organics Science question: Where and how has liquid water been in contact with organic material?-An ice shell separates Titan's organic-rich surface from the liquid-water subsurface ocean (Iess et al. 2012), but the extent of that separation remains unknown. Cassini-Huygens' geophysical, gravitational, and electric-field data and models constrain the ice-shell thickness Organic material could also be brought downward through the crust by burial and subsidence, seeding prebiotic chemistry in the water ocean, as well as possible perched liquid-water sills or magma chambers. Bulk downward transport of organics could also be driven by crustal tectonism, perhaps driven by convection in the lower crust. There is evidence for tectonics on Titan in the form of mountain ridge belts (Radebaugh et al. 2011;Cook-Hallett et al. 2015;Liu et al. 2016), but without adequate understanding of the thickness and nature of the lithosphere and the frequency of tectonic activity, the potential for organic exchange cannot be determined. To assess the habitability potential of liquid-water environments on Titan, Dragonfly will constrain which mechanisms operate to mix liquid water with organic compounds. Science goal D: constrain what physical processes mix surface organics with subsurface ocean and/or melted liquid-water reservoirs. Science Objective D1: Measure Current Lithospheric Activity and Constrain Past Processes Dragonfly will reveal the seismic activity of an ocean world by listening for quakes generated by the tidal deformation of Titan's ice crust (Mitri & Showman 2008). Cassini gravity measurements showed that Titan's crust deforms significantly over the course of its tidal cycle (Iess et al. 2012). Such deformation controls significant tectonic activity at Europa (Hoppa et al. 1999;Greenberg et al. 2003;Hand 2017;Vance et al. 2018) and Enceladus (Hurford et al. 2007), and Titan's unusually high eccentricity could allow tidal forces to drive tectonism despite Titan's longer orbital period than those other moons (Sagan & Dermott 1982;Sohl et al. 2014). Temporal clustering of seismic events with orbital phase would reveal the extent to which tidal forces control cracking and faulting across Titan. Identifying spatial trends in activity through seismic monitoring along the traverse toward Selk Crater could reveal whether tidal forces activate regional faults. Different properties of the near-surface material could also be determined based on differences in transmission of seismic signals (Stähler et al. , 2019Lognonné et al. 2020). Tectonic activity can also manifest as surface features like joints and faults. Imaging of the spacing and orientation of such features would provide inputs to models of local stress fields, rheological properties, and erosion rates to constrain crustal processes that can exchange material between the surface and subsurface (Litwin et al. 2012). 
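One simple way to quantify the temporal clustering of detected events with orbital phase, as described above, is a circular-statistics test. The sketch below is an illustrative, non-mission example using the Rayleigh test of uniformity; Titan's ~15.95-day orbital period is taken as the tidal cycle, and the event times are hypothetical.

```python
# Illustrative sketch: do quake detection times cluster at a preferred phase of
# Titan's ~15.95-day orbit? Uses the Rayleigh test of circular uniformity.
# Event times below are hypothetical; this is not mission software.
import numpy as np

TITAN_ORBIT_DAYS = 15.945   # Titan's orbital (tidal) period

def rayleigh_test(event_times_days, period=TITAN_ORBIT_DAYS):
    """Return (mean resultant length R, approximate p-value) for phase clustering."""
    phases = 2.0 * np.pi * (np.asarray(event_times_days, dtype=float) % period) / period
    n = len(phases)
    R = np.hypot(np.cos(phases).sum(), np.sin(phases).sum()) / n
    z = n * R**2
    p = np.exp(-z) * (1.0 + (2.0 * z - z**2) / (4.0 * n))   # standard approximation
    return R, min(max(p, 0.0), 1.0)

events = np.array([3.1, 19.2, 35.0, 50.7, 66.9])   # hypothetical detection times (days)
print(rayleigh_test(events))   # small p-value would suggest tidally modulated seismicity
```

A significant result would support tidal control of cracking and faulting; a uniform phase distribution would instead point toward other seismic sources or continuous background processes.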
Identification of faults expressed as surface lineations and/or scarps would be indicative of the regional stress state and help to constrain modes of stress generation and release, as has been done for other icy satellites (e.g., Patthoff et al. 2019 Numerical models demonstrate how crustal thickness affects convective and conductive layering within the ice shell and on the interior structure (Mitri & Showman 2008). The current range of possible thicknesses derived from Cassini data, however, cannot uniquely determine the bulk physical properties of or structure within Titan's lithosphere. Seismic activity-Modeling of seismic waveforms will allow us to infer the thickness of the ice shell based on reflection and transmission of the seismic source signal. If Dragonfly detects tectonic cracking events, then we can determine the ice-shell thickness from the timing of reflected P-wave phases, similar to the technique proposed for Europa lander missions (e.g., Lee et al. 2003;Panning et al. 2006;Vance et al. 2018). Even in the absence of tectonic events large enough to rise above ambient noise, the ambient noise itself can be used to extract a P-wave reflectivity response via autocorrelation of the noise, which can constrain shell thickness and properties, as demonstrated in terrestrial studies on the Antarctic Amery Ice Shelf (Zhan et al. 2013). A significant new body of work on ocean world seismology has been motivated by the prospects of seismic measurements on a Europa lander (e.g., Panning et al. 2018;Hurford et al. 2020). Recent simulations of seismic propagation in Titan interior models (Stähler et al. , 2019Figure 3) show that the ice layer structure retains more seismic energy near the surface than is typical for terrestrial planets. Indeed, the rich array of waves in icy moons demands a new taxonomy to identify the various modes, e.g., Scholte waves at the ocean/mantle interface. Large events can even provide information about deeper structure, for example, signatures diagnostic of an Ice-VI layer at the base of the ocean , as well as the crustal thickness. Quantitative analysis suggests an expected generation rate of Titanquakes by scaling lunar seismicity by Titan's tidal dissipation (e.g., a 3.8 mag event expected to produce signals comparable to those shown in Figure 4 should occur 0.02-10 times per Titan day (Tsol); see also Hurford et al. 2020). Analyses of wind and pressure noise for Mars (Murdoch et al. 2017a(Murdoch et al. , 2017b can be scaled to Titan to show that these should not be limiting factors for a ground-deployed instrument. Single-station techniques have matured in recent years, partly in connection with the InSight mission. While longerperiod surface wave techniques for single stations were expected to be a powerful tool for Mars structure based on pre-mission expectations (Panning et al. 2015(Panning et al. , 2017, results have instead come from observations of body waves. These have been used for detection and distance estimation for hundreds of events (Clinton et al. 2020), including precise locations and source mechanism estimates for the clearest events (Giardini et al. 2020;Brinkman et al. 2021) and determination of subsurface structure . While long-period surface wave energy appears promising for ocean world structure determination due to energy trapped within the ice shell (e.g., Panning et al. 2006;Stähler et al. 2018), multiple single-station techniques are available using body waves as well . 
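As a rough illustration of how reflected P-wave timing constrains shell thickness, a near-vertical reflection off the ice-ocean interface obeys h ≈ v_p t / 2, where t is the two-way travel time. The sketch below is a back-of-the-envelope aid rather than the waveform-modeling approach described above; the assumed P-wave speed of ~3.9 km s⁻¹ for cold water ice and the travel times are illustrative values only.

```python
# Back-of-the-envelope sketch: ice-shell thickness from the two-way travel time
# of a near-vertically reflected P-wave. The ice P-wave speed and the travel
# times are assumed, illustrative values, not measured quantities.
V_P_ICE_KM_S = 3.9   # assumed P-wave speed in cold water ice (km/s)

def shell_thickness_km(two_way_time_s, v_p=V_P_ICE_KM_S):
    """Thickness h ~ v_p * t / 2 for a vertical reflection off the ice-ocean interface."""
    return v_p * two_way_time_s / 2.0

for t in (25.0, 50.0, 75.0):                     # hypothetical two-way travel times (s)
    print(f"{t:5.1f} s -> {shell_thickness_km(t):6.1f} km")
# e.g., ~50 s of two-way travel corresponds to a shell roughly 100 km thick,
# within the range of crustal thicknesses considered for Titan.
```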
The ability of DraGMet to perform low-power continuous monitoring with event detection means it is realistic to anticipate detection of events with occurrence rates as low as <0.1 per Tsol, such that even nondetections become scientifically significant. Schumann resonance-On Earth, the conductive ionosphere and ocean surface act as waveguides, forming a resonant cavity for electromagnetic waves. During its descent, Huygens observed electric-field signals with a frequency near ∼36 Hz (Béghin et al. 2007) that were interpreted as the signature of such a Schumann resonance cavity. However, it has also been suggested that the signals were artifacts (Lorenz & Le Gall 2020) of mechanical vibrations during Huygens' parachute descent. The Schumann interpretation differs from the classic terrestrial paradigm, where lightning discharges excite the signals, which manifest predominantly in vertically polarized electrical waves. Instead (since lightning has not been observed on Titan), interaction with Saturn's magnetosphere could stimulate the resonance, seen by Huygens as horizontally polarized electric fields. As on Earth, the lower boundary of Titan's ionosphere serves as the top of the resonant cavity, but whereas the lower boundary on Earth is the (conductive) land and sea, on Titan, it is the internal salty or ammoniacal water ocean. One interpretation of the Huygens measurement is that the lower boundary lies 55-80 km beneath Titan's surface (Béghin et al. 2012). However, this estimate does not include uncertainties associated with the Schumann resonance model (e.g., ionospheric structure; Lorenz & Le Gall 2020), and the very limited spectral resolution of the Huygens measurements limits their ability to constrain the model. Despite the challenges of this arena, Dragonfly aims to detect natural time-varying electric fields, if they exist. By making measurements for extended periods on the surface (without the confounding noise of Huygens' descent), more sensitive spectral analysis can be applied; if multiple Schumann frequencies are detected (often five or more peaks are seen at Earth), then their relative amplitudes may constrain their generation mechanism. Variation in their character with ionospheric conditions (e.g., local solar time) may help reduce systematic uncertainties in the model's interpretation in terms of crustal thickness. [Figure 3. Simulated stacked seismograms for two Titan interior models: 46 km thick ice crust (left) and 124 km thick ice crust (right) over a 410 km thick water ocean with 3.3% NH3 (Stähler et al. 2018). Colors show the vertical (Z) and horizontal (radial and tangential) components of ground motion. The arrival times of Rayleigh and Love waves are a measure of source distance, while trapped S-waves indicate crustal thickness. The vertical gray lines at ∼20° indicate the approximate distance corresponding to the records shown in Figure 4.] [Figure 4 caption (fragment): ... Figure 3 at a distance of 18° (∼800 km). The train of P-wave arrivals (200-350 s) is another straightforward diagnostic of ice crust thickness. Rayleigh and Love wave amplitudes, ∼20 and >100 μm s⁻¹, respectively, may be detectable even with Dragonfly's skid-mounted geophones (Lorenz et al. 2018b).] Science Objective D3: Determine Availability of Water Ice Dragonfly will seek to detect water ice on and near Titan's surface in different geologic settings to determine the potential for mixing with organics. Cassini has documented localized areas of water-ice enhancement on Titan (Griffith et al.
2019), and Dragonfly will target such areas, including interdune areas (Barnes et al. 2008), ejecta deposits south of Selk Crater, the interior margins of the crater floor (Solomonidou et al. 2020), and potential melt flow features east of the crater (Soderblom et al. 2010a;Neish et al. 2015;Janssen et al. 2016;Werynski et al. 2019;Lorenz et al. 2021). DraGNS will measure the bulk elemental content of each science landing site to constrain the water content. Ice provenance (e.g., crustal ice, former impact melt, or ejecta) will be interpreted from landscape morphology and abundances of certain organic and inorganic compounds, making it possible to test ice-origin hypotheses ( Table 3) and constrain exchange mechanisms with the subsurface ocean. Goal D Sampling Targets Compositional investigations (science objective D3; Table 3) require measurements of (α) predominately organic sediments and (β, γ, or δ) material with a water-ice component ( Table 3). Composition at depth can differ from surface material (Janssen et al. 2016), so to be able to detect the presence of near-surface water ice beneath an organic veneer, DraGNS will be sensitive to bulk composition deeper than ∼10 cm below the surface. Measurements will be informed by DragonCam imaging of geologic structures and relationships. Goal E: Chemical Biosignatures Our understanding of biology remains based on a single sample set: life on Earth. Therefore, a main goal of solar system exploration is to ascertain whether life has originated separately from that on Earth: a "second genesis." Detection of extraterrestrial life in our solar system would be a revolutionary discovery, further suggesting that life may arise readily in diverse planetary environments throughout the universe. As is the case for all potentially habitable bodies, whether Titan has supported the development of biological systems is currently unknown, but the possibility of past, or even extant, life cannot be ruled out. With the known necessary ingredients present on its surface-energy, solvents, and essential elements such as carbon, hydrogen, nitrogen, oxygen, and possibly phosphorus and sulfur (CHNOPS)-Titan is one of the best places in the solar system to search for such life (Simakov 2000(Simakov , 2004(Simakov , 2012Sarker et al. 2003;McKay 2004McKay , 2016Schulze-Makuch & Grinspoon 2005;Shapiro & Schulze-Makuch 2009;Raulin et al. 2010;Neish et al. 2018). Were life to have existed in a transient Titan melt pool, it would have left compositional biosignatures in the now-solidified ice. Such biosignatures would be protected from degradation by galactic cosmic rays due to Titan's thick atmosphere and from chemical weathering due to the insolubility of water ice in liquid hydrocarbons (Lorenz & Lunine 1996). Biological processes produce compounds with distinct abundance patterns compared to abiotic processes (McKay 2004; Figure 5). Biology also prefers molecules of a single handedness, or chirality (Halpern 1969; Bada & McDonald 1996;Bada 1997;Aubrey 2008). These and other measurable chemical clues could represent signs of past or extant life (Figure 6). Titan also offers the opportunity to more broadly examine assumptions about habitability by exploring whether life can form or exist in a solvent other than water. Life, but not as we know it, might have existed or exist today within Titan's liquid hydrocarbons (McKay 2004(McKay , 2016Stevenson et al. 2015). 
Low solubility and slow reaction rates challenge terrestrial expectations for biochemical mechanisms of hydrocarbon-based life, but under Titanian conditions, alternate pathways could permit parallel processes in liquid hydrocarbon media (Stevenson et al. 2015; Rahm et al. 2016; Lv et al. 2017). Signatures of such life, were it to exist, could consist of abundance patterns of useful compounds or of spatial gradients in the abundance of H2, which could serve as a metabolic input (McKay & Smith 2005). Science question: Are there chemical signatures of water- or hydrocarbon-based life on Titan?-Although life as a concept continues to elude definition (Cleland & Chyba 2002), life on Earth exhibits common chemical characteristics (McKay 2004). Terrestrial biology, or life as we know it (LAWKI), relies on specific molecules like amino acids, lipids, and nucleic acids. Other biochemistries might use different functional molecules. Therefore, patterns of molecular abundances in comparison to abiotic abundances, distributions, and complex pathway assessments can be a powerful class of biomarkers (Benner & Hutter 2002; McKay 2016; Marshall et al. 2017). Evaluating the relative abundance and distribution of organic compounds (Figure 5) is diagnostic for biology that uses liquid water as a solvent (water-based life), as well as biology that might use liquid hydrocarbons as a solvent. [Figure 5. Comparison of biogenic with nonbiogenic distributions of organic material. Abiotic processes typically produce smooth distributions of organic material (brown). Biology, in contrast, selects and uses a highly specific set of molecules (green), e.g., chiral amino acids on Earth. After Lovelock 1965; McKay 2004.] A broad-based search for multiple chemical biosignatures (Figure 6) minimizes assumptions about the nature of potential life, a valuable lesson from Viking life-detection experiments (Klein et al. 1976; Ballou & Wood 1978; Klein 1979) and the same strategy proposed for a Europa lander (Hand 2017). Dragonfly will perform a broad, multifaceted search for chemical signatures that would be indicative of past or extant biological processes on Titan, not only climbing the Ladder of Life Detection (Figure 6; Neveu et al. 2018) but also building wide rungs on which future ocean world missions can stand. Crucially, this strategy can identify several different potential biosignatures, as no single observation in isolation can be considered definitive evidence. Science goal E: perform a broad-based search for signatures indicative of past or extant biological processes. Science Objective E1: Determine Enantiomeric Abundance of Chiral Molecules Dragonfly will identify whether biologically useful molecules demonstrate a distinct handedness consistent with biofunctionality. The premier example of biochemical selectivity is homochirality: life on Earth uses only left-handed (L) versions of amino acids in proteins and only right-handed (D) versions of sugars, but not their mirror images (Aubrey 2008). For molecules that do not exhibit such preferences in abiotic systems, detection of homochirality would be a powerful indication of biological activity, regardless of whether the solvent is water or hydrocarbon (Creamer et al. 2016). We will assess chirality via chromatographic methods (Goesmann et al. 2017).
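The chirality measurement described under objective E1 ultimately feeds a simple statistic, the enantiomeric excess. The sketch below illustrates that bookkeeping with hypothetical abundances; it is not flight software, and interpretation would still require the broader contextual baseline discussed above, since abiotic enrichment mechanisms are also known.

```python
# Illustrative sketch of the standard enantiomeric-excess (ee) statistic that a
# chirality measurement would feed. Abundances below are hypothetical.
def enantiomeric_excess(l_abundance, d_abundance):
    """ee = (L - D) / (L + D); ~0 for a racemic (abiotic-like) mixture, +/-1 for pure handedness."""
    total = l_abundance + d_abundance
    return 0.0 if total == 0 else (l_abundance - d_abundance) / total

print(enantiomeric_excess(50.0, 50.0))   # 0.0 -> racemic, as abiotic synthesis tends to produce
print(enantiomeric_excess(95.0, 5.0))    # 0.9 -> strong handedness, a potential biosignature
```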
Science Objective E2: Determine if Patterns Exist in Molecular Masses and Distribution Dragonfly will identify the molecular motifs of functional molecules, if present, by determining the composition of organic and water-ice materials on the surface of Titan. Compositional assays of both organic and water-ice materials will allow us to look for patterns in abundance and/or structure. Abundance patterns-A key feature of life on Earth is that it is selective in the basic molecules it uses, e.g., the "Lego Effect" (Lovelock 1965;McKay 2004;Marshall et al. 2017). The blocks used can be formed abiotically, but selective use in and production by biological processes creates a distribution distinct from abiotic processes ( Figure 5). For example, of ∼500 known amino acids, just 22 are coded for by eukaryotic DNA (Gutiérrez-Preciado et al. 2010). Similarly, abiotic synthesis creates carbon chains of random length, while eukaryotic carboxylic acid chain lengths occur with a strong even-over-odd preference and a restricted range (Balkwill et al. 1988;McCollom 1999;Costello et al. 2002;Lester et al. 2007;Steger et al. 2011). Furthermore, while geochemical methane sources also produce nonmethane hydrocarbons in amounts that decrease smoothly with increasing mass, methanogens produce methane but very few nonmethane hydrocarbons (McKay 2008). Structural patterns-The utility of a molecule's structure drives the biological synthesis of certain structural isomers over others. For example, terrestrial metabolism primarily uses glucose; although galactose has the same molecular composition, a structural difference (relative positions of hydroxyl groups) makes glucose preferable. Dragonfly will identify potentially isomeric molecules and conduct analyses to isolate any structural patterns. Dragonfly can identify gradients in the relative abundances of potential consumables in Titan's lower atmosphere that could signify active metabolic processes. Life that is widespread affects its environment: the majority of O 2 , CO 2 , and CH 4 in Earth's atmosphere are biological products. On Titan, because of its availability and potential as a reactive fuel source, H 2 is the most promising atmospheric constituent for showing a biological effect. Intriguing and controversial models based on low-precision Huygens data suggest a flux of hydrogen into Titan's surface (Strobel 2010). If life were consuming atmospheric hydrogen, it would have a measurable effect on the hydrogen mixing ratio in the troposphere (McKay & Smith 2005), depending on the consumption rate. Dragonfly will construct a vertical profile of H 2 abundance to determine if there is, indeed, a net flux into the surface ). Science Goal E Sampling Targets The expectation is that biologically relevant monomers are more likely to be present in samples from water-ice material. However, establishing whether a pattern is consistent with biological control necessarily requires a broad contextual baseline (Hand 2017). Dragonfly is therefore required to evaluate the abundance and distribution of compounds in organic sediments, as well as in material with a water-ice component. Biosignature Search Discussion Titan's high potential for prebiotic chemistry and astrobiologically interesting materials includes the following factors: 1. opportunities for mixing of organic material with liquid water at Titan's surface in the past ), 2. the possibility of material transfer from the surface to the liquid-water ocean (and possibly vice versa), and 3. 
the potential for liquid methane to function as a solvent in the development of a hydrocarbon-based biological system. Here we elaborate on possible scenarios for the development of biological systems on Titan and their accessibility to Dragonfly. These scenarios are outlined below. Similar to other targets in the outer solar system, LAWKI is not expected to be extant in surface deposits, but chemical biosignatures could be preserved (scenario 1) or admit the possibility of past or extant life in the subsurface ocean (scenario 2) if material is brought to the surface (e.g., via cryovolcanism). As a surface-sampling mission, Dragonfly will be able to assess prebiotic chemistry and search for evidence of water-based biosignatures formed at the surface (scenario 1) or deposited on the surface (scenario 2). Cryovolcanic features have proven challenging to identify definitively on Titan, however, and the degree of connectivity to the subsurface ocean (scenario 2) is unknown. Selk Crater (Soderblom et al. 2010a) is targeted by Dragonfly as a site where liquid water is known to have been present on the surface for an extended period of time in the past, an environment conducive to the formation of molecules of biological interest (Neish et al. 2008, 2009, 2010, 2018;Poch et al. 2012; potentially scenario 1). In addition to sampling materials with a water-ice component, Dragonfly will measure the equatorial sands where geological processes have transported and collected organic products (Lorenz et al. 2006b;Rodriguez et al. 2014;Malaska et al. 2016), making the dunes a good location to potentially constrain the possibilities for hydrocarbon-based biosignatures (scenario 3). Dragonfly science measurements are generally agnostic to water- or hydrocarbon-based biologies (as recommended by the Europa Lander science definition team; Hand 2017), relying on the results from the compositional survey of goal A and subsequent modeling efforts to put potential biosignatures into the proper context. This approach is summarized in Table 4. Landing Site 1 in Shangri-La Provides Access to Organic Sediments in Water Ice Geological processes concentrate the products of Titan's atmospheric chemistry into organic sediments that are transported by aeolian and fluvial activity. Sand particles collect in the extensive equatorial dune fields (Lorenz et al. 2006b;Rodriguez et al. 2014;Malaska et al. 2016). These vast carbon sinks are thus an ideal location to assess prebiotic chemistry and search for potential signatures of past biological activity. Landing site 1 (LS1) is targeted within a portion of the Shangri-La sand sea, part of Titan's best-characterized geomorphologic unit (Figures 7 and 8). Titan's organic sand seas comprise longitudinal dunes hundreds of kilometers long spaced 2.0-3.5 km apart (Savage et al. 2014) and resemble terrestrial silicate longitudinal dunes in morphology and extent (Lorenz et al. 2008c;Radebaugh et al. 2008). In addition to being an ideal location to achieve high-priority science, Titan's dunes have been well characterized by Cassini and provide safe conditions for first landing (Lorenz et al. 2018b). Despite the name, sand seas are typically not completely covered by sand; instead, dunes can be separated by flat, sand-free interdunes 1.0-2.5 km wide (Barnes et al. 2008;Savage et al. 2014). Sand covers only 40% of the Namib sand sea in SW Africa, with sand-free, gravel interdunes comprising the remaining 60% (Lancaster 1989).
Titan's interdunes have been resolved spatially (Barnes et al. 2008;Le Gall et al. 2011) and spectrally (Bonnefoy et al. 2016), revealing predominately icy interdunes that match the spectral properties of the Huygens landing site (HLS). This correlation suggests that the Shangri-La interdunes are likely to include water-ice gravels, potentially a fine-grained layer damp with condensed methane (Figure 1; Niemann et al. 2005;Tomasko et al. 2005;Zarnecki et al. 2005;Lorenz et al. 2006b;Keller et al. 2008;Williams et al. 2012;Lorenz 2014;Karkoschka & Schröder 2016b). Targeting LS1 in an interdune area provides proximity to both organic sands and material with a likely water-ice component. Traverse to Selk Impact Crater to Access Previously Melted Water Ice To be sure of sampling previously melted water ice, Dragonfly will traverse to Selk Crater (Figure 8; Lorenz et al. 2021), documenting terrain and compositional variations along the way. This traverse will cross several distinct surface units identified in Cassini data, including the edge of Selk's proximal ejecta deposits. This material is similar in average composition to the HLS (Soderblom et al. 2010b) and therefore represents another prime target for sampling material with a water-ice component. Selk itself is a relatively young, 80 km diameter impact crater, the interior of which shows the spectral signature of organic sand, with water-ice material near the edges of the crater floor . Hydrocode simulations of the formation of an ∼80 km diameter crater on Titan generate ∼100 km 3 of melt, 70%-90% of which (depending on impact angle) is deposited within the crater; the rest is entrained in ejecta (Artemieva & Lunine 2003). Models, spectral data, and Titan's active fluvial erosion suggest exposures of water ice amid partial organic sediment cover, making Selk one of the best locations to find previously melted water ice to sample. Conclusion Dragonfly was officially selected for flight by NASA as the fourth New Frontiers mission on 2019 June 27. Launch is planned for 2027, with Titan arrival in the mid-2030s, during the local northern hemisphere winter. We designed the science of Dragonfly around the themes of prebiotic chemistry, habitability, and the search for biosignatures, with an explicit consideration of both water and hydrocarbon solvents. To address prebiotic chemistry, we will determine the inventory of prebiotically relevant organic and inorganic molecules and reactions on Titan. In the realm of habitability, we will determine the role of Titan's tropical atmosphere and shallow subsurface reservoirs in the global methane cycle, determine the rates of processes modifying Titan's surface and rates of material transport, and constrain what physical processes mix surface organics with subsurface ocean and/or melted liquid-water reservoirs. Our search for biosignatures will entail a broad-based search for signatures indicative of past or extant biological processes. Our science goals led us to landing within the Shangri-La sand sea, where both organic sediments from the sand dunes and water ice of the interdunes would be available within a few kilometers of one another. To achieve the surface mobility necessary to traverse between the dunes and interdunes, we developed a system whereby the entire lander uses rotors to fly with vertical lift (Lorenz et al. 2018b). Calculation of the energetics of such a system led us to realize that large-distance traverses would be possible (Langelaan et al. 
2017), greatly in excess of any previously achieved by planetary spacecraft. That additional range allowed us to target an impact crater, Selk, where previously liquid impact melt could be sampled. The Dragonfly mission capabilities achieve many of the objectives previously derived for both surface landers and aerial vehicles (Lorenz 2008, 2009;Coustenis et al. 2011;Hall et al. 2011;Barnes et al. 2012) in flagship mission studies (Levine et al. 2005;Lunine et al. 2005;Lorenz et al. 2008a;Coustenis et al. 2009;Reh et al. 2009). The major outstanding unaddressed questions after Dragonfly's selection also require the global scope that can best be achieved with an orbiter mission (Stiles et al. 2009). [Displaced figure caption (landing-site and traverse maps): a solid yellow oval marks the initial landing ellipse and a dotted yellow line a linear traverse with geologic units and representative topography; VIMS red, green, and blue channels correspond to the 5, 2, and 1.3 μm atmospheric windows; the blue spectral unit (materials β and δ) has an enhanced water-ice component, the brown unit (material α) correlates with organic sand dunes, and the green unit (material γ) corresponds to highlands terrain; Cassini radiometry indicates a low-emissivity deposit suggestive of water ice along the traverse, and the landing ellipse is dark in SAR because the area is both smooth and radar-absorptive; astrodynamic and other landing-site selection factors and a review of available data sets on Selk are given in Lorenz et al. (2021).] Ultimately, complementing Dragonfly with a small- or medium-class orbiter might achieve science at a scope comparable to that of three-element flagships (like those studied for the 2013-2023 Decadal Survey; Squyres & Soderblom 2011) without requiring the complexity or cost of a flagship mission. Since selection, we have worked with NASA headquarters to establish a finalized list of level 1 science requirements for the Dragonfly project. The level 1 requirements serve to guide both mission development and operation but also to verify mission success after its execution. We include our list of level 1 requirements below both to further understanding of Dragonfly and its mission and to serve as a reference for future missions and proposers. We hasten to add that there are various ways to successfully write level 1 requirements, and that the approach that we use may or may not be applicable to or optimal for other mission concepts. The authors acknowledge support for Dragonfly from the NASA New Frontiers Program. Some of this research was carried out at the Jet Propulsion Laboratory, California Institute of Technology, under a contract with the National Aeronautics and Space Administration (80NM0018D0004).
2021-07-20T20:04:45.781Z
2021-01-01T00:00:00.000
{ "year": 2021, "sha1": "6ad628b8acc394d05c904919ff2b3e1821ce96f8", "oa_license": "CCBY", "oa_url": "https://doi.org/10.3847/psj/abfdcf", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "6ad628b8acc394d05c904919ff2b3e1821ce96f8", "s2fieldsofstudy": [ "Geology" ], "extfieldsofstudy": [ "Physics" ] }
12501563
pes2o/s2orc
v3-fos-license
Adult Stem Cell as New Advanced Therapy for Experimental Neuropathic Pain Treatment Neuropathic pain (NP) is a highly invalidating disease resulting as consequence of a lesion or disease affecting the somatosensory system. All the pharmacological treatments today in use give a long lasting pain relief only in a limited percentage of patients before pain reappears making NP an incurable disease. New approaches are therefore needed and research is testing stem cell usage. Several papers have been written on experimental neuropathic pain treatment using stem cells of different origin and species to treat experimental NP. The original idea was based on the capacity of stem cell to offer a totipotent cellular source for replacing injured neural cells and for delivering trophic factors to lesion site; soon the researchers agreed that the capacity of stem cells to contrast NP was not dependent upon their regenerative effect but was mostly linked to a bidirectional interaction between the stem cell and damaged microenvironment resident cells. In this paper we review the preclinical studies produced in the last years assessing the effects induced by several stem cells in different models of neuropathic pain. The overall positive results obtained on pain remission by using stem cells that are safe, of easy isolation, and which may allow an autologous transplant in patients may be encouraging for moving from bench to bedside, although there are several issues that still need to be solved. Introduction Neuropathic pain (NP), currently defined as "pain arising as a direct consequence of a lesion or disease affecting the somatosensory system" [1], represents the most severe form of chronic pain considering its capacity to affect both physical and mental patient's condition. The nature of NP is extremely heterogeneous and four main categories of neuropathic lesions have been recognized: focal or multifocal lesions of the peripheral nervous system (PNS), lesions of the central nervous system (CNS), polyneuropathies, and complex neuropathic disorders [2]. Regardless of the primary etiology, NP can present itself as spontaneous pain sensations such as paroxysmal pain (shooting pain) and superficial pain (burning sensation) or as evoked pain: mechanical/thermal allodynia (pain caused by normally nonpainful mechanical or thermal stimuli), hyperalgesia (increased sensitivity to a normally painful stimulus), or temporal summation (increasing pain sensation from repetitive application of the identical stimulus) [3]. It has recently been pointed out that neuropathic pain pathogenesis and maintenance involve interactions among neurons, inflammatory immune cells, glial cells, and a wide cascade of pro-and anti-inflammatory cytokines [4][5][6][7]. One of the main problems concerning NP regards its scarce response to the conventional analgesic therapy. Drugs, mainly represented by tricyclic antidepressant, calcium channel ligands, SSNRI, and opioids, are in fact not fully effective and their efficacy decreases over time with development of tolerance in long term use [8,9]. It is therefore mandatory to identify and propose novel approaches to NP 2 BioMed Research International treatment that could overcome many of the limitations of the available strategies. In the last years many researchers, including us, have tried to relieve neuropathic pain by using stem cells of different origin. 
The first moving idea was based on the capacity of stem cell to offer a multipotent cellular source for replacing injured or lost neural cells and for delivering trophic factors to lesion site; in this way, stem cells can represent not only a pain treatment but a way for repairing the damaged nervous system at the basis of NP development. Soon we and others realized that the capacity of stem cells to contrast experimental neuropathic pain was not completely dependent upon their regenerative effect; in fact many research papers described an antinociceptive effect of the stem cell achieved before the appearance of regenerative effect [10]. In this paper we review the literature in which stem cells of different origin and species were used to treat neuropathic pain induced in experimental animal models. We divide the published papers according to the type of stem cell used, independently of the experimental NP model. We do not report the studies with embryonic stem cells considering the associated ethical problem and the major risk of tumors correlated to them. Moreover, we considered only papers in which the effect of stem cells on pain behaviour has been specifically evaluated. Today there are three main types of stem cells used for neuropathic pain: neural stem cells, mesenchymal stem cells, and bone marrow mononuclear cells. Neural Stem Cells Considering the nature of the lesion at the basis of NP development that takes place in PNS or CNS, neural stem cells (NSCs) seem to be the most appropriate type of cells to prompt a physiological repair of the lesion, due to their capacity to differentiate into neurons, astrocytes, and oligodendrocytes, even though it was suggested that also mesenchymal stem cells, under particular conditions, can originate cells of the neural lineage [11][12][13]. Neural stem cells were identified for the first time and isolated from the subventricular zone of adult mammalian brain in 1992 [14,15]. They are multipotential precursors that grow and self-renew in culture for an extensive period of time as neurospheres, while retaining a stable capacity to generate mature functional brain cells. So far, NSC lines have been derived from the hippocampal dentate gyrus, the olfactory bulb, the SVZ surrounding the ventricles, the subcallosal zone underlying the corpus callosum, and the spinal cord of the embryonic, neonatal, and adult rodent CNS [15], as well as from human fetal CNS [16][17][18]. Our group [10] described for the first time the use of intravenous murine neural stem cells, NSCs, to treat neuropathic pain which develops as consequence of a lesion of the peripheral nervous system, that is, sciatic nerve chronic constriction injury (CCI). Cells, isolated from the subventricular zone using the neurosphere technique [19], were treated to express GFP gene thus allowing their localization after transplant. Even though efficiency of the transplant is low, we described the rapid and specific homing of NSC to the injured nerve, since these cells were present at lesion site starting from day 1 to day 7 after injection. Their short time presence at lesion site was, however, able to start a cascade of events in the main sites of pain transmission, which contributed to pain reduction. Regarding their effects on pain relief, NSC, injected when the pathology was already established, induced a significant reduction in allodynia and hyperalgesia already 3 days after administration, demonstrating a therapeutic effect that lasted for at least 28 days. 
Responses changed with the number of administered NSCs and the effect on hyperalgesia could be boosted by a new NSC administration. Treatment induced changes in cytokine profile at lesion site, decreasing significantly the proinflammatory cytokine Interleukin-1 both as mRNA and protein, while cells were unable to normalize the levels of the anti-inflammatory cytokine IL-10 decreased by CCI. The effect on pain relief was also demonstrated by a reduction of spinal cord Fos expression in laminae I-VI. Moreover we observed a reparative process and an improvement of nerve morphology, due to NSC treatment, which was present at a later time, when pain was already controlled by NSC treatment. Since NSC effect on pain symptoms preceded nerve repair and was maintained after cell disappearance from the lesion site, we believe that the regenerative, behavioral, and immune NSC effects are largely due to microenvironmental changes that they might induce in the lesion. Our results support the idea of a general bystander effect exerted by transplanted NSC [20]. These positive results on neuropathic pain relief were supported by Xu and colleagues [21] by using another route for NSC administration; the authors described that an intrathecal administration of neural stem cells, 3 days after CCI injury in rat, was able to significantly attenuate mechanical and thermal hyperalgesia with a marked increase of protein and mRNA levels of glial cell line derived neurotrophic factor (GDNF) in the spinal dorsal horn and dorsal root ganglia (DRG). So far we have considered the use of NSC for treating NP which follows a peripheral lesion of the nervous system; however, neural progenitors/stem cells were also used to treat lesions at spinal cord level. One of the main problems concerning their use in these conditions is represented by their low survival in the host damaged spinal cord. For this reason combinatorial strategies were developed to try to improve their transplant efficiency but the final outcome on NP is questionable. Positive results on pain were obtained by the group of Luo [22] investigating the efficacy of a cotransplantation of NSC and OECs (olfactory ensheathing cells) in a rat spinal cord transection injury model. They found that the transplantation of NSC together with OEC could improve the sensory function to mechanical and thermal stimuli after SCI; the authors suggested that OECs can promote the NSC survival and the cotransplantation downregulates the expression of NGF. Karimi-Abdolrezaee et al. [23] instead developed a combinatorial strategy that allows the successful application of neural progenitor cells (NPC) based therapies for the treatment of chronic spinal cord injury. The authors showed that chondroitin sulfate proteoglycans (CSPGs) in the glial scar around the site of chronic SCI negatively influences the long-term survival and integration of transplanted NPC and their therapeutic potential. For this reason they targeted CSPGs and one week later treated the same rats with transplants of NPC and transient infusion of growth factors (EGF, bFGF, and PDGF-AA). This combinatorial approach markedly increased the long-term survival of NPC and greatly optimized their migration and integration in the chronically injured spinal cord. Furthermore, this combined strategy promoted the axonal integrity and plasticity of the corticospinal tract and enhanced the plasticity of descending serotonergic pathways. 
These neuroanatomical changes were also associated with significantly improved neurobehavioral recovery after chronic SCI. However, cells were unable to modify the development of allodynia which follows the thoracic spinal cord injury. It is important to report that the first papers trying stem cell approaches in SCI models described negative results for pain relief. Hofstetter and colleagues [24] suggested a correlation between induction of allodynia after SCI and the transplantation of NPC. They reported that transplanted naive NPCs primarily differentiate into astrocytes and this was associated with induced aberrant sprouting of Calcitonin gene related peptide fibers rostral to the injury, leading to increased allodynia. In the same years, Macias et al. described that NSC primarily differentiated into astrocytes when transplanted into the injured spinal cord which resulted in thermal and mechanical forelimb allodynia [25]. All the papers mentioned above described the use of neural precursors/stem cells isolated from rodents; in literature, to our knowledge, there is only one paper which showed the results of using human neural stem cells in experimental animal models of NP. In this paper human neural stem cells are shown to be capable of surviving and differentiating in a traumatically injured environment improving the locomotor recovery [26]. However, in experimental paradigms of other pathologies, human neural stem cells (hNSC) have revealed anti-inflammatory and therapeutic abilities analogous to their murine counterpart [27][28][29]. Moreover, the possibility to isolate and expand hNSC lines of clinical grade [18] has allowed evaluating the safety of these cells in a phase I clinical trial in amyotrophic lateral sclerosis patients, which is currently underway. Mesenchymal Stem Cells (MSC) MSC are a heterogeneous subset of stromal stem cells which can be isolated from different sources: bone marrow [30], umbilical cord (UC) [31,32], placenta [33], adipose tissue [34], dental pulp [35], and even the fetal liver [36] and lungs [37]. These cells express typical surface markers such as CD73, CD44, CD90, and CD105. Among MSC, the most representative ones are bone marrow MSC (BMSC), purified from bone marrow, and adipose tissue derived MSC (ASC), isolated from adipose tissue. ASCs are described to be BMSC migrated into the adipose tissue; hence there are no marked phenotypic differences between these two cell types [34,38]. However, in recent years, other types of MSC, such as those derived from umbilical cord blood (UCB-MSC) and amniotic mesenchymal stem cells, have begun to attract researchers' attention for their therapeutic use. A basic description of bone marrow may help clarify the origin of bone marrow derived mesenchymal stem cells. Bone marrow consists of a hematopoietic component (parenchyma) and a vascular component (stroma). The parenchyma includes hematopoietic stem cells and hematopoietic progenitor cells while bone marrow stroma contains multipotent nonhematopoietic progenitor cells, bone marrow stromal cells (MSC) that are known as multipotent cells capable of differentiating under specific experimental conditions into several types of cells, for example, osteoblasts, chondrocytes, adipocytes, and myocytes [30]. Moreover, some papers described the capacity of MSC to transdifferentiate also into neurons or astrocytes [11][12][13]. Both rodent and human MSC and bone marrow mononuclear cells were used for treating experimental neuropathic pain. Bone Marrow MSC (BMSC) 3.1.1. 
Rodent BMSC. One of the first groups to assess the effect of rat bone marrow stromal cells in an experimental rat model of peripheral neuropathy was the group of Musolino [39]. They demonstrated that an ipsilateral intraganglionic injection of rat bone marrow stromal cells was able to prevent the generation of mechanical allodynia and to reduce the number of allodynic responses to cold stimuli in rats that underwent a single ligature sciatic nerve constriction [39]. One of the possible mechanisms involved in such an effect was the capacity of BMSC to partially prevent the injury-induced changes in galanin, Neuropeptide Y and Neuropeptide Y receptor expression in DRG [40]. The authors compared the effect of MSC on pain relief and biochemical changes to that of bone marrow nonadherent mononuclear cells (BNMCs), but these latter stem cells were, in that case, unable to reduce pain [39]. Rat bone marrow MSC has also been used in another type of neuropathic pain treatment, not derived from a direct nerve lesion, but a consequence of the metabolic dysfunction present in diabetes, which is one of the main causes of painful neuropathy in humans. Shibata and colleagues tried in fact to improve diabetic polyneuropathy induced in rat by using Streptozotocin (STZ) [41]. MSC (1 × 10 6 ) were therapeutically injected into the hind limb muscle 8 weeks after diabetes induction. The authors described an increase in VEGF and bFGF mRNA expression in MSC-injected diabetic rats and colocalized VEGF and bFGF in MSC at the transplanted site, thus suggesting that MSC are responsible for growth factor secretion at the injected site. MSC were able to ameliorate all the alterations induced by diabetes such as hypoalgesia, delayed nerve conduction velocity, and decreased sciatic nerve blood flow. Moreover, MSC transplantation was able to normalize sural nerve morphometry, restoring the axonal circularity decreased in diabetic rats. The same positive effect on nerve conduction velocity amelioration was also reported by Kim and Jin [42], using the same model of diabetic neuropathy in mice, by injecting murine MSC into the hind limb muscle percutaneously along the course of the sciatic nerve at 4 sites. The improvement in nerve conduction velocity was attributed to the ability of MSC to increase trophic factors specific for neuronal populations in the PNS such as nerve growth factor (NGF) and neurotrophin-3 (NT-3). The authors did not directly assess pain. [Displaced figure caption (Figure 1): NSCs/ASCs were injected intravenously 7 days after chronic constriction injury in mice; their effect on pain was measured 3, 7, 14, and 21 days after administration; data represent mean ± SEM of 7 mice, analyzed by two-way ANOVA followed by Bonferroni test (*, ∘, and # indicate p < 0.001 versus Sham, CCI, and hASC, respectively).] h(Human)BMSC. Maione's group is the main user of human BMSC for treating experimental neuropathic pain. The authors use, as model of NP, the spared nerve injury (SNI) model in mice and administer hBMSC therapeutically, that is, 4 days after the surgery, injecting them either in the mouse lateral cerebral ventricle [43] or systemically into the caudal vein [44]. When intravenously injected, cells were able to home into the spinal cord and prefrontal cortex of SNI neuropathic mice. In both papers, hBMSC reduced pain-like behaviors, such as mechanical allodynia and thermal hyperalgesia, with an effect which was evident one week after cell transplantation and was long lasting.
Indeed, when cells were injected into the caudal vein, their effect on pain relief was still present three months after transplant. The authors described the capacity of these cells to reduce glial [43] and macrophage activation [44] switching to an antiinflammatory phenotype by decreasing the proinflammatory cytokines (IL-1 beta and IL-17) and increasing the antiinflammatory cytokine IL-10 [43,44]. The group of Waterman [45] developed a method to optimize the anti-inflammatory effects of human bone marrow MSC, skewing them in vitro, before their injections, towards a protective MSC2 phenotype. These MSC demonstrated a higher capacity to counteract mechanical allodynia and heat hypoalgesia induced in mice by STZ treatment. These cells were also able to decrease the serum level of proinflammatory cytokines and were described to be safe. Adipose Tissue Derived MSC (ASC). The great advantage of these cells, over the other kinds of MSC, is given by the possibility of isolating them by using low invasive procedures. These cells are in fact located in mature subcutaneous adipose tissue and can be obtained as litter of the fatty tissue after liposuction; the use of this tissue allows to obtain a large amount of MSC thus reducing, in some cases, the need of ex vivo culturing, leading eventually to lower the risk of developing chromosomal abnormalities due to the culture itself. Moreover, these cells are characterized by low immunogenicity and by high immunomodulatory properties which make them suitable for treating diseases in which the neuroinflammatory component plays a crucial role, such as NP. Not least these cells might be easily used for autologous transplant. Despite the high potential of these cells, their use for experimental neuropathic pain treatment is still limited. Our paper, recently published [46], was the first to assess the antinociceptive effect of hASC isolated from human adipose tissue of female donors undergoing plastic surgery. This paper is a complete work in which safety, antinociceptive effects, and biochemical changes induced by these cells were assessed. hASC were in vitro expanded [47,48] and, after karyotype assessment, were injected into the caudal vein of neuropathic mice (CCI mice). Cells were injected, with a therapeutic intent, seven days after the surgery, in presence of a fully developed thermal hyperalgesia and mechanical allodynia. We clearly demonstrated a rapid, long lasting, and dose dependent antihyperalgesic and antiallodynic effect which could be reestablished with a second dose of cells when it began to vanish. The intravenous injection of 1 × 10 6 hASCs was able to completely abolish thermal hyperalgesia starting one day after the injection [46]. The effects of hASCs on thermal hyperalgesia seem to be more potent than those of NSC [10]. In fact, as shown in Figure 1(a), the withdrawal thresholds of hASC treated mice were overall higher than those of NSC treated mice, and 7 days after hASC injection thermal hyperalgesia was completely abolished, BioMed Research International 5 while, for allodynia, a comparable effect of the two cells is evident (Figure 1(b)). The effect on pain relief well correlates with a general systemic and injured nerve localized antiinflammatory effect of hASC. 
In fact, a significant increase of IL-10 serum concentration is already evident 1 day after hASC treatment; moreover at nerve site, the protein levels of IL-1, increased by the pathology, appeared normalized 1 day after the hASC injection, while the anti-inflammatory cytokine IL-10, decreased by CCI, gradually increased until reaching levels 3 times higher over control group [46]. The dose response effect, described for pain, was also evident for cytokines, indicating a clear correlation between pain relief and anti-inflammatory effect of hASCs. If we compare the effect on cytokines of hASC versus NSC, it is clear that the big difference between these two cell types regard their effect on IL-10. No changes at nerve site on IL-10 protein is evident seven days after NSC injection while, at the same time, IL-10 is strongly increased by hASC [46]. We assume that this effect, together with the general systemic anti-inflammatory one, could be responsible of the stronger antihyperalgesic effect of hASC. Besides the effects induced by hASC at nerve site we described also a normalization of the spinal cord iNOS protein level which is evident with a full neuropathic pain recovery. This paper clearly suggests a possible therapeutic use of hASC for neuropathic pain treatment. These same cells and hATSCs, human adipose tissuederived stem cells treated in vitro with ZnO shell nanoparticles in order to improve stem cell function, were recently used by In Choi et al. [49]; these cells, intrathecally injected, were able to reduce the pain consequent to a spinal cord injury by increasing the paw withdrawal thresholds to mechanical and thermal stimuli. Umbilical Cord-Derived Mesenchymal Stem Cells (UC-MSC). Human umbilical cord (UC) is a promising source of mesenchymal stem cells (MSC) and is nowadays under researchers' investigation. UC contains two umbilical arteries (UCAs) and one umbilical vein (UCV), both embedded within a specific mucous connective tissue, known as Wharton's jelly (WJ), which is covered by amniotic epithelium. MSC can be isolated from all these compartments by using different techniques; today it is still unclear which one is the best compartment in UC for clinical use. UC-MSC possess a gene expression profile similar to that of embryonic stem cells, but their collection procedure is considered ethically correct, and they are characterized by a faster self-renewal rate than MSC isolated, for example, from bone marrow. Moreover they have other attractive advantages which are summarized here: (1) a noninvasive collection procedure for autologous or allogeneic use; (2) a lower risk of infection; (3) a low risk of developing teratoma; (4) multipotency, and (5) low immunogenicity with a good immunosuppressive ability [50]. Roh and colleagues [51] recently investigated the therapeutic effect of transplanting human umbilical cord blood-derived mesenchymal stem cells (hUCB-MSC) or amniotic epithelial stem cells (hAESCs) on SCI-induced mechanical allodynia and thermal hyperalgesia in T13 spinal cord hemisected rats. Two weeks after SCI, hUCB-MSC or hAESC were transplanted around the spinal cord lesion site, and behavioral tests were performed; moreover, immunohistochemical and Western blot analyses were performed to evaluate possible therapeutic effects on SCI-induced inflammation and the nociceptive-related phosphorylation of the NMDA NR1 receptor subunit. 
The authors described only a weak antiallodynic effect of hUCB-MSC if compared to that of hAESCs and no effect on thermal hyperalgesia of either cell type. The antiallodynic effect of hAESCs is associated with a decrease in spinal cord microglia activity and NMDA receptor NR1 phosphorylation. In contrast to the weak efficacy of hUCB-MSC on pain symptoms, the group of Yang [52] using HUMSCs from Wharton's jelly of the umbilical cord transplanted into the spinal cord described a beneficial effect for wound healing and locomotor recovery after spinal cord injury in rats suggesting a potential use of these cells if not for pain at least for motor recovery. Bone Marrow Derived Mononuclear Cells An improvement in experimental neuropathic pain treatment was also obtained using other types of cells isolated from bone marrow and in particular by using bone marrow derived mononuclear cells. A paper of Klass et al. [53] described that the infusion (1 × 10 7 , i.v.) of rat marrow mononuclear cells, containing mixed stem cell populations, 10 days after rat CCI, was able to induce neuropathic pain recovery (both hyperalgesia and allodynia). The authors did not investigate into the mechanisms involved in such modulations. Freshly isolated rat bone marrow-derived mononuclear cells (BM-MNCs) were also used for contrasting diabetes neuropathy induced in rats by STZ [54]. Cells injected into the hind limb skeletal muscles two weeks after STZ were able to ameliorate mechanical hyperalgesia and cold allodynia in the BM-MNC-injected side. Furthermore, the slowed sciatic nerve conduction velocities (MNCV/SNCV) and decreased sciatic nerve blood flow in diabetic rats were improved in the BM-MNC-injected side. BM-MNC transplantation further decreased mRNA expression of NT-3 and number of microvessels in the hind limb. Conclusions In recent years, the possibility to apply stem cells for the treatment of neuropathic pain has attracted much attention, as demonstrated by the increasing number of preclinical studies in the literature (Table 1). In whole the preclinical data here reported suggest positive effects of stem cells for relieving experimental neuropathic pain. An interesting point that emerges from the detailed analysis of the preclinical data is that peripheral neuropathic pain seems to be more responsive to stem cell treatment than pain arising from central lesion such as spinal cord injury. Moreover in SCI, stem cell treatment is not always able to positively and contemporarily affect both pain symptoms and motor recovery, indicating that different mechanisms can underlie the different effects. It is important to underline that one of the main aspects concerning stem cells usage is both their fast onset and long lasting effect on pain relief; a single administration of cells is [54] in fact able to induce an antiallodynic and antihyperalgesic effect which persists for long time, as it is still present up to 90 days after injection [44]. Generally, the conventional [8] and the newer pharmacological strategies [55,56] for neuropathic pain treatment need a chronic treatment to be effective. The analgesic success of the commonly available drugs is often limited by side effects that appear increasing the administration dose or by the development of tolerance [8]. Moreover, in order to successfully approach this type of pain, patients often are treated with a combination of drugs with different mechanisms of action, increasing the risk of drug interaction and often reducing patient's compliance [9]. 
A more long lasting effect for some type of neuropathic pain such as low back pain or disk herniation can eventually be achieved by surgical approaches or epidural treatment, obviously exposing the patients to all the risks of the surgery. The clamorous effect of stem cells on pain relief in the preclinical tests may be related to their capacity to not only control pain as a symptom, but to act as disease modifier on the mechanisms at the basis of the development and maintenance of pain condition, for example, modulating the neuroimmune component which plays a relevant role in neuropathic pain. Despite these positive and encouraging considerations, there are many issues that need to be addressed and solved for a successful clinical translation. These points are well summarized in the review by Bonfield and Caplan [57] and include the classification of the cells, their efficacy and potency, their mode of administration, their dosage and their source, together with the final goal of the analysis, and the tracking of the stem cell. Among these, as emerged in this review, the route of administration of stem cells represents an important variable which may also influence the choice of the final number of cells to be injected. Strategies for local stem cell delivery can be applied to the treatment of well localized lesions but are, however, described to increase risks and side effects such as bleeding and tissue injury [58]; certainly, from a clinical point of view, a systemic delivery is attractive, given the broad biodistribution and easy access. On the other hand, we have to point out that this route is, in some cases, associated with a passive cell entrapment within tissues that do not represent the main target of treatment, which may potentially lead to unwanted effects and may be eventually associated to a reduced effect of the cells. The homing of stem cells after a systemic injection represents in fact a much debated topic. In our first paper we described the capacity of stem cells to specifically reach the damaged nerve [10]. Although we observed a low transplant efficiency, we did not find the cells into other critical tissues such as lungs. Also Maione's group [44] reported the ability of MSC to home central nervous system areas critically involved in NP signaling describing only a scarce presence of stem cell in the lungs. In general, however, other papers report a marked lung first passage effect of the cells which limits the number of cells which can reach the area of injury [59][60][61]. Overall the literature agrees with the general idea that stem cells, even in a limited number, can interact with the host cells and orchestrate a long lasting modulation resulting, most of the times, in a final beneficial therapeutic effect [10,44,58]. Another strictly related question is the toxicity and the possible malignant transformation and cytogenetic aberrations of stem cells. The literature quite agrees on the safety of stem cells [62,63] but by a careful analysis of the preclinical papers reported here, it emerges that this aspect has not been specifically or adequately considered. In our work [46] we injected different doses of hASC reaching the highest dose of 6 × 10 6 cells/mice and we did not register any macroscopic adverse effect: no animal died or changed its habits/behaviour and no side effects have been observed. 
The safety of a similar dose of hASC intravenously infused in animals and humans was also described by Ra and colleagues [64]; the authors did not register any side effect or tumor mass formation in the three months after cell infusion. Also the paper by Waterman et al. [45] described no premature mortality or morbidity due to MSC treatment and the necropsy of the cell treated animals revealed no macroscopic pathology of any of the major organs. In contrast, Djouad and colleagues [65] described an increase of tumor formation in animals likely due to the immunosuppressive effects of MSC, rather than to a direct transformation of stem cells in tumor cells. Even though, as discussed, there are still many open points that need better understanding, a clear trend to clinical use of stem cells also in treating pain is apparent, as demonstrated by a very recent and scientifically sound paper [66] that reported a preliminary human study in which the autologous administration of adipose derived stem cells in the facial tissue was able to attenuate orofacial neuropathic pain symptoms. The cells were injected perineurally directly into the center of origin of pain and in the adjacent pain field of the affected branches of the trigeminal nerve. The effect of the treatment was evident 6 months after cell injection and cells were described to be safe, well tolerated by the patients, and accompanied by a significant reduction of analgesic drug doses. What is clear is that the research on stem cells is evolving; newly discovered populations of stem cells begin to be characterized and used in the regenerative medicine. The bioactive molecules that can be released by these same stem cells are starting to be identified and are likely effectors/candidates for the therapeutic effect. As an example the beneficial role of the medium conditioned by MSC for improving motor recovery was recently described [67]. Finally several reports indicate that the regenerative [68] and immunomodulatory [69] effects of MSC can be partially reproduced by the microvesicles (MVs) that are shed by activated MSC and that can be isolated from their culture medium [69]. On the basis of these considerations it is to be expected that the panorama of neuropathic pain treatment will change again shortly.
2018-04-03T02:58:32.272Z
2014-08-13T00:00:00.000
{ "year": 2014, "sha1": "80cb46f83587bf07e593efca18d65806a1e14f87", "oa_license": "CCBY", "oa_url": "http://downloads.hindawi.com/journals/bmri/2014/470983.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "80cb46f83587bf07e593efca18d65806a1e14f87", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
82404576
pes2o/s2orc
v3-fos-license
Hybrid mesoporous silicates: A distinct aspect to synthesis and application for decontamination of phenols Water pollution due to organic compounds is of great concern and efforts are being made to develop efficient adsorbents for remediation of toxic pollutants. The development of new functionalized materials with increased performance is growing to meet the regulatory standards in response to public concerns for environment. In this study, an attempt has been made to investigate the influence of synthesis parameters like the reaction temperature, the surfactant-to-silica ratio and reaction time on the structural and textural properties of novel ordered mesoporous silica hybrids. In order to understand the effect of different synthesis parameters, all the prepared materials were systematically characterized by various analytical, spectroscopic and imaging techniques such as XRD, BET, TG etc. It was deduced from these studies that the synthesis temperature influence greatly the structural order whereas both the P104/Na2SiO3 molar ratio and reaction time found to influence textural properties significantly. However, under optimized experimental condition, we could achieve the functionalized silica hybrids that offers successful incorporation of -Amino, -Glucidoxy, -Methacrylate, -Vinyl and -Phenyl moieties indicated by FTIR peaks at 793 cm−1, 2870 cm−1, 796 cm−1, 1630 cm−1 and 954 cm−1. XRD studies reveal orthorhombic and tetragonal symmetry for the hybrids and these materials were found to be thermally stable due to incorporation of organic moiety in silica matrix. Functionalized silica hybrids then applied as adsorbents demonstrated efficient and comparable removal of 4-aminophenol and p-nitrophenol in 20 min facilitated through organic moiety. Detailed modeling of the sorption using equilibrium and kinetic isotherms has been carried out to get an insight into the transport process. The adsorption isotherms of phenol derivatives are well-fitted with the Langmuir, Freundlich and Temkin Isotherms and the adsorption kinetics follows the pseudo second order model. The modeling confirms that the uptake is a chemisorption process. Introduction The development of Pakistan is very closely related to its effort for environmental protection. In this regard, Pakistan Environment Protection Ordinance (PEPO) in 1983 defined the legal framework. Pakistan being a member of the third world countries is still lacking pollution abatement standards, environmental management infrastructure, political will and public awareness. Despite this, a group of scientists and researchers are in continuous effort to eliminate the wastefulness and shift to principles of sustainable development to protect the environment and entire ecology from harmful effects of a wide variety of toxic inorganic and organic chemicals discharged as industrial wastes, causing serious water, air, and soil pollution. In response to such challenges, efficient and cost-effective treatment technologies are widely investigated. Adsorption is a well-known separation method and recognized as one of efficient and economic methods for water decontamination applications. A major advantage of adsorption lies in the fact that the persistent compounds are removed, rather than being broken down to potentially dangerous metabolites that may be produced by oxidation and reductive processes (Valderrama et al., 2007;Naureen et al., 2014;Noor et al., 2015). 
A wide range of adsorbents are tested to be highly efficient due to simplicity, low cost, effectiveness and availability. In addition, the adsorbents can be regenerated by suitable desorption processes (Pan et al., 2009). Selection of novel adsorbents with multiple and diverse application range is a challenge. In the same spirit, promising organic-inorganic hybrids have been used for the removal of toxic species from wastewater (Wang et al., 2012(Wang et al., , 2011Gao et al., 2010;Zaitseva et al., 2013;Simsek et al., 2012;Suchithra et al., 2012;Repo et al., 2011;Ge et al., 2010;Pang et al., 2011). The characteristic feature of these compounds is the combined advantage of functional variation of organic materials and thermally stable inorganic substrate, resulting in strong binding affinities. Further, the Functionalized hybrids present the best properties of each of its components in a synergic way and have high performances of physical, chemical and mechanical properties (Mercier and Pinnavaia, 1998). Phenols are important aromatic compounds having antioxidant properties. It can inhibit the oxidative degradation of organic materials. Phenols are naturally constituent in a number of biological aerobic organisms such as human blood plasma, mammalian urine, pine needles and oil from tobacco leaves. Phenol derivatives such as a-tocopherol are a component of vitamin E, thymol and carvacrol are components of lignin, from which phenol is liberated by hydrolysis. Commercially, phenol is used in the production of phenolic resins phenol-formaldehyde resin called Bakelite phenolph-thalein used as an indicator. Dilute solutions of Phenols are useful antiseptics, since Phenols are more acidic than aliphatic alcohols. However, its toxic fumes cause kidney damage. It is likely that Phenol solution contains dioxins. Consequently, phenol has only limited use in pharmaceuticals today because of its toxicity. Phenols penetrate deep into the tissue, leading to gangrene through damage to blood vessels (Nair et al., 2008). Ingestion of phenols in concentration from 10 to 240 mg/l for a long time causes mouth irritation, diarrhea, and excretion of dark urine and vision problems (Navarro et al., 2008). Most of the phenol absorbed by the body is excreted in the urine as phenol and/or its metabolites. Only smaller quantities are excreted with feces or exhaled. The refractory Phenols form stable free radicals. Such a property is undesirable (Nguyen et al., 2003d) that makes it an important pollutant in wastewater. Phenols are released by industries that produce chlorophenols for use as fungicides and insecticides in agriculture sector. A number of phenolic compounds like chlorinated, nitrated, methylated or alkylated are prevalent in the environment due to chemical processing industries and use of numerous pesticides. For instance, N-acetylated Aminophenol is a component of Paracetamol (Nagaraja et al., 2003). 3-aminophenol, 2-aminophenol and 2.4-diaminophenol are used as biomarkers for analysis of drugs (Vecchia and Tavani, 1995), precursor for indole synthesis, preparation of hair dyes , respectively. Nitrophenols result due to environmental reaction of phenol with nitrite ions. Toxicity of Phenolic compounds is due to hydrophobic nature and property to generate reactive radicals. Phenolic compounds in potable water emit an unpleasant odor and flavor in concentration as low as 5 lg/l and are poisonous to aquatic life, plants and humans. 
Furthermore, the position of substitution in phenol molecule also affects the toxic action. p-Nitrophenol is classified as a priority pollutant and potential environmental toxicant (Leung et al., 1997) due to its rapid breakdown in water. The Maximum Contaminant Level of 1 lg/L in drinking water (US EPA, 1986) is defined for phenols. The significant environmental risks urge for the rapid removal and detoxification of phenols. Different physical, chemical and biological treatment processes are frequently employed (Zylstra et al., 2000;Takahashi et al., 1994). For instance, adsorption, ultrasonic irradiation and microwave assisted oxidation are preferred for removal of p-Nitrophenol. Further, different materials for adsorption are investigated by a number of researchers. Roostaei and Tezel (2004) applied activated carbons, activated alumina, filtrasorb-400, silica gel and zeolites for adsorption of phenol from water. Cardenas et al. (2005) attempted porous clay heterostructure as adsorbents for removal of phenol and dichlorophenols. The development of present research draws its essence from the increasing concentration of diverse toxic pollutants in different compartments of the environment. Further, there is a continuous effort to synthesize materials with multidimensional properties. The present research focuses on preparation of specialized materials based on silica (abundantly available natural precursor) with specific property to adsorb pollutants and can also be accepted as an economical and efficient substitute to conventional adsorbents (Surhio et al., 2014). The preference of silica over other materials for preparation of Hybrid materials is based on the fact that silicates reveal many advantages. Silica is transparent and does not scatter light. Silica show low optical loss in comparison to zirconia or titanium in its rutile phase. Silica has very high thermal resistance (Lygin, 1994). Organic-inorganic hybrids are preferred as attachment on silica surface is easier due to high number of cross-linking bonds (Arakaki et al., 2000). Immobilization of organic functional groups in the inorganic framework (Buszewski et al., 1998;Arakaki et al., 2000) of silica renders more stability. The functionalized silica hybrids are synthesized in the present work to explore the opportunities for optimal route of and factors (choice of precursor and surfactant) affecting the structure-activity properties and adsorption efficiency. The present study reports the removal of 4-amino phenol and p-Nitrophenol using the functionalized Mesoporous silica based hybrids as adsorbents. Material and methods This research is an attempt to synthesize functionalized hybrids with P104 as non-ionic structure directing agent and sodium silicate as silica precursor with five different organosilanes. The direct co-condensation synthesis procedure follows addition of surfactant (4.0 g of P104) and 8 g KCl in 100 mL of water and 15 mL of Acetic acid at room temperature. A known amount of precursor was added and pre-hydrolyzed for 2 h. Organosilane (1.08 g) with a known organic moiety (3-Aminopropyltrimethoxysilane, APTMS) was added to the mixture under stirring (20 h) at 60°C and static conditions of heating (at 100°C for 24 h) for functionalization. The material was collected by filtration, dried in air and extracted with ethanol. Excessive Pluronic was washed with ethanol to remove template, filtered and dried under vacuum at 100°C for 3 h (Da'na and Sayari, 2012). 
Characterization Each of the functionalized hybrid Mesoporous silica materials was comprehensively characterized to determine the surface and bulk characteristics by a wide range of techniques like Attenuated total reflectance (ATR) Infrared spectroscopy (Thermo Nicolet NEXUS 670 FTIR), SEM coupled with EDX (Hitachi SU-70 Analytical UHR FEG-SEM), XRD (PANalytical Empyrean) and TG/DTA (Perkin Elmer). Adsorption protocol A known mass (3 mg) of each silica based hybrid is added to a known concentration of phenol (4-aminphenol and p-nitrophenol) solution. The contact of hybrid (adsorbent) and phenol (adsorbate) is made for a known time (20 min) on the shaker (Lab-companion SK-300). An aliquot is drawn after every 2 min, filtered and analyzed on UV-Vis spectrophotometer (UV-1601, Shimadzu) at the wavelength (k max ) of 275 nm and 317 nm for 4-aminphenol and p-nitrophenol, respectively. The concentration uptake was determined from the calibration curve constructed for standard phenol solution of five known dilutions. The adsorbed concentration on the adsorbent or uptake on each hybrid is calculated by Eq. (1): where C i , C t and C e (mg/L) are the liquid-phase concentrations of adsorbate initially, at time t and at equilibrium, respectively. V is the volume (L) of the solution and W is the weight (g/L) of sorbent. Removal of Metals (% R) by the synthesized silica based hybrids is determined from the relation given in Eq. (2): Results and discussion The successful synthesis of functionalized hybrids is attributed to diffusion of organic moiety into silica framework. The surface and bulk characteristics are revealed by a wide range of techniques. ATR-FTIR ATR-FTIR analysis of functionalized hybrids demonstrated significant peaks at 795 cm À1 , 955 cm À1 , 1348 cm À1 and 1839 cm À1 suggesting successful incorporation of the organic moiety (see Fig. 1a-e). These peaks also indicate the linkage of organic moiety to Silicon atom containing -OH, -NH 2 , -OC‚O, and -CH‚CH 2 apical functional groups. This is also supported by the literature citing a number of such assignments for Mesoporous hybrid silica materials (Liu et al., 2002;Deng et al., 1995). Therefore, it is reasonable to consider that the reaction products obtained by the hydrolytic condensation are a mixture of species with different structures (Rikowski and Marsmann, 1997). The linkage of surface functional (amino) group in functionalized silica hybrids (AM) is clearly indicated by a characteristic IR peak for vibration of Si-CH 2 R (R = NH 2 ) at 793 cm À1 as shown in Fig. 1a. On the other hand, stretching of NH 2 as broad band (3250-3450 cm À1 ) and N-H deformation peaks are exhibited at 1640-1560 cm À1 . The organic moiety of methyl group is manifested as C-H stretching at 2940-2800 cm À1 and around 1450 cm À1 (Lee et al., 2001;Piers and Rochester, 1995). The introduction of organic moiety also results in a relative decrease in the silanol bands, with an associated increase in new bands characteristic of immobilized amino propyl groups. This is evident by a broader NH 2 symmetric stretching at 3264 cm À1 . Similar bands are identified in amino modified silica (OSU-6-W-APTMS-1) synthesized by grafting method. Methyacryl (MM) functionalized silica demonstrates characteristic absorption peak at 796 cm -1 assigned to -Si-OCH 3 (see Fig. 1c). This is also supported by (Mori and Pinnavaia, 2001;Yiu et al., 2001) for propyl methacrylate groups in two modified silica (OSU-6-W-TMSPMA-1 and OSU-6-W-TMSPMA-2). 
This also demonstrates that the efficiency of one-pot synthesis is comparable to the grafting method (Piers and Rochester, 1995). The disappearance or non-existence of the -CH3 bending mode vibration (at 1331 cm⁻¹) indicates that there is no unreacted -OCH3 (Lin et al., 2002). It further suggests a comparable, or rather better, efficacy of the direct co-condensation method (Qureshi et al., 2015). A sharp absorption band characteristic of non-hydrogen-bonded silanols (Zhao et al., 1997; Jentys et al., 1996) at 3746 cm⁻¹ disappears in the glucidoxy (GM) functionalized hybrids. New bands at 3696, 2935, 2971, 2870 and 1372 cm⁻¹ appear at the expense of the band at 3746 cm⁻¹. Further, the epoxy group identified at 2971 and 2870 cm⁻¹ may be assigned to νas(OC-H) and νs(OC-H) vibrations, respectively, in GPTMS (see Fig. 1b). Asymmetrical ring stretching at 794 cm⁻¹ is also observed. Similar stretching is also noted by other authors (Lwoswsi, 1984; Grasselli and Ritchey, 1975). The high percent transmittance may allow adding more functional groups onto the surface and the formation of a highly ordered structure (Kiyani et al., 2014). On the other hand, the functionalized silica hybrids also indicate the presence of a strong Si-O-Si stretching vibration band at 1103 cm⁻¹ due to the pure silica precursor (Deng et al., 2009). The vinyl group (VM) is depicted by the peak at 1630 cm⁻¹ due to the C=C stretching vibration, confirming that the vinyl group (-CH=CH2) exists and is connected to the silicon atom in the organosilane. Functionalization of silica with the organic moiety of phenyl (PM), distinguished by aromatic ring vibrations at 795 cm⁻¹ and 954 cm⁻¹, confirms that the phenyl ring is bonded to a silicon atom. In-plane deformation of monosubstituted phenyl rings is reported (Darga, 2007) at 1007 cm⁻¹. The absence of this peak also indicates that the presently synthesized hybrid is not monosubstituted. SEM/EDX The morphological features, including the shape and size of the functionalized hybrids, were assessed under a scanning electron microscope (SEM), which revealed interesting features (see Fig. 2a-e). It is understood that the morphology depends largely on the synthesis method, choice of ingredients and reaction conditions. The amino group (AM) induces a doughnut-shaped and rigorously blended morphology with the silica material. Stacking of the methacrylate hybrid (MM) is seen to turn into a ladder-like arrangement. However, spongy features are also visible, reflecting unreacted monodispersed particles. The proportion of unreacted particles may be attributed to relatively poor blending, imparting weaker linkage between the two constituents. Bean-shaped particles whirling around each other are demonstrated by the glucidoxy silica hybrid (GM), distorting the hexagonal shape of silica. Furthermore, a perforated layered structure and clearly distinguishable vinyl spheres are the hallmarks of the phenyl and vinyl hybrids, respectively (Khaskheli et al., 2015). The functionalized silica hybrids containing functional groups of amine, glucidoxy, methacrylate, vinyl and phenyl revealed interesting compositions, with Si, O and C as common elements (atom percentages are given in Table 3.1). In addition, the presence of nitrogen in the amine hybrid is a clear indication of successful impregnation of the organic moiety into the silica framework. To a close approximation, the Si and O contents show no significant variation with the change in organic moiety. This is based on the fact that the same silica source is used in each hybrid. The only variation is demonstrated by carbon, ranging from 0.78% to 14% in AM and MM, respectively.
The lower C content in the former is likely due to the presence of nitrogen. XRD The functionalized silica hybrids exhibited orthorhombic and tetragonal symmetries (see Table 3.2). Both are of higher symmetry than hexagonal, revealing extended network linkages in the functionalized hybrids. It might therefore be concluded that the addition of organosilanes to the basic framework of mesoporous silica helps in stabilizing the hybrids. This stability is manifested and supported by the results of thermogravimetric analysis. The literature (Wang et al., 2004) strongly supports the observation of the hkl indices 100, 110 and 200, characteristic of mesoporous silica (hexagonal geometry), even in the functionalized hybrids. The results of the present study are in accordance with the literature for the hybrid GM, whereas the functionalized hybrids AM, MM, VM and PM deviate from it. In addition to this agreement and disagreement of hkl indices, it is generally observed that the symmetry changes in each functionalized hybrid relative to mesoporous silica. This change is directly attributed to the incorporation of organic moieties in the hybrids. Further, the more stable symmetries of the hybrids are expected to exhibit better adsorption. TG/DTA Thermal analysis of the synthesized mesoporous silica on the basis of weight loss and enthalpy changes revealed important information. The recorded TG, DTG and DTA curves are summarized in Table 3.3. The functionalized silica hybrids revealed characteristic features on the TG/DTA curves. The apparent two-step decomposition is due to a larger initial moisture loss (Step 1) followed by the actual weight loss of the hybrid itself. The result is encouraging and supportive of successful synthesis and complete conversion of the ingredients into a unified product. Further, the single-entity decomposition also indicates the purity of the compound. It may be deduced that the functional group is successfully impregnated as a component of the Si framework (Batool et al., 2015). It is also noted that a relationship between weight loss and enthalpy cannot be inferred for the functionalized silica. This may be due to the fact that organic moieties having different apical functional groups are incorporated in varying proportions into or onto the surface of the material. Results of the thermal studies can conveniently be compared with the analyses done by a range of other characterization techniques. For example, the higher moisture loss in the functionalized hybrids is consistent with the broad -OH peak assigned in the ATR-FTIR studies. This is reflected by the initial weight loss attributed to the evolution of -OH groups as moisture (Ashraf et al. 2013a,b,c,d). Thermally stable oligomer-templated silicas with wormlike porous structures, similar to those obtained using TEOS, have been synthesized using sodium silicate as a silica source (Kim et al., 2000; Boissiere et al., 2000). Adsorption application for phenols decontamination The application of the synthesized mesoporous silica and functionalized hybrids is extended to the decontamination of phenolic compounds using a batch adsorption protocol. The results are demonstrated graphically to exhibit different relations as a function of time. The first observation is the insignificant role of varying contact time in the percentage adsorption. It is encouraging to report that appreciable quantities of the introduced phenols are adsorbed on each synthesized material. This is evident from the removal percentages of more than 90% achieved on mesoporous silica and the functionalized hybrids (Ashraf et al. 2013a,b,c,d).
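For readers who want to reproduce the uptake and removal figures quoted above, the short sketch below turns hypothetical UV-Vis absorbances into concentrations through a linear calibration curve and then applies Eqs. (1) and (2). The calibration slope and intercept, solution volume and adsorbent mass are illustrative placeholders rather than values measured in this study.

```python
# Illustrative batch-adsorption bookkeeping for Eqs. (1) and (2).
# All numbers below are hypothetical placeholders, not measured values.

def concentration_from_absorbance(absorbance, slope, intercept):
    """Convert a UV-Vis absorbance into mg/L via a linear calibration curve."""
    return (absorbance - intercept) / slope

def uptake_mg_per_g(c_initial, c_equilibrium, volume_l, mass_g):
    """Eq. (1): amount adsorbed per gram of hybrid, q_e = (C_i - C_e) * V / W."""
    return (c_initial - c_equilibrium) * volume_l / mass_g

def percent_removal(c_initial, c_equilibrium):
    """Eq. (2): %R = (C_i - C_e) / C_i * 100."""
    return (c_initial - c_equilibrium) / c_initial * 100.0

# Hypothetical calibration (slope, intercept) for p-nitrophenol at 317 nm
slope, intercept = 0.05, 0.002          # absorbance units per (mg/L), assumed
c_i = concentration_from_absorbance(1.00, slope, intercept)   # initial solution
c_e = concentration_from_absorbance(0.08, slope, intercept)   # after 20 min contact

q_e = uptake_mg_per_g(c_i, c_e, volume_l=0.005, mass_g=0.003)  # 5 mL, 3 mg hybrid
print(f"q_e = {q_e:.1f} mg/g, removal = {percent_removal(c_i, c_e):.1f}%")
```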
It is understood from the literature that the addition of a surfactant (SDS) reduces the adsorption of phenols (Shawabkeh and Abu-Nameh, 2007). However, the enhanced adsorption in the present study may be explained on the basis that the non-ionic surfactant (P104) facilitates adsorption in comparison to the ionic one (SDS). On the basis of the adsorption trends (see Fig. 3), the functionalized hybrids can conveniently be categorized as potential adsorbents for (a) 4-aminophenol and (b) p-nitrophenol. It is clearly distinguished that AM, GM and MM belong to class (a), while VM and PM belong to class (b). The postulate again stands true that the lower adsorption of 4-aminophenol owes to the unsaturation of the vinyl and phenyl groups (Ashraf et al. 2013a,b,c,d). Fig. 4 (a and b) clearly draws the comparative analysis of mesoporous silica and the functionalized hybrids as adsorbents for the removal of phenols with nitro and amino groups. The mesoporous silica develops an adsorption efficiency comparable to the functionalized hybrids (AM, GM and MM). This reflects that the organic moieties of -amino, -glucidoxy and -methacrylate do not contribute significantly to providing binding sites for phenol in the silica framework (Ashraf et al., 2012). At the other extreme, the deviation of the functionalized hybrids bearing the organic moieties of -phenyl (PM) and -vinyl (VM) is clearly visible. This deviation from the behaviour of mesoporous silica is more pronounced for the removal of p-nitrophenol than for 4-aminophenol. Thus, it can be concluded that p-nitrophenol develops retention comparable to and better than that on mesoporous silica for adsorbent classes (a) and (b), respectively. Equilibrium and kinetic study Isotherm and kinetic models are applied to explore the mode of adsorption of metal ions, phenols and PAHs onto functionalized silica-based hybrid surfaces. Batch sorption equilibrium dynamics Equilibrium isotherms are applied to gain insight into the sorption mechanism and to infer the surface properties and affinity of the adsorbents. Fitting the present data to the Langmuir, Freundlich and Temkin isotherms demonstrated R² ≈ 1 for each of the synthesized silica hybrids applied as adsorbents for the removal of both phenol derivatives, 4-aminophenol and p-nitrophenol (see Table 3.4). This confirms the adsorption of phenols on a homogeneous surface layer and also indicates multilayer adsorption for all the silica hybrids, resulting in an intricate adsorbate-adsorbent interaction. The homogeneity of the adsorbent structure is attributed to the silica commonly present in all the hybrids synthesized. However, heterogeneity is attributed to the induction of organic moieties within the silica network. The fit of the Temkin isotherm indicates a uniform distribution of the pollutant into the pores of the adsorbent. Overall, the results indicate that all adsorbents follow the three isotherms almost equally well, except that the Freundlich isotherm does not fit the adsorption of phenols on GM well, which may be attributed to pore blockage due to the bulkiness of the organic moiety, leading to less adsorption (Ashraf et al., 2011). Kinetic studies and adsorption capacity (qe) The validity of zero-order, pseudo-first-order and pseudo-second-order equations for phenol adsorption is explored from linear plots. The fit of the pseudo-second-order equation indicates the dependence of adsorption on more than one factor, which is among the parameters to be considered when designing a good adsorbent (Ashraf et al., 2010).
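The isotherm and kinetic fits summarized above can be reproduced with ordinary least-squares regressions on the linearized model forms. The sketch below does this for the Langmuir isotherm (Ce/qe = Ce/qmax + 1/(KL·qmax)) and the pseudo-second-order equation (t/qt = t/qe + 1/(k2·qe²)); the data arrays are hypothetical, and only the model equations themselves are standard.

```python
# Minimal sketch of the linearized Langmuir and pseudo-second-order fits.
# Data points are hypothetical; only the model equations are standard.
import numpy as np

def linear_fit(x, y):
    """Least-squares slope, intercept and R^2 for y = a*x + b."""
    a, b = np.polyfit(x, y, 1)
    y_hat = a * x + b
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return a, b, 1.0 - ss_res / ss_tot

# --- Langmuir isotherm: Ce/qe = Ce/qmax + 1/(KL*qmax) ---
Ce = np.array([1.2, 2.5, 4.0, 6.1, 8.3])       # mg/L at equilibrium (assumed)
qe = np.array([12.0, 18.5, 22.0, 25.5, 27.0])  # mg/g (assumed)
slope, intercept, r2 = linear_fit(Ce, Ce / qe)
qmax = 1.0 / slope
KL = 1.0 / (intercept * qmax)
print(f"Langmuir: qmax={qmax:.1f} mg/g, KL={KL:.3f} L/mg, R^2={r2:.3f}")

# --- Pseudo-second-order kinetics: t/qt = t/qe + 1/(k2*qe^2) ---
t = np.array([2, 4, 6, 8, 10, 20], dtype=float)        # min (assumed)
qt = np.array([14, 20, 23, 25, 26, 28], dtype=float)   # mg/g (assumed)
slope, intercept, r2 = linear_fit(t, t / qt)
qe_calc = 1.0 / slope
k2 = 1.0 / (intercept * qe_calc ** 2)
print(f"PSO: qe={qe_calc:.1f} mg/g, k2={k2:.4f} g/(mg*min), R^2={r2:.3f}")
```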
In the present study, the incorporation of different organic moieties into the silica network for functionalization provides more binding sites. The optimization of the available binding sites for adsorption is one of the parameters evaluated for the development of good adsorbents. For phenols, the adsorption capacity (qe) values signify that the synthesized silica hybrid adsorbents can be classified as good in adsorption efficiency (23-30 mg/g). The general sequence of adsorbent efficiency of the silica hybrids for phenol removal follows: AM > MM > GM > PM > VM. The study of adsorption capacity suggests the possible and potential application of each functionalized silica hybrid as a successful adsorbent for the remediation of phenol pollution. Conclusions ➢ The synthesis of hybrids with diverse organic moieties is significant in providing opportunities to understand the role of each component in the hybrid. It also broadens the scope for application as potential adsorbents for the removal of organic pollutants. ➢ The emergence of new intense peaks in the XRD spectra indicates the interaction of the organic moiety with silica, and the crystal symmetry ranges from tetragonal to orthorhombic. ➢ EDX of the functionalized hybrids containing amine, glucidoxy, methacrylate, vinyl and phenyl groups shows Si, O and C as common elements. In addition, the presence of nitrogen in the amine hybrid is a clear indication of successful impregnation of the amine moiety into the silica framework. ➢ The thermal degradation studies conclude that the complexation of mesoporous silica with organic functional groups gives strength and thermal stability to the hybrids. ➢ The functionalized hybrids can conveniently be categorized as potential adsorbents for (a) 4-aminophenol and (b) p-nitrophenol. It is clearly distinguished that AM, GM and MM belong to class (a), while VM and PM belong to class (b). The postulate stands true that the lower adsorption of 4-aminophenol is due to the unsaturation of the vinyl and phenyl groups. ➢ The adsorption isotherms of the phenol derivatives are well fitted by the Langmuir, Freundlich and Temkin isotherms, and the adsorption kinetics follows the pseudo-second-order model. The modeling confirms that the uptake is a chemisorption process.
2018-12-14T17:59:37.014Z
2015-08-29T00:00:00.000
{ "year": 2015, "sha1": "bde84732d859589f2681fbad4cc8735333e7b90a", "oa_license": "CCBYNCND", "oa_url": "https://doi.org/10.1016/j.sjbs.2015.08.014", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "5e99e0f5b8945b90fad9f64f5d082f55c77e7fa5", "s2fieldsofstudy": [ "Chemistry" ], "extfieldsofstudy": [ "Medicine", "Materials Science" ] }
8274604
pes2o/s2orc
v3-fos-license
Long-term Results of Transcatheter Closure of Patent Ductus Arteriosus in Adolescents and Adults with Amplatzer Duct Occluder Background: Transcatheter closure of patent ductus arteriosus (PDA) with the Amplatzer ductal occluder (ADO) has become a standard procedure in most pediatric patients. However, experience in adults and adolescents is limited. Our experience of transcatheter closure of PDA with ADO in adolescents and adults is presented in this study. Aims: The aim of this study was to investigate long-term outcomes of transcatheter closure of PDA in adolescents and adults with ADO. Materials and Methods: In this study, 69 patients (52 females and 17 males) with PDA underwent transcatheter closure between May 2004 and October 2012. The procedure was performed under fluoroscopic guidance. Chest radiograph, electrocardiogram, transthoracic echocardiography (TTE), and clinical assessment of the patients were conducted before the procedure. Clinical and echocardiographic follow-ups were performed on day 1 of the 1st month, 6th month, and 12th month and then yearly after the procedure. Results: The mean and standard deviation age of the patients was 18.08 ± 7.25 years (ranging 10-38 years). The mean and standard deviation angiographic diameter of PDA was 7.78 ± 2.78 mm. The mean and standard deviation size of the implanted device was 9.3 ± 2.9. The mean and standard deviation average pulmonary artery pressure was 32.1 ± 14.2 mmHg. The mean pulmonary flow/systemic flow ratio was 2.2 ± 0.61. The devices were successfully implanted in all patients (100%). Immediately after device implantation, 47 patients had residual shunts. The residual shunts disappeared in all the patients, except for one that lingered until 24 h after the procedure. No severe complication occurred at the immediate and long-term follow-ups. Conclusions: The long-term results suggested that transcatheter closure of PDA with ADO is a safe and effective treatment for adolescents and adults with PDA. Low complication rates and short hospital stays make this procedure the treatment of choice in most cardiovascular centers worldwide. Introduction Patent ductus arteriosus (PDA) is an abnormally persistent arterial connection after birth between the descending aorta distal to the subclavian artery and the junction of the main and left pulmonary artery (LPA) branch.PDA accounts for approximately the ADO is the most commonly used device for adults with PDA. [4]In the current study, the long-term results of transcatheter closure of PDA with ADO in adult and adolescent patients are described. Materials and Methods The study was approved by the Ethics Committee of Shahid Sadoughi University of Medical Sciences, Yazd, Iran.The informed written consent was obtained from the patients or their parents prior to the procedure. 
A total of 69 (52 female and 17 male) adolescent and adult patients with PDA undergoing transcatheter closure of PDA were included in the study between May 2004 and October 2012. Electrocardiography, chest radiography, TTE, and clinical assessment were conducted prior to the procedure. Associated anomalies included bicuspid aortic valve with mild stenosis (two cases) and perimembranous ventricular septal defect (VSD) (one case). One of the patients had residual PDA with a moderate shunt after surgical ligation. Right and left heart catheterization was performed under local or general anesthesia. All patients received prophylactic antibiotic treatment with 30 mg/kg cefazolin (maximum 1 g) 30 min prior to catheterization; two subsequent doses were repeated at 8 h and 16 h after the procedure, and 100 international units (IU)/kg (maximum 5,000 IU) of sodium heparin was administered after catheterization of the femoral artery. A monoplane left anteroposterior or lateral descending aortogram was performed to outline the ductus and obtain the shape, length, aortic ampulla, and diameter at the narrowest part and the center of the PDA [Figure 1]. Statistical study The data were analyzed using the SPSS statistical software, version 15.0.0 for Windows (SPSS, Chicago, IL, USA). Using descriptive statistics, the results are expressed as mean ± SD, percentage, median, and range. Results Demographic and catheterization data of the patients are summarized in Table 1. All patients were successfully implanted with the ADO devices (ADO I: Nitinol plug device). Angiography at the end of the procedure showed complete occlusion in 16 patients (30.8%) and residual shunt in 47 patients (68.1%). Among these 47 patients, 36 had a trivial residual shunt with foaming through the device and with a contrast jet <1 mm, 10 had a small residual shunt (left-to-right shunt with a contrast jet >1 mm and <2 mm in diameter), and one had a moderate shunt (left-to-right shunt with a contrast jet >2 mm and <4 mm in diameter). On physical examination, the continuous murmur disappeared completely, except in the case of one patient [Table 1]. At 24 h after the procedure, transthoracic color Doppler echocardiography showed complete occlusion in all but one patient (98%), who had a moderate residual shunt. In this patient, the residual shunt had not resolved at the 3-month, 6-month, and 12-month follow-ups, and, therefore, the patient was referred for reinterventional treatment. The patient refused repeat transcatheter therapy because of pregnancy. During the 41.9 ± 36.3-month period of follow-up (range 17-101 months), no late complication or abnormality, such as device migration, recanalization, hemolysis, endocarditis, or device-related obstruction of the LPA or the descending aorta, was observed. The ADO chosen was 1-2 mm larger than the narrowest diameter of the PDA and was deployed through a venous approach. The technique of transcatheter closure of PDA with ADO was previously described by Thanopoulos et al. [5] Before and immediately after release of the ADO, an aortogram was performed to evaluate the position of the device, residual shunt, and aortic obstruction [Figure 2]. Pressure pullback from the ascending aorta and LPA was obtained to exclude a significant pressure gradient. All the patients had complete transthoracic echocardiographic evaluations prior to discharge. The evaluations were performed at 1 month, 6 months, and 12 months after the procedure and yearly thereafter.
Special attention was paid to residual shunt, LPA stenosis, and aortic obstruction. Bacterial endocarditis prophylaxis would be discontinued at the 6-month follow-up if the ductus was completely occluded. Discussion The current study evaluated transcatheter closure of PDA in adults and adolescents during a period of 46 months of follow-up. The PDA should be closed when diagnosed during childhood or adulthood; otherwise it leads to left atrial and ventricular volume overload, pulmonary hypertension, infective endocarditis, aneurysm formation, calcification, and, rarely, rupture. In adults, the treatment of silent PDA with trivial shunting remains controversial. [2] Surgical closure of PDA has been the gold standard since 1939, especially for a large PDA. [6,7] Transcatheter occlusion of PDA has largely replaced surgical ligation in the management of adult patients with PDA. In the case of a calcified ductus arteriosus with pulmonary hypertension, transcatheter closure is a low-risk procedure frequently offered over surgical repair, which frequently involves cardiopulmonary bypass with an anterior approach through a median sternotomy. [8] Currently, transcatheter closure of PDA has been established to be the technique of choice for managing PDA in adults and adolescents, with excellent outcomes. However, surgical closure is still the technique of choice for treating very large PDAs or PDAs not curable with transcatheter intervention. [8-19] In our study, no severe complication occurred. The incidence of residual shunts at the short-term, mid-term, and long-term follow-ups was 1.9%. The incidence of residual shunts in late follow-up was reported to be 0-5%. [14,17,18] Device embolization occasionally occurs, necessitating surgical removal or transcatheter retrieval. Device embolization is one of the most important complications of transcatheter occlusion of PDA. [14,16-18] In the present study, no device embolization occurred. Although LPA obstruction is one of the most significant complications, it is not a concern in adults because of the large diameter of the pulmonary artery branches. [2] In our study, LPA obstruction was not found. Hemolysis and infective endocarditis following transcatheter device closure are rare complications. [4] Other late complications associated with the ADO were not observed in our study. Overall, the PDA occlusion rate in our study was as high as 98.1% after the short-term, mid-term, and long-term follow-ups. Our study thus demonstrated the feasibility of and success in the treatment of PDA with the transcatheter interventional approach in adolescent and adult patients. Conclusion The findings from our study showed that transcatheter closure of PDA with ADO was very efficient and safe when used in adolescents and adults, with excellent and satisfactory short-term, mid-term, and long-term results. The minimal incidence of complications and residual shunts makes this device ideal for the transcatheter closure of PDA in adolescents and adults. Transcatheter closure of PDA with ADO should be the first choice for treating PDA in adolescents and adults.
Figure 1: Descending aortogram in the lateral projection showing a large-sized PDA. Abbreviations: SD = standard deviation, QP/QS = pulmonary flow/systemic flow, Sys PAP = systolic pulmonary artery pressure, Dia PAP = diastolic pulmonary artery pressure, PT = procedure time, FT = fluoroscopy time, FU = follow-up. Figure 2: (a) Descending aortogram in the lateral view before the release of the ADO showing a trivial mesh leak shunt; (b) immediately after the release of the ADO, showing the absence of residual shunt; (c) lateral chest radiograph showing the radiologic appearance of the ADO after release.
2018-04-03T00:05:51.600Z
2015-05-01T00:00:00.000
{ "year": 2015, "sha1": "7a8fdcf4c4a9823073407609e8e626da924157f3", "oa_license": "CCBYNCSA", "oa_url": "https://europepmc.org/articles/pmc4462816", "oa_status": "GREEN", "pdf_src": "PubMedCentral", "pdf_hash": "f5f561cef8d23514edd31071b01421943a0ed642", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
12064136
pes2o/s2orc
v3-fos-license
Semi-supervised Semantic Role Labeling Using the Latent Words Language Model Semantic Role Labeling (SRL) has proved to be a valuable tool for performing automatic analysis of natural language texts. Currently however, most systems rely on a large training set, which is manually annotated, an effort that needs to be repeated whenever different languages or a different set of semantic roles is used in a certain application. A possible solution for this problem is semi-supervised learning, where a small set of training examples is automatically expanded using unlabeled texts. We present the Latent Words Language Model, which is a language model that learns word similarities from unlabeled texts. We use these similarities for different semi-supervised SRL methods as additional features or to automatically expand a small training set. We evaluate the methods on the PropBank dataset and find that for small training sizes our best performing system achieves an error reduction of 33.27% F1-measure compared to a state-of-the-art supervised baseline. Introduction Automatic analysis of natural language is still a very hard task to perform for a computer. Although some successful applications have been developed (see for instance (Chinchor, 1998)), implementing an automatic text analysis system is still a labour and time intensive task. Many applications would benefit from an intermediate representation of texts, where an automatic analysis is already performed which is sufficiently general to be useful in a wide range of applications. Syntactic analysis of texts (such as Part-Of-Speech tagging and syntactic parsing) is an example of such a generic analysis, and has proved useful in applications ranging from machine translation (Marcu et al., 2006) to text mining in the bio-medical domain (Cohen and Hersh, 2005). A syntactic parse is however a representation that is very closely tied with the surface-form of natural language, in contrast to Semantic Role Labeling (SRL) which adds a layer of predicate-argument information that generalizes across different syntactic alternations (Palmer et al., 2005). SRL has received a lot of attention in the research community, and many systems have been developed (see section 2). Most of these systems rely on a large dataset for training that is manually annotated. In this paper we investigate whether we can develop a system that achieves state-of-the-art semantic role labeling without relying on a large number of labeled examples. We aim to do so by employing the Latent Words Language Model that learns latent words from a large unlabeled corpus. Latent words are words that (unlike observed words) did not occur at a particular position in a text, but given semantic and syntactic constraints from the context could have occurred at that particular position. In section 2 we revise existing work on SRL and on semi-supervised learning. Section 3 outlines our supervised classifier for SRL and section 4 discusses the Latent Words Language Model. In section 5 we will combine the two models for semisupervised role labeling. We will test the model on the standard PropBank dataset and compare it with state-of-the-art semi-supervised SRL systems in section 6 and finally in section 7 we draw conclusions and outline future work. Gildea and Jurafsky (2002) were the first to describe a statistical system trained on the data from the FrameNet project to automatically assign semantic roles. 
This approach was soon followed by other researchers (Surdeanu et al., 2003;Pradhan et al., 2004;Xue and Palmer, 2004), focus-ing on improved sets of features, improved machine learning methods or both, and SRL became a shared task at the CoNLL 2004, 2005 and 2008 conferences 1 . The best system (Johansson and Nugues, 2008) in CoNLL 2008 achieved an F1measure of 81.65% on the workshop's evaluation corpus. Related work Semi-supervised learning has been suggested by many researchers as a solution to the annotation bottleneck (see (Chapelle et al., 2006;Zhu, 2005) for an overview), and has been applied successfully on a number of natural language processing tasks. Mann and McCallum (2007) apply Expectation Regularization to Named Entity Recognition and Part-Of-Speech tagging, achieving improved performance when compared to supervised methods, especially on small numbers of training data. Koo et al. (2008) present an algorithm for dependency parsing that uses clusters of semantically related words, which were learned in an unsupervised manner. There has been little research on semi-supervised learning for SRL. We refer to He and Gildea (2006) who tested active learning and co-training methods, but found little or no gain from semi-supervised learning, and to Swier and Stevenson (2004), who achieved good results using semi-supervised methods, but tested their methods on a small number of Verb-Net roles, which have not been used by other SRL systems. To the best of our knowledge no system was able to reproduce the successful results of (Swier and Stevenson, 2004) on the PropBank roleset. Our approach most closely resembles the work of Fürstenau and Lapata (2009) who automatically expand a small training set using an automatic dependency alignment of unlabeled sentences. This method was tested on the FrameNet corpus and improved results when compared to a fully-supervised classifier. We will discuss their method in detail in section 5. Fillmore (1968) introduced semantic structures called semantic frames, describing abstract actions or common situations (frames) with common roles and themes (semantic roles). Inspired by this idea different resources were constructed, including FrameNet (Baker et al., 1998) and PropBank (Palmer et al., 2005). An alternative approach to semantic role labeling is the framework developed 1 See http://www.cnts.ua.ac.be/conll/ for an overview. by Halliday (1994) and implemented by Mehay et al. (2005). PropBank has thus far received the most attention of the research community, and is used in our work. PropBank The goal of the PropBank project is to add semantic information to the syntactic nodes in the English Penn Treebank. The main motivation for this annotation is the preservation of semantic roles across different syntactic realizations. Take for instance the sentences 1. The window broke. 2. John broke the window. In both sentences the constituent "the window" is broken, although it occurs at different syntactic positions. The PropBank project defines for a large collection of verbs (excluding auxiliary verbs such as "will", "can", ...) a set of senses, that reflect the different meanings and syntactic alternations of this verb. Every sense has a number of expected roles, numbered from Arg0 to Arg5. A small number of arguments are shared among all senses of all verbs, such as temporals (Arg-TMP), locatives (Arg-LOC) and directionals (Arg-DIR). 
Additional to the frame definitions, PropBank has annotated a large training corpus containing approximately 113.000 annotated verbs. An example of an annotated sentence is Here BREAK.01 is the first sense of the "break" verb. Note that (1) although roles are defined for every frame separately, in reality roles with identical names are identical or very similar for all frames, a fact that is exploited to train accurate role classifiers and (2) semantic role labeling systems typically assume that a frame is fully expressed in a single sentence and thus do not try to instantiate roles across sentence boundaries. Although the original PropBank corpus assigned semantic roles to syntactic phrases (such as noun phrases), we use the CoNLL dataset, where the PropBank corpus was converted to a dependency representation, assigning semantic roles to single (head) words. Features In this section we discuss the features used in the semantic role labeling system. All features but the Split path feature are taken from existing semantic role labeling systems, see for example (Gildea and Jurafsky, 2002;Lim et al., 2004;Thompson et al., 2006). The number in brackets denotes the number of unique features for that type. Word We split every sentence in (unigram) word tokens, including punctuation. (37079) Stem We reduce the word tokens to their stem, e.g. "walks" -> "walk". (28690) POS The part-of-speech tag for every word, e.g. "NNP" (for a singular proper noun). (77) Neighbor POS's The concatenated part-ofspeech tags of the word before and the word just after the current word, e.g. "RBS_JJR". Path This important feature describes the path through the dependency tree from the current word to the position of the predicate, e.g. "coord↑obj↑adv↑root↓dep↓nmod↓pmod", where '↑' indicates going up a constituent and '↓' going down one constituent. Split Path Because of the nature of the path feature, an explosion of unique features is found in a given data set. We reduce this by splitting the path in different parts and using every part as a distinct feature. We split, for example, the previous path in 6 different features: "coord", "↑obj", "↑adv", "↑root", "↓dep", "↓nmod", "↓pmod". Note that the split path feature includes the POS feature, since the first component of the path is the POS tag for the current word. This feature has not been used previously for semantic role detection. For every word w i in the training and test set we construct the feature vector f(w i ), where at every position in this vector 1 indicates the presence for the corresponding feature and 0 the absence of that feature. Discriminative model Discriminative models have been found to outperform generative models for many different tasks including SRL (Lim et al., 2004). For this reason we also employ discriminative models here. The structure of the model was inspired by a similar (although generative) model in (Thompson et al., 2006) where it was used for semantic frame classification. The model ( fig. 1) assumes that the role label r i j for the word w i is conditioned on the features f i and on the role label r i−1 j of the previous word and that the predicate label p j for word w j is conditioned on the role labels R j and on the features f j . This model can be seen as an extension of the standard Maximum Entropy Markov Model (MEMM, see (Ratnaparkhi, 1996)) with an extra dependency on the predicate label, we will henceforth refer to this model as MEMM+pred. 
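As an illustration of the Split Path feature described above, the sketch below decomposes a dependency path of the kind shown in the text into its individual components and maps them onto a binary feature vector. The function names and the fixed component inventory are illustrative assumptions, not part of the original system.

```python
# Sketch of the "Split Path" feature: break a dependency path such as
# "coord↑obj↑adv↑root↓dep↓nmod↓pmod" into one feature per step.
# Function names and the fixed inventory below are illustrative only.
import re

def split_path_features(path: str) -> list[str]:
    """Return the path split into its components, e.g.
    'coord↑obj↑adv' -> ['coord', '↑obj', '↑adv']."""
    # Keep each arrow attached to the label that follows it, as in the example.
    parts = re.split(r"(?=[↑↓])", path)
    return [p for p in parts if p]

def path_to_binary_features(path: str, known_parts: list[str]) -> list[int]:
    """Binary indicator vector over a fixed inventory of path components."""
    present = set(split_path_features(path))
    return [1 if part in present else 0 for part in known_parts]

path = "coord↑obj↑adv↑root↓dep↓nmod↓pmod"
print(split_path_features(path))
# ['coord', '↑obj', '↑adv', '↑root', '↓dep', '↓nmod', '↓pmod']

inventory = ["coord", "↑obj", "↑adv", "↑root", "↓dep", "↓nmod", "↓pmod", "↑sbj"]
print(path_to_binary_features(path, inventory))
# [1, 1, 1, 1, 1, 1, 1, 0]
```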
To estimate the parameters of the MEMM+pred model we turn to the successful Maximum Entropy (Berger et al., 1996) parameter estimation method. The Maximum Entropy principle states that the best model given the training data is the model such that the conditional distribution defined by the model has maximum entropy subject to the constraints represented by the training examples. There is no closed form solution to find this maximum and we thus turn to an iterative method. In this work we use Generalized Iterative Scaling 2 , but other methods such as (quasi-) Newton optimization could also have been used. Rationale As discussed in sections 1 and 3 most SRL systems are trained today on a large set of manually annotated examples. PropBank for example contains approximately 50000 sentences. This manual annotation is both time and labour-intensive, and needs to be repeated for new languages or for new domains requiring a different set of roles. One approach that can help to solve this problem is semi-supervised learning, where a small set of annotated examples is used together with a large set of unlabeled examples when training a SRL model. Manual inspection of the results of the supervised model discussed in the previous section showed that the main source of errors was incorrect labeling of a word because the word token did not occur, or occurred only a small number of times in the training set. We hypothesize that knowledge of semantic similar words could overcome this problem by associating words that occurred infrequently in the training set to similar words that occurred more frequently. Furthermore, we would like to learn these similarities automatically, to be independent of knowledge sources that might not be available for all languages or domains. The Distributional Hypothesis, supported by theoretical linguists such as Harris (1954), states that words that occur in the same contexts tend to have similar meanings. This suggests that one can learn the similarity between two words automatically by comparing their relative contexts in a large unlabeled corpus, which was confirmed by different researchers (e.g. (Lin, 1998;McDonald and Ramscar, 2001;Grefenstette, 1994)). Different methods for computing word similarities have been proposed, differing between methods to represent the context (using dependency relationship or a window of words) and between methods that, given a set of contexts, compute the similarity between different words (ranging from cosine similarity to more complex metrics such as the Jaccard index). We refer to (Lin, 1998) for a comparison of the different similarity metrics. In the next section we propose a novel method to learn word similarities, the Latent Words Language Model (LWLM) (Deschacht and Moens, 2009). This model learns similar words and learns the a distribution over the contexts in which certain types of words occur typically. Definition The LWLM introduces for a text T = w 1 ...w N of length N for every observed word w i at position i a hidden variable h i . The model is a generative model for natural language, in which the latent variable h i is generated by its context C(h i ) and the observed word w i is generated by the latent variable h i . In the current model we assume that the context is C( the two previous words and h i+2 i+1 = h i+1 h i+2 is the two next words. The observed w i has a value from the vocabulary V , while the hidden variable h i is unknown, and is modeled as a probability distribution over all words of V . 
We will see in the next section how this distribution is estimated from a large unlabeled training corpus. The aim of this model is to estimate, at every position i, a distribution for h i , assigning high probabilities to words that are similar to w i , given the context of this word C(h i ), and low probabilities to words that are not similar to w i in this context. A possible interpretation of this model states that every hidden variable h i models the "meaning" for a particular word in a particular context. In this probabilistic model, when generating a sentence, we generate the meaning of a word (which is an unobserved representation) with a certain probability, and then we generate a certain observation by writing down one of the possible words that express this meaning. Creating a representation that models the meaning of a word is an interesting (and controversial) topic in its own right, but in this work we make the assumption that the meaning of a particular word can be modeled using other words. Modeling the meaning of a word with other words is not an unreasonable one, since it is already employed in practice by humans (e.g. by using dictionaries and thesauri) and machines (e.g. relying on a lexical resource such as WordNet) in word sense disambiguation tasks. Parameter estimation As we will further see the LWLM model has three probability distributions: P(w i |h i ), the probability of the observed word w j given the latent variable h j , P(h i |h i−1 i−2 ), the probability of the hidden word h j given the previous variables h j−2 and h j−1 , and P(h i |h i+2 i+1 ), the probability of the hidden word h j given the next variables h j+1 and h j+2 . These distributions need to be learned from a training text T train =< w 0 ...w z > of length Z. The Baum-Welch algorithm The attentive reader will have noticed the similarity between the proposed model and a standard second-order Hidden Markov Model (HMM) where the hidden state is dependent on the two previous states. However, we are not able to use the standard Baum-Welch (or forward-backward) algorithm, because the hidden variable h i is modeled as a probability distribution over all words in the vocabulary V . The Baum-Welch algorithm would result in an execution time of O(|V | 3 NG) where |V | is the size of the vocabulary, N is the length of the training text and G is the number of iterations needed to converge. Since in our dataset the vocabulary size is more than 30K words (see section 3.2), using this algorithm is not possible. Instead we use techniques of approximate inference, i.e. Gibbs sampling. Initialization Gibbs sampling starts from a random initialization for the hidden variables and then improves the estimates in subsequent iterations. In preliminary experiments it was found that a pure random initialization results in a very long burn-in-period and a poor performance of the final model. For this reason we initially set the distributions for the hidden words equal to the distribution of words as given by a standard language model 3 . Gibbs sampling We store the initial estimate of the hidden variables in M 0 train =< h 0 ...h Z >, where h i generates w i at every position i. Gibbs sampling is a Markov Chain Monte Carlo method that updates the estimates of the hidden variables in a number of iterations. M τ train denotes the estimate of the hidden variables in iteration τ. 
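The exact update used at each position is spelled out in the next paragraph; as a rough orientation only, a single Gibbs sweep over the latent words might be organized as in the sketch below. The probability tables are passed in as opaque functions standing for P(w|h) and the two context distributions, and details such as smoothing, sentence boundaries and the burn-in schedule are deliberately omitted, so this is an illustrative simplification rather than the authors' implementation.

```python
# Rough sketch of one Gibbs-sampling sweep over the latent words h_0..h_Z.
# p_w_given_h, p_h_given_prev and p_h_given_next stand in for P(w|h),
# P(h | h_{i-2} h_{i-1}) and P(h | h_{i+1} h_{i+2}); how they are counted
# and smoothed is not shown here (see the description that follows).
import random

def sample_position(j, words, hidden, vocab,
                    p_w_given_h, p_h_given_prev, p_h_given_next):
    """Resample the hidden word at position j from its (unnormalized) conditional."""
    prev_ctx = (hidden[j - 2], hidden[j - 1])
    next_ctx = (hidden[j + 1], hidden[j + 2])
    scores = []
    for h in vocab:
        score = (p_w_given_h(words[j], h)
                 * p_h_given_prev(h, prev_ctx)
                 * p_h_given_next(h, next_ctx))
        scores.append(score)
    total = sum(scores)
    return random.choices(vocab, weights=scores)[0] if total > 0 else hidden[j]

def gibbs_sweep(words, hidden, vocab,
                p_w_given_h, p_h_given_prev, p_h_given_next):
    """Visit interior positions in random order and resample each hidden word."""
    positions = list(range(2, len(words) - 2))
    random.shuffle(positions)
    for j in positions:
        hidden[j] = sample_position(j, words, hidden, vocab, p_w_given_h,
                                    p_h_given_prev, p_h_given_next)
    return hidden
```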
In every iteration a new estimate M τ+1 train is generated from the previous estimate M τ train by selecting a random position j and updating the value of the hidden variable at that position. The probability distributions P τ (w j |h j ), P τ (h j |h j−1 j−2 ) and P τ (h j |h j+2 j+1 ) are constructed by collecting the counts from all positions i = j. The hidden variable h j is dependent on h j−2 , h j−1 , h j+1 , h j+2 and w j and we can compute the distribution of possible values for the variable h j as We set P(h j |h j−1 j−2 h j+2 j+1 ) = P(h j |h j−1 j−2 ) · P(h j |h j+2 j+1 ) which can be easily computed given the above dis-tributions. We select a new value for the hidden variable according to P τ (h j |w j , h j−1 0 , h Z j+1 ) and place it at position j in M τ+1 train . The current estimate for all other unobserved words remains the same. After performing this iteration a large number of times (|V | * 10 in this experiment), the distribution approaches the true maximum likelihood distribution. Gibbs sampling however samples this distribution, and thus will never reach it exactly. A number of iterations (|V | * 100) is then performed in which Gibbs sampling oscillates around the correct distribution. We collect independent samples of this distribution every |V | * 10 iterations, which are then used to construct the final model. Evaluation of the Language Model A first evaluation of the quality of the automatically learned latent words is by translation of this model into a sequential language model and by measuring its perplexity on previously unseen texts. In (Deschacht and Moens, 2009) we perform a number of experiments, comparing different corpora (news texts from Reuters and from Associated Press, and articles from Wikipedia) and n-gram sizes (3-gram and 4-gram). We also compared the proposed model with two state-ofthe-art language models, Interpolated Kneser-Ney smoothing and fullibmpredict (Goodman, 2001), and found that LWLM outperformed both models on all corpora, with a perplexity reduction ranging between 12.40% and 5.87%. These results show that the estimated distributions over latent words are of a high quality and lead us to believe they could be used to improve automatic text analysis, like SRL. Role labeling using latent words The previous section discussed how the LWLM learns similar words and how these similarities improved the perplexity on an unseen text of the language model derived from this model. In this section we will see how we integrate the latent words model in two novel semi-supervised SRL models and compare these with two state-of-the-art semisupervised models for SRL and dependency parsing. Latent words as additional features In a first approach we estimate the distribution of latent words for every word for both the training and test set. We then use the latent words at every position as additional probabilistic features for the discriminative model. More specifically, we append |V | extra values to the feature vector f(w j ), containing the probability distribution over the |V | possible words for the hidden variable h i 4 . We call this the LWFeatures method. This method has the advantage that it is simple to implement and that many existing SRL systems can be easily extended by adding additional features. We also expect that this method can be employed almost effortless in other information extraction tasks, such as Named Entity Recognition or Part-Of-Speech labeling. We compare this approach to the semisupervised method in Koo et al. 
(2008) who employ clusters of related words constructed by the Brown clustering algorithm (Brown et al., 1992) for syntactic processing of texts. Interestingly, this clustering algorithm has a similar objective as LWLM since it tries to optimize a class-based language model in terms of perplexity on an unseen test text. We employ a slightly different clustering method here, the fullibmpredict method discussed in (Goodman, 2001). This method was shown to outperform the class based model proposed in (Brown et al., 1992) and can thus be expected to discover better clusters of words. We append the feature vector f(w j ) with c extra values (where c is the number of clusters), respectively set to 1 if the word w i belongs to the corresponding cluster or to 0 otherwise. We call this method the ClusterFeatures method. Automatic expansion of the training set using predicate argument alignment We compare our approach with a method proposed by Fürstenau and Lapata (2009). This approach is more tailored to the specific case of SRL and is summarized here. Given a set of labeled seed verbs with annotated semantic roles, for every annotated verb a number of occurrences of this verb is found in unlabeled texts where the context is similar to the context of the annotated example. The context is defined here as all words in the sentence that are direct dependents of this verb, given the syntactic dependency tree. The similarity between two occurrences of a particular verb is measured by finding all different alignments σ : M σ → {1...n} (M σ ⊂ {1, ..., m}) between the m dependents of the first occurrence and the n dependents of the second occurrence. Every alignment σ is assigned a score given by where syn(g i , g σ (i) ) denotes the syntactic similarity between grammatical role 5 g i of word w i and grammatical role g σ (i) of word w σ (i) , and sem(w i , w σ (i) ) measures the semantic similarity between words w i and w σ (i) . A is a constant weighting the importance of the syntactic similarity compared to semantic similarity, and B can be interpreted as the lowest similarity value for which an alignment between two arguments is possible. The syntactic similarity syn(g i , g σ (i) ) is defined as 1 if the dependency relations are identical, 0 < a < 1 if the relations are of the same type but of a different subtype 6 and 0 otherwise. The semantic similarity sem(w i , w σ (i) ) is automatically estimated as the cosine similarity between the contexts of w i and w σ (i) in a large text corpus. For details we refer to (Fürstenau and Lapata, 2009). For every verb in the annotated training set we find the k occurrences of that verb in the unlabeled texts where the contexts are most similar given the best alignment. We then expand the training set with these examples, automatically generating an annotation using the discovered alignments. The variable k controls the trade-off between annotation confidence and expansion size. The final model is then learned by running the supervised training method on the expanded training set. We call this method AutomaticExpansionCOS 7 . The values for k, a, A and B are optimized automatically in every experiment on a held-out set (disjoint from both training and test set). We adapt this approach by employing a different method for measuring semantic similarity. Given two words w i and w σ (i) we estimate the distribution of latent words, respectively L(h i ) and L(h σ (i) ). 
We then compute the semantic similarity measure as the Jensen-Shannon (Lin, 1997) divergence JS(L(h i ), L(h σ(i) )) = 1/2 D(L(h i )||avg) + 1/2 D(L(h σ(i) )||avg), where avg = (L(h i ) + L(h σ(i) ))/2 is the average of the two distributions and D(L(h i )||avg) is the Kullback-Leibler divergence (Cover and Thomas, 2006). Although this change might appear to be only a slight deviation from the original model discussed in (Fürstenau and Lapata, 2009), it is potentially an important one, since an accurate semantic similarity measure will greatly influence the accuracy of the alignments, and thus the accuracy of the automatic expansion. We call this method AutomaticExpansionLW. Experiments We perform a number of experiments where we compare the fully supervised model with the semi-supervised models proposed in the previous section. We first train the LWLM model on an unlabeled 5 million word Reuters corpus 8 . We perform different experiments for the supervised and the four different semi-supervised methods (see previous section). Table 1 shows the results of the different methods on the test set of the CoNLL 2008 shared task. We experimented with different sizes for the training set, ranging from 5% to 100%. When using a subset of the full training set, we run 10 different experiments with random subsets and average the results. We see that the LWFeatures method performs better than the other methods across all training sizes. Furthermore, these improvements are larger for smaller training sets, showing that the approach can be applied successfully in a setting where only a small number of training examples is available. When comparing the LWFeatures method with the ClusterFeatures method we see that, although the ClusterFeatures method has a similar performance for small training sizes, this performance drops for larger training sizes. A possible explanation for this result is the use of the clusters employed in the ClusterFeatures method. By definition the clusters merge many words into one cluster, which might lead to good generalization (more important for small training sizes) but can potentially hurt precision (more important for larger training sizes). A third observation that can be made from Table 1 is that, although both automatic expansion methods (AutomaticExpansionCOS and AutomaticExpansionLW) outperform the supervised method for the smallest training size, for other sizes of the training set they perform relatively poorly. An informal inspection showed that for some examples in the training set, few or no correct similar occurrences were found in the unlabeled text. The algorithm described in section 5 adds the k most similar occurrences to the training set for every annotated example, even for those examples where few or no similar occurrences were found. Often the automatic alignment fails to generate correct labels for these occurrences and introduces errors in the training set. In the future we would like to perform experiments that determine dynamically (for instance based on the similarity measure between occurrences) for every annotated example how many training examples to add. Conclusions and future work We have presented the Latent Words Language Model and showed how it learns, from unlabeled texts, latent words that capture the meaning of a certain word, depending on the context. We then experimented with different methods to incorporate the latent words for Semantic Role Labeling, and tested different methods on the PropBank dataset.
Our best performing method showed a significant improvement over the supervised model and over methods previously proposed in the literature. On the full training set the best method performed 2.33% better than the fully supervised model, which is a 10.91% error reduction. Using only 5% of the training data the best semi-supervised model still achieved 60.29%, compared to 40.49% by the supervised model, which is an error reduction of 33.27%. These results demonstrate that the latent words learned by the LWLM help for this complex information extraction task. Furthermore we have shown that the latent words are simple to incorporate in an existing classifier by adding additional features. We would like to perform experiments on employing this model in other information extraction tasks, such as Word Sense Disambiguation or Named Entity Recognition. The current model uses the context in a very straightforward way, i.e. the two words left and right of the current word, but in the future we would like to explore more advanced methods to improve the similarity estimates. Lin (1998) for example discusses a method where a syntactic parse of the text is performed and the context of a word is modeled using dependency triples. The other semi-supervised methods proposed here were less successful, although all improved on the supervised model for small training sizes. In the future we would like to improve the described automatic expansion methods, since we feel that their full potential has not yet been reached. More specifically we plan to experiment with more advanced methods to decide whether some automatically generated examples should be added to the training set.
2014-07-01T00:00:00.000Z
2009-08-06T00:00:00.000
{ "year": 2009, "sha1": "ae29b936d437a93ad259ee008ba56fe82ab4db61", "oa_license": null, "oa_url": "http://dl.acm.org/ft_gateway.cfm?id=1699514&type=pdf", "oa_status": "BRONZE", "pdf_src": "ACL", "pdf_hash": "ae29b936d437a93ad259ee008ba56fe82ab4db61", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
225151214
pes2o/s2orc
v3-fos-license
A new generalized family of distributions: Properties and applications 1 Department of Statistics, Govt. Postgraduate College B. R. Bahawalpur, Bahawalpur 63100, Pakistan 2 Mathematics Department, Umm-Al-Qura University, Makkah, Saudi Arabia 3 Department of Mathematics, College of Science and Humanities in Al-Kharj, Prince Sattam bin Abdulaziz University, Al-Kharj 11942, Saudi Arabia 4 Department of Mathematics, Faculty of Science, Mansoura University, Mansoura 35516, Egypt 5 Department of Statistics, Mathematics and Insurance, Benha University, Benha 13511, Egypt Introduction The classical probability distributions were generalized through the induction of location, scale and shape parameters. Recently, appreciable attempts have been made in the development of new probability distributions which have the advantage of greater flexibility in fitting specific and diverse real-world sequences of events. The development of the G-classes began with the fundamental article of Alzaatreh et al. [9] in which they proposed the transformed (T)-transformer (X) (T-X) family, F(x) = ∫_a^{W[G(x)]} r(t) dt, (1.1) where r(t) is the pdf of a random variable T and W[G(x)] is a function of the baseline cdf G(x) satisfying the conditions of the T-X family. Many authors constructed extended generalized families by using the T-X approach. Some examples of generalized classes are beta-G [19], Kw-G type-1 [17], log-gamma-G type-2 [10], gamma-X [40], exponentiated T-X [9], Weibull-G [11], exponentiated-Weibull-H [13] and generalized odd Lindley-G [3]. Motivated by the prospect of improved accuracy and flexibility, we propose a new flexible family, called the log-logistic tan generalized (LLT-G) family, which provides greater accuracy and flexibility in fitting real-life data. Some general properties of the LLT-G class will be provided here. We provide two applications for one special sub-model of the proposed class, called the log-logistic tan-Weibull (LLT-W) distribution, which has decreasing, increasing, bathtub and unimodal hazard rate functions. Let r(t) = (c/s) (t/s)^(c−1) [1 + (t/s)^c]^(−2) be the pdf of a rv 0 < t < ∞ and W[G(x)] = tan[(π/2) G^α(x)], which satisfies the conditions of the T-X family. The cdf and pdf of the new LLT-G family take the forms F(x) = {1 + [tan((π/2) G^α(x))/s]^(−c)}^(−1), x > 0, α, c, s > 0, (1.2) and f(x) = [π α c/(2 s^c)] g(x) [G(x)]^(α−1) sec²[(π/2) G^α(x)] tan^(c−1)[(π/2) G^α(x)] {1 + [tan((π/2) G^α(x))/s]^c}^(−2), x > 0, α, c, s > 0. (1.3) The LLT-G hazard rate function (hrf) has the form h(x) = f(x)/[1 − F(x)], with F(x) and f(x) as given in (1.2) and (1.3). iii) The LLT-Weibull, as a special sub-model of the LLT-G class, provides more adequate fits than other modified models generated by other existing families under the same baseline model. This paper is outlined as follows. Five special models of the LLT-G family are provided in Section 2. In Section 3, we derive some mathematical properties of the LLT-G family. Estimation of the LLT-G parameters using maximum likelihood is discussed in Section 4. Further, we present simulations for the LLT-W model to address the performance of the proposed estimators in Section 4. In Section 5, we show the importance and applicability of the LLT-G family via two real-life data applications. In Section 6, the paper is summarized. Five special distributions We study five sub-models of the LLT-G class using the baseline Weibull, normal, Rayleigh, exponential and Burr XII distributions in (1.3) and provide some plots for their pdfs and hrfs. Figures 1-5 reveal that the special sub-models of the LLT-G class can provide left-skewed, symmetrical, right-skewed, unimodal, bimodal and reversed-J densities, and increasing, modified bathtub, decreasing, bathtub, unimodal, reversed-J shaped, and J-shaped failure rates.
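Because the family is obtained by composing the log-logistic cdf with tan[(π/2)G^α(x)], the cdf and pdf in (1.2)-(1.3) can be evaluated numerically for any baseline G. The sketch below does this for a generic baseline supplied as a pair of functions; the standard two-parameter Weibull used at the end is only an illustrative choice and is not necessarily the parameterization adopted in Section 2.

```python
# Numerical evaluation of the LLT-G cdf and pdf for a generic baseline (G, g).
# The Weibull parameterization below (cdf 1 - exp(-(x/b)**a)) is an assumption
# made for illustration; the paper's Section 2 may use a different form.
import math

def llt_g_cdf(x, alpha, c, s, G):
    t = math.tan(0.5 * math.pi * G(x) ** alpha)
    return 1.0 / (1.0 + (t / s) ** (-c)) if t > 0 else 0.0

def llt_g_pdf(x, alpha, c, s, G, g):
    Gx = G(x)
    if Gx <= 0.0 or Gx >= 1.0:
        return 0.0
    u = 0.5 * math.pi * Gx ** alpha
    t = math.tan(u)
    prefactor = math.pi * alpha * c / (2.0 * s ** c)
    return (prefactor * g(x) * Gx ** (alpha - 1.0)
            * (1.0 / math.cos(u)) ** 2 * t ** (c - 1.0)
            / (1.0 + (t / s) ** c) ** 2)

# Illustrative Weibull baseline with shape a and scale b (assumed form).
a, b = 1.5, 2.0
G = lambda x: 1.0 - math.exp(-((x / b) ** a))
g = lambda x: (a / b) * (x / b) ** (a - 1.0) * math.exp(-((x / b) ** a))

for x in (0.5, 1.0, 2.0, 4.0):
    print(x, round(llt_g_cdf(x, 1.2, 0.8, 1.0, G), 4),
          round(llt_g_pdf(x, 1.2, 0.8, 1.0, G, g), 4))
```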
LLT-Weibull (LLT-W) distribution Using the Weibull pdf g(x) and cdf G(x) in (1.2) gives the LLT-W cdf (2.1). The pdf associated with (2.1) reduces to (2.2). The plots in Figure 1 depict pdf and hrf shapes of the LLT-W distribution for different parametric values. LLT-normal (LLT-N) distribution With the normal pdf g(x) and cdf G(x), the cdf of the LLT-N reduces to (2.3) and the LLT-N pdf takes the form (2.4). LLT-Rayleigh (LLT-R) distribution Using the Rayleigh rv with pdf g(x) = 2 a x e^{-a x^{2}}, x > 0, a > 0, and cdf G(x) = 1 - e^{-a x^{2}}, the cdf of the LLT-R distribution becomes (2.5) and the pdf of the LLT-R reduces to (2.6). LLT-exponential (LLT-E) distribution Consider the exponential distribution with pdf g(x) = b e^{-b x}, x > 0, b > 0, and cdf G(x) = 1 - e^{-b x}. Hence, the cdf and pdf of the LLT-E distribution, obtained by substituting this baseline into (1.2) and (1.3), are (2.7) and (2.8). LLT-Burr XII (LLT-BXII) distribution Let X be a Burr XII rv with pdf g(x) and cdf G(x); substituting these into (1.2) and (1.3), the pdf of the LLT-BXII reduces to (2.10). Figure 5 displays density and hazard rate plots of the LLT-BXII distribution. Properties In this section, LLT-G properties such as the quantile function (qf), a useful expansion, moments, the generating function, and order statistics are derived. The expressions derived for the LLT-G family can be handled using symbolic computation software, such as Maple, Matlab, Mathematica, Mathcad and R, because of their ability to deal with complex and formidably sized mathematical expressions. Establishing explicit formulae to evaluate statistical and mathematical measures can be more efficient than computing them directly by numerical integration. It is noted that the infinity limit in the sums of these expressions can be substituted by a large positive integer such as 40 or 50 for most practical purposes. Quantile function The qf of X is calculated directly by inverting (1.2) as
Q(u) = Q_{G}\Big(\Big\{\tfrac{2}{\pi}\arctan\big[s\,\big(\tfrac{u}{1-u}\big)^{1/c}\big]\Big\}^{1/\alpha}\Big), \quad 0 < u < 1, (3.1)
where Q_G denotes the baseline quantile function. Eq (3.1) can be used to simulate any baseline model and to obtain the median = Q(1/2), Bowley's skewness and Moors kurtosis. The cdf of the LLT-G family can be written as (3.2). Using the binomial expansion and the convergent power series of the tan function (which can be calculated using the Mathematica software) and applying them to (3.2), the cdf of the LLT-G family reduces to a linear combination of exponentiated-G (exp-G) cdfs; equivalently, (3.2) can be rewritten as (3.5), whose coefficients follow from those series. Differentiating gives the pdf expansion (3.7), in which h_{α(2j+ic)}(x) denotes the exp-G pdf with power parameter α(2j + ic). Equation (3.7) reveals that the LLT-G pdf is a linear combination of exp-G densities. Thus, several properties of the LLT-G class can be obtained directly from well-known exp-G properties. Moments Assume that Z is a rv with baseline cdf G(x). The moments of X can be derived from the (r, k)th probability weighted moment (PWM) of Z, defined by [22] as
τ_{r,k} = E[Z^{r} G(Z)^{k}] = \int_{0}^{1} Q_{G}(u)^{r}\, u^{k}\,du. (3.8)
Using Eq (3.7), one can write E(X^{r}) as a weighted sum of the quantities τ_{r,α(2j+ic)} = \int_{0}^{1} Q_{G}(u)^{r}\, u^{\alpha(2j+ic)}\,du, which are computed numerically for any baseline qf. The measures of skewness, β1, and kurtosis, β2, can be defined by the usual moment ratios constructed from the first four moments of X. The plots of skewness and kurtosis for the LLT-W distribution are visualized in Figure 6 (skewness and kurtosis plots for the LLT-W model). Moment generating function We present two formulae for the mgf, M(s) = E(e^{sX}), of the rv X. The first one comes from Eq (3.7) and expresses M(s) as a weighted sum of the exp-G mgfs M_{α(2j+ic)}(s) with power parameter α(2j + ic); this is Eq (3.12). Further, Eq (3.12) can also take the form of a weighted sum of the quantities ρ_{2j+ic}(s) = \int_{0}^{1} \exp[s\,Q_{G}(u)]\, u^{\alpha(2j+ic)}\,du, which are calculated numerically. Incomplete moments Incomplete moments are beneficial in calculating some inequality measures and mean deviations.
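Before developing the incomplete moments further, it is worth illustrating the quantile function (3.1) numerically, since it drives both the shape measures just mentioned and the inversion-method simulations of Section 4. The sketch below uses the same assumed Weibull baseline as in the earlier sketch; the parameterization and parameter values are assumptions, not values from the paper.

```python
import numpy as np

def llt_quantile(u, alpha, c, s, QG):
    """Quantile function (3.1): invert the LLT-G cdf and feed the result to the baseline qf."""
    p = (2.0 / np.pi * np.arctan(s * (u / (1.0 - u)) ** (1.0 / c))) ** (1.0 / alpha)
    return QG(p)

# Same assumed Weibull baseline as before (illustrative only).
a, b = 1.0, 1.5
QG = lambda p: (-np.log1p(-p) / a) ** (1.0 / b)
alpha, c, s = 0.8, 1.5, 1.2
Q = lambda u: llt_quantile(u, alpha, c, s, QG)

# Inversion-method simulation of an LLT-W sample.
rng = np.random.default_rng(0)
x = Q(rng.uniform(size=10_000))

# Quantile-based shape measures mentioned above.
median = Q(0.5)
bowley = (Q(0.75) - 2 * Q(0.5) + Q(0.25)) / (Q(0.75) - Q(0.25))
moors = (Q(7/8) - Q(5/8) + Q(3/8) - Q(1/8)) / (Q(6/8) - Q(2/8))
print(f"median={median:.3f}  Bowley skewness={bowley:.3f}  Moors kurtosis={moors:.3f}")
print(f"simulated sample: n={x.size}, mean={x.mean():.3f}")
```

The same routine works for any baseline by replacing Q_G with the corresponding baseline quantile function.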
The nth incomplete moment of the LLT-G family, m_{n}(z) = \int_{-\infty}^{z} x^{n} f(x)\,dx, has the form (3.14). For most baseline distributions, (3.14) can be obtained numerically. The first incomplete moment, m_{1}(z), can be calculated from Eq (3.7) as (3.15); Eq (3.15) is the primary quantity needed to obtain the mean deviations. We apply (3.15) to the LLT-W model: the LLT-W pdf with power parameter α(2j + ic) reduces to a closed-form expression, and m_{1}(z) then follows. Another formula for m_{1}(z) follows from Eq (3.7) with the substitution u = G(x). Order statistics Consider a random sample X_{1}, . . . , X_{n} from the LLT-G class. The pdf of the ith order statistic, X_{i:n}, has the standard form in terms of f(x), F(x) and the beta function. After some algebra, the pdf of X_{i:n} takes the form (3.21), where the exp-G density h_{α(2(r+t)+c(m+1))}(x) has power parameter α(2(r + t) + c(m + 1)), and b_{r}[c(m + 1) - 1] and c_{t}(2) are the coefficients of the power series expansions of the tan and sec functions obtained using Mathematica. Eq (3.21) can be used to calculate some quantities of the LLT-G order statistics from the corresponding exp-G quantities. Estimation and simulation We discuss the estimation of the LLT-G parameters using maximum likelihood and study the performance of these estimators via simulations. Maximum likelihood estimation Consider a random sample x_{1}, . . . , x_{n} from the LLT-G class, with parameter vector Θ = (α, s, c, ξ). Then the log-likelihood ℓ_{n}(Θ) reduces to the expression in (4.1), where A_{i} = \frac{\pi}{2} G^{\alpha}(x_{i}; ξ). The log-likelihood (4.1) is maximized easily by statistical programs, namely R, Mathcad, SAS, Mathematica and Ox. It can also be maximized by solving the corresponding likelihood equations. Simulations This section deals with checking the performance of the MLEs in estimating the LLT-W parameters using a simulation study. For n = 50, 100, 300 and 500, we generated 1,000 samples from the LLT-W model for various parametric values using the inversion method via the qf of the LLT-W given by (4.6). The numerical results for the mean square error (MSE), bias, coverage probability (C.P) and average width (AW) of the MLEs of the model parameters are obtained using the R software. Further, graphical representations of bias, MSE and C.P for several parametric values are also provided. The numerical results for bias, MSE, C.P and AW are listed in Tables 1 and 2. Modeling real-life data This section deals with checking the flexibility of the LLT-W distribution using two real-life data applications. The first data set contains 63 observations on the strengths of 1.5 cm glass fibers, reported in Smith and Naylor [38]. The second data set, from Choulakian and Stephens (2001) [12], consists of the exceedances of flood peaks (in m³/s) of the Wheaton River, Yukon Territory, Canada; it comprises 72 exceedances for the years 1958-1984, rounded to one decimal place. Tables 3 and 4 report the MLEs (with their associated standard errors) of the parameters of the competing models, and the statistics K-S, p-value, A* and W*. The LLT-W model is compared with the ODW, OLLEW, KW, WBXII, KEBXII, BBXII and WL distributions in Tables 3 and 4. We note that the LLT-W model gives the lowest values for all discrimination measures and the largest p-value among all fitted models. Hence, the LLT-W model could be chosen as a good alternative for explaining the glass fibers and Wheaton River data. The results in Tables 3 and 4 indicate that the LLT-W provides better fits for the glass fibers and Wheaton River data sets as compared to the other competing models. A more visual comparison of the four best competing distributions is provided in Figures 10 and 11.
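The fitting-and-comparison workflow behind Tables 3 and 4 (maximize the log-likelihood (4.1), then compute a goodness-of-fit statistic such as K-S against the fitted cdf) can be prototyped in a few lines. The sketch below is illustrative only: it keeps the assumed Weibull parameterization of the earlier sketches and fits simulated data rather than the glass fibers or Wheaton River observations.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import kstest

rng = np.random.default_rng(1)

# Assumed Weibull baseline (illustrative parameterization, as in the earlier sketches).
G  = lambda x, a, b: 1.0 - np.exp(-a * x ** b)
g  = lambda x, a, b: a * b * x ** (b - 1) * np.exp(-a * x ** b)
QG = lambda p, a, b: (-np.log1p(-p) / a) ** (1.0 / b)

def llt_w_cdf(x, alpha, c, s, a, b):
    t = np.tan(0.5 * np.pi * G(x, a, b) ** alpha)
    return t ** c / (s ** c + t ** c)

def llt_w_logpdf(x, alpha, c, s, a, b):
    Gx = G(x, a, b)
    t = np.tan(0.5 * np.pi * Gx ** alpha)
    return (np.log(0.5 * np.pi * alpha * c * s ** c) + np.log(g(x, a, b))
            + (alpha - 1) * np.log(Gx) - 2 * np.log(np.cos(0.5 * np.pi * Gx ** alpha))
            + (c - 1) * np.log(t) - 2 * np.log(s ** c + t ** c))

def simulate(n, alpha, c, s, a, b):
    u = rng.uniform(size=n)
    p = (2.0 / np.pi * np.arctan(s * (u / (1 - u)) ** (1.0 / c))) ** (1.0 / alpha)
    return QG(p, a, b)

true = (0.8, 1.5, 1.2, 1.0, 1.5)
x = simulate(300, *true)

# Log-parameterize so every parameter stays positive during the search.
nll = lambda th: -np.sum(llt_w_logpdf(x, *np.exp(th)))
fit = minimize(nll, x0=np.log(np.array(true)), method="Nelder-Mead",
               options={"maxiter": 5000, "xatol": 1e-6, "fatol": 1e-6})
est = np.exp(fit.x)
ks = kstest(x, lambda z: llt_w_cdf(z, *est))
print("MLEs:", np.round(est, 3), " K-S statistic:", round(ks.statistic, 3))
```

In an actual analysis the same routine would be run for each competing model and the resulting K-S, A* and W* statistics compared, as in Tables 3 and 4.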
The fitted densities of the four best models are shown in Figure 10 for glass fibers and Wheaton river data, whereas the estimated distribution functions for four best models are depicted in Figure 11. Further, the hrf plots of the LLT-W distribution for glass fibers and Wheaton river data are illustrated in Figure 12. Based on visual comparison, we can conclude that the LLT-W distribution provides a close fit for glass fibers and Wheaton river data and it can be utilized in fitting data with increasing and modified bathtub hazard rates. Conclusions This paper provides a new log-logistic tan generalized (LLT-G) class with three additional shape parameters to capture skewness and kurtosis behavior. Five special models of the LLT-G family are presented by choosing Weibull, normal, Rayleigh, exponential and Burr XII, as baseline distributions in the proposed family, to obtain the LLT-Weibull, LLT-normal, LLT-Rayleigh, LLT-exponential and LLT-Burr XII. The general mathematical properties are obtained for the LLT-G class. The LLT-G parameters estimation is discussed by maximum-likelihood approach and simulation results are obtained to check the performance of these estimators. The importance and flexibility of the LLT-Weibull are checked empirically using two sets of real-life data, proving that it can provide better fit as compared with other competing models, such as odd log-logistic modified-Weibull, odd Dagum-Weibull, odd log logistic exponentiated-Weibull, Weibull-Lomax, Kumaraswamy-Weibull, Kumaraswamy-exponentiated-Burr XII, and beta-Burr XII distributions.
2020-10-28T19:20:58.210Z
2021-01-01T00:00:00.000
{ "year": 2021, "sha1": "0a2c612ac4c71874a7a3ccd9116fc98c3dc91488", "oa_license": "CCBY", "oa_url": "https://doi.org/10.3934/math.2021028", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "db6fe842749c7c04b7f18eca42cb66db940144b4", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
252737172
pes2o/s2orc
v3-fos-license
What influences cancer treatment service access in Ghana? A critical interpretive synthesis Objectives Multiple social-cultural and contextual factors influence access to and acceptance of cancer treatment in Ghana. The aim of this research was to assess existing literature on how these factors interplay and could be susceptible to local and national policy changes. Design This study uses a critical interpretive synthesis approach to review qualitative and quantitative evidence about access to adult cancer treatment services in Ghana, applying the socioecological model and candidacy framework. Results Our findings highlighted barriers to accessing cancer services within each level of the socioecological model (intrapersonal, interpersonal community, organisational and policy levels), which are dynamic and interacting, for example, community level factors influenced individual perceptions and how they managed financial barriers. Evidence was lacking in relation to determinants of treatment non-acceptance across all cancers and in the most vulnerable societal groups due to methodological limitations. Conclusions Future policy should prioritise multilevel approaches, for example, improving the quality and affordability of medical care while also providing collaboration with traditional and complementary care systems to refer patients. Research should seek to overcome methodological limitations to understand the determinants of accessing treatment in the most vulnerable populations. INTRODUCTION Cancer is a growing burden in low and middle income countries (LMICs). 1 2 Despite efforts by the WHO to prioritise tackling cancer inequity, hurdles remain due to the limited evidence to inform cost-effective decision-making and the high expense of cancer control. 1 In Ghana, cancer treatment is focused in large referral centres in major cities, with disparity in resources and health worker expertise in rural areas and limited coverage of the National Health Insurance Scheme (NHIS). [3][4][5][6][7] Policy efforts to expand cancer services are further hindered when patients prioritise traditional alternatives over orthodox cancer services. 8 Multiple social-cultural, economic and health system factors can influence how patients access, navigate and choose suitable cancer care in Ghana. 3 In addressing this, there is a relative lack of public health surveillance data. There have been recent attempts to reconcile this, 9 but a comprehensive understanding remains elusive. One alternative approach is to consider the relevance of theory. Socioeconomic model and candidacy framework perspectives on cancer treatment access One important way of understanding the complexity and various factors previously described in relation to cancer treatment service access is to consider this in terms of the socioecological model. 10 This has been used in many settings to map barriers to healthcare engagement from a systems perspective. 11 12 The socioecological model considers the individual within an ecosystem of intrapersonal, interpersonal, community, health organisational and policy influences. This has been applied extensively to map systems factors since it was developed by Bronfenbrenner, 10 including health behaviours in several African settings. 
11 12 STRENGTHS AND LIMITATIONS OF THIS STUDY ⇒ A strength of this study is the combination of purposive and systematic searches, and the reflexive approach to developing the search strategy which enabled it to cover a wide range of articles. ⇒ Additionally, the critical interpretive synthesis involved a critique of the literature to identify methodological limitations and research gaps. ⇒ However, as only published academic articles are included in this study, it may overlook other forms of evidence, including locally generated and day-to-day working understandings. ⇒ The interpretation of the evidence will reflect the inherent biases in world view of the lead author. The process of an individual accessing cancer services is dynamic and delays in access can occur at multiple stages. Patients may experience barriers presenting at services, negotiating the care pathway, being offered and accepting treatment. A critical interpretive synthesis (CIS) exploring health access in the UK 13 highlighted that this involves dynamic interactions between the individual, the health environment, and health professionals. The term 'access' often overlooks this dynamism, while terms such as 'uptake' provide a narrow view that overlooks patient demand and service navigation afterwards. The candidacy framework provides a multistage interacting process of patient access holistically. For example, negotiation of services is overlooked in asylum seekers and refugees. 14 van der Boor and colleagues described candidacy in two broad stages: 'access' (identification of eligibility, navigation of services) and 'negotiation' (permeation of services, appearance, adjudications, offers and resistance, and dynamic interactions with local services). 14 Candidacy has been applied to understand patient-doctor interactions influencing cancer health seeking behaviours 15 and in an African context. 16 This paper builds on the issue of factors relating to cancer treatment service access in LMIC settings such as Ghana by presenting the findings of an evidence review that was informed by theory. The primary aim was to systematically review and critique literature from a systems perspective to understand factors influencing cancer treatment service access in Ghana. A further aim was to assess the strengths and limitations of methods associated with existing research relating to this topic. METHODS Using the RETREAT (Review question, Epistemology, Timescale, Resources, Expertise, Audience/purpose, Type of data) framework, 17 CIS was considered to be the most suitable approach. 13 CIS has been used in a variety of policy and health service settings [18][19][20] and combines systematic and purposive approaches to identify multiple types of evidence and identify themes following an evidence critique. This involves considering how the problem has been constructed, underlying assumptions and epistemology, and how this has influenced the methodology and conclusions. 13 Search strategy and literature search The search strategy to find articles on access of adult cancer treatment in Ghana was developed using the question framework PerSPECTiF (Perspective, Setting, Phenomenon of interest, Environment (optional Comparison) Timing, Findings) 21 in consultation with an information specialist. First, primary systematic searches were undertaken. This was tested and refined through pilot searches, before conducting comprehensive searches in Medline (via OVID), Web of Science, CINAHL and African Index Medicus.
These databases were chosen following University of Sheffield librarian advice, and after the initial database scoping exercise in Medline (via OVID) and Google Scholar. The database search strategy and terms used can be found in online supplemental table 1. Initial searches were conducted on 26 March 2021 and the databases were searched for updates on 29 March 2022. Search terms were composed of multiple equivalent thesaurus terms and phrases to cover three elements: Ghana, health service access/uptake of services and cancer. Hand searches were performed using citation follow-up, identified relevant individual journals and in the reference listed of included papers. Study selection The lead author (CZT) screened all titles and abstracts using an agreed inclusion criteria, while two other authors (RA, RC) conducted quality checks of 10% of the sample screened. Any disagreement was discussed and settled among authors. Initial screening highlighted ambiguity in the screening criteria, which was further refined to ensure consistency prior to formal selection. Inclusion criteria included only primary research conducted with a 10-year time frame to align with Ghana's increased policy interest in efficiently expand national health insurance packages. 6 Initial screening highlighted the need for a focused exclusion criteria which was again informed by the question framework PerSPECTiF. 21 Applying the PerSPECTiF framework, the phenomenon of interest ('access') was defined holistically through the candidacy framework. 13 Thus, article screening sought to include articles relating to access throughout the entire patient pathway. The setting included all levels of the socioecological model to provide a systems perspective. Potentially qualifying abstracts were read in full, and only the full texts papers that meet the review inclusion criteria were included and reviewed. Data extraction and synthesis Data were extracted from included papers to facilitate decision-making and an audit trail (see data extraction in the online supplemental material). The lead author (CZT) used a standardised data collection form to extract data from the included studies. To eliminate data extraction bias, two reviewers (RA and RC) checked 10% of the extraction. There was no discrepancy observed between the lead author extraction and the sample reviewed. Key data extracted were setting, approach, population and sample, methods design, sampling, data analysis, cancer and stage studied, and the corresponding texts cross-tabulated against the socioecological model 10 and the candidacy framework. Data were collected as line of arguments. 13 First order constructs (taken directly from the data in articles) and second order constructs (author reports from articles) were extracted and separately noted within the framework. It was noted where study authors made further (secondary) inferences and assumptions from data that were not primary findings, but relevant to the themes. Researchers' limitations were recorded. The data were segregated into qualitative, quantitative and mixed methods studies and each interpreted qualitatively. A synthesising argument 13 was Open access applied to map first and second order findings across studies to interpret the evidence and to create new concepts that draw on the whole body of evidence (third order/synthetic constructs). 
Inferences were mapped across the studies to enable the body of evidence to be critiqued by research question construction, methods, and conclusions and how these fit into the general findings, to identify trends in the literature and limitations with current approaches. To bring together themes in candidacy, the candidacy framework was summarised into three main stages. van der Boor and White used two stages 14 ; however, as themes were identified, it was noted that treatment acceptance and the interactions around this over time play a pertinent role in the patient pathway in Ghana. Thus, the adapted three-stage model also notes the importance of the dynamics of treatment acceptance. As part of the synthesis process, the primary literatures from the data extraction were revisited and reinterpreted with emerging evidence to ensure critical details or limitations were not missed. Critical appraisal A streamline critical appraisal for major and methodological flaws was conducted using the critical appraisal checklist published in Dixon-Woods et al. 13 The quality of quantitative and qualitative findings was assessed in terms of reliability and trustworthiness. 22 In line with the CIS, articles were not favoured based on quality alone but contribution of rich insights. The CIS deals with weak evidence through including a critique of methods and approach. Lead author (CZT) appraised all the included papers with 10% of the sample being cross-checked by RC. Patient and public involvement statement No members of the public or patients were involved in this research. Search results Systematic searches in four databases and in six journals performed in March 2021 (updated in March 2022) identified 312 citation. After duplicate removal, 203 potentially relevant abstracts were screened, subsequently 78 articles were identified for full text screening. A further 16 abstracts were identified for full text screening following citation and reference searching. Twenty-eight articles were selected for inclusion (see PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) flow diagram in figure 1). These comprised 15 qualitative, 12 quantitative studies and 1 mixed methods study. Open access A summary of the articles reviewed is included in the online supplemental table 2. Applying the candidacy framework and socioecological model, key themes were identified and are presented below. The evidence mapping table (online supplemental table 3) highlights the candidacy stage and level within the socioecological model that the articles addressed, as judged by the lead author. Accessibility defined through a 'candidacy' lens The candidacy framework 13 proved valuable in assessing how healthcare access has been approached, from a holistic perspective. Treatment acceptance was a key emerging issue where there was a gap in understanding. Within treatment acceptance there were multiple aspects-delays accepting, interruptions, choosing alternatives, and incompletion or loss to follow-up, noncompliance, refusal for referral and non-acceptance of diagnosis. This was a dynamic process. Although the full candidacy process has been considered in research, there were key gaps in how it had been approached and some aspects warranted further exploration. 
Seven studies aimed to explore delays with initial contact with cancer services on identification of need (presentation), yet the reasons for eventual nonpresentation could not be explored in most studies that were clinic based, as all patients surveyed eventually reached treatment centres. Two studies quantified acceptance as having complete follow-up and treatment completion. A further four noted delays and high nonacceptance/loss to follow-up but did not formally explore them at an individual psychosocial level as this was not within the research aims. Twelve qualitative studies explored individual barriers to accepting care. However, this was not always the primary focus but emerged in the findings. 23 24 As these were sampled from a clinic, they represented patients who eventually (despite delays or interruptions) accessed treatment, so the enablers and barriers in those who ultimately dropped out was unknown. Therefore, in-depth qualitative psychosocial information on treatment incompletion was not collected. Although the literature on acceptance was predominantly breast cancer related, there was some limited evidence it occurred in other cancers, but the extent and reasons for this were not explored. Barriers and enablers of cancer service access interpreted through the socioecological model The findings on enablers and barriers to candidacy for cancer treatment were mapped using the socioecological model to consider the Ghanaian health ecosystem. These are summarised in figure 2. Financial barriers Inability to afford treatment was reported as a barrier leading to delays in and non-acceptance of care. This was also noted by traditional herbalists 25 and health workers. 3 Yamoah et al 26 found that it encompassed socioeconomic factors, travel costs and lost work. Although two studies indicated it was a greater barrier in those from a low income, 27 28 Sanuade et al 29 suggested this was regardless of socioeconomic status. Four studies 23 28-30 demonstrated that high cost of medical treatment led to use of herbal and traditional alternatives. Prioritisation of finances on family led to delays in accepting treatment. 23 24 31 Financial barriers impacted negotiation of care 31 ; those from a lower income were more likely to experience longer wait times. 32 Fears and beliefs about treatment and its outcomes Obrist et al 33 found patients who believed in the efficacy of treatment were more likely to complete treatment in single variable analysis, although this was not significant in multivariate models where potential confounding factors were controlled for. Fears and beliefs pose individual level barriers from qualitative finding. This included fear of treatment, medicines and the outcomes. 8 24 28 29 34 For example, for breast cancer, loosing breasts, womanhood, female identity. 24 28 29 Fears around institutional trauma suggested lack of trust health facilities. 29 Healthcare professions deliberately miscommunicated to avoid patient fear and drop out. 35 Misunderstanding about cancer signs and symptoms Lack of knowledge about cancer signs and symptoms led to delays in seeking medical treatment. 8 24 28 31 36-38 With breast cancer, lumps were not regarded as serious when painless and sometimes considered part of normal tissue. 8 24 28 31 This was influenced by beliefs held in the community and within patients' social networks. 8 31 36 Lay beliefs were influenced by the terminology used for breast cancer in the local dialect, which led to poor understanding. 
39 Figure 2 Key influences on candidacy for cancer treatment mapped against the socioecological model. [10][11][12] Open access Social demographics associations Evidence was conflicting as to whether age, religion and ethnicity impacted stage of diagnosis, wait times and treatment completion, which may associate differently in different cancers given different demographics and natural histories. 26 32 33 40-43 As ethnicities often cluster predominantly in different regions, the potential confounding of local health system and environment should be checked in future studies. Some evidence indicates that lower education is associated with the presenting of larger tumour masses 43 44 and waiting longer for treatment, 32 but this was not a consistent influencer, and whether it was associated with treatment acceptance was not explored. One study exploring treatment pathways for young people with chronic diseases suggested community beliefs were more influential than educational status. 36 Another found that community beliefs and norms influenced the perceptions of breast cancer regardless of socioeconomic status. 39 Despite clear financial barriers to treatment, there was no evidence that income status and occupation were associated with presentation and acceptance of treatment, but low income status was associated with increased wait times in one study. 32 Interpersonal Marital relationships influence treatment seeking Qualitative evidence in female cancer suggests husbands influence their wives' treatment seeking behaviours and acceptance, 8 Family prioritisation delayed treatment Women prioritised other activities linked to their economic, family and social roles, such as working for more money, treating children and paying school fees. These lead to delays presenting for, negotiating and accepting treatment. 8 23 24 28 31 Caring for others meant patients put the needs of others first, neglecting their own health. This aligns with findings on the unaffordability of treatment. Close support networks influence treatment access Patients understanding about cancer, its causes and how they engage with care was influenced by close friends and family. 8 23 28 34 36-38 45 Misinformation could lead to late presentation, delay help seeking and use of alternatives. 8 23 36 37 For women, lack of husband, family and friend support delayed treatment seeking. 27 45 Familial financial support was an enabler for some to seek treatment, 23 45 whereas family neglect may impeded access. 3 Institutional Healthcare personnel as gatekeepers to medical and alternative care Poor detection at primary health facilities, community pharmacy and private settings may have delayed diagnosis. 3 8 23 31 36 44-46 Some women sought over the counter medications for pain management. 37 Seeking assistance from someone other than a nurse or doctor was associated with a larger mass at diagnosis for breast cancer, which could include a diverse mix of pluralist and community supports. 44 Some health professionals also advised herbal alternatives, delaying medical cancer treatment. 29 A mixed method study found an inability for facilities to diagnose cancer, improper documentation and filing of patient folders and workload-likely exacerbated by a shortage of healthcare workers trained in oncology outside of major tertiary centres. 3 In agreement, qualitative studies with patients found misdiagnosis were common. 
37 38 There were delays due to the complex referral process, waiting a long time to get results, having to go to many hospitals and laboratories to be diagnosed, and consultant rescheduling. 37 38 Delays between referral and starting radiotherapy were suggested to be due to resource availability, while irregular medicine supply also meant patients had to source medicine outside hospitals at high cost. 23 47 Patients showed negative perceptions of the care system and professionals. 27 29 29 45 Patients perceived treatment delays due to workforce shortage, hospital machines breaking and medicines shortages. 29 These beliefs appear to contribute to a lack of confidence and trust in the health system. Non-completing patients were more likely to harbour negative views such as that the unavailability of cancer medicines delayed their treatment. 33 Doubts in the efficacy and disappointment with conventional treatment created barriers to seeking treatment 27 and influenced use of pluralistic treatments. 30 Fear of radiation led some not to receive clinically recommended treatment. 30 Community The body of literature showed the strong role community beliefs and norms played in shaping access to cancer care. These were interconnected with personal perceptions and health system factors. Spiritual and traditional beliefs about cancer causes and treatment In an overwhelming majority of the literature, patients assigned their cancer diagnosis to spiritual causes, which led patients to seek traditional herbal and spiritual treatments, delaying presentation and interrupting medical treatment at multiple stages. This was associated with financial barriers to conventional treatment, 23 28 29 advice from supports such as spouse, 45 health workers, 29 religious messages, 8 29 community networks and beliefs. 8 29 31 34 36 39 Additionally, alternative therapies were often perceived as more available and acceptable, seen as efficacious. 27 28 39 47 Religiosity plays a diachronous role Religious beliefs, messages and leaders influenced alternative therapy use 8 29 and caused delays. 8 29 37 Yet, religious leaders were identified within patients' trusted support networks 8 and their advice facilitated medical presentation. 8 24 Some studies found the church played a supportive role, encouraging women to present at services and providing financial assistance to low-income families. 24 Gender and identity norms For women with breast cancer, mastectomy was associated with a fear of 'diminished sexuality and femininity' which led many women to delay treatment after seeing an oncologist. 24 28 29 31 Unaddressed fears about fertility loss may have increased dropout. 35 However, the barriers around identity may differ in other tumours and population groups. Community networks influenced beliefs and norms Common misconceptions, beliefs and behaviours held by patients were reinforced by community and social network beliefs. 8 29 31 36 Interlinked with community beliefs about cancer is self and socially experienced stigma due to the cause of the disease being spiritual: a curse, misendeavour or the patient being a witch. 24 27 39 This led to patients seeking traditional herbal and spiritual treatments, while creating shame and secrecy. A retrospective survey found patients who did not complete treatment were more likely to answer they do not know if they were fearful of their community response. 
However, this likely reflects uncertainty by next of kin respondents, who were substantially higher in 'non-completing treatment' groups. 33 Policy NHIS inclusion of cancer care Based on the financial barriers to treatment reported by patients and healthcare workers, lack of cancer care inclusion within the NHIS was inferred to lead to treatment refusal and delays. 3 This was the case for cancers not covered by the scheme, 26 42 48 as well as breast 23 24 and cervical cancer. 27 40 Patients with breast cancer were unanimously frustrated that the NHIS did not cover substantial amounts of treatment and discussed the huge financial burden, especially of chemotherapy drugs. 23 24 This meant some women could not start treatment on time. 34 This was aggravated by medicines stock-outs, 3 23 29 47 requiring purchase elsewhere at additional cost. 23 47 Not being insured was significantly associated with a shorter wait time for breast cancer treatment, 32 which could reflect preferential treatment to those paying upfront due to delays in the administrative process of reimburse NHIS funding. Knowledge of hormone receptor status predicted complete treatment follow-up, as this service is offered at a cost. 41 Healthcare professionals acknowledged costs were barriers for patients but struggled to broach such topics. 35 Integration of pluralistic care approaches Given the prominent roles of traditional, herbal and spiritual care improved integration with the orthodox health system could improve patient access. 25 28 Thirty-eight per cent of clinical workers surveyed in Ghana attribute treatment disruptions to traditional medicines use. 3 Reported use of traditional healers was a significant predictor of late presentation after other variables were controlled for. 44 An assessment of factors associated with treatment completion found visiting a traditional healer was a significant predictor of not completing treatment. 33 Although alternative support could be concurrent to orthodox medical intervention, over 50% of complementary and complementary medicine practitioners surveyed indicated they did not let patients seek other care alongside 25 and 63% of customers said they declined orthodox therapy while using such therapies. 30 Nevertheless, one study found that although 12.2% seek alternative therapies, this only partially explains high rates (73.1%) of loss to follow-up. 41 While traditional herbalists are considered health professionals, with some services integrated into the Ghanaian health system, poor knowledge of cancer causes and symptoms, and treatment and reluctance to refer to other services are barriers to providing their patients with timely appropriate care. 25 This is influenced by a perceived reluctance collaborate from other health system components. 25 Mburu et al suggested the interaction between treatment approaches is non-linear as acts at multiple pathway stages. 37 Critique of the evidence As part of the CIS approach, a critique of the literature was conducted to identify themes in methods, assumptions, theories and analysis to identify methodological limitations and research gaps for further studies. A summary of the studies characteristics is displayed in figure 3. This review accurately represents evidence in Ghana, which has a high breast cancer contribution. Most studies focus either on presentation delays or treatment interruptions, barriers and treatment delays. 
Although many studies note treatment incompletion and loss of follow-up, only two assess this directly and they note challenges in data collection, 33 41 there are no qualitative studies exploring definitive incompletion. Eighty-eight per cent (25/28) of studies were based in tertiary treatment clinics in the Greater Accra and Ashanti region, so may not reflect those in other regions. As 89% of qualitative studies sampled purposively from tertiary clinics, this led to biased sampling. They omit those who did not attend, those who dropped out without contact or cannot reach treatment centres. Sampling patients in clinics represents those who eventually presented for and accepted treatment. Barriers could be experienced differently in the under-represented population. As some studies had a small sample and were predominantly Christian, there was limited ethnographic diversity, which may influence generalisability. Findings on the social demographic traits linked to barriers accessing treatment were inconclusive. Hypothetical inquiry was found to be common across studies. This may lead to error if an individual is not able to accurately predict their behaviour in an unknown situation, 49 which may be the case as cancer is stigmatised and not talked about openly. 50 Further work should explore and seek to understand the impact cognitive bias may have when using hypothetical situations. The gaps in research identified are summarised in box 1. DISCUSSION This study used CIS to review multiple types of qualitative and quantitative evidence from literature on access to cancer treatment services in Ghana. Applying 'candidacy' enabled the dynamic and continuous process of accessing, negotiating and accepting treatment to be explored within the Ghanaian social, economic and policy environment. It highlighted determinants of cancer treatment service access in Ghana are interlinked and within each stage of the socioecological model. There is a research gap in understanding the determinants of accessing treatment in the most vulnerable populations due to methodological limitations. Through this approach, we were able to critique the literature, highlighting trends in methodology and gaps in evidence for future study. The CIS enabled detailed context-specific insights as well as identifying limitations in research approaches, data collection and acquisition challenges to inform future research. A reflective and iterative approach to broaden the breadth of evidence, interpretation and assimilation, was taken. This was particularly valuable for integrity in research in West African, due to epistemic injustice in how knowledge is perceived and interpreted. 51 This is the first study to have explored the applicability of the candidacy framework of healthcare access to a Ghanaian setting. Although this has proven valuable in other African settings, 16 this model was developed in a UK setting, therefore it was important to acknowledge pre-existing bias in perspective and thus to critically assess this framing and how it might impact the interpretation of results. 51 Selecting globalised frameworks (over those locally synthesised) can lead to interpretive marginalisation. Another limitation of the framework was that we found differences in conceptualising and describing patient engagement pathway with cancer services between studies meant that ascertaining candidacy categories for each required researcher interpretation. 
Only published academic articles were included in this study, which may overlook other forms of evidence, including locally generated and day-to-day working understandings. However, this was minimised through Box 1 Themes and research gaps identified through critiquing the evidence ⇒ Most studies are situated at tertiary clinics, so may not represent rural regions. ⇒ Sampling from tertiary clinics means populations who do not present, negotiate the referral pathway from local services and eventually choose treatment are not represented. Understanding what influences access in these missing populations warrants further study. ⇒ Treatment dropout is frequently observed but the reasons for this are not sufficiently explored. ⇒ As most studies focus on breast cancer, there is need to understand the extent of treatment drop out across all cancers and which factors influence this. Open access an iterative approach to include multiple databases, targeting local journals and supplementary searches. A methodological challenge was the vague and broad nature of terms relating to 'accessing' healthcare. Thus, there was a need to balance breadth of search with pragmatism to formulate a multistep search strategy. However, this may mean not all relevant literature was uncovered. Additionally, the critical interpretive nature of the review meant the evidence interpretation, conducted by the lead author, will reflect their inherent biases in world view. 22 This review was focused on access to cancer treatment in Ghana exclusively, so it is uncertain whether the findings translate to other contexts. The evidence highlighted financial barriers to cancer treatment access, which interacted with cultural factors and societal influences, such as norms around managing household finances, prioritisation and cultural acceptance of alternative medicines. Globally, catastrophic costs (defined as greater than 40% household effective income) due to non-communicable disease health expenditure are prevalent in LMICs, leading to individuals not taking medicines and impoverishment. 52 This exacerbates inequities, having the greatest impact on the poor and leads to detrimental coping strategies. 53 Poor patient clinician relationships have been found to lead people to seek traditional medicines alternatives in Ghana. 54 Traditional medicine practitioners were seen to offer more patient-centric, holistic care which was more comforting. A lack of trust was noted for orthodox facilities, which may reflect cultural beliefs as well as past healthcare experiences. An evidence synthesis across LMICs found modern medicines viewed to be harmful and ineffective; suspicion and mistrust of biomedicine lengthened delay and led to alternative use, and the impact of this may be exacerbated in the most vulnerable. 50 A qualitative exploration of influences on traditional medicines use in Ghana found their 'pull' by accessibility and alignment with cultural beliefs, whereas scepticism of biomedicine may push people from orthodox healthcare. 54 At a community level, spiritual beliefs about the origin of cancer interacted with personal perceptions at the individual level. Notions and beliefs typically held in the community can be ascribed as lay explanatory models of disease. In accordance with Kleinman, 55 lay models of disease can differ from biomedical models based on social experiences and impact how individuals interpret and act on their condition. 
Community factors influenced explanatory models for hypertension in rural northern Ghana and impact treatment access. 56 Similarly, in this review, explanatory models created stigma leading to secrecy and selection of traditional medicines over biomedical intervention. At a policy level, two key themes where reforms could improve cancer treatment uptake stood out: (1) greater inclusion of cancer treatments within the NHIS, (2) enhanced integration of traditional medicines to provide complementary options to medical care for cancer. Despite the NHIS aim of achieving universal health coverage for all, there has been notable disparity, with the lowest coverage concentrated in the poor. 57 58 Multidimensional barriers due to poverty, dissatisfaction and distrust of the health service and staff may prohibit enrolment. 59 Furthermore, catastrophic spending due to out of pocket costs remains high. 60 The NHIS plan seeks to cover a considerable amount of the local disease burden and since its nascence the inclusion list has been revised in response to transitions in disease burden and advancing treatments. 61 62 Despite breast and cervical cancer in theory being covered, 63 it is widely noted women still face considerable financial burden, which this study further highlights. The burden in men however remains less clearly mapped. The high expense of cancer medicines poses a challenge to decision-makers, who must weight costs and benefits when deciding on how to invest health budgets. Approaches such as the health technology assessment platform recently established in Ghana could help prioritise high-cost medicines. 64 65 Another policy area identified in this review was coordination between medical services and traditional medicines. Studies in Ghana have shown traditional medicine users are more likely to be poor and not insured on the NHIS. 66 The NHIS currently provides services through a plural system of public, faith-based, governmental and private facilities, and includes some traditional medicines. 57 Although lack of harmonisation of traditional medicines with the healthcare system 67 was reported in this study, there have been multiple reforms to this end since the Traditional and Alternative Medicine Directorate was established under the Ghanaian Ministry of Health in 2001. 68 Still coordination is hindered by a lack of professional respect from other health professionals. 67 Co-current use of alternative forms of care has been found in pregnant women in Ghana. It was highlighted the individual psychosocial and emotional support they provide, which this study found can be lacking patient interactions with orthodox medicine, despite being key for candidacy in cancer treatment. 15 69 The African Union has calls to recognise the importance of traditional medicines and a Global Centre for Traditional Medicines has been established in India. 70 71 However, there remain challenges in how traditional and complementary medicines are perceived and deemed efficacious. Barry 72 suggests differently constructed modes of evidence are needed for traditional medicines, as scientific evidence through clinical trials offers a reductionist, narrowly defined view of evidence, that overlooks the role of lived-in social experiences, advocating for the 'expanded epistemology of science'. 73 Future policies should seek to improve the affordability and quality of publicly provided medical care while harmonising with complementary treatments that align with community beliefs. 
Future research is needed to address the research gaps identified. First, to understand the extent of treatment non-adherence across all cancers and what individual, Open access social and health system factors impact this. Second, there are methodological limitations in understanding the views of those who do not attend clinics, which may represent the most vulnerable. Researchers should seek approaches to overcome this which are suitable within the local context. Twitter Laura A Gray @DrLauraAGray Acknowledgements The authors would like to thank the School of Health and Related Research Library team for their expert advice in designing the search protocol, as well as all researcher staff and graduate at the Universities of Sheffield and Ghana who commented or provided feedback on preliminary results. Contributors CZT, RA, LAG and RC conceptualised the study. CZT led in designing the study, conducted literature search to collect data, performed the data analysis and wrote the first draft. RA, LAG and RC contributed to selecting literature for inclusion in the study and advised on data collection and analysis. RC, RA, RNOA and LAG reviewed the initial and final draft manuscript and made intellectual inputs to improve quality. All authors read and approved the final manuscript. CZT is responsible for the overall content as guarantor and accepts full responsibility for the work and controlled the decision to publish. The corresponding author (CZT) attests that all listed authors meet authorship criteria and that no others meeting the criteria have been omitted. Funding This research was funded by Wellcome as part of a PhD studentship (108903/B/15/Z). Competing interests None declared. Patient and public involvement Patients and/or the public were not involved in the design, or conduct, or reporting or dissemination plans of this research. Patient consent for publication Not applicable. Ethics approval Not applicable. Provenance and peer review Not commissioned; externally peer reviewed. Data availability statement Data sharing not applicable as no datasets generated and/or analysed for this study. Data sharing not applicable as this is a review of literature, and no datasets were generated. Supplemental material This content has been supplied by the author(s). It has not been vetted by BMJ Publishing Group Limited (BMJ) and may not have been peer-reviewed. Any opinions or recommendations discussed are solely those of the author(s) and are not endorsed by BMJ. BMJ disclaims all liability and responsibility arising from any reliance placed on the content. Where the content includes any translated material, BMJ does not warrant the accuracy and reliability of the translations (including but not limited to local regulations, clinical guidelines, terminology, drug names and drug dosages), and is not responsible for any error and/or omissions arising from translation and adaptation or otherwise. Open access This is an open access article distributed in accordance with the Creative Commons Attribution 4.0 Unported (CC BY 4.0) license, which permits others to copy, redistribute, remix, transform and build upon this work for any purpose, provided the original work is properly cited, a link to the licence is given, and indication of whether changes were made. See: https://creativecommons.org/ licenses/by/4.0/.
2022-10-07T06:17:40.812Z
2022-10-01T00:00:00.000
{ "year": 2022, "sha1": "f10b4ec9b322d4fa37d190a7c053b2dd9cefaee8", "oa_license": "CCBY", "oa_url": "https://bmjopen.bmj.com/content/bmjopen/12/10/e065153.full.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "a10cd7d8770afffa8597be58ea4a1016d7c47ef7", "s2fieldsofstudy": [ "Political Science", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
235436705
pes2o/s2orc
v3-fos-license
Structured and sustained family planning support facilitates effective use of postpartum contraception amongst women living with HIV in South Western Uganda: A randomized controlled trial Background Despite low pregnancy intentions, many women accessing contraception discontinue use, increasing the risk of unwanted pregnancies among women living with HIV (WLWH). We evaluate whether a family planning support intervention, inclusive of structured immediate one-on-one postpartum counseling, and a follow-up mechanism through additional health information and SMS reminders affects continuous contraceptive use and pregnancy incidence among recently postpartum WLWH. Methods We performed a randomized controlled trial between October 2016 and June 2018 at a referral hospital in southwestern Uganda. We included adult WLWH randomized and enrolled in a 1:1 ratio to receive family planning support or standard of care (control) and completed an interviewer-administered questionnaire at enrolment, 6 and 12 months postpartum. Our two primary outcomes of interest were; continuous use of contraception, and incidence of pregnancy. Secondary outcomes included contraception uptake, method change, discontinuation and pregnancy intentions. The trial was registered with clinicaltrials.gov (NCT02964169). Results A total of 317(99%) completed all study procedures. Mean age was 29.6 (SD = 6.0) vs 30.0 (SD = 5.9) years for the intervention vs control groups respectively. All women were enrolled on ART. Total women using contraception continuously were 126 (79.8%) in the intervention compared to 110 (69.2%) in control group (odds ratio (OR) = 1.75; confidence interval (CI) = 1.24-2.75, P = 0.003). Pregnancy rates were 2% (N = 3) in the intervention vs 9% (N = 14) in the control group (OR = 0.20, 95% CI = 0.05-0.62, P = 0.006). Pregnancy intention was lower in the intervention vs control group (OR = 0.23, 95% CI = 0.08-0.64, P = 0.002). Women actively enrolled on contraception reduced more in the control compared to the intervention group (OR = 3.92, 95% CI = 1.66-9.77, P = 0.001). Women enrolled on each contraceptive method did not differ by group except for implants. More women initiating contraception use within three months postpartum had better continued use for either intervention (N = 123, 97.6% vs N = 3,2.4%) or control group (N = 86,78.2% vs N = 24,21.8%). Method-related side effects were less reported in the intervention group (OR = 0.25, 95% CI = 0.10-0.60, P = 0.001). Conclusion We found that sustained and structured family planning support facilitates continuous use of contraception and lowers rates of pregnancy amongst postpartum WLWH in rural southwestern Uganda. Women who initiated contraception within three months postpartum were more likely to maintain continuous use of contraception than those initiating later. Further evaluation of actual and perceived facilitators to the continuous contraception use by this support intervention will help replication in similar settings. Trial registration NCT02964169 Supporting women living with HIV (WLWH) to delay or prevent unwanted pregnancy may improve women's health [1]. While many WLWH want to have children [2,3], in a recent survey of immediately post-partum WLWH in southwestern Uganda, over 95% wanted to avoid unwanted pregnancies in the next 12 months [4]. However, despite low pregnancy intentions, the overall postpartum contraceptive prevalence for WLWH in Uganda remains at 57%, with up to 50% of pregnancies reported as unplanned [5]. 
Many women accessing contraception therefore, may inconsistently use or discontinue use without switching to another alternative and effective method, which can lead to unwanted pregnancies, perinatal HIV transmission, and other pregnancy complications [6,7]. According to some studies, about 15% of breastfeeding women conceive before resumption of menses, 28% women conceive while exclusively or almost exclusively breastfeeding [8,9]. These risks and complications are higher among WLWH [2,3]. Correct use of family planning methods can avert 10% of child mortality and more than 30% of maternal deaths by promoting spacing of pregnancies [7,10,11]. Pregnancies in the postpartum period pose the greatest risk for the health of women and their infants, with increased risks of adverse health outcomes [12]. Using a birth control method soon after childbirth is very crucial in ensuring unintended pregnancies are avoided in more than 95% of postpartum women who want to avoid pregnancy in the next 24 months [12]. Data suggest that women who began contraception before the return of their menses were more likely to continue contraception use by the end of their first year postpartum compared to those who initiated a family planning method after the return of their menses [11]. Other scholars have noted up to 40 percent of women with an unintended pregnancy while accessing contraception in Missouri, USA were using the method incorrectly or inconsistently [13]. This rate may be higher in sub-Saharan Africa and Uganda. Some behavioral change interventions targeting to improve uptake, continuation, and proper use of postpartum family planning in similar low resource settings include; easing access to family planning services, increasing supply and method choice, enabling women to immediately switch to preferred or more acceptable and effective methods when they encounter problems, and improving follow-up mechanisms (eg, appointment or refill reminders via mobile technology) [6,14]. However, postpartum contraceptive uptake remains low [12] despite many health facility encounters women and or couples have around the postpartum period. These encounters would potentially be a cost-effective, convenient, efficient and reliable "touch point" to initiate and or provide a woman with an effective family planning method. Limited research has been conducted to understand contraception discontinuation and unintended pregnancies during the early postpartum period, especially among postpartum WLWH whose probability of contraceptive failure has been reported to rise sharply over time [15,16]. We performed a randomized controlled trial to evaluate whether a family planning support intervention voucher, inclusive of structured immediate postpartum counseling, and a follow-up mechanism providing additional health information and SMS reminders has a measurable impact on continuous use of contraception, and pregnancy incidence at 12 months postpartum among recently postpartum WLWH delivering at a publically-funded regional referral hospital in rural, southwestern Uganda. We hypothesized that improved family planning support would improve continuous use of effective FP by reducing delayed refills, discontinuation, improving rates of switching to an alternative and effective contraception method, and thus improve protection against unintended pregnancies within the first year following childbirth. 
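As a reading aid for the effect sizes quoted in the abstract, the crude (unadjusted) odds ratio for continuous contraceptive use can be reproduced from the reported counts. The sketch below back-calculates the arm sizes from the reported percentages and attaches a Woolf-type confidence interval; the interval published in the abstract (1.24-2.75) comes from the authors' own analysis and need not coincide with this crude approximation.

```python
import math

# Counts reconstructed from the abstract: 126/158 (79.8%) continuous users in the intervention
# arm vs 110/159 (69.2%) in the control arm; denominators are back-calculated from the
# percentages and are consistent with the 317 women who completed all study procedures.
a, b = 126, 158 - 126   # intervention: continuous users, non-users
c, d = 110, 159 - 110   # control:      continuous users, non-users

or_hat = (a / b) / (c / d)
se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # Woolf's method for log(OR)
lo = math.exp(math.log(or_hat) - 1.96 * se_log)
hi = math.exp(math.log(or_hat) + 1.96 * se_log)
print(f"crude OR = {or_hat:.2f}, approximate 95% CI {lo:.2f} to {hi:.2f}")
# The reported interval (1.24-2.75) likely reflects a model-based variance estimate,
# so it can differ from this crude approximation.
```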
Earlier analyses showed that very few women in this cohort (N = 9, 2.8%) wanted to have another child within the first year following childbirth, with women in the intervention group more likely to initiate contraception within 8 weeks postpartum [4,17]. In this analysis we present the 12-month data on continuous contraceptive use and pregnancy incidence. Study design and setting This analysis includes 12-month exit data collected from WLWH enrolling into a randomized controlled trial at the maternity ward of a regional referral hospital in southwestern Uganda. A complete description of the study's six-month follow-up period has been published [4]. This interim analysis presented the initiation of a modern family planning method within 8 weeks postpartum as a primary outcome of interest. Mbarara Regional Referral Hospital (MRRH) is a publically-funded teaching hospital serving 10 districts, with a population of over 5 million people. The study aimed to evaluate the effect of family planning support vs standard of care on contraceptive use at 12 months postpartum (NCT02964169). The hospital is equipped with trained staff, midwives, and obstetricians able to offer comprehensive family planning services. Women accessing care at this hospital represent varied social and demographic backgrounds. The hospital performs over 12 000 deliveries annually and reports a 13% HIV prevalence among women (hospital records). Due to structural and capacity challenges at the referent hospital site, routine discharge is often completed without family planning counseling. Participants and recruitment This study was initiated in October 2016 and enrolment ended in May 2017. Eligible participants were WLWH 18 years of age or older, having a male sexual partner and/or anticipating one within the next 2 years, admission into a postnatal ward regardless of pregnancy outcome and qualified for any family planning methods available. The exclusion criteria included: 1) HIV-negative, 2) history of hypersensitivity to latex, 3) no male sexual partner and/or not anticipating one for the next 2 years, 4) only sexual partner has had vasectomy, 5) resides and works more than 20km from the study site, or 6) inability to complete informed consent process as assessed by the study nurses. A primary partner was defined either as a regular spouse, who is also a regular sexual partner, or the most recent sexual partner if no main partner was named. Trained research assistants (RAs) approached WLWH in the postnatal ward within 12 hours after delivery to capture all women delivering at this facility. RAs obtained voluntary written informed consent from all eligible participants in the local language in a private area of the hospital. All consenting participants gave written informed consent, or for those who could not write, a thumbprint was made on the consent form. Family planning voucher intervention Following delivery, the women randomized to the intervention group were counseled and given a family planning voucher by the study nurse. Structured, immediate postpartum counseling was offered in a clinic setting, in a private room by a well-trained study nurse and lasted up to 40 minutes as previously reported [4]. 
The one-on-one educational counseling provided face-to-face standardized (a list of items to talk about was generated) information to the woman and primary sexual partner (if available) on the available contraceptive methods, family size choices and desires, medical eligibility for the different contraceptive methods, dual contraception, when to start contraception, how to use methods, potential side effects and benefits/effectiveness, and where to access family planning methods. Women were given opportunity to ask questions to facilitate women's informed choice to any of the five freely-available family planning methods at MRRH (including condoms, injectables, contraceptive pills, copper IUDs, and contraceptive implant). WLWH were also counseled on standard days method (SDM), and lactational amhenorrea method (LAM). The same voucher and counseling was also given to the male sexual partner, when available, due to its identified effect on family planning utilization [18]. The voucher was used as an incentive to motivate women to seek/demand and access family planning easily at family planning clinics. A well-trained nurse was available at the postnatal clinic to identify women with vouchers to access the relevant service provider within one hour of arrival. Although family planning is freely-available in public health facilities, stock outs, especially of the long-acting contraceptive methods (implants and IUDs), attributed mainly to supply chain challenges are common [19]. The study promoted minimal stock outs of all methods at MRRH during the study period through regular involvement in forecasting and ordering. Private facilities rarely experience contraceptive methods stock outs [19], which are fairly affordable, and thus the family planning voucher also offered an opportunity for free administration (eg, injection, IUD placement, implant) of a contraceptive method purchased (by participants) outside of the public health care facility. For this study, women who reside and or work within 20km from MRRH were enrolled, thus all women were in close-proximity to a facility with family planning services. The voucher was offered for free, had an expiration of 3 months from the date of delivery and included detailed information about side effects for the different contraceptive methods as well as a general overview on benefits of family planning [4]. Within 3 months postpartum, the women were expected to have returned to a health facility for at least 2 of their scheduled routine postnatal visits and or immunization appointments. After 6 months postpartum, the women who selected oral contraceptive pills were sent daily scheduled SMS reminders [adherence support] for the first 4 months, then weekly reminders for the next 2 months ( Figure 1). This level of SMS support has been found to have a positive impact on adherence to ART [20]. Enrolled sexual partners/regular spouses of women in intervention arm also received these reminders weekly [but not daily or monthly]. The scheduled/timed reminders were also sent monthly if one chose an injectable contraception. Daily reminders were sent to women who chose male or female condoms. Routine reviews on family planning were done for all women alongside their routine visits at the HIV clinic or post-natal PMTCT visits. 
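The reminder schedule described above varies by contraceptive method. A minimal sketch of how such a method-specific SMS schedule could be encoded is shown below; the method names, intervals and the scheduling function are illustrative assumptions, not the study's actual messaging system.

```python
from datetime import date, timedelta

# Illustrative reminder intervals (days between SMS), loosely following the
# schedule described in the text: daily for oral pills and condoms, monthly
# timed reminders for injectables, weekly messages to enrolled partners.
# These values are assumptions for illustration only.
REMINDER_INTERVAL_DAYS = {
    "oral_pill": 1,
    "condom": 1,
    "injectable": 30,
    "partner": 7,
}

def reminder_dates(method: str, start: date, weeks: int) -> list[date]:
    """Return the scheduled reminder dates for one participant."""
    interval = REMINDER_INTERVAL_DAYS[method]
    end = start + timedelta(weeks=weeks)
    dates, current = [], start
    while current <= end:
        dates.append(current)
        current += timedelta(days=interval)
    return dates

# Example: monthly injectable re-injection reminders over roughly six months.
for d in reminder_dates("injectable", date(2016, 11, 1), weeks=26):
    print(d.isoformat())
```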
Control group In order to align the control group with guidelines-based standard of care, women in the control group were offered routine family planning counseling at discharge as defined by the Uganda clinical guidelines [21] by a well-trained ward nurse and documented in the Ministry of Health discharge form. The control group was not given a voucher nor received any SMS reminders. Women from both groups were invited to start any available family planning method of their choice prior to discharge. The choice and place of family planning was entirely up to the participants regardless of group. All women accessing services at the hospital family planning clinic received care by a trained nurse to counsel and administer a chosen family planning method, beside the study nurse, who doubled as a dedicated contact person at the same clinic. Women were followed for one year. All participants completed interviewer-administered interviews at baseline, six and twelve months postpartum. Interviews were conducted by two trained research assistants fluent in English and the main local language in a private office space or any other private space of their choice. Each interview took about 30-45 minutes. Permission to contact spouses/sexual partners was obtained from all enrolled women. If permission-to-contact was given, a spouse/ sexual partner was contacted, enrolled and interviewed at baseline, 6 and 12 months for both groups. The spouses for controls were not given vouchers nor sent SMS reminders. Randomization and blinding A study biostatistician generated a randomization list with a block size of 20, totaling 160 participants equally in each of the two groups into which mothers could be randomly assigned and enrolled. The aim of the study and details of the procedures to be involved in the trial, were explained before randomization. Once mothers consented to participate in the study, a study number was allocated by the research assistant by taking the next in a series of similar opaque envelopes provided to conceal allocation of groups. These opaque envelopes were labeled with computer-generated list of numbers with group allocation. The intervention provider and study participants were not blinded to the intervention, as the study design and nature of the intervention did not allow it. However, the research assistants were blinded to the group allocation until eligibility and study participation was confirmed. They were also blinded to the hypothesis of the study. WLWH were screened for eligibility and enrolled equally into the intervention arm (Family planning support) and standard of care (control group) between October 2016 and May 2017. A different RA from the one enrolling participants was trained to specifically collect follow-up data to limit social desirability bias. Data was collected electronically. The data analyst was blinded to the group allocated to different study participants. A transport refund of US$3 was given on each visit. Study outcome Our primary outcomes of interest for this trial were 1) effective contraception use, defined as consistent use (both self-report and observational chart review by a study nurse at the family planning clinic to identify and confirm consistent use) of a family planning method within 12 months postpartum, and 2) pregnancy incidence, defined as the rate of pregnancies in the next one year following enrolment. 
Missing family planning method for more than 2 calendar days in a cycle/mo for all contraceptive methods (except for more than a week for injectables) was defined as inconsistent use of contraception. Outcomes from both reports were evaluated to confirm the internal validity and consistency of the two measures. A family planning chart or card is usually provided at any facility providing family planning services, and routinely presented by the women and filled out by any attending HCP initiating, switching or renewing a contraceptive method. Blister packs for oral contraceptives were inspected by the study RA whenever available. Secondary outcomes included family planning uptake at 6 and 12 months postpartum, change of family planning method, pregnancy intentions, contraceptive discontinuation (abandonment or late refills of any contraceptive method for more than 1 week except more than 1 month for injectables) and reasons for discontinuation. Consistent use for condoms or diaphragm as an effective family planning method was defined as "use every time one had sex". Although postpartum counseling on contraceptive methods focused on the five methods; condoms, injectables, contraceptive pills (including progestin-only pills for breastfeeding mothers), copper IUDs, and contraceptive implants as provided at MRRH, modern family planning was defined as use of these five and any other methods such as diaphragm, cervical cap which participants could have obtained from other facilities. Switching was defined as any change of a family planning method. Pregnancy tests were done at 6 and 12-month follow up visits. Follow-up tools also contained specific questions to explore and document a pregnancy occurrence within the last 6 months. The women were further instructed to inform/notify the study nurse upon any other pregnancy diagnoses during the study period. Sample size and statistical analysis Provision of a family planning voucher has a significant impact on contraceptive uptake and long-term contraceptive use by an increase of 18 percentage points within 2 years of the reporting period among postpartum women [18]. The current contraceptive uptake among WLWH is 45% [2]. We therefore hypothesized that improved and focused family planning support through a voucher will increase effective contraceptive use among WLWH to at least 63% within a follow-up period of one year postpartum. Allowing for a two-sided type I error of 5%, we required 320 postpartum women (with equal numbers of participants in the intervention and control groups) to enable 90% power to demonstrate a significant difference in consistent (effective) contraceptive use (our first primary outcome) between groups. For the second primary outcome, Nieves and colleagues [2] documented a pregnancy desire/ aspiration rate of 33% for sexually active WLWH attending ART clinic. Specifically, Snow and colleagues [1] also reported a lower and significant likelihood in desiring more children in future of 27.7% among married or cohabiting HIV-positive women when compared to 56.4% among HIV-uninfected women. Improved family planning accessibility and support offered through a voucher reduced the rate of births in the next year following the intervention to 6.8% [18]. We therefore hypothesized a decrease in pregnancy or births in the one year following the intervention to 6.8% with improved family planning availability and support among the recent post-partum WLWH. 
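As a rough check on the 320-woman figure quoted above for the first primary outcome, the standard normal-approximation sample-size formula for comparing two proportions can be evaluated directly. The sketch below is illustrative only; it assumes the proportions stated in the text (45% under standard of care vs a hypothesized 63% with support), 90% power and a two-sided 5% type I error, and the study team may have used different software or corrections. The same pattern applies to the pregnancy-rate calculation described next.

```python
from math import sqrt
from scipy.stats import norm

def n_per_group(p1: float, p2: float, alpha: float = 0.05, power: float = 0.90) -> float:
    """Normal-approximation sample size per group for comparing two proportions."""
    z_a = norm.ppf(1 - alpha / 2)   # ~1.96 for a two-sided 5% alpha
    z_b = norm.ppf(power)           # ~1.28 for 90% power
    p_bar = (p1 + p2) / 2
    num = (z_a * sqrt(2 * p_bar * (1 - p_bar)) +
           z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return num / (p2 - p1) ** 2

# 45% contraceptive use under standard of care vs a hypothesized 63% with the voucher
print(round(n_per_group(0.45, 0.63)))   # ~159 per group, i.e. about 320 women in total
```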
Allowing for a two-sided type I error of 5%, we required a total of 80 women in each of the two groups to have a 95% chance of detecting a significant difference in pregnancy rates (our second primary outcome) between groups. We described demographic and clinical data for the cohort using standard descriptive statistics. The Household Food Insecurity Access Scale (HFIAS) was calculated as recommended [38]. We compared dichotomous outcomes between study groups by estimating crude odds ratios with 95% confidence intervals, and testing for differences between the two groups. We estimated P-values with χ2 testing using a level of significance of 0.05. We compared continuous outcomes and estimated P-values using studentized t-tests. All primary and secondary outcomes were analyzed using intention-to-treat analyses (although no participants were wrongly allocated a group [23]. Although our study was fully randomized, the differences in baseline characteristics noted between study groups was assessed for confounding by fitting multivariable logistic regression models. As per the revised CONSORT guidelines for reporting randomized trials [26], we assessed for sub-group effects for the following characteristics by testing the significance of interaction terms in a multivariable regression model: 1) children living in household below 18 years of age (dichotomized into <3 children and ≥3 children categories), 2) parity (dichotomized into 1-3, >3), 3) Prenatal visits (<3 and ≥3 visits), 4) Household income (≤150 000 and >150 000 Ugandan Shillings), 5) involvement in any domestic violence (Involvement, no involvement), 6) Religion (Catholic, protestant, others), 7) disclosure to spouse, and 8) duration on ART (<4, 4-8 and >8 years). These sub-groups were not pre-specified but identified at data analysis stage when comparing different baseline characteristics across the two arms. A Mantel-Haenszel test was also done to control for each of these variables. All statistical analyses were performed using STATA version 13.0 (Statacorp, College Station, Texas, USA). Compliance with ethical standards All human subjects' ethical approvals were obtained from Institutional Review Committees of Mbarara University of Science and Technology (No.10/08-16) and Uganda National Council of Science and Technology, and registered with clinicaltrials.gov (NCT02964169). A research assistant trained in human participant research conducted informed consent procedures with eligible participants in the local language in a private area. A written informed consent was obtained from all eligible participants. RESULTS Of the 2237 women screened for eligibility between October 2016 and May 2017, 364 participants were eligible (Figure 1). A total of 28 women declined participation because they had not yet disclosed their HIV sero-status to partner, and 16 declined due to the time commitment. Of the 1873 women excluded, 1829 were due to HIV-negative sero-status, residing and working outside catchment area (21), age below 18 years (16), history of tubal ligation (4), or reported history of hypersensitivity to latex (3). A total of 320 postpartum WLWH were randomized and enrolled equally into the family planning voucher and control arms of the study following delivery at MRRH. Three hundred and seventeen (99%) of enrolled participants completed all study procedures. The characteristics of the enrolled women are presented in detail elsewhere [17] and summarized in Table 1. 
In brief, the mean age of participants was 29.6 (Standard Deviation [SD] = 6.0) and 30.0 (SD = 5.9) years for the control and family planning voucher groups, respectively. Mean CD4 count was 396 cells/mm 3 (SD = 61) for those enrolled in control vs 393 cells/mm 3 (SD = 64) in family planning voucher. At enrolment, half of the women in both the voucher (N = 87, 55%) and control (N = 86, 54%) groups wanted to have a child in 2 years postpartum. Over 80% of referent pregnancies in the voucher (N = 136, 86%) and control (N = 128, 81%) groups were reported as planned. All women were enrolled on ART, with mean ART duration of 5.1 (SD = 4.5) for those in the voucher group and 4.1 years (SD = 3.3) for those enrolled in the control condition. Almost half of participants (46%) attained education greater than primary (50% vs 43%). A small number of male sexual partners participated in the study including 18 (11%) and 21 (13%) for the voucher and control, respectively. Most of the women (N = 107, 70% vs N = 103, 69%) reported prior use of modern family planning methods. None of the women opted to start or receive immediate postpartum family planning before discharge. Other demographic and clinical characteristics were similar between the two groups as presented in Table 1. Both self-report and postnatal chart audit to evaluate contraceptive use generated identical outcomes, confirming the internal validity and consistency of the two measures. By 12 months postpartum (1st primary outcome), 126 (79.8%) women used an effective family planning method(s) consistently in the family planning support group compared to 110 (69.2%) women in control group (OR = 1.75; 95% CI = 1.24-2.75, P = 0.003, Table 2). Pregnancy rates (2nd primary outcome) were 2% (N = 3) in the first 12 months postpartum of the family planning intervention group compared to 9% (N = 14) in the control group (OR = 0.20; 95% CI = 0.05-0.62, P = 0.006). The desire/intention to have another child was lower in the intervention group compared to control group (OR = 0.23; 95% CI = 0.08-0.64, P = 0.002). The proportion of women who ever-enrolled on any family planning method in the last 12 months postpartum was not significantly different between groups (OR = 2.03, 95% CI = 0.50-8.28, P = 0.316). In this cohort, the proportion of women still actively enrolled on an effective family planning method at 12 months postpartum reduced for both groups, but more in the control compared to the intervention group (OR = 3.92; 95% CI = 1.66-9.77, P = 0.001). By 12 months postpartum, frequency of methods used differed by group for each family planning method (except for oral contraception): injectables were selected by most women (N = 194, 63%, P = 0.057, Figure 2) and 61% of this proportion was in the experimental arm vs 65% in the control arm. The proportion of women using implants followed at 19.8% (N = 61, 23% vs 17%, P = 0.012), with <10% of women in each group selecting condoms (5.5% vs 9.2%, P = 0.630), oral contraception (3.9% vs 3.9%, P = 0.933), and IUDs (7.1% vs 2.6% P = 0.091). About 3% of the women in the control arm and none in the intervention arm used standard days method, and or lactation amenorrhea method. Less than 4% used dual contraception in each of the groups. No women reported use of a diaphragm or cervical cap. 
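The crude odds ratio for the first primary outcome reported above can be approximately reconstructed from the published percentages. In the sketch below, the arm sizes (about 158 intervention and 159 control completers) and the resulting 2 × 2 cell counts are back-calculated assumptions, and the Woolf log-based confidence interval is only one of several possible methods, so it will not necessarily reproduce the interval reported in the paper.

```python
from math import log, exp, sqrt

def crude_odds_ratio(a: int, b: int, c: int, d: int, z: float = 1.96):
    """Crude OR with a Woolf (log-based) 95% CI for a 2x2 table:
       a = intervention events, b = intervention non-events,
       c = control events, d = control non-events."""
    or_ = (a * d) / (b * c)
    se = sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo, hi = exp(log(or_) - z * se), exp(log(or_) + z * se)
    return or_, lo, hi

# Counts back-calculated from the reported percentages (assumption):
# intervention: 126 of ~158 women used contraception continuously; control: 110 of ~159.
print(crude_odds_ratio(126, 158 - 126, 110, 159 - 110))   # OR close to 1.75
```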
More women who started early use of contraception within 3 months postpartum had better continued use of contraception than those starting later than three months postpartum for either intervention (N = 123,97.6% vs N = 3, 2.4%) or control group (N = 86, 78.2% vs N = 24, 21.8%). Although no statistically significant differences were detected, more women in the intervention group changed contraceptive method within the course of one year postpartum (OR = 1.36; 95% CI = 0.54-3.22, P = 0.548). All women with switches opted for long-acting implants or IUD from a less effective contraceptive method (condoms). However, fewer women (15, 9.6%) discontinued family planning for at least one week within the first year after birth for the intervention group compared to 37 (24.2%) women in the control group (OR = 0.34; 95% CI = 0.17-0.65, P = 0.001). The reasons for discontinuing family planning did not differ by group, except for method-related side effects which were less reported in the intervention group compared to the control group (OR = 0.25; 95% CI = 0.13-0.58, P = 0.001). The different sub-group analyses did not indicate any differences in family planning continuation between groups across any ART adherence level categories reported throughout the study period. While we performed a randomized control trial and anticipated that any differences in baseline characteristics occurred by chance, we detected baseline differences in children living in a specific household, parity, prenatal visits, household income, domestic violence, religion, disclosure to spouse, and duration on ART ( Table 1). We assessed whether the estimated odds ratio was affected by differences in baseline characteristics between groups by fitting multivariable logistic regression models. In these models, we found no meaningful change in the odds ratio of confirmed consistent/continuous use of an effective contraceptive method at 12 months for intervention vs control participants after adjustment for the factors listed above (adjusted OR = 2.15; 95% 95% CI = 1.35-4.41; P = 0.004). In stratified analyses to assess for differences in our primary outcome within sub-groups, no sub-group-by-treatment interaction terms were significant (Table 3). Thus, while we never set out to estimate effects within sub-groups, our results do not suggest differential effects in treatment within specific sub-groups of WLWH. DISCUSSION We demonstrate that sustained and structured family planning support facilitates consistent use of effective postpartum contraception amongst WLWH in Southwestern Uganda. We found an 11% increase in consistent use of contraception among women enrolled in the family planning support group (80%) vs women in standard of care (control) group (69%). We also found a 7% lower rate of pregnancy in the intervention group (2%). Women who started early use of contraception within 3 months postpartum were more likely to maintain contraceptive use than those starting later than three months postpartum. There was a 10% rate reduction in the desire to have another pregnancy/child in two years postpartum among women in the intervention group (3%) vs control (13%). More women in the control group discontinued family planning mostly because of method-related side effects compared to intervention group. 
There were similar rates of women who had ever-enrolled on any family planning method within the course of the year in both groups, although active-use of contraception at 12-month postpartum reduced significantly in control vs intervention groups. While not statistically significant, we did observe a higher rate of change/switching contraceptive methods in the intervention vs control groups, with different methods used for those ever enrolled on an effective family planning method within the course of one year. Only proportions of women enrolled on implants were statistically significant. These data therefore demonstrate that, consistent or continuous use of contraception could be increased by a well-structured comprehensive family planning voucher program inclusive of structured and dedicated one-on-one counseling to: reassure, provide proper information, understand and address socio-cultural concerns, support and ensure correct and continuous use of contraception, as well as a good follow-up mechanism to improve provider-client-interaction and remind users of appointments for review or refill to avoid unintentional discontinuation due to missing clinically allowable grace period. Our data contribute to existing evidence on optimal uptake and utilization of family planning to avert unwanted pregnancies, child mortality and maternal deaths when couples are well supported to adequately space pregnancies in resource-limited settings. Like other prior studies [39,40], we found a modest increase in continuous use of contraception among women supported to initiate and use different methods of contraception. Other scholars have also documented reduced unwanted pregnancy rates when women are well counseled and given adequate information to quickly initiate postpartum contraceptive use during antenatal or postnatal visits [41], plus other secondary outcomes, including reduced contraceptive method discontinuation [42], reduced pregnancy intentions [18], improved method switching/ change [43], uptake and long-term contraceptive use among postpartum women [4,18]. Our data therefore signal that, among WLWH qualified to use available postpartum contraception, sustained structured family planning support through structured one-on-one counseling, improved follow-up through additional health information, routine reviews and scheduled SMS reminders to prompt refills during this critical postpartum period using the existing structures of a publically funded hospital significantly improves uptake and adherence to effective contraceptive methods. This approach seems to facilitate proper information transfer, continuity, correct use and reassurance about anticipated side effects, thus providing efficient, convenient, low-cost means to reduce delayed refills, delayed switching and discontinuation duration in a resource-limited setting where uptake to freely available contraceptive methods are mainly constrained by method acceptability and information gap [6]. The significant difference in other clinically significant outcomes between groups, also offer promising preliminary data that structured family planning support is useful in delaying pregnancy intentions, as well as method discontinuation due to side effects in this select population that access regular and routine HIV care at designated specialized HIV clinics. Whether other populations gain preferential benefit from the structured family planning support over standard of care remains an important question for further investigation. 
Our results are largely consistent with prior studies comparing the effect of improved quality of care through intensive counseling at initial visit/consultation and multiple contacts or technological reminders with routine care for reducing rates of unwanted pregnancies as well as improving continuation rates and acceptability to contraception. We acknowledge two 2013 (9 trials,) and 2019 (10 trials, n = 6242) systematic reviews reporting on strategies to improve adherence and continuation of hormonal methods of contraception [39,44]. However, only one prior trial (n = 43) has specifically compared a multi-component intervention of a one-onone nurse counseling, a videotape about OCs and written material about OCs during antepartum period vs routine resident-physician counseling (usual care), exclusive of any technological reminders or calls [45]. Also noteworthy from both reviews, no adherence, consistent use data was reported among trials investigating use of direct in-person counseling or multiple components of counseling during initial visit or at discharge. Additionally, in both reviews, neither pregnancy nor discontinuation data were reported among trials investigating use of reminders and or educational messages compared to routine care. As we acknowledge documented benefit of various strategies to improve contraceptive adherence and continuation, we note that several of these trials had small sample sizes, and only two involved technological approaches. The evidence in the recent review was also largely limited by variability of comparator "usual care" and heterogeneity of outcome definitions. For example, Trent and colleagues defined on-time injection appointments as adherence in an intervention that included daily appointment text messages 72 hours before a scheduled clinic visit, a call following missed appointment and monthly health messages, where standard of care included counseling, clinic appointment reminders as well as a call from the nurse manager whenever a scheduled re-injection appointment was missed [46]. Bereson and colleagues [47] on the other hand, defined and assessed consistent OC use (adherence) and pregnancy from both self-report and medical records audit after a 6-month intervention of counseling plus weekly, monthly and toll-free phone calls compared to standard care from a nurse practitioner with written protocols for new OC users. Two other trials assessed number of missed pills per cycles one through three [48] or at six months, use of OC at last sexual encounter and interruptions in OC use longer than seven days, [49]. Importantly, all the 10 trials included in both reviews had high lost-to-follow-ups of 24%-44%. Unlike these prior studies, our study was powered to demonstrate a significant benefit (or lack thereof) of a multi-component intervention of sustained family planning support (inclusive of structured one-on-one counseling at discharge, and a structured follow-up-mechanism through additional health information, reviews and SMS reminders during this critical postpartum period) on consistent use of postpartum contraception compared to standard of care amongst WLWH in Southwestern Uganda. Importantly, although scholars have documented up to a third of women starting a modern contraceptive method discontinuing the method within the first year [6], our trial's discontinuation rate was 24% amongst women receiving standard of care vs 10% for the intervention group. 
Although the provision of a structured follow-up mechanism inclusive of SMS reminders and or reviews, as well as additional health information about family planning benefits and expected possible side-effects on the voucher could have been helpful in continuously supporting women on different contraceptive methods chosen, qualitative data analysis of interviews conducted with a subset of participants to understand how this intervention facilitated couples' or individual decision-making to initiate and continue contraception is ongoing. Another potential explanation for differences between our study and prior data, which have shown larger contraception use rates is probably our inclusion of high risk WLWH already enrolled at specialized HIV/ART clinics where they receive routine medical reviews, counseling and ART refills. Our selection criteria could therefore over-estimate true differences in contraception uptake and or consistent use within the general population, and specifically in women at lower risk for unmet contraceptive need. Over 80% of referent pregnancies in our study population were also reported as planned. Although our study documented increased rates of discontinuation due to side effects with control vs family planning support group as previously reported [44], women who started early use of contraception immediately postpartum had better continued use of contraception than those starting later than three months postpartum. Additionally, the benefit of a well-structured and sustained multi-component intervention for improving effective contraceptive use was seen across most sub-groups. Effect sizes appeared more in certain sub-groups, for example women aged ≥30 years of age, those with parity ≥3, attended ≥4 prenatal visits, women who earn/ have an household income >150,000UGX (1US$ = 3650UGX), and women that had ever used modern family planning, which corroborates prior work demonstrating consistent less likelihood of discontinuing contraceptive use in these categories [6,50]. However, there were no significant differences in the effect of the confirmed consistent use of a contraceptive method across these categories. Our study had a number of strengths. This study documents consistent or continuous use of contraception in a randomized controlled trial amongst postpartum WLWH followed up over a 12-month period. We also document unintended pregnancies, failure, method switch, pregnancy intentions as well as reasons for discontinuation. Our research assistants and data analysts were blinded to the group allocation and hypothesis of the study. A different RA from the one enrolling the participants collected follow-up data to limit social desirability bias. Our data are also the first to our knowledge powered to detect a difference in continuous contraceptive use with a sustained and structured multi-component family planning support compared to the standard of care. We also performed this randomized controlled trial in a regional referral, prototypical, publically-funded and operated hospital in a rural setting with an active postnatal and family planning unit, subject to standard limitations of public sector health care facilities in the region. As such, the study has great potential for generalizability to similar settings. 
Given that most women in our study (86%) had reportedly disclosed to at least one of their sexual partners, we observed a low rate of eligible participants declining participation (N = 44, 8%) mainly due to worry for possible unintended sero-status disclosure (N = 28, 64%) or limited time available to them to participate in the study (N = 16, 36%). The population of postpartum WLWH with a high proportion of contraception uptake also enabled us to document the differences in contraceptive uptake, continuation, switch and discontinuation of the initiated contraceptive method and the effect of baseline subcategories by study arm with consistent use of contraception within a multivariate model. Our study had some important limitations. We observed an increase in contraceptive uptake from 38% to 98% and pregnancy intentions dropped from 55% to 8% from the pre-study period to the study period, suggesting either presence of strict inclusion criteria of only WLWH, all already enrolled and receiving ART from designated HIV clinics. This may also suggest a possibility of a Hawthorne effect, which might have resulted from improved availability of contraception different method choices at the facility within easy reach of all women regardless of the group or involvement in the study, as well as improved provider-client-interaction from the well-trained facility nurses that administered these methods and provided good, one-on-one counseling throughout the study period as recommended by the standard of care guidelines vs routine care that is frequently practiced in similar publically-funded facilities. As such, similar rates of women who had ever enrolled on any family planning method within the course of the year in both groups were observed. We observed a striking percentage of more than 80% intended pregnancies in this study. We hypothesize that the reason for this high rate of intended pregnancy is possibly due to the ongoing Test and Treat policy in Uganda, enrolment of all our participants on ART (with mean ART duration of 5.1 (SD = 4.5) for those in the voucher group and 4.1 years (SD = 3.3). This prior enrolment on ART may have improved counseling and knowledge for improved behavior change. Less than 40% of women also reported use of modern contraception 2 years' pre-pregnancy, probably in anticipation of pregnancy. We further note a small proportion of women who desired pregnancy in the first year following delivery (4 vs 5 women) and a small rate of pregnancies (3 vs 14 pregnancies) that made it difficult to make meaningful comparisons between intended and unintended pregnancies. Another limitation of our study was the inability to singly identify an aspect of this particular multi-component intervention that influenced consistent use of contraception. Although our study was matched to include male partners, few males participated. However, the measurement of our study outcomes did not require partner participation, suggesting generalizability and replicability of our study findings. CONCLUSION AND RECOMMENDATIONS We found that sustained and structured family planning support inclusive of structured one-on-one counseling at discharge and an improved follow-up mechanism facilitates consistent use of effective contraception amongst postpartum WLWH in rural southwestern Uganda. Pregnancy rates were therefore rare in the family planning support group. 
Women who were initiated to start use of contraception within 3 months following birth had better continued use of contraception than those starting later than three months postpartum. There were similar rates of changes/switches in postpartum contraception use with different methods for those ever enrolled on an effective family planning method within the course of one year. There was also a significant reduction in the desire to have another pregnancy/child in two years postpartum among women in the intervention group vs control. Active enrolment on any family planning method at 12 months postpartum was better in the family planning support group. Given the increased use of modern contraception, especially injectables in this population, there is clear need to improve family planning support from the context-aware health care providers through a structured one-on-one counseling before discharge, as well as active follow-up to facilitate provider-client interaction, early initiation, continuous use, and adequate switching to reduce unnecessary/unintended discontinuation and unplanned pregnancies. This is particularly important in resource-limited settings where women's family planning needs differ over time from their sexual partners and their negotiation skills are lacking, especially amongst women having irregular sexual partners/encounters due to separation, migration, work, social lifestyle or spousal rejection of modern contraception use. Improved quality of care at consultation and or initial visit, support and provision of additional health information could also improve continuation rates, reduce unwanted pregnancies and facilitate adequate referral or switching to a more tolerable effective contraception option. This work documents postpartum contraceptive use dynamics among WLWH through a client-specific prospective trial over the 12month period. The trial reflects the effectiveness of an intervention aimed at reducing unwanted/unplanned pregnancies, through improved information transfer using one-on-one face-to-face counseling, availing educational reference materials, and an appropriate follow-up mechanism to continuously engage and encourage family planning initiation and consistent use. There is therefore a need to scale up this model in all maternity centers to help curb unplanned pregnancies, maternal morbidity and mortality. Further work should help clarify which methods are related with which side effects or health concerns so as to help reduce the likelihood of these negative expressions. Additionally, further evaluation of the actual and perceived barriers to continuous use of modern contraception in such resource-limited settings will help improve its availability and consistent use in such settings.
Phytochemical, antioxidant and protective effect of cactus cladodes extract against lithium-induced liver injury in rats Abstract Context: Opuntia ficus-indica (L.) Mill. (Cactaceae) (cactus) is used in Tunisian medicine for the treatment of various diseases. Objective: This study determines the phytochemical composition of cactus cladode extract (CCE). It also investigates the antioxidant activity and hepatoprotective potential of CCE against lithium carbonate (Li2CO3)-induced liver injury in rats. Materials and methods: Twenty-four Wistar male rats were divided into four groups of six each: a control group given distilled water (0.5 mL/100 g b.w.; i.p.), a group injected with Li2CO3 (25 mg/kg b.w.; i.p.; corresponding to 30% of the LD50) twice daily for 30 days, a group receiving only CCE at 100 mg/kg of b.w. for 60 days and then injected with distilled water during the last 30 days of CCE treatment, and a group receiving CCE and then injected with Li2CO3 during the last 30 days of CCE treatment. The bioactive components contained in the CCE were identified using chemical assays. Results: Treatment with Li2CO3 caused a significant change in some haematological parameters including red blood cells (RBC), white blood cells (WBC), haemoglobin content (Hb), haematocrit (Ht) and mean corpuscular volume (VCM) compared to the control group. Moreover, significant increases in the levels of glucose, cholesterol, triglycerides and of aspartate aminotransferase (AST), alanine aminotransferase (ALT), alkaline phosphatase (ALP) and lactate dehydrogenase (LDH) activities were observed in the blood of Li2CO3-treated rats. Furthermore, exposure to Li2CO3 significantly increased the lipid peroxidation (LPO) level and decreased superoxide dismutase (SOD), catalase (CAT) and glutathione peroxidase (GPx) activities in the hepatic tissues. Conclusion: CCE possesses a significant hepatoprotective effect. Introduction Lithium (Li) is widely used in the treatment of bipolar disorder and has received a great deal of attention in the existing research literature (Sharif et al. 2011). Lithium is initially distributed in the extracellular fluid and then accumulates in some major organs, such as the liver and kidney. Prolonged treatment with Li has been associated with many disorders, including diabetes insipidus (Sahu et al. 2013), thyroid dysfunction and goitre development (Rogers & Whybrow 1971) and haematological dysfunctions (Oktem et al. 2005). On the other hand, it has been reported that lithium inflicts oxidative damage on the liver by generating reactive oxygen species (ROS), increasing lipid peroxidation (Nciri et al. 2012) and reducing antioxidant enzyme activities (SOD, CAT and GPx) (Vijaimohan et al. 2010). In recent years, alternative therapeutic approaches have become very popular (Narayana & Dobriyalm 2000). Nature has been a source of medicinal treatments for thousands of years (King et al. 1998). Previous studies indicate that a great number of medicinal plants have the ability to biosynthesize phytochemicals, possessing several activities that are used to exert an efficient protective effect against oxidative stress and related disease. The cactus [Opuntia ficus-indica (L.) Mill. (Cactaceae)] is widely grown in Latin America, South Africa and the Mediterranean area. According to several studies, cactuses are known for their high content of fibre, minerals, vitamins, fatty acids and carotenoids (Tesoriere et al. 2005). 
It can be used as an antidiabetic, anti-ulcerogenic, antiviral, anti-inflammatory and analgesic agent, and may protect against numerous chronic diseases (Wiese et al. 2004; Alimi et al. 2011; Antunes-Ricardo et al. 2015). To the best of our knowledge, there are no data concerning the in vivo effect of cactus Opuntia ficus-indica cladode extract on hepatic damage and oxidative stress induced by lithium. Thus, the present study, carried out in rats, evaluates the protective effect of CCE against lithium-induced oxidative stress and hepatotoxicity through analysis of serum biochemical markers, antioxidant enzyme activities in the liver and histopathology. Plant material and preparation of extract Cactus young cladodes (2-3 weeks of age) were collected at the beginning of March 2014 in Gafsa, state of Tunisia. A voucher specimen (OFI 0214) was identified and authenticated by a taxonomist, Dr. Boulbaba Ltayef, and deposited at the herbarium (H03) in the Faculty of Sciences, University of Gafsa, Tunisia. The sample was washed with water, cut into small pieces and then pressed using a hand-press, homogenized with 10 mM Tris-HCl, pH 7.4 at 4 °C and centrifuged for 30 min at 3500g at 4 °C. The supernatant was subsequently collected and lyophilized. Before use, the lyophilized extract was dissolved in water. Determination of total phenolic and flavonoid content The total phenolic content of CCE was determined by the Folin-Ciocalteu method (Dewanto et al. 2002). Total phenolic content was expressed as mg gallic acid equivalent (GAE)/g extract and gallic acid was used as the standard. The total flavonoid content of the samples was determined as previously described (Dewanto et al. 2002) and quercetin was used as the standard. The results were expressed as mg quercetin equivalent (QE)/g extract. Extraction of CCE polysaccharides CCE powder (3 g) was dissolved in 30 mL of distilled water and heated at 80 °C for 3 h. The obtained solution was filtered and centrifuged at 12,000g for 15 min at 4 °C. The obtained supernatant was precipitated overnight at 4 °C by adding ethanol (four times the volume of the extract solution), followed by centrifugation at 4500g for 10 min. The precipitate was dissolved in 20 mL of distilled water and deproteinized with the Sevag reagent (chloroform/butanol 4:1, v/v), as described by Navarini et al. (1999). The resulting aqueous fraction was extensively dialyzed against double-distilled water for three days and precipitated again by adding a fourfold volume of ethanol. After centrifugation, the precipitate was washed with anhydrous ethanol, dissolved in distilled water and lyophilized. Fourier transform infrared spectroscopic analysis of CCE polysaccharide The CCE polysaccharides were identified using a Fourier transform infrared spectrophotometer (Shimadzu, FTIR-8400S, Kyoto, Japan) equipped with IR Solution 1.10 Shimadzu software in the range of 4000-500 cm⁻¹. FT-IR scans were collected on completely dried thin films of FP cast on KBr discs. The spectra covered the infrared region of 4000-500 cm⁻¹, the number of scans per experiment was 10 and the resolution was 6 cm⁻¹. Extraction and HPLC analysis CCE extract (1 g) was mixed with 10 mL of 80% methanol, agitated for 10 min, vortexed and then centrifuged at 10,000g for 10 min. An aliquot of CCE (0.5 mL) was added to 0.5 mL of acetone and agitated for 30 min at room temperature. After that, the homogenate was centrifuged (12,000g for 15 min). 
HPLC analysis was carried out with an analytical HPLC system Varian Pro Star model 230 (Varian Associates, Walnut Creek, CA) equipped with a ternary pump (model Q2 Prostar 230) and a photodiode array detector (model Prostar 335). The HPLC separation of the active compounds was carried out on a C-18 reverse phase HPLC column (Zorbax, 250 mm × 4.6 mm, particle size 5 µm). The mobile phase consisted of water:acetic acid (98:2 v/v) (A) and water:acetonitrile:acetic acid (58:40:2 v/v/v) (B). The elution gradient used was: 0-80% B for 25 min, 80-100% B for 10 min and 100-0% B for 5 min. The flow rate was 0.9 mL/min and the injection volume was 20 µL. Compound identification was performed at 280 nm for gallic acid, catechin, caffeic acid, epicatechin, vanillic acid and coumarin, and at 360 nm for rutin, isorhamnetin, quercetin and kaempferol. The identification of all compounds was carried out by comparing their retention times with those obtained by injection of the standard solutions under the same conditions. In vitro antioxidant activity of CCE DPPH radical-scavenging activity Antiradical activity was evaluated by measuring the scavenging activity of CCE on the 2,2-diphenyl-1-picrylhydrazyl (DPPH) radical using the method described by Kirby and Schmidt (1997) with some modifications. Briefly, 500 µL of CCE at different concentrations ranging from 0.05 to 0.6 mg/mL was added to 375 µL of methanol and 125 µL of a DPPH solution (0.2 mM in methanol) as a source of free radicals. These solution mixtures were kept in the dark for 60 min. Scavenging activity was measured by monitoring the decrease in absorbance at 517 nm. BHT (butylated hydroxytoluene) was used as a positive control. The DPPH radical-scavenging activity (RSA) was calculated using the following formula: RSA (%) = [(Ac − As)/Ac] × 100, where Ac is the absorbance of the control reaction and As is the absorbance of CCE. The IC50 values were calculated from the plotted graph. The test was carried out in triplicate. Reducing power assay The reducing power of the CCE was determined by assessing its ability to reduce iron (III), as described by the method of Yildirim et al. (2001). Briefly, 1.25 mL of phosphate buffer (0.2 M, pH 6.6) was mixed with 1.25 mL of potassium ferricyanide solution (10 g/L) and 1 mL of CCE at different concentrations (0.1-0.7 mg/mL). The mixtures were incubated at 50 °C for 30 min, then 1.25 mL of 10% (w/v) trichloroacetic acid was added and the mixture was centrifuged at 3000g for 10 min, followed by mixing 1.25 mL of the supernatant solution with 1.25 mL of distilled water and 0.25 mL of ferric chloride (1 g/L). After 10 min, the absorbance was measured at 700 nm. A higher absorbance indicates a higher reducing power. The EC50 value (mg/mL) was the CCE concentration at which the absorbance was 0.5 for the reducing power and was calculated from the plotted graph. BHT was used as a standard, and the test was carried out in triplicate. Metal (Fe²⁺) chelating activity The ability of the CCE extract to chelate Fe²⁺ ions was evaluated using the method of Dinis et al. (1994). A volume of 0.5 mL of CCE extract at different concentrations ranging from 0.1 to 0.8 mg/mL was added to 1.6 mL of demineralized water and 0.5 mL of FeCl2 (2 mM). After 15 min, 0.1 mL of ferrozine (5 mM) was added to the mixture. After 10 min, the absorbance of the (Fe²⁺/ferrozine) complex, which has a red or purple colour, was measured at 562 nm. 
The chelating activity towards the Fe²⁺/ferrozine complex was calculated as: chelating activity (%) = [(Ac − As)/Ac] × 100, where Ac is the absorbance of the control reaction and As is the absorbance of the CCE extract. The EC50 value was defined as the concentration (mg/mL) at which 50% of the Fe²⁺ ions were chelated and was calculated from the plotted graph. Ethylenediamine tetraacetic acid (EDTA) was used as a positive control and the test was carried out in triplicate. Animals and treatments Two-month-old healthy male Wistar rats (n = 24) weighing about 120 ± 10 g were purchased from the Central Pharmacy of Tunis (Tunisia) and maintained for an adaptation period of 1 week under the same conditions of temperature (22 ± 2 °C), relative humidity (70 ± 4%) and a 12 h light/dark cycle. The animals were fed a commercial pellet diet and tap water ad libitum. The animals were treated according to the Tunisian code of practice for the Care and Use of Animals for Scientific Purposes and the European convention for the protection of vertebrate animals used for experimental and other scientific purposes (Council of Europe No. 123, Strasbourg, 1985). After the adaptation period, animals were divided into four groups of six rats each and treated as follows: Group 1 (C): control rats given distilled water (0.5 mL/100 g of body weight; i.p.). Group 2 (Li): rats administered intraperitoneally (i.p.) with 25 mg/kg of lithium carbonate (dissolved in distilled water) twice daily for 30 days. This dose was chosen according to previous data (Oktem et al. 2005). Group 3 (CCE): rats given CCE at 100 mg/kg of b.w. for 60 days and then injected with distilled water (0.5 mL/100 g b.w.; i.p.) during the last 30 days of CCE treatment. Group 4 (Li + CCE): rats given CCE at 100 mg/kg of b.w. for 60 days and then injected with lithium carbonate at a dose of 25 mg/kg of b.w. (i.p.) during the last 30 days of CCE treatment. After 60 days of treatment, animals from each group were rapidly sacrificed by decapitation in order to minimize handling stress, and their blood samples were collected in two tubes. The first tube was dry and served for serum collection; the second was heparinized and served to determine haematological parameters. Preparation of liver extracts About 1 g of liver was cut into small pieces and immersed in 2 mL of ice-cold lysis buffer (TBS, pH 7.4), homogenized with an Ultra-Turrax homogenizer for 15 min, then filtered and centrifuged (5000g, 30 min, 4 °C). Supernatants were stored at −80 °C until use. Evaluation of lipid peroxidation The lipid peroxidation level in liver was measured as the amount of thiobarbituric acid reactive substances (TBARS) according to the method of Ohkawa et al. (1979). For this assay, 125 µL of supernatant (S1) were mixed with 50 µL of TBS and 125 µL of TCA-BHT to precipitate proteins, and centrifuged (1000g, 10 min, 4 °C). Then, 200 µL of supernatant was mixed with 40 µL of HCl (0.6 M) and 160 µL of TBA (0.72 mM) dissolved in Tris, and the mixture was heated at 80 °C for 10 min. The absorbance was measured at 532 nm. The amount of MDA was calculated using an extinction coefficient of 156 mM⁻¹ cm⁻¹ and expressed in nmol/mg protein. Determination of catalase activity (CAT) Catalase activity was measured according to the method of Aebi (1984). For the assay, 780 µL of PBS (100 mM, pH 7.4) and 20 µL of liver homogenate were taken in a cuvette. The reaction was started by adding 200 µL of H2O2 (500 mM), and the absorbance was recorded every second for 1 min at 240 nm. 
Enzyme activity was calculated using an extinction coefficient of 0.043 mM⁻¹ cm⁻¹ and expressed in international units (I.U.), i.e., micromoles of H2O2 destroyed/min/mg protein at 25 °C. Determination of superoxide dismutase (SOD) Liver tissue was homogenized in 10 volumes of ice-cold 1.15% KCl buffer containing 0.4 mM of PMSF and then centrifuged at 2000 rpm for 10 min (4 °C). Total (Cu, Zn and Mn) SOD activity was determined according to the method of Durak et al. (1993). This assay was based on the inhibition of nitro blue tetrazolium (NBT) reduction by the xanthine-xanthine oxidase system as a superoxide generator. One unit of SOD was defined as the enzyme amount causing 50% inhibition of the NBT reduction. SOD activity was expressed as units/mg protein. Determination of glutathione peroxidase (GPx) GPx activity was assayed using the method described by Flohe and Gunzler (1984). The enzyme activity was expressed as µmoles of GSH oxidized/min/g protein. Protein assays Protein content was estimated according to Lowry's method (1951), using bovine serum albumin as the standard. Determination of haematological parameters The heparinized blood samples were analyzed in order to determine the haematological parameters (red blood cells, white blood cells, haemoglobin, mean corpuscular volume and haematocrit) using an electronic automatic apparatus (MAXM, Beckman Coulter Inc., Fullerton, CA). Assays of serum markers The levels of glucose, cholesterol and triglycerides, as well as the activities of aspartate aminotransferase (AST), alanine aminotransferase (ALT), lactate dehydrogenase (LDH) and alkaline phosphatase (ALP) in serum, were assayed spectrophotometrically using kits (Spinreact, Girona, Spain). Histopathological examination Liver tissues were quickly excised and immersed for 48 h at 4 °C in a fixative solution (10% formaldehyde in phosphate buffer, pH 7.6), dehydrated in ethanol and embedded in paraffin. Paraffin sections were cut at ~5-8 µm using a microtome and stained with haematoxylin-eosin solution (H&E). Tissue preparations were observed and micro-photographed under a light BH-2 Olympus microscope (Olympus, Tokyo, Japan). Statistical analysis Data were expressed as means ± standard deviation (SD). Statistical significance was assessed by Student's t-test; p < 0.05 was considered statistically significant. Results Total polyphenol and flavonoid contents The CCE used in this study contained phenolic compounds (125.01 ± 0.90 mg GAE/g CCE), whose level was expressed as gallic acid equivalents. Total flavonoids were expressed as quercetin equivalents per gram of extract. The flavonoid compounds found in the CCE amounted to 71.02 ± 0.757 mg QE/g CCE (Table 1). HPLC analysis of CCE The HPLC elution profile of CCE shown in Figure 1 revealed the presence of phenolic acids identified at 280 nm. There were six known phenolic acids found in CCE (gallic acid, catechin, caffeic acid, epicatechin, vanillic acid and coumarin) and two unknown compounds. The HPLC analysis of flavonoids showed seven compounds identified at 360 nm, including four known flavonoids: rutin, isorhamnetin, quercetin and kaempferol (Figure 2). FT-IR spectral analysis of CCE polysaccharides As shown in Figure 3, the FT-IR spectrum of the CCE-purified polysaccharide displayed a broad, intense stretching peak at 3415 cm⁻¹, which is the characteristic absorption of hydroxyl groups, followed by weak C-H stretching bands at 2922 cm⁻¹ (Xu et al. 2009). The weak peak at 2349 cm⁻¹ corresponds to a non-identified compound. 
The band around 1647 cm⁻¹ was attributed to the stretching vibration of C=O in protonated carboxylic acid; it also revealed the presence of uronic acids. The band towards 1407 cm⁻¹ was attributed to the absorbance of the COO⁻ deprotonated carboxylic group (Manrique & Lajolo 2002). The peak observed at 1247 cm⁻¹ could be explained by the C-O stretching band of complex polysaccharides (Naqvi et al. 2011). The peak observed at 1075 cm⁻¹ could be characteristic of the rhamnose polysaccharide content and the peak observed at 613 cm⁻¹ could be characteristic of β-D-glucose (Zhao et al. 2007). DPPH radical-scavenging activity As can be seen in Figure 4, the free radical-scavenging activity of CCE was able to reduce the stable free radical DPPH; it is presented as an IC50 value, defined as the concentration of antioxidant required to scavenge 50% of the DPPH present in the test solution. The value for the CCE extract was 0.30 ± 0.03 mg/mL. The results were compared with the scavenging ability of control samples of BHT (0.09 ± 0.06 mg/mL). (Figure 1 caption, partial: gallic acid (1), catechin (4), caffeic acid (5), epicatechin (6), vanillic acid (7) and coumarin (8). The HPLC separation of the active compounds was carried out on a C-18 reverse-phase HPLC column (Zorbax, 250 mm × 4.6 mm, particle size 5 µm) with an elution gradient at 25 °C. The mobile phase consisted of water:acetic acid (98:2 v/v) (A) and water:acetonitrile:acetic acid (58:40:2 v/v/v) (B). The elution gradient used was 0-80% B for 25 min, 80-100% B for 10 min and 100-0% B for 5 min. The flow rate was 0.9 mL/min and the injection volume was 20 µL.) Reducing power assay Based on the reducing power assay, it was found that the addition of CCE led to the reduction of Fe³⁺ to Fe²⁺ by donating an electron. The effective concentration of CCE, EC50 (0.36 ± 0.08 mg/mL), yielded an absorbance of 0.5, compared with BHT (0.039 ± 0.05 mg/mL) as a positive control (Table 2). Metal (Fe²⁺) chelating activity The metal chelating activity was evaluated by measuring the formation of the ferrozine-Fe²⁺ complex. Results presented in Table 2 show a high level of chelating activity for CCE (IC50 = 0.49 mg/mL), although it was not stronger than the standard EDTA (IC50 = 0.034 mg/mL). (Figure 2 caption, partial: … (6), kaempferol (7); HPLC conditions as for Figure 1.) Evaluation of lipid peroxidation (MDA) and antioxidant enzymes (SOD, GPx and CAT) The results presented in Figure 5 show the effect of lithium alone and in combination with CCE on the induction of lipid peroxidation in liver, determined on the basis of the MDA level. The latter significantly increased in liver (p < 0.01) in lithium-treated rats compared to controls (3.02 ± 0.36 nmol/mg protein), which suggested the presence of oxidative stress. Administration of CCE led to a significant (p < 0.01) reduction in MDA levels compared to the lithium-treated group (6.85 ± 0.31 nmol/mg protein). Figure 5 also shows that treatment with Li induced a significant (p < 0.01) decrease in SOD, CAT and GPx. These changes were alleviated when the rats were treated with CCE; there was a significant increase in the activity of these enzymes to almost control values.
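Both the DPPH IC50 and the reducing-power EC50 reported above are read off concentration-response curves. The sketch below shows one simple way to estimate such a value by linear interpolation between measured points, mirroring the graphical estimation described in the text; the concentrations and scavenging percentages are invented examples, and no curve-fitting package is assumed.

```python
# Hedged sketch: estimate an IC50/EC50 from a dose-response curve by interpolation.
import numpy as np

conc = np.array([0.05, 0.1, 0.2, 0.4, 0.8])            # extract concentration, mg/mL
scavenging = np.array([12.0, 24.0, 41.0, 63.0, 85.0])  # % DPPH scavenged (illustrative)

# np.interp needs increasing x-values; here we ask for the concentration at 50%.
ic50 = np.interp(50.0, scavenging, conc)
print(f"estimated IC50 ~ {ic50:.2f} mg/mL")
```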
On the other hand, in CCE-treated rats, no significant change in enzyme activities occurred when compared with the normal group. Haematological parameters The effects of lithium, CCE and their combination on haematological parameters in the rats are shown in Table 3. Results indicated that Li caused a significant reduction (p < 0.01) of RBC, Hb, Ht and VCM values and decreased WBC counts compared to the control group. In the group treated with both lithium and CCE at 100 mg/kg of b.w., the haematological parameters were found to revert to almost normal values. Histological observations The liver showed the following histopathological changes (Figure 6). The control group showed no obvious abnormality (Figure 6(A)). In the present study, lithium application caused histopathological changes amounting to severe liver damage, including sinusoidal dilation, congested central veins, vacuolization and inflammatory cell infiltration, when compared with control liver (Figure 6(B)). The hepatocellular damage was slightly reduced in lithium- and CCE-treated rats at a dose of 100 mg/kg b.w. (Figure 6(D)). However, no histological alterations were observed in the liver of the CCE-treated group when compared to the control (Figure 6(C)). Discussion Over the past several years, Opuntia ficus-indica has been widely used in traditional herbal medicine to treat various types of potential damage to vital organs, owing to the protective activities and nutritional value this plant possesses. In the present work, the CCE was studied phytochemically in order to evaluate its total phenolic and flavonoid content and to investigate its antioxidant potency against oxidative stress induced in rats by the injection of lithium carbonate (Li₂CO₃). It is well known that the phenolic substances found in CCE exhibit considerable free radical-scavenging activities by virtue of their reactivity as hydrogen- or electron-donating agents, as well as metal ion-chelating properties, preventing metal-induced free radical formation (Rice-Evans et al. 1996). As shown in Table 1, the plant contained high phenolic and flavonoid contents. These values are higher than those reported in previous studies showing the antioxidant and antigenotoxic activities of CCE (Brahmi et al. 2011). Several analytical methods, such as free radical scavenging, reducing capacity and metal chelating activity, which have been used to evaluate the antioxidant capacity of CCE in vitro, could thus be employed together before evaluating its hepatoprotective effects in vivo. The stable DPPH radical is widely used to investigate the scavenging activity of natural compounds owing to their hydrogen-donating ability (Hajji et al. 2010). As shown in Figure 4, the CCE was able to effectively reduce the stable free radical DPPH. These results suggested that the presence of phenolic compounds in CCE might be the main cause of its considerable radical-scavenging activity. However, the activity of phenolic compounds is highly dependent on the number and arrangement of hydroxyl groups, as well as on the presence of constituents that can serve as electron donors (Lapornik et al. 2005). Table 2 shows the results of the reducing power assay used to evaluate the antioxidant potential of CCE, which is based on the reduction of Fe³⁺ to Fe²⁺ by electron donation (Yildirim et al. 2001). This assay indicated that the CCE contained a high amount of total phenolics and flavonoids and showed considerable reducing power, compared here with the synthetic antioxidant BHT.
Metal chelating activity is considered to be among the antioxidant mechanisms, since it reduces the concentration of the transition metal that catalyzes lipid peroxidation. Among the transition metals, the Fe²⁺ ion is known to be the most important lipid prooxidant due to its high reactivity. The IC50 value of chelating activity was the concentration of CCE required to chelate 50% of the Fe²⁺ present in the reaction mixtures; a lower IC50 reflects better chelating activity. The results here showed that CCE had considerable chelating activity, although weaker than that of EDTA used as the positive standard. For the first time, these antioxidant properties could render CCE an excellent plant for protecting against lithium-induced toxicity in vivo. Indeed, this alkali metal is known to enhance the production of reactive oxygen species (ROS), leading to oxidative stress in different organs, and can inflict damage on lipids, proteins and DNA (Khairova et al. 2012; Vijaimohan et al. 2010). Previous studies have revealed that exposure of rats to lithium toxicity can lead to alteration of antioxidant defence mechanisms and enhancement of lipid peroxidation (Nciri et al. 2012; Oktem et al. 2005). Under our experimental conditions, daily injection of Li at a dose of 25 mg/kg twice daily for 30 days caused a significant increase in MDA levels (p < 0.01) in liver tissues, which acted as a potential lipid peroxidation (LPO) biomarker. An increase in MDA level enhances LPO and increases ROS production, with subsequent disturbance of membrane function and integrity (Toplan et al. 2013). Administration of CCE at a dose of 100 mg/kg of b.w. prevented this lithium-induced increase of the MDA level when compared with the lithium-treated rats. This action could be explained by the ability of CCE to reduce the LPO level in the cell membrane by scavenging the free radicals induced by lithium. In this context, other works have demonstrated that CCE is capable of protecting tissues against oxidative stress induced by various toxins by inhibiting LPO both in vivo and in vitro (Brahmi et al. 2012; Hfaiedh et al. 2014). Figure 5. Effect of lithium carbonate and CCE on MDA level and antioxidant enzyme activities (SOD, CAT and GPx) in control (C), lithium carbonate-treated (Li), cactus-treated (CCE), and cactus supplemented with lithium carbonate (Li + CCE) groups. Data are expressed as means ± SD for six rats in each group. Statistical comparison was performed using Student's t-test. *p < 0.05, **p < 0.01 compared with the control group (C); +p < 0.05, ++p < 0.01 compared with the lithium carbonate (Li)-treated group. The mechanisms whereby lithium exerts its deleterious effects have not been accurately determined yet. However, it has been suggested that induction of oxidative stress is the central mechanism whereby lithium exerts its cellular effects (Nciri et al. 2012; Oktem et al. 2005). Additionally, it has been previously reported that acute exposure of rats to Li may induce oxidative stress through excessive formation of reactive oxygen species (Khairova et al. 2012; Musik et al. 2014). Reduction in antioxidant enzymes (SOD, CAT and GPx) is responsible for lithium-induced oxidative damage (Khairova et al. 2012; Vijaimohan et al. 2010). In the available literature, Li has been reported to affect GPx activities; our results further confirm earlier studies indicating that oral administration of Li₂CO₃ in drinking water for 4 weeks leads to a decrease in GPx activity in different organs (Kielczykowska et al. 2008; Vijaimohan et al. 2010).
In addition, a significant reduction in CAT activity was noticed in Li-treated rats, which could be explained by overconsumption of this enzyme, which is involved in the conversion of H₂O₂ to H₂O. As for the antioxidant enzyme SOD, it showed a significant decrease in liver after lithium treatment, which resulted in the loss of the free radical scavenging mechanism (Keen et al. 1985). SOD is also known for its role in inhibiting OH⁻ production by scavenging O₂⁻, and thus leads to a decrease in the initiation of LPO (Fridovich 1994). The administration of CCE had a potent protective effect against oxidative lithium-induced damage in rats, as revealed by a significant increase in hepatic CAT, SOD and GPx activities. The beneficial effect of CCE could also be explained by the antioxidant capacity of its constituents. As can be seen in Figures 1 and 2, the HPLC analysis revealed the presence of four flavonoids (rutin, isorhamnetin, quercetin, kaempferol) and six phenolic acids (gallic acid, catechin, caffeic acid, epicatechin, vanillic acid, coumarin), which are known to have beneficial effects owing to their role in preventing the formation of reactive oxygen species. Further, the antioxidant capacity of CCE was found to be due to its total polyphenolic content as well as its polysaccharides (fibres). A study by Lee et al. (2002) indicated that the antioxidant properties of CCE are mainly due to vitamins and flavonoids, more particularly quercetin, which has been reported to be a highly efficient radical scavenger. Lithium has also been shown to induce haemorrhage, inflammatory cell infiltration, congestion and sinusoidal dilatation, which can cause increased WBC counts. The elevated WBC counts could be due to stimulation of the immune system and indicated that there were oedema and inflammation (Kielczykowska et al. 2014; Sharma & Iqbal 2005). This study also indicated that exposure to Li induced significant haematological changes, with decreases in RBC, Hb, Ht and VCM compared to the control, which suggested the occurrence of anaemia (Kielczykowska et al. 2014; Malhotra & Dhawan 2008). The significant reduction in RBC counts, Hb and Ht might be due to diminished erythropoietin, reduced haemoglobin synthesis and an increased erythrocyte destruction rate in hematopoietic tissues. The observed decrease in these parameters may thus arise because the oxidative radicals produced reach the cell membrane and cause membrane lipid peroxidation (Kato et al. 2006). The obtained result is in agreement with Malhotra and Dhawan (2008), who point out that alkali metal exposure promotes oxidative damage in erythrocytes. However, supplementation with CCE exhibited a potential protective effect on most of the haematological parameters in lithium-treated rats (Table 4). The CCE was therefore able to reactivate the erythropoiesis mechanism and thus enhance the production of erythropoietin. CCE was found to have important antioxidant properties able to inhibit Li-induced haematological changes. We also believe that the CCE examined in earlier studies has anti-inflammatory effects (Antunes-Ricado et al. 2015). Additionally, the experimental results revealed that the lithium treatment could affect the lipid metabolism of the liver (total cholesterol and triglycerides) and the glucose level in rats. The increase in these parameters can be seen as a sign of liver damage. Similar changes have been reported by Vijaimohan et al.
(2010), who point out that lithium induces free radicals, which are likely to cause changes in serum lipid profiles. Glucose is considered a parameter particularly vulnerable to the presence of lithium in an organism, influencing carbohydrate metabolism (Kielczykowska et al. 2014). An increased level of glucose was also noticed in rats after oral administration of Li₂CO₃ at different doses for 7 weeks (Sharma & Iqbal 2005). Also, intravenous lithium causes an increase in plasma glucose and a reduction of plasma insulin in normal rats (Kielczykowska et al. 2014). Moreover, lithium seemed to produce other biochemical defects in the liver, revealed by a significant rise in serum hepatic marker enzymes such as AST, ALT, LDH and ALP (Vijaimohan et al. 2010). Both AST and ALT can be used to evaluate the function and integrity of liver cells, since hepatocellular damage induces an increase in their serum levels (Ahmad et al. 2011; Kielczykowska et al. 2014). Additionally, the increase in LDH levels is considered an indicator of liver damage (Ncibi et al. 2008). Apart from LDH, the elevated level of ALP reflected a cytotoxic effect exerted by lithium. In this context, several authors have reported significant elevation in these serum enzyme levels after lithium treatment (Nordenberg et al. 1982). Pretreatment with CCE significantly attenuated the increase in serum hepatic enzymes when compared with the lithium-intoxicated rats. Earlier studies in this laboratory, which have shown a remarkable enhancement in the different parameters studied in rats treated with nickel (Hfaiedh et al. 2008) and chlorpyrifos (Ncibi et al. 2008), are in agreement with the results obtained in the present study. This action may be due to the ability of CCE and its bioactive constituents to stabilize membrane permeability and reduce the leakage of enzymes into the blood. In order to further confirm the protective effect of CCE against lithium-induced oxidative damage in liver, a histopathological examination was performed (Figure 6). In fact, the histological changes seen in the liver of animals treated with lithium are characterized by marked sinusoidal dilation, congested central veins, vacuolization and inflammatory cell infiltration. Our results corroborated previous findings reported by Sharif et al. (2011), who indicated that this alkali metal causes histopathological and enzymatic changes in rats. However, the CCE attenuated the histological alterations induced in lithium-treated rats, which could be associated with the antiradical/antioxidant and metal-chelating capacities of CCE (Dok Go et al. 2003). Conclusion In the present work, the CCE under investigation was found to possess excellent antioxidant activities based on various in vitro and in vivo assays. The different in vitro antioxidant tests proved that the CCE is rich in flavonoids and phenolic compounds as well as polysaccharides. Also, this study demonstrated that CCE could have a protective effect against lithium-induced hepatotoxicity and oxidative stress in our experimental model. Future research is needed to carry out further biochemical investigations in order to isolate and clarify the mechanism behind the activity of this extract.
On the coupon-collector's problem with several parallel collections In this note we evaluate the expectation and variance of the waiting time to complete $m$ parallel collections of coupons, in the case of coupons which arrive independently, one by one and with equal probabilities. Introduction The coupon-collector's problem is a classical problem in combinatorial probability. The description of the basic problem is easy: consider one person who collects coupons and assume that there is a finite number, say $N$, of different types of coupons. These items arrive one by one in sequence, with the types of the successive items being independent random variables, each equal to $k$ with probability $p_k$. In the coupon-collector's problem, one is usually interested in answering the following questions: what is the probability of completing the collection (or a given subset of the collection) after the arrival of exactly $n$ coupons ($n \geq N$)? What are the expectation and the variance of the number of coupons that we need to complete the collection (or to complete a given subset of the collection)? What are the expectation and the variance of the number of coupons that we need to complete a set of collections? How do these probabilities and expectations change if we assume that the coupons arrive with unequal probabilities or in groups of constant size? The first results, due to De Moivre, Laplace and Euler (see [7] for a comprehensive introduction to this topic), deal with the case of constant probabilities $p_k \equiv 1/N$, while the first results on the unequal case are to be ascribed to Von Schelling (see [8]). Many other studies have been carried out on this classical problem ever since (see e.g. [6], [3], [1] and [2]). In this note we consider the waiting time to complete a set of $m$ collections, all independent of each other. We assume that each collection is made of a finite number of different coupons, that at any unit of time $m$ new coupons arrive, one for each collection, and that the probability of purchasing any type at any time is uniform. We derive the expectation and variance of the waiting time to complete all the $m$ collections. Single collection with equal probabilities Let us start by considering a single collection. Assume that this collection consists of $N$ different coupons, which are equally likely, with the probability of purchasing any type at any time equal to $1/N$. In this section we evaluate the expectation and the variance of the random number of coupons that one needs to purchase in order to complete the collection. We follow two approaches: the first one is present in most Probability textbooks, while the second one, based on a Markov Chain approach, will allow us to extend the computation to the case of parallel collections. The Geometric Distribution approach Let $X$ denote the (random) number of coupons that we need to purchase in order to complete our collection. We can write $X = X_1 + X_2 + \dots + X_N$, where for any $i = 1, 2, \dots, N$, $X_i$ denotes the additional number of coupons that we need to purchase to pass from $i-1$ to $i$ different types of coupons in our collection. Trivially $X_1 = 1$ and, since we are considering the case of a uniform distribution, it follows that when $i$ distinct types of coupons have been collected, a new coupon purchased will be of a new type with probability $(N-i)/N$. By the independence assumption, we get that the random variable $X_i$, for
$i \in \{2, \dots, N\}$, is independent of the other variables and has a geometric law with parameter $(N-i+1)/N$. The expected number of coupons that we have to buy to complete the collection is therefore $E[X] = \sum_{i=1}^{N} \frac{N}{N-i+1} = N \sum_{j=1}^{N} \frac{1}{j}$. The Markov Chain approach Even if the previous result is very simple and the formula completely clear, it does not help us deal with the problem of parallel collections. We therefore introduce a different approach, which provides an alternative to the previous computation and which we will be able to extend in the following section, where the situation becomes more complicated. Assuming as before that one coupon arrives at each unit of time, it is possible to solve the previous problem by using a Markov Chain approach. Indeed, let $Y_n$ be the number of different types of coupons collected after $n$ units of time and assume again that the probability of finding a coupon of any type at any time is $p = 1/N$. Then $\{Y_n, n \in \mathbb{N}\}$ is a Markov Chain on the state space $S = \{0, 1, \dots, N\}$, and it is immediate to see that its transition matrix has entries $p_{i,i} = i/N$ and $p_{i,i+1} = (N-i)/N$, with all other entries equal to zero. Note that $QF = FQ = F - \mathrm{Id}$ (see [4] for more details). If we define a suitable random variable, a classical result (see [5]) states that $k_N$ is the minimal non-negative solution of an associated linear system, which is immediate to solve in this case; then, for $j = 0, \dots, N-1$, the corresponding expressions yield the expected waiting time to collect all the different coupons. The computation of the conditional variance is similar (see again [4]); a simple computation gives the variance of the waiting time to collect all the different coupons, and it is possible to see that (1) and (2) are indeed the same. Parallel collections with equal probabilities Let us now consider the case where $m$ different collections are available. Assume that the $i$-th collection is made of $N_i$ different coupons and that the probability of purchasing any type at any time is equal to $1/N_i$. Moreover, let us assume that we purchase simultaneously one coupon of every collection. In this section we derive the expectation and the variance of the waiting time to complete all the $m$ different collections. With the notation of the previous section, let $X_i$ denote the random number of coupons needed to complete the $i$-th collection. The random number of coupons needed to complete all the $m$ collections is therefore $\max(X_1, \dots, X_m)$. It does not look simple to determine the law of this random variable, or even just to provide a direct computation of its expectation. Conversely, adapting the Markov Chain approach to this situation is quite simple. Example 2 (continued) Considering again three collections, each with six different coupons to be collected, the variance of the waiting time to complete all three collections is equal to 44.8975.
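The closed-form mean for a single collection and the parallel-collection quantities discussed above are easy to check numerically. The sketch below computes $E[X] = N(1 + 1/2 + \dots + 1/N)$ exactly and estimates the mean and variance of $\max(X_1, \dots, X_m)$ by Monte Carlo simulation for the setting of this note (uniform probabilities, one coupon per collection per unit of time); it is an independent illustration, not the authors' code. For three collections of six coupons the simulated variance should land near the 44.8975 reported in Example 2.

```python
# Numerical companion: exact single-collection mean and Monte Carlo for the parallel case.
import random

def single_collection_mean(N: int) -> float:
    """E[X] = sum_{i=1}^{N} N/(N-i+1) = N * (1 + 1/2 + ... + 1/N)."""
    return N * sum(1.0 / j for j in range(1, N + 1))

def simulate_parallel(Ns, trials=100_000, seed=1):
    """Monte Carlo estimate of mean and variance of max(X_1, ..., X_m):
    one coupon per collection per unit of time, uniform coupon probabilities."""
    rng = random.Random(seed)
    samples = []
    for _ in range(trials):
        worst = 0
        for N in Ns:                       # the m collections are independent
            seen, t = set(), 0
            while len(seen) < N:
                t += 1
                seen.add(rng.randrange(N))
            worst = max(worst, t)
        samples.append(worst)
    mean = sum(samples) / trials
    var = sum((s - mean) ** 2 for s in samples) / (trials - 1)
    return mean, var

print(single_collection_mean(6))       # 14.7 coupons on average for N = 6
print(simulate_parallel([6, 6, 6]))    # variance should be close to 44.8975 (cf. Example 2)
```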
Impact of endoscopy-based research on quality of life in healthy volunteers AIM: To study the impact of an endoscopy-based long-term study on the quality of life in healthy volunteers (HV). METHODS: Ten HV were included into a long-term prospective endoscopy-based placebo-controlled trial with 15 endoscopic examinations per person in 5 different drug phases. Participants completed short form-36 (SF-36) and visual analog scale-based questionnaires (VAS) for different abdominal symptoms at days 0, 7 and 14 of each drug phase. Analyses were performed according to short- and long-term changes and compared to the control group. RESULTS: All HV completed the study with duration of more than 6 mo. Initial quality of life score was comparable to a general population. Analyses of the SF-36 questionnaires showed no significant changes in physical, mental and total scores, either in a short-term per-spective due to different medications, or to potentially endoscopic procedure-associated long-term cumulative changes. Analogous to SF-36, VAS revealed no significant changes in total scores for pathological abdominal symptoms and remained unchanged over the time course and when compared to the control population. CONCLUSION: This study demonstrates that quality of life in HV is not significantly affected by a long-term endoscopy-based study with multiple endoscopic procedures. pilot study the authors evaluate the changes in quality of life in HV in a long-term endoscopy-based drug study. The study was conducted according to existing ethical recommendation and guidelines. Using the well-validated SF-36 and visual analog scale-based questionnaires, they show that endoscopy-based research has no significant impact on quality of life in HV even under rigorous conditions. This study provides valuable and important information regarding the quality of life of HV who participate in endoscopy-based studies, and supports the exist- ing recommendation and guidelines on topic in Gastroenterology. INTRODUCTION Endoscopy is the main diagnostic tool for examination of the upper gastrointestinal tract. The main advantage of endoscopy in comparison to other non-invasive procedures lies in its ability to obtain tissue biopsies, which permit histological evaluation of tissues for clinical as well as basic research [1] . The majority of endoscopy-based pharmacological studies are performed in patients with or without a specific disease. Investigations surrounding this focused patient population yields answers for only a limited number of questions. Studies designed around healthy volunteers (HV) may resolve some of those issues (e.g. by reducing inter-individual differences); although at the same time may open a new series of questions. One of the main differences between patients and HV is hidden in the dilemma "treat the disease" in patients and "do no harm" in HV [2,3] . "Harm" can arise from a number of sources such as the study-related treatment, the endoscopy procedure itself, and/or other related discomfort (e.g. multiple blood samples, restriction of daily activities). Upper GI-endoscopy is a relatively well-accepted procedure. The magnitude of discomfort to the examined person depends upon several factors such as previous contact with the examiner, appropriate elucidation of the procedure, sedation, investigator's experience, duration of examination and the expected benefit from the procedure [4] . Because of the potential therapeutic benefit, it is easier to justify the examination of the patients. 
Since in most cases, HV can not expect any health-related advantages from the procedure, the involvement of HV in endoscopy-based research is a matter of ethical concern [5] . Updated recommendations on ethics in gastrointestinal endoscopy-based research were discussed and summarized in the recent Workshop of the European Society of Gastrointestinal Endoscopy (ESGE) [6] . The main ethical considerations were as follows: first, there must be a research issue that can not be adequately addressed by the involvement of patients; and second, the inherent risk for HV is regarded to be "acceptable". However the term "acceptable" is not well defined and it is not uncommon that different scientific and ethical committees make different conclusions concerning endoscopy-based studies. Objective methods for ethical evaluation could potentially be helpful in narrowing such differences in opinions. Targeting the quality of life (QOL) in volunteers could potentially contribute to objectivity in ethical questions, however, there are no data regarding this issue. For this reason we aimed to evaluate the QOL in HV during endoscopy-based research. To answer these questions, we used the previously well-validated short-form 36 (SF-36) and visual analog scale-based questionnaires (VAS). In this pilot study, we demonstrate that an endoscopy-based drug trial has no significant influence on the QOL in healthy participants even under rigorous conditions and a long-term protocol. Ethical approval and volunteer recruitment The study design was approved by the local ethics committee of Magdeburg University Hospital and by government authorities. HV, as defined below, were recruited for the study in accordance with the Declaration of Helsinki principles and recommendations of the ESGEworkshop on the ethics of gastrointestinal endoscopybased research [6] . The recruitment of HV was performed through the distribution of brochures or information posters avoiding affiliation or dependency of HV to the investigators. The purpose, risks of the procedure and the drug-related side effects were fully explained to participants and included in the written informed consent (e.g. risk of bleeding due to platelet inhibitory drugs). HV were informed about the opportunity to revoke study consent and cancel participation at any time. At least 24 h consideration time was provided to the HV before all participants provided written informed consent. Due to the extensive study protocol, all HV were financially reimbursed for their time, effort and contribution to the research. Financial compensation was equal in all HV and was modestly calculated based on generally used factors including compensation for time (or possible income shortfall due to participation in the study), risk, discomfort and inconvenience. HV Ten HV [8 men and 2 women, age 27.8 years (22-37) and body mass index 24.7 kg/m 2 (22.3-27.2)] were included in the study based on previously described inclusion/ exclusion criteria [7] . Briefly, HV had no abnormality at physical examination or during routine laboratory examination, no history of relevant illnesses especially history of peptic ulceration, no pregnancy in women and were negative for H. pylori infection. Study design The study was conducted as a prospective doubleblind, placebo-controlled study with cross-over design. The main aim of the trial was to study the influence of several frequently used drugs on the healing of gastroduodenal lesions (preliminary clinical results were reported elsewhere) [8] . 
In addition, we aimed to evaluate the impact of the endoscopy-based trial on QOL in HV. All volunteers were asked to fill out the questionnaires on the day of each endoscopy, describing their subjective symptoms and QOL. Rofecoxib, clopidogrel, prednisolone, esomeprazole or placebo were given alone and/or together with acetylsalicylic acid (ASA). The endoscopic procedures were performed on days 0, 7 and 14 (Figure 1). The volunteers received one of the test drugs for one week and the same drug together with ASA for another week. The total duration of the study was over 6 mo, including a 4-wk washout period between the 5 different drug phases. Endoscopy After overnight fasting, the endoscopic examination was performed by an experienced endoscopist (G.T.) using a standard endoscope (GIF 145, Olympus, Germany). All HV received topical lidocaine 2% for local anesthesia and 20 mg of butylscopolaminium bromide intravenously (iv) to inhibit gastric and small bowel propagation. Additionally, midazolam 5 mg was given iv if requested by the participants. During the procedure, patients were monitored for heart rate and oxygen saturation. After a systematic examination of the stomach and duodenum, multiple biopsies (10 per region, a total of 30 per endoscopy) were taken using standard forceps (5 mm maximum open diameter) from the distal duodenal bulb, antrum and corpus. Questionnaires: Visual analog scale and SF-36 To determine the QOL in HV, we used a validated German version of the SF-36 (short form) questionnaire, which includes an 8-scale profile: physical functioning, role physical, bodily pain, general health, vitality, social functioning, emotional role and mental health. These scales further represent two distinct higher-order clusters reflecting physical and mental health variance. A lower SF-36 score indicates greater impairment; scores can range from 0 (worst health) to 100 (best health). To determine the abdominal symptoms of HV, we used a previously validated visual analog scale (VAS) based questionnaire [9] with the following symptom dimensions: abdominal pain, bloating, reflux, nausea, vomiting, diarrhea and loss of appetite. The summary symptom score (0-70) was calculated by adding up the values of the single scores. A single score value from 1-10 was defined as presence, and 0-1 as absence, of any symptom. As shown in Figure 1, VAS questionnaires were filled out on days 0, 7 and 14, and the SF-36 questionnaires were filled out before and after each drug phase. The score parameters of HV were compared to a previously evaluated control population of 28 patients who underwent a single endoscopic examination and in whom, in the absence of acid-suppressive drugs, no visible macroscopic or histological abnormalities were found [9]. Statistical analysis All data were analyzed using SPSS 10 (Chicago, IL, USA) and Graph Pad Prism 4.0 (San Diego, CA, USA). The differences between the groups were analyzed using, where appropriate, paired or unpaired t-tests and Friedman's test (FT) for non-normally distributed data. For all comparisons, a P-value (two-sided) of < 0.05 was regarded as significant. Study participation All 10 HV completed all 5 phases of the study with 15 endoscopies per HV.
With the exception of 2 individuals who either experienced a prolonged bleeding after biopsies had been taken or melena (due to the combination of ASA and clopidogrel) without development of anemia, there were no further drug-or endoscopy-related complications. The VAS and SF-36 questionnaires were completed and returned in 99.5% and in 88% of all cases/time points, respectively. In 91% of endoscopies, HV declined to receive midazolam for sedation and received only topical anesthesia with lidocaine 2%. Analyses of the SF-36 questionnaires SF-36 questionnaires were analyzed in regard to different aspects, such as influence of study drugs, different time points, as well as long term changes in QOL. The physical, mental and total scores, determined by the short form SF-36 questionnaire ranged between 65.6 to 97.0, 65.3 to 96.0 and 75.6 to 97.1, respectively. We found no significant changes in physical, mental or total scores either between different medications or due to participation in each drug phase (data not shown). Overall changes in physical, mental and total scores are shown in Table 1. For the long-term impact of the intensive endoscopy-based trial on the QOL in HV, the SF-36 assessed data were compared between baseline (day 0 during the first phase) and the last time point of the last phase more than 6 mo later. Remarkably, physical (88.4 ± 6.0 vs 86.4 ± 6.7), mental (82.1 ± 8.1 vs 84.6 ± 7.4) and total scores (88.0 ± 5.7 vs 88.6 ± 6.1) showed no significant changes over the 6-mo period ( Figure 2). Figure 1 Flow chart of the study. The multiple drug study with cross-over design was divided into 5 phases with test drug alone and in combination with low-dose aspirin during the second week. On days 0, 7 and 14 of each study, ten HV were asked to complete the questionnaires. Upper GIendoscopy was performed on the same days with a 4-wk wash out period between the different phases. Analyses of physical symptoms with visual analog scale (VAS) questionnaires Physical complaints were assessed for different symptoms with VAS and analyzed in a similar manner to the SF-36 questionnaires. As anticipated, subjective physical and emotional perception even at baseline differed from one person to another (high inter-individual variability, data not shown). Almost 60% of all complaints were due to abdominal pain and reflux, but these scores showed no significant changes when compared with baseline ( Figure 3). The rarest complaints with < 10% of the maximal score were nausea and loss of appetite. To evaluate the global changes we constructed a summary score; values at baseline (3.2 ± 5.1) were comparable to scores at day 7 (2.6 ± 3.7) and day 14 (3.0 ± 3.4) and were less than 5% of the theoretical maximal score. Analyses of the impairment of endoscopic examination To analyze the subjective endoscopy-related impairment in HV, we used VAS which allows the volunteers to score the procedure-oriented discomfort. In analogy to physical complains VAS, we observed high interindividual variability with the overall range varying from absence to 80% of maximum possible impairment during the endoscopy. As demonstrated in Figure 4, there was no significant difference between self-assessed scores for complaints about endoscopic procedures (1.4 ± 2.0, 1.4 ± 1.1 and 1.35 ± 1.1 for day 0, 7 and 14, RM ANOVA, P > 0.05). Difference between HV and controls The VAS-scores from HV were compared with previously 470 January 28, 2010|Volume 16|Issue 4| WJG|www.wjgnet.com published VAS-scores from a control population [9] . 
The overall mean for summary symptom scores of HV and controls were comparable (HV 2.9 ± 4 and controls 2.7 ± 4.4, t-test: P > 0.05) and was lower than 5% of the theoretically possible value. We found significantly more complaints for abdominal pain (0.98 ± 1.37 vs 0.3 ± 0.8, P = 0.012) and vomiting (0.22 ± 0.5 vs 0.03 ± 0.2, P = 0.046) in the HV group ( Figure 5), but due to the limited number of HV we could not perform age adjustment for the symptoms. The difference between both scores were below the defined cut-off value for the presence or absence of symptoms and were significantly lower than a symptomatic population as previously published [9] . DISCUSSION QOL of HV is one of the major ethical concerns in endoscopy-based research. Due to the lack of objective data, the decision to include or not to include HV into a clinical trial is made rather arbitrarily and is mostly based on the individual judgment of clinicians and the opinion of expert committees. Therefore, there is a strong need for prospective data that would help to gain knowledge directly related to HV. To test if the endoscopy-based study impacted the QOL of HV, we prospectively evaluated QOL in HV under rigorous conditions. During the study of multiple potentially GI-harmful medications, HV underwent multiple (150) endoscopic examinations with over 4500 biopsies in 5 independent phases. We demonstrated that by accurate implementation of recommendations and guidelines, even a long-term endoscopy-based study has only a little or no impact on the QOL in HV. Several different methods to measure QOL, for example Sickness Impact Profile, Medical Outcome Study Short-Form 36 (SF-36), Nottingham Health Profile, Quality of Well Being Scale and other diseasespecific scales have been previously validated [10] . We decided to use the SF-36 questionnaire for its brevity and its comprehensiveness. SF-36 is one of the best validated and widely used questionnaires both in clinical practice and research work in gastroenterology [11] . To increase the reliability of the study, we also used VAS which was previously validated in dyspeptic patients by our group [9] . Both questionnaires are based on self-evaluation which minimizes the investigator-related impact, and further contributes to reliability of the outcome results. In this pilot study, we found that the endoscopy-based trial was not associated with significant changes in QOL scores. These scores were comparable between different phases and time points as well as when compared to the general population [12] and to the control population [9] . Based on our results and previous knowledge, several conclusions can be made [6] . First, implementation of the current guidelines and recommendation are a reliable fundament for the successful realization of an endoscopy-based study. Second, careful selection of volunteers might be important, especially taking into account the motivation of the HV. Motivation is -at least in part -based on an idealistic character, even if financial compensation is provided. However, especially in developing countries, financial compensation may become the dominant stimulus for participation which may overcome the study-related discomfort [5,13] . In our study, inclusion of HV was based on a "first come, first served" principle, however, there is still a chance of creating a bias due to personal motivation, willingness to participate in the study or even financial interest [5,13] . 
In this regard it is important to mention that analogous to other studies, all the HV were reimbursed for their time, effort and contribution to the research, but financial compensation was modest at best. Since personal motivation may vary considerably, testing HV with higher and lower tolerance Control population (data previously published in [9] , adopted with permission) underwent only one upper GI endoscopy without endoscopic or histological pathology. Maximal complaint score for any symptom or summary scores are 10 and 70, respectively. The absolute symptom value was below the cut-off for presence of the symptom. to endoscopy-based studies, especially in relation to financial reward, could add interesting and valuable information and should be considered in future studies. Besides motivation, physician/clinician and nursing care (GCP) also plays an important role in subjective cognition of impaired QOL. All endoscopic examinations were performed by an experienced endoscopy team which could have influenced the satisfaction, and thereby tolerance to the procedure [14] . To the best of our knowledge, the only comparable study that has addressed a similar issue was from Adachi et al [15] . This group analyzed cardiac stress in HV without sedation, and showed that endoscopic examination of HV without sedation increased cardiac stress (without affecting cardiac output) by 66% [15] . Although evaluation of objective physiological parameters may add valuable information, especially if correlated to VAS or even SF-36, the correlation to personal impairment is still unknown. The VAS impairment score in our study was constant throughout the study, with mean impairment of 15% as a maximal value. Although premedication would improve tolerance to endoscopic procedures, premedication with midazolam was only used in 9% of procedures. From another perspective, the "social life" of HV may also have had an impact on the decision to use or not use premedication. Necessary daily activities like study, work or transportation could be significantly influenced by premedication, especially because the duration of this study was 6 mo. Furthermore, it is worth mentioning that none of the HV interrupted the study, and all of them declared that they would re-participate. Some of them did participate in a later endoscopy-based study with 4 endoscopic procedures within 2 h [7] . It was not the aim of our study to clarify whether endoscopy-based research is ethical or not, but primarily to evaluate QOL in HV following the implementation of current recommendations and guidelines. Nevertheless, this information could indirectly contribute to the ethical considerations of endoscopy-based research, if the guidelines and recommendations are acceptable. Therefore, safety standards including experienced endoscopists are a pre-requisite in such studies. Endoscopy-based studies have some risks for complications, which in the worse case can be lethal, even if volunteers are young and healthy [16] . However, the risk is negligible if the procedure is carried out with maximum accuracy and according to the recommendations for standards of sedation and monitoring of patients during GI-endoscopy [3,17,18] . In summary, in this pilot study we show for the first time that participation in long-term endoscopy-based trials is not necessarily associated with significant changes in QOL in HV. Evaluation of QOL might be a valuable tool regarding ethical questions in further studies. 
Larger studies are warranted to confirm the data and to further analyze the interaction between QOL and motivation of HV. ACKNOWLEDGMENTS The authors are grateful to Dr. Ajay Goel and Dr. Klaus Moenkemueller for their helpful discussions and critical reading of the manuscript, and to Mrs. Kathrin Beier, Mrs. Diana Worm, Mrs. Martina Leucke and the endoscopy team for providing excellent care of the HV. We are also thankful to the HV for their commitment to the study. Background Involvement of healthy volunteers (HV) is absolutely essential in gaining valuable knowledge in clinical and preclinical research. The involvement of HV (HV) has a long history, but many unresolved issues of ethical, methodological or even legal concerns in this regard still remain unanswered. The importance of this becomes even more pronounced when invasive procedures like GI endoscopy are undertaken. Existing ethical recommendations and guidelines on this topic are mostly based on the opinion of experts, but many of these questions have not been studied in a systematic manner. Research frontiers Multiple studies have shown that the motivation of HV plays a key role in recruiting volunteers. Financial reward is one of the crucial motivating factors in HV. Participation in research studies might be associated with study-related risks, discomfort and inconvenience that might become even more relevant in endoscopy-based studies. Impairment in the quality of life in HV may play a crucial role in understanding these ethical concerns, however, so far no study has addressed the impact of endoscopy-based research on the quality of life of HV. Innovations and breakthroughs In this pilot study the authors evaluate the changes in quality of life in HV in a long-term endoscopy-based drug study. The study was conducted according to existing ethical recommendation and guidelines. Using the well-validated SF-36 and visual analog scale-based questionnaires, they show that endoscopybased research has no significant impact on quality of life in HV even under rigorous conditions. Applications This study provides valuable and important information regarding the quality of life of HV who participate in endoscopy-based studies, and supports the existing recommendation and guidelines on this topic in Gastroenterology. Terminology Healthy volunteer: is a person who voluntarily participates in a research study. Volunteer come from volunteering which means working, participating or being involved in something without being motivated by financial or material gain. The historical intention, such as to promote good, serve society and improve human quality of life show static movement in paid volunteerism. Endoscopy-based research: are studies or trials performed on individuals which involve endoscopic techniques. Short Form 36: is a well-validated and widely used short survey which includes 36 specific questions on health-related quality of life.
MinDelay: Low-latency Forwarding and Caching Algorithms for Information-Centric Networks Abstract-We present a new unified framework for minimizing congestion-dependent network cost in information-centric networks by jointly optimizing forwarding and caching strategies. As caching variables are integer-constrained, the resulting optimization problem is NP-hard. To make progress, we focus on a relaxed version of the optimization problem, where caching variables are allowed to be real-valued. We develop necessary optimality conditions for the relaxed problem, and leverage this result to design MinDelay, an adaptive and distributed joint forwarding and caching algorithm, based on the conditional gradient algorithm. The MinDelay algorithm elegantly yields feasible routing variables and integer caching variables at each iteration, and can be implemented in a distributed manner with low complexity and overhead. Over a wide range of network topologies, simulation results show that MinDelay typically has significantly better delay performance in the low to moderate request rate regions. Furthermore, the MinDelay and VIP algorithms complement each other in delivering superior delay performance across the entire range of request arrival rates. I. INTRODUCTION Research on information-centric networking (ICN) architectures over the past few years has brought focus on a number of central network design issues. One prominent issue is how to jointly design traffic engineering and caching strategies to maximally exploit the bandwidth and storage resources of the network for optimal performance. While traffic engineering and caching have been investigated separately for many years, their joint optimization within an ICN setting is still an underexplored area. Recently, in [11], Ioannidis and Yeh formulate the problem of cost minimization for caching networks with fixed routing and linear link cost functions, and propose an adaptive, distributed caching algorithm which converges to a solution within a (1-1/e) approximation from the optimal. Similarly, there have been a number of attempts to enhance the traffic engineering in ICN [12], [13], [14], [15], [16]. In [12], Carofiglio et al., formulate the problem of joint multipath congestion control and request forwarding in ICN as an optimization problem.
By decomposing the problem into two subproblems of maximizing user throughput and minimizing overall network cost, they develop a receiver-driven windowbased Additive-Increase Multiplicative-Decrease (AIMD) congestion control algorithm and a hop-by-hop dynamic request forwarding algorithm which aim to balance the number of pending Interest Packets of each content object (flow) across the outgoing interfaces at each node. Unfortunately, the work in [12] does not consider caching policies. Posch et al. [13] propose a stochastic adaptive forwarding strategy which maximizes the Interest Packet satisfaction ratio in the network. The strategy imitates a self-adjusting water pipe system, where network nodes act as crossings for an incoming flow of water. Each node then intelligently guides Interest Packets along their available paths while circumventing congestion in the system. In [17], Yeh et al., present one of the first unified frameworks for joint forwarding and caching for ICN networks with general topology, in which a virtual control plane operates on the user demand rate for data objects in the network, and an actual plane handles Interest Packets and Data Packets. They develop VIP, a set of distributed and dynamic forwarding and caching algorithms which adaptively maximizes the user demand rate the ICN can satisfy. In this work, we present a new unified framework for minimizing congestion-dependent network cost by jointly choosing node-based forwarding and caching variables, within a quasistatic network scenarios where user request statistics vary slowly. We consider the network cost to be the sum of link costs, expressed as increasing and convex functions of the traffic rate over the links. When link cost functions are chosen according to an M/M/1 approximation, minimizing the network cost corresponds to minimizing the average request fulfillment delay in the network. As caching variables are integer-constrained, the resulting joint forwarding and caching (JFC) optimization problem is NP-hard. To make progress toward an approximate solution, we focus on a relaxed version of the JFC problem (RJFC), where caching variables are allowed to be real-valued. Using techniques first introduced in [18], we develop necessary optimality conditions for the RJFC problem. We then leverage this result to design MinDelay, an adaptive and distributed joint forwarding and caching algorithm for the original JFC problem, based on a version of the conditional gradient, or Frank-Wolfe algorithm. The MinDelay algorithm elegantly yields feasible routing variables and integer caching variables at each iteration, and can be implemented in a distributed manner with low complexity and overhead. Finally, we implement the MinDelay algorithm using our Java-based network simulator, and present extensive experimental results. We consider three competing schemes, including the VIP algorithm [17], which directly competes against MinDelay as a jointly optimized distributed and adaptive forwarding and caching scheme. Over a wide range of network topologies, simulation results show that while the VIP algorithm performs well in high request arrival rate regions, MinDelay typically has significantly better delay performance in the low to moderate request rate regions. Thus, the MinDelay and VIP algorithms complement each other in delivering superior delay performance across the entire range of request arrival rates. II. 
NETWORK MODEL Consider a general multi-hop network modeled by a directed and (strongly) connected graph G = (N , E), where N and E are the node and link sets, respectively. A link (i, j) ∈ E corresponds to a unidirectional link, with capacity C ij > 0 ( bits/sec). We assume a content-centric setting, e.g. [19], where each node can request any data object from a set of objects K. A request for a data object consists of a sequence of Interest Packets which request all the data chunks of the object, where the sequence starts with the Interest Packet requesting the starting chunk, and ends with the Interest Packet requesting the ending chunk. We consider algorithms where the sequence of Interest Packets corresponds to a given object request are forwarded along the same path. Assume that loop-free routing (topology discovery and data reachability) has already been accomplished in the network, so that the Forwarding Interest Base (FIB) tables have been populated for the various data objects. Further, we assume symmetric routing, where Data Packets containing the requested data chunks take the same path as their corresponding Interest Packets, in the reverse direction. Thus, the sequence of Data Packets for a given object request also follow the same path (in reverse). For simplicity, we do not consider interest suppression, whereby multiple Interest Packets requesting the same named data chunk are collapsed into one forwarded Interest Packet. The algorithm we develop can be extended to include Interest suppression, by introducing a virtual plane in the manner of [17]. For k ∈ K, let src(k) be the source node of content object k. 1 Each node in the network has a local cache of capacity c i (object units), and can optionally cache Data Packets passing through on the reverse path. Note that since Data Packets for a given object request follow the same path, all chunks of a data object can be stored together at a caching location. Interest Packets requesting chunks of a given data object can enter the network at any node, and exit the network upon being 1 We assume there is one source for each content object for simplicity. The results generalize easily to the case of multiple source nodes per content object. satisfied by a matching Data Packet at the content source for the object, or at the nodes which decide to cache the object. For simplicity, we assume all data objects have the same size L (bits). The results of the paper can be extended to the more general case where object sizes differ. We focus on quasi-static network scenarios where user request statistics vary slowly. Let r i (k) ≥ 0 be the average exogenous rate (in requests/sec) at which requests for data object k arrive (from outside the network) to node i. Let t i (k) be the total average arrival rate of object k requests to node i. Thus, t i (k) includes both the exogenous arrival rate r i (k) and the endogenous arrival traffic which is forwarded from other nodes to node i. Let x i (k) ∈ {0, 1} be the (integer) caching decision variable for object k at node i, where x i (k) = 1 if object k is cached at node i and x i (k) = 0 otherwise. Thus, t i (k)x i (k) is the portion of the total incoming request rate for object k which is satisfied from the local cache at node i and t i (k)(1 − x i (k)) is the portion forwarded to neighboring nodes based on the forwarding strategy. Furthermore, let φ ij (k) ∈ [0, 1] be the (real-valued) fraction of the traffic t i (k)(1 − x i (k)) forwarded over link (i, j) by node i = src(k). 
Thus, $\sum_{j \in O(i,k)} \phi_{ij}(k) = 1$, where O(i, k) is the set of neighboring nodes for which node i has a FIB entry for object k. Therefore, the total average incoming request rate for object k to node i is $t_i(k) = r_i(k) + \sum_{j \in I(i,k)} t_j(k)(1 - x_j(k))\,\phi_{ji}(k)$, where I(i, k) is the set of neighboring nodes of i which have FIB entries for node i for object k. Next, let F ij be the Data Packet traffic rate (in bits/sec) corresponding to the total request rate (summed over all data objects) forwarded on link (i, j) ∈ E: $F_{ij} = \sum_{k \in K} L\, t_i(k)(1 - x_i(k))\,\phi_{ij}(k)$. (2) Note that by routing symmetry and per-hop flow balance, the Data Packet traffic of rate F ij actually travels on the reverse link (j, i). As in [18] and [20], we assume the total network cost is the sum of traffic-dependent link costs. The cost on link (j, i) ∈ E is due to the Data Packet traffic of rate F ij generated by the total request rate forwarded on link (i, j), as in (2). We therefore denote the cost on link (j, i) as D ij (F ij ) to reflect this relationship. We assume D ij (F ij ) is increasing and convex in F ij . To implicitly impose the link capacity constraint, we assume D ij (F ij ) → ∞ as F ij → C − ji and D ij (F ij ) = ∞ for F ij ≥ C ji . As an example, $D_{ij}(F_{ij}) = F_{ij}/(C_{ji} - F_{ij})$ (3) gives the average number of packets waiting for or under transmission at link (j, i) under an M/M/1 queuing model [21], [22]. Summing over all links, the network cost $\sum_{(i,j)} D_{ij}(F_{ij})$ gives the average total number of packets in the network, which, by Little's Law, is proportional to the average system delay of packets in the network. III. OPTIMIZATION PROBLEM We now pose the Joint Forwarding and Caching (JFC) optimization problem in terms of the forwarding variables (φ ij (k)) (i,j)∈E,k∈K and the caching variables (x i (k)) i∈N ,k∈K : minimize the total network cost $D = \sum_{(i,j) \in E} D_{ij}(F_{ij})$, subject to the forwarding constraints $\sum_{j \in O(i,k)} \phi_{ij}(k) = 1$ and $\phi_{ij}(k) \ge 0$, the cache capacity constraints $\sum_{k \in K} x_i(k) \le c_i$, and the integrality constraints $x_i(k) \in \{0, 1\}$. (4) The above mixed-integer optimization problem can be shown to be NP-hard [23]. To make progress toward an approximate solution, we relax the problem by removing the integrality constraint in (4). We formulate the Relaxed JFC (RJFC) problem by replacing the integer caching decision variables x i (k) ∈ {0, 1} with real-valued variables ρ i (k) ∈ [0, 1]. (5) It can be shown that D in (5) is non-convex with respect to (w.r.t.) (φ, ρ), where φ ≡ (φ ij (k)) (i,j)∈E,k∈K and ρ ≡ (ρ i (k)) i∈N ,k∈K . In this work, we use the RJFC formulation to develop an adaptive and distributed forwarding and caching algorithm for the JFC problem. We proceed by computing the derivatives of D with respect to the forwarding and caching variables, using the technique of [18]. For the forwarding variables, the partial derivatives can be computed in terms of the marginal forwarding cost δ ij (k), as in (6)-(7). Note that ∂D/∂r j (k) in (7) stands for the marginal cost due to a unit increment of object k request traffic at node j. This can be computed recursively by (8). Finally, we can compute the partial derivatives w.r.t. the (relaxed) caching variables in the same manner. The minimization in (5) is equivalent to minimizing the corresponding Lagrangian function subject to the forwarding and caching constraints. A set of necessary conditions for a local minimum to the RJFC problem, together with the complementary slackness conditions, can now be derived; these are conditions (11)-(13). The conditions (11)-(13) are necessary for a local minimum to the RJFC problem, but upon closer examination, it can be seen that they are not sufficient for optimality. An example from [18] shows a forwarding configuration (without caching) where (11) is satisfied at every node, and yet the operating point is not optimal. In that example, t i (k) = 0 at some node i, which leads to (11) being automatically satisfied for node i.
This degenerate example applies as well to the joint forwarding and caching setting considered here. A further issue arises for the joint forwarding and caching setting where ρ i (k) = 1 for some i and k. In this case, the condition in (11) at node i is automatically satisfied for every j ∈ O(i, k), and yet the operating point need not be optimal. To illustrate this, consider the simple network shown in Figure 1 with two objects 1 and 2, where r 1 (1) = 1, r 1 (2) = 1.5, c 1 = 1, c 2 = 0 and src(1) = src(2) = 3. At a given operating point, assume ρ 1 (1) = 1, φ 12 (1) = 1 and φ 13 (2) = 1. Thus, ρ 1 (2) = 0, φ 13 (1) = 0 and φ 12 (2) = 0. It is easy to see that all the conditions in (11) and (12) are satisfied. However, the current operating point is not optimal. An optimal point is in fact reached when object 2 is cached at node 1 instead. This example, along with the example in [18], shows that when ρ i (k) = 1 or t i (k) = 0, node i still needs to assign forwarding variables for object k in the optimal way, including in these degenerate cases. We therefore focus on a modified set of conditions, (14) and (15), where δ i (k) ≡ min j∈O(i,k) δ ij (k) denotes the minimum marginal forwarding cost at node i; in (15), we used the fact that (14) is satisfied. Condition (15) suggests a structured caching policy. If we sort the data objects in decreasing order with respect to the "cache scores" {t i (k)δ i (k)}, and cache the top c i objects, i.e. set ρ i (k) = 1 for the top c i objects, then condition (15) is satisfied. This is indeed the main idea underlying our proposed caching algorithm described in the next section. IV. DISTRIBUTED ALGORITHM: MINDELAY The conditions in (14)-(15) give the general structure for a joint forwarding and caching algorithm for solving the RJFC problem. For forwarding, each node i must decrease those forwarding variables φ ij (k) for which the marginal forwarding cost δ ij (k) is large, and increase those for which it is small. For caching, node i must increase the caching variables ρ i (k) for which the cache score t i (k)δ i (k) is large and decrease those for which it is small. To describe the joint forwarding and caching algorithm, we first describe a protocol for calculating the marginal costs, and then describe an algorithm for updating the forwarding and caching variables. Note that each node i can estimate, as a time average, the link traffic rate F ij for each outgoing link (i, j). This can be done by either measuring the rate of received Data Packets on each of the corresponding incoming links (j, i), or by measuring the request rate of Interest Packets forwarded on the outgoing links (i, j). Thus, given a functional form for D ij (·), node i can also evaluate the marginal link cost D′ ij (F ij ). Assuming a loop-free routing graph on the network, one has a well-defined partial ordering where a node m is downstream from node i with respect to object k if there exists a routing path from m to src(k) through i. A node i is upstream from node m with respect to k if m is downstream from i with respect to k. To update the marginal forwarding costs, the nodes use the following protocol. Each node i waits until it has received the value ∂D/∂r j (k) from each of its upstream neighbors with respect to object k (with the convention ∂D/∂r src(k) (k) = 0). Node i then calculates ∂D/∂r i (k) according to (8) and broadcasts this to all of its downstream neighbors with respect to k. The information propagation can be done by either piggybacking on the Data Packets of the corresponding object, or by broadcasting a single message regularly to update the marginal forwarding costs of all the content objects at once.
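As an illustration of the marginal-cost protocol just described, the short sketch below shows how a node could combine its measured link rates with the ∂D/∂r values reported by its neighbors, using the M/M/1 link cost of (3). It is a minimal sketch rather than the paper's implementation: the dictionary-based interface, the object-size scaling by L, and the exact form of the recursion for ∂D/∂r_i(k) are assumptions reconstructed from the surrounding text.

```python
def mm1_cost_deriv(F, C):
    """Marginal M/M/1 link cost D'(F) = C / (C - F)^2, from D(F) = F / (C - F)."""
    assert F < C, "link must operate below capacity"
    return C / (C - F) ** 2

def marginal_costs(k, F, cap, phi, rho, dDdr_nbr, L=1.0):
    """Assumed per-node computation of delta_ij(k) and dD/dr_i(k) for one object k.

    F[j]        -- measured Data Packet rate (bits/sec) toward neighbor j
    cap[j]      -- capacity of the reverse link (j, i) carrying those Data Packets
    phi[j]      -- current forwarding fraction phi_ij(k)
    rho         -- relaxed caching variable rho_i(k) at this node
    dDdr_nbr[j] -- dD/dr_j(k) reported by neighbor j (0 at the content source)
    L           -- object size in bits
    """
    delta = {j: L * mm1_cost_deriv(F[j], cap[j]) + dDdr_nbr[j] for j in phi}
    # Marginal cost of one more unit of object-k request traffic at this node,
    # written as a forwarding-weighted average discounted by local caching.
    dDdr_i = (1.0 - rho) * sum(phi[j] * delta[j] for j in phi)
    return delta, dDdr_i

# Example: two outgoing interfaces, object not cached locally.
delta, dDdr = marginal_costs(
    k="obj7",
    F={"a": 30e6, "b": 10e6}, cap={"a": 50e6, "b": 50e6},
    phi={"a": 0.7, "b": 0.3}, rho=0.0,
    dDdr_nbr={"a": 2.0e-6, "b": 5.0e-6}, L=4e6)
print(delta, dDdr)
```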
Having described the protocol for calculating marginal costs, we now specify the algorithm for updating the forwarding and caching variables. Our algorithm is based on the conditional gradient or Frank-Wolfe algorithm [24]. Let Φ n = ((φ n ij (k)) i∈N ,k∈K,j∈O(i,k) , (ρ n i (k)) i∈N ,k∈K ) be the vector of forwarding and caching variables at iteration n. Then, the conditional gradient method is given by $\Phi^{n+1} = (1 - a^n)\Phi^n + a^n \bar{\Phi}^n$ (17), where a n ∈ (0, 1] is a positive stepsize, and Φ̄ n is the solution of the direction finding subproblem $\bar{\Phi}^n \in \arg\min_{\Phi \in F} \nabla D(\Phi^n)^T (\Phi - \Phi^n)$. (18) Here, ∇D(Φ n ) is the gradient of the objective function with respect to the forwarding and caching variables, evaluated at Φ n . The set F is the set of forwarding and caching variables Φ satisfying the constraints in (5), seen to be a bounded polyhedron. The idea behind the conditional gradient algorithm is to iteratively find a descent direction by finding the feasible direction Φ̄ n − Φ n at a point Φ n , where Φ̄ n is a point of F that lies furthest along the negative gradient direction −∇D(Φ n ) [24]. In applying the conditional gradient algorithm, we encounter the same problem regarding degenerate cases as seen in Section III with respect to optimality conditions. Note that when t i (k)(1 − ρ i (k)) = 0, the ∂D/∂φ ij (k) component of ∇D(Φ n ) is zero, and thus provides no useful information for the optimization in (18) regarding the choice of Φ̄ n . On the other hand, when t i (k)(1 − ρ i (k)) > 0, eliminating this term from ∂D/∂φ ij (k) in (18) does not change the choice of Φ̄ n , since t i (k)(1 − ρ i (k)) > 0 is not a function of j ∈ O(i, k). Motivated by this observation, we define modified gradient components in (19), where δ n ij (k) and t n i (k) are the marginal forwarding costs and total request arrival rates, respectively, evaluated at Φ n . We consider a modified conditional gradient algorithm where the direction finding subproblem is given by (20). It can easily be seen that (20) is separable into two subproblems. The subproblem for the forwarding variables (φ ij (k)) is given in (21). It is straightforward to verify that a solution φ̄ n i (k) = (φ̄ n ij (k)) j∈O(i,k) to (21) has all coordinates equal to zero except for one coordinate, say φ̄ n im (k), which is equal to 1, where m ∈ arg min j∈O(i,k) δ n ij (k) corresponds to an outgoing interface with the minimal marginal forwarding cost. Thus, the update equation for the forwarding variables is φ n+1 ij (k) = (1 − a n )φ n ij (k) + a n φ̄ n ij (k) for all i ∈ N . The caching subproblem (25) maximizes $\sum_{k \in K} \omega^n_i(k)\,\rho_i(k)$ subject to the cache capacity constraint at node i, where ω n i (k) = t n i (k) Σ j∈O(i,k) φ n ij (k) δ n ij (k). The subproblem (25) is a max-weighted matching problem which has an integer solution. For node i, let ω n i (k 1 ) ≥ ω n i (k 2 ) ≥ . . . ≥ ω n i (k |K| ) be a re-ordering of the ω n i (k)'s in decreasing order. A solution ρ̄ n i to (25) has ρ̄ n i (k) = 1 for k ∈ {k 1 , k 2 , . . . , k ci }, and ρ̄ n i (k) = 0 otherwise. That is, ρ̄ n i (k) = 1 for the c i objects with the largest ω n i (k) values, and ρ̄ n i (k) = 0 otherwise. The update equation for the caching variables is, for all i ∈ N , ρ n+1 i (k) = (1 − a n )ρ n i (k) + a n ρ̄ n i (k), for all k ∈ K. (26) As mentioned above, the solutions ρ̄ n i to (25) are integer-valued at each iteration. However, for a general stepsize a n ∈ (0, 1], the (relaxed) caching variables corresponding to the update in (17) may not be integer-valued at each iteration. In particular, this would be true if the stepsize follows a diminishing stepsize rule. Although one can explore rounding techniques and probabilistic caching techniques to obtain feasible integer-valued caching variables x n i (k) from continuous-valued relaxed caching variables ρ n i (k) [11], this would entail additional computational and communication complexity.
Since we are focused on distributed, low-complexity forwarding and caching algorithms, we require ρ n i (k) to be either 0 or 1 at each iteration n. This is realized by choosing the stepsize a n = 1 for all n. In this case, the update equation (17) is reduced to Φ n+1 = Φ̄ n , where Φ̄ n is the solution to (21) and (25). That is, the solutions to the direction finding subproblems provide us with forwarding and caching decisions at each iteration. We now summarize the remarkably elegant MinDelay forwarding and caching algorithm. MinDelay Forwarding Algorithm: At each iteration n, for each node i and each object k, the forwarding algorithm chooses the outgoing link (i, m) to forward requests for object k, where m is chosen according to m ∈ arg min j∈O(i,k) δ n ij (k). That is, requests for object k are forwarded on an outgoing link with the minimum marginal forwarding cost. MinDelay Caching Algorithm: At each iteration n, each node i calculates a cache score CS n (i, k) for each object k according to CS n (i, k) = t n i (k) δ n i (k), where δ n i (k) ≡ min j∈O(i,k) δ n ij (k). Upon reception of data object k new not currently in the cache of node i, if the cache is not full, then k new is cached. If the cache is full, then CS n (i, k new ) is computed, and compared to the lowest cache score among the currently cached objects, denoted by CS n (i, k min ). If CS n (i, k new ) > CS n (i, k min ), then replace k min with k new . Otherwise, the cache contents stay the same. The cache score given in (29) for a given content k at node i is the minimum marginal forwarding cost for object k at i, multiplied by the total request rate for k at i. By caching the data objects with the highest cache scores, each node maximally reduces the total cost of forwarding request traffic. One drawback of using stepsize a n = 1 in the MinDelay algorithm is that it makes studying the asymptotic behavior of the algorithm difficult. Nevertheless, in extensive simulations shown in the next section, we observe that the algorithm behaves in a stable manner asymptotically. Moreover, MinDelay significantly outperforms several state-of-the-art caching and forwarding algorithms in important operating regimes.
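A compact sketch of the two MinDelay decision rules summarized above is given below, assuming the marginal forwarding costs δ^n_ij(k) are available (for instance from the protocol sketched earlier). The class and function names are illustrative, and the cache keeps a plain dictionary with a linear scan for the minimum score rather than the hash-table-plus-priority-queue structure used in the authors' simulator.

```python
def forward_interface(delta_k):
    """MinDelay forwarding (28): pick the outgoing interface with minimum marginal cost."""
    return min(delta_k, key=delta_k.get)

class MinDelayCache:
    """Cache-score eviction as in (29): score = total request rate * min marginal cost."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = {}      # object id -> cached data
        self.scores = {}     # object id -> most recent cache score

    @staticmethod
    def cache_score(t_ik, delta_k):
        return t_ik * min(delta_k.values())

    def on_data(self, k, data, t_ik, delta_k):
        score = self.cache_score(t_ik, delta_k)
        if k in self.store:
            self.scores[k] = score
            return
        if len(self.store) < self.capacity:
            self.store[k], self.scores[k] = data, score
            return
        k_min = min(self.scores, key=self.scores.get)
        if score > self.scores[k_min]:       # replace the lowest-scoring object
            del self.store[k_min], self.scores[k_min]
            self.store[k], self.scores[k] = data, score

# toy usage
delta = {"ifaceA": 3.1, "ifaceB": 2.4}
print(forward_interface(delta))              # -> "ifaceB"
cache = MinDelayCache(capacity=2)
cache.on_data("obj1", b"...", t_ik=5.0, delta_k=delta)
```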
[Fig. 2. Network Topologies: (a) Abilene topology [12], (b) GEANT topology, (c) DTelekom topology, (d) Tree topology, (e) Ladder topology [12], (f) Fat Tree topology.] V. SIMULATION EXPERIMENTS In this section we present the results of extensive simulations performed using our own Java-based ICN Simulator. We have considered three competing schemes for comparison with MinDelay. First, we consider the VIP joint caching and forwarding algorithm introduced in [17]. This algorithm uses a backpressure (BP)-based scheme for forwarding and a stable caching algorithm, both based on VIP (Virtual Interest Packet) queue states [17]. In the VIP algorithm discussed in [17], multiple Interest Packets requesting the same Data Packet are aggregated. Since we do not consider Interest Packet aggregation in this paper, we compare MinDelay with a version of VIP without Interest aggregation, labeled BP. We consider the VIP algorithm (or BP) to be the direct competitor with MinDelay, since to the best of our knowledge, it is the only other scheme that explicitly jointly optimizes forwarding and caching for general ICN networks. The other two approaches implemented here are based on the LFU cache eviction policy. We note that for stationary input request processes, the performance of LFU is typically much better than those of LRU and FIFO. In the first approach, denoted by LFUM-PI, multipath request forwarding is based on the scheme proposed in [12]. Here, the forwarding decision is made as follows: an Interest Packet requesting a given object is forwarded on an outgoing interface with a probability inversely proportional to the number of Pending Interest (PI) Packets for that object on that outgoing interface. The second LFU-based approach implemented here, denoted by LFUM-RTT, has an RTT-based forwarding strategy. In this strategy, described in [14], the multipath forwarding decision is based on the exponentially weighted moving average of the RTT of each outgoing interface per object name. An Interest Packet requesting an object is forwarded on an outgoing interface with a probability inversely proportional to the average RTT recorded for that object on that outgoing interface.
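The interface-selection rule shared by the two LFU-based baselines (inverse-proportional weighting of pending-Interest counts for LFUM-PI, or of smoothed RTTs for LFUM-RTT) can be sketched as follows; the smoothing constant and the small epsilon guard are illustrative choices, not values taken from [12] or [14].

```python
import random

def inverse_proportional_choice(measure, eps=1e-9):
    """Pick an interface with probability inversely proportional to its measurement
    (pending-Interest count for LFUM-PI, EWMA RTT for LFUM-RTT)."""
    weights = {iface: 1.0 / (m + eps) for iface, m in measure.items()}
    total = sum(weights.values())
    r = random.uniform(0.0, total)
    acc = 0.0
    for iface, w in weights.items():
        acc += w
        if r <= acc:
            return iface
    return iface  # fallback for floating-point edge cases

def ewma_rtt(prev, sample, alpha=0.125):
    """Exponentially weighted moving average of per-object RTT for one interface."""
    return sample if prev is None else (1 - alpha) * prev + alpha * sample

print(inverse_proportional_choice({"ifaceA": 12, "ifaceB": 3}))  # favors ifaceB
```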
We tested the MinDelay forwarding and caching algorithm against the described approaches on several well-known topologies depicted in Fig. 2. In the following, we explain the simulation scenarios and results in detail. A. Simulation Details Each simulation generates requests for 1000 seconds and terminates when all the requested packets are fulfilled. During the simulation, a requesting node requests a content object by generating an Interest Packet containing the content name and a random nonce value, and then submits it to the local forwarder. Upon reception of an Interest Packet, the forwarder first checks if the requested content name contained in the Interest Packet is cached in its local storage. If there is a copy of the content object in the local storage, it generates a Data Packet containing the requested object, along with the content name and the nonce value, and puts the Data Packet in the queue of the interface on which the Interest Packet was received. If the local cache does not have a copy of the requested object, the forwarder uses the FIB table to retrieve the available outgoing interfaces. 4 Then, the forwarder selects an interface among the available interfaces based on the implemented forwarding strategy. In particular, for MinDelay, we update the marginal forwarding costs given in (22) at the beginning of each update interval (with a length between 2-5 seconds), and cache the results in a sorted array for future use. Hence, the forwarding decision given in (28) takes O(1) operations. After selecting the interface based on the considered forwarding strategy, the forwarder creates a Pending Interest Table (PIT) entry with the key being the content name concatenated with the nonce value, and the PIT entry value being the incoming interface ID. Note that we concatenate the nonce value to the content name since we do not assume Interest Packet suppression at the forwarder. Hence, we need to have distinguishable keys for each Interest Packet. Next, the forwarder assigns the Interest Packet to the queue of the selected interface, to be transmitted in a FIFO manner. Upon reception of a Data Packet, the forwarder first checks if the local storage is full. If the storage is not full, it will cache the contained data object 5 in local storage. If the storage is at capacity, it uses the considered cache eviction policy to decide whether to evict an old object and replace it with the new one. In the case of MinDelay, the forwarder regularly updates the cache score of the currently-cached contents using (29) at the beginning of the update intervals and keeps a sorted list of the cached content objects using a hash table and a priority queue. When a new Data Packet arrives, the forwarder computes its cache score, and compares the score with the lowest cache score among the currently-cached content objects. If the score of the incoming Data Packet is higher than the current lowest cache score, the forwarder replaces the corresponding cached object with the incoming one. Otherwise, the cached contents remain the same. Finally, the forwarder proceeds by retrieving and removing the PIT entry corresponding to the Data Packet and assigning the Data Packet to the queue of the interface recorded in the PIT entry. In all topologies, the number of content objects is 5000. Each requester requests a content object according to a Zipf distribution with power exponent α = 0.75, by generating an Interest Packet each of size 1.25 KBytes. 
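The request workload just described (a catalogue of 5000 objects with Zipf popularity of exponent α = 0.75) is easy to reproduce; the sketch below draws per-node requests with Poisson-spaced arrival times, which is an assumption, since the text only specifies the average per-node request rate.

```python
import numpy as np

def zipf_popularity(num_objects, alpha):
    """Normalized Zipf probabilities p(k) ~ 1 / k^alpha over ranks 1..num_objects."""
    ranks = np.arange(1, num_objects + 1)
    p = ranks ** (-alpha)
    return p / p.sum()

def generate_requests(num_objects=5000, alpha=0.75, rate_per_node=10.0,
                      duration=1000.0, seed=0):
    """Request times and Zipf-distributed object ids for a single requesting node."""
    rng = np.random.default_rng(seed)
    p = zipf_popularity(num_objects, alpha)
    t, requests = 0.0, []
    while True:
        t += rng.exponential(1.0 / rate_per_node)   # assumed Poisson arrivals
        if t > duration:
            break
        requests.append((t, int(rng.choice(num_objects, p=p)) + 1))
    return requests

print(generate_requests(rate_per_node=2.0, duration=5.0)[:5])
```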
All content objects are assumed to have the same size and can be packaged into a single Data Packet of size 500 KBytes. The link capacity of all the topologies, except in Abilene topology illustrated in Fig. 2a, is 50 Mbps. We first consider the Abilene topology [12] depicted in Figure 2a. There are three servers, at nodes 1, 5, and 8, each serving 1/3 of the content objects. That is, object k is served by server k mod 3 + 1 for k = 1, 2, . . . , 5000. The other eight nodes of the topology request objects according to Zipf distribution with α = 0.75. Also, each requester has a content store of size 250 MBytes, or equivalently 500 content objects. In the GEANT topology, illustrated in Figure 2b, there are 22 nodes in the network. All nodes request content objects. Each content object is randomly assigned to one of the 22 nodes as its source node. Each node has a content store of size 250 MBytes, or equivalently 500 content objects. In the DTelekom topology , illustrated in Figure 2c, there are 68 nodes in the network. All nodes request content objects. Each content object is randomly assigned to one of the 68 nodes as its source node. Each node has a content store of size 250 MBytes, or equivalently 500 content objects. In the Tree topology, depicted in Figure 2d, there are four requesting nodes at the leaves, C1, C2, C3 and C4. There are three edge nodes, E1, E2, and E3. Each content object is randomly assigned to one of the two source nodes, S1 and S2. Each requesting and edge node has a content store of size 125 MBytes, or equivalently 250 content objects. In the Ladder topology [12], depicted in Figure 2e, there are three requesting nodes, A1, A2 and A3. The source of all the content objects are at node D3. Each node in the network, except node D3, has a content store of size 125 MBytes, or equivalently 250 content objects. Finally, in the Fat Tree topology, depicted in Figure 2f, requesters are at the roots, i.e., nodes C1, C2, C3 and C4. There are 16 servers at the leaves. In this topology, each content object is randomly assigned to two servers, one chosen from the first 8 servers, and the other from the second 8 servers. All the requesting nodes as well as Aggregation and Edge nodes have a content store, each of size 125 MBytes, or equivalently 250 content objects. B. Simulation Results In Figures 3 and 4, the results of the simulations are plotted. The figures illustrate the performance of the implemented schemes in terms of total network delay for satisfying all generated requests (in seconds) and the average cache hits in requests/node/second, versus the arrival rate in requests/node/second, respectively. We define the delay for a request as the difference between the creation time of the Interest Packet and the arrival time of its corresponding Data Packet at the requesting node. A cache hit for a data object is recorded when an Interest Packet reaches a node which is not a content source but which has the data object in its cache. When a cache hit occurs, the corresponding metric is incremented one. To reduce randomness in our results, we ran each simulation 10 times, each with a different seed number, and plotted the average performance of each scheme in Figures 3 and 4. Figure 3 shows the total network delay in seconds versus the per-node arrival rate in request/seconds, for the abovementioned topologies. As can be seen, in all the considered topologies, MinDelay has lower delay in the low to moderate arrival rate regions. 
In the higher arrival rate regions, BP's outperforms MinDelay in 3 of the tested topologies (Abilene, GEANT, and Tree), As shown in [17], the BP performs well in high arrival rate regions since the VIP algorithm adaptively maximizes the throughput of Interest Packets, thereby maximizing the stability region of user demand rates satisfied by the network. When the network is operating well within the stability region, however, MinDelay typically has superior performance. Thus, the MinDelay and VIP algorithms complement each other in delivering superior delay performance across the entire range of request arrival rates. Finally, Figure 4 depicts the average total cache hits of the network (in requests/node/second) versus the per-node arrival rate (in request/seconds) for the Abilene, GEANT, Tree, and Ladder topologies, respectively. It can be seen that the cache hit performance of MinDelay is competitive but not necessarily superior to those of the other algorithms. This follows form the fact that MinDelay is designed with the objective of decreasing total network delay, and not explicitly with the objective of increasing cache hits. VI. CONCLUSION In this work, we established a new unified framework for minimizing congestion-dependent network cost by jointly choosing node-based forwarding and caching variables. Relaxing integer constraints on caching variables, we used a version of the conditional gradient algorithm to develop MinDelay, an adaptive and distributed joint forwarding and caching algorithm for the original mixed-integer optimization problem. The MinDelay algorithm elegantly yields feasible routing variables and integer caching variables at each iteration, and can be implemented in a distributed manner with low complexity and overhead. Simulation results show that while the VIP algorithm performs well in high request arrival rate regions, MinDelay typically has significantly better delay performance in the low to moderate request rate regions. Thus, the MinDelay and VIP algorithms complement each other in delivering superior delay performance across the entire range of request arrival rates. The elegant simplicity and superior performance of the MinDelay algorithm raise many interesting questions for future work. Specifically, we are interested in analytically characterizing the time-asymptotic behavior of MinDelay, as well as providing guarantees on the gap between the MinDelay performance and the theoretically optimal performance for the joint forwarding and caching problem.
2017-10-14T04:01:59.000Z
2017-10-14T00:00:00.000
{ "year": 2017, "sha1": "e76de16b2bf362d524b1e77c3430476fb7141360", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "c4bff14270460f94583427473f5d123a90aec219", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
21535297
pes2o/s2orc
v3-fos-license
Eigenstate thermalization within isolated spin-chain systems The thermalization phenomenon and many-body quantum statistical properties are studied on the example of several observables in isolated spin-chain systems, both integrable and generic non-integrable ones. While diagonal matrix elements for non-integrable models comply with the eigenstate thermalization hypothesis (ETH), the integrable systems show evident deviations and similarity to properties of noninteracting many-fermion models. The finite-size scaling reveals that the crossover between two regimes is given by a scale closely related to the scattering length. Low-frequency off-diagonal matrix elements related to d.c. transport quantities in a generic system also follow the behavior analogous to the ETH, however unrelated to the one of diagonal elements. Many-body quantum (MBQ) systems and models have been extensively studied in the last decades in connection with novel materials, offering a fresh view on the fundamentals and the interpretation of statistical mechanics. The systematic analysis of the phenomena of thermalization and the limitations of a statistical treatment within isolated MBQ systems have been recently motivated by experiments on cold atoms in optical lattices, revealing very slow relaxation to thermal equilibrium [1,2], but as well by prototype integrable MBQ systems as the one-dimensional Heisenberg model realized in real materials [3]. Specific for lattice MBQ systems discussed in the above connection is (in contrast to single-body quantum systems) the exponential growth of the Hilbert space and the number of eigenstates N st with the lattice size L. Here, one of the fundamental questions is to what extent even a single eigenstate or a single chosen initial wave-function could be the representative of the canonical ensemble average within the given system, both for static and dynamical quantities. For generic MBQ systems one of the central statements is the eigenstate thermalization hypothesis (ETH) [4,5] that for a few-body observable A diagonal matrix elements (ME) A αα at a given energy show only exponentially (in L) small deviations from the average, being a smooth function of the energy only. Since at the same time the off-diagonal ME are as well exponentially small, the time-average of the observable is determined by diagonal terms only. Therefore for any initial wave-function |ψ 0 with a small energy uncertainty the long-time average is also equal to the thermal average, this being the general condition for the quantum thermalization process [6]. 
We note that such a hypothesis is also underlying some numerical methods for the calculation of finite-temperature properties, in particular the microcanonical Lanczos method (MCLM) [7,8] for T > 0 static and dynamical properties of MBQ lattice systems. It seems also evident that the ETH is intimately related to general properties of eigenenergy spectra, i.e. level statistics and dynamics in generic MBQ systems, which reveal Wigner-Dyson level statistics with the origin in level repulsion and analogy to random matrix spectra [9,10]. The deviations from the ETH and normal thermalization have been detected is several directions. The hypothesis is not obeyed in integrable MBQ systems [6,[11][12][13][14], although some observables can still thermalize, i.e., approach the equilibrium (canonical ensemble average) value, in particular if the Gibbs statistical ensemble is generalized to include all local conserved quantities in this case [11,14]. The thermalization can become very slow and the validity of ETH can become restricted if an initial state is far from equilibrium [12,[15][16][17] as relevant for sudden quenches in cold-atom systems. The latter question is intimately related to the deviation from integrability [13] and the size of isolated MBQ systems [6,15,17]. On the other hand, the ETH does not resolve the question of the relation to off-diagonal ME (even in generic non-integrable systems) which are, e.g., relevant for transport properties and dissipation in the d.c. limit [18][19][20]. In this Letter we study the validity of the ETH and thermalization within a quantum spin-chain system in one dimension (1D), i.e., the antiferromagnetic and anisotropic S = 1/2 Heisenberg model (AHM), including integrable (I) and nonintegrable (NI) cases. While we confirm in the generic NI case the ETH for diagonal ME of several local observables, we find large deviations and fluctuations for the I case. In particular, we show that the spread of diagonal ME can be qualitatively and even quantitatively understood from the XX model equivalent to the model of noninteracting fermions. With the aim to resolve the problem of the breakdown of the ETH in finite systems we perform the finite-size scaling in NI systems revealing that the crossover from the I regime to the ETHconsistent behavior is determined by a single scale L * , coinciding with the transport scattering length. Another finding is that the low-frequency off-diagonal and diagonal ME are not universally related even in NI systems, hence the ETH does not directly address the low-ω dynamics and the d.c. transport quantities, and the generalization of the ETH is necessary. As the prototype model we study in the following the anisotropic S = 1/2 Heisenberg model on a chain with L sites and periodic boundary conditions, where S α i (α = x, y, z) are spin S = 1/2 operators at site i and ∆ represents the anisotropy. The nearest-neighbor XXZ model is an integrable one and we introduce the next-nearestneighbor zz-interaction with ∆ 2 = 0 in order to break its integrability. It should be reminded that the Hamiltonian (1) can be mapped on a t-V -W model of interacting spinless fermions. A consequence of the integrability at ∆ 2 = 0 is the existence of a macroscopic number of conserved local quantities and operators Q n , n = 1 − L commuting with the Hamiltonian, [Q n , H] = 0. 
For the 1D AHM a nontrivial example is Q 3 = J E representing the energy current and leading directly to its non-decaying behavior [21,22] and dissipationless thermal conductivity [3]. In order to study eigenstate and ME properties of the AHM we choose some simple local (but q = 0) quantities involving only few sites. Evident candidates are nontrivial quantities involving n = 2 sites, where we consider the "kinetic" energy H kin containing the first two terms in Eq. (1) and the spin current J s , while for a representative of n = 3 operators we consider the energy-current J E (not including the ∆ 2 term), The choice is motivated by different properties of the considered operators. While J E is a strictly conserved quantity for the I case, J s is not, but still leads to dissipationless (nondecaying) spin transport. Both are current operators with ME distributed around the ensemble average J s,E nm = 0. On the other hand, H kin has not such a specific property. In the following we present results reachable via the exact diagonalization of the model, Eq. (1), on chains up to L = 20. The total spin S z tot = M is fixed to M = −1 (in order to avoid "particle-hole" symmetry) while we consider both, the representative sector with wavevector k = 2π/L and the whole k-average as well. First, we present results for the distribution of diagonal ME, i.e., J s nn , J E nn , and H kin nn , as they arise varying eigenenergies E = E n . In Fig. 1 we show corresponding 2D plots obtained within the gapless regime (∆ = 0.5) and for the magnetization M = −1 (due to "particle-hole" symmetry J s nn vanishes at M = 0). Figure 1 reveals an evident difference between the NI example with ∆ 2 = 0.5 and the I case with ∆ 2 = 0. All quantities show for the NI example a narrow distribution around the averageĀ On contrary, for the I case distributions are much wider with a weaker size dependence, clearly not obeying the ETH. The distribution for J s and J E is intimately related to the anomalous T > 0 spin and energy-current stiffness (without degen- eracies), respectively, for the I model [18,19,22], whereβ s = β,β E = β 2 with β = 1/T . It is evident that the existence of D s,E (T > 0) > 0 implies that currents as J s,E do not thermalize to their thermal average J s,E = 0. In particular, their correlation functions do not decay to zero, J s,E (t → ∞)J s,E = 0, and their time evolution depends crucially on the ensemble of initial states. The same appears to be the case for H kin , although a physical interpretation is less familiar. With values of D s,E (T ) known from the Bethe Ansatz [21], and moreover for the energy-current stiffness D E (T → ∞) obtained easily via the high-T expansion, one can evaluate the distribution widths σ s,E d (E) ∝ √ L. Since analogous quantities to stiffness are not known in general, one can use in the gapless regime (∆ < 1) as a semiquantitative guide results for the ∆ = 0 XX model. The latter can be mapped to the model of non-interacting fermions, H = k ǫ k n k , ǫ k = J cos k, being trivially integrable with all n k = 0, 1 as constants of motion, with corresponding currents J s,E = k (1, ǫ k )v k n k . 
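The qualitative contrast described above can be reproduced with a small exact-diagonalization sketch. The code below spells out an assumed form of Eq. (1) (an XXZ chain with a next-nearest-neighbor zz term) together with the standard XXZ spin-current operator, and compares the spread of the diagonal matrix elements ⟨n|J_s|n⟩ for ∆2 = 0 and ∆2 = 0.5. It works in the full Hilbert space rather than in the fixed-magnetization, fixed-momentum sectors used in the paper, so it only illustrates the trend; in particular, degeneracies of the symmetric model make individual diagonal elements basis-dependent.

```python
import numpy as np

# Single-site spin-1/2 operators
sx = 0.5 * np.array([[0, 1], [1, 0]], dtype=complex)
sy = 0.5 * np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = 0.5 * np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def site_op(op, i, L):
    """Embed a single-site operator at site i of an L-site chain (Kronecker products)."""
    mats = [I2] * L
    mats[i] = op
    full = mats[0]
    for m in mats[1:]:
        full = np.kron(full, m)
    return full

def build(L, delta, delta2, J=1.0):
    """Assumed Hamiltonian of Eq. (1) and the standard XXZ spin-current operator."""
    Sx = [site_op(sx, i, L) for i in range(L)]
    Sy = [site_op(sy, i, L) for i in range(L)]
    Sz = [site_op(sz, i, L) for i in range(L)]
    H = np.zeros((2 ** L, 2 ** L), dtype=complex)
    Js = np.zeros_like(H)
    for i in range(L):
        j = (i + 1) % L                           # periodic boundary conditions
        H += J * (Sx[i] @ Sx[j] + Sy[i] @ Sy[j] + delta * Sz[i] @ Sz[j])
        H += J * delta2 * Sz[i] @ Sz[(i + 2) % L]
        Js += J * (Sx[i] @ Sy[j] - Sy[i] @ Sx[j])  # spin current
    return H, Js

L = 8
for delta2, label in [(0.0, "integrable"), (0.5, "non-integrable")]:
    H, Js = build(L, delta=0.5, delta2=delta2)
    vals, vecs = np.linalg.eigh(H)
    diag = np.real(np.einsum("in,ij,jn->n", vecs.conj(), Js, vecs))
    print(f"{label:16s} spread of diagonal <n|J_s|n>: {np.std(diag):.4f}")
```

Even at this small size the integrable chain shows a visibly broader distribution of diagonal elements, in line with the discussion above, although the quantitative widths require the sector-resolved treatment of the paper.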
The calculation of σ s,E d (E) at fixed magnetization M = k (n k − 1/2) averaged over energies E is for L → ∞ equivalent to the grand-canonical averaging in the limit β → 0 yielding for the unpolarized case N = L/2: σ s On the other hand, instead of H kin (being within the XY limit equal to H) one can treat in an analogous way the complementary potential term H ∆ with the result σ ∆ d = J∆ √ L/4 [23]. We note that the above estimates for the widths σ α d represent well the numerical results in Fig. 1 for the I case with ∆ > 0. Next we investigate the crossover from an I to a NI system obeying the ETH. In a finite system fluctuationsσ α d = . This coincides with the well defined and nontrivial D s,E (T → ∞). In particular D E (T → ∞)/β s,E andσ E d can be related to the high-T sum rule (σ E d ) 2 = (1 + 2∆ 2 )/32 [21]. This is, however, not the case for the NI case ∆ 2 = 0. Here, there is an evident decrease with L and crossover to an exponential decrease with L, i.e., ETH-consistent behavior above the crossover scale L > L * . L * crucially depends on the perturbation strength ∆ 2 but apparently is quite universal for all quantities, at least for those considered here. In the case of currents the "thermalization length" L * may be plausibly interpreted in terms of the transport mean free path. The latter can be determined by a standard hydrodynamic relation, 1/(q 2 D) ≫ 1/γ [24], involving the diffusion constant D and the current scattering rate γ. Identifying the mean free path as L * = π/q then yields In the case of the spin current, using for ∆ = ∆ 2 = 0. β → 0 [20], one finds L * ∼ 10. This value turns out to agree well with the scale observed in the inset of Fig. 2. Moreover, γ s → 0 as ∆ 2 → 0 is consistent with a diverging L * . Finally, let us address the relation between off-diagonal and diagonal ME. Since for the I system the behavior can be very singular [20], we concentrate on the generic NI cases satisfying the ETH. In Fig. 3 we present the probability distribution of off-diagonal ME ReJ s,E nm and ReH kin nm , evaluated for ∆ = 0.5 and ∆ 2 = 0.5 in the energy window E = [−δE/2, δE/2] with various δE. Resulting distributions do clearly not depend on δE and appear to be Gaussian with the width σ α od , again in analogy to ETH exponentially decreasing with L. It is a nontrivial observation that both off-diagonal and diagonal ME follow the same scaling with L, as shown in the inset of Fig. 4. It is therefore important and well defined to investigate the ratio of off-diagonal and diagonal ME fluctuations Results for the spin and energy current are presented in Fig. 4, shown vs. E for ∆ 2 = 0.5 and ∆ = 0.5, 1.0. They indicate that r α (E) is not universal (depends on α and model parameters) and smoothly varies with E, but most important is the independence of L. We can conclude that for the cases considered here r α are quite far from the predictions of the random-matrix theory [9,19] implying generally r = 2. On the other hand, the ratio still remains within an order of magnitude in contrast to the I case where in the gapless regime the ratio appears to vanish leaving finite only diagonal ME [20]. The above observation becomes relevant in the evaluation of d.c. transport quantities, which are within linear response theory related to the low-ω absorption [25], e.g., the spin conductivity (diffusivity) and thermal conductivity, respectively, are in analogy to Eq. (3), where the d.c. 
limit should be considered as C α 0 = C α (ω → 0) and can be expressed as where ρ(E) is the MBQ density of states. From our analysis it follows that in generalσ α od (E) cannot be represented by diagonalσ α d (E), although the qualitative behavior appears closely related. Note that for the case of J s diagonal ME can be also expressed as the sensitivity of many-body levels to a fictitious flux φ (or boundary conditions), i.e. J s nn ∝ ∂E n /∂φ, and the latter relation has been previously employed to evaluate the d.c. transport in, e.g., disordered systems [10,26]. Let us in conclusion summarize our results, which appear to be generic beyond spin-chain systems. The behavior of the considered NI systems we find consistent with the ETH for all considered quantities. If we consider the time evolution of an observable, it can be in terms of (finite-system) eigenstates represented as In a system obeying ETH, the off-diagonal contribution vanishes for long times t → ∞, due to the exponential smallness of off-diagonal ME as well due to dephasing [6]. If the initial state |Ψ 0 is a microcanonical one with a narrow distribution δE [with (δE) 2 = n |c n | 2 (E n −Ē) 2 ], and due to ETH A nn ∼ A (Ē), the first term leads to the microcanonical av-erageĀ(t) = A (Ē) in a large system coinciding with the canonical thermodynamical average at a finite T > 0, where E(T ) =Ē. Such a scenario is then consistent with the "normal" quantum thermalization. In an I spin chain the distribution of diagonal ME is large, the long-time average [still neglecting off-diagonal terms in Eq. (8)] in general depends on |Ψ 0 and corresponding c n , even for a small energy uncertainty δE. In order to satisfȳ A(t → ∞) = A one needs assumptions on the distribution of coefficients c n . E.g., in a large enough system randomly chosen c n would plausibly be adequate. In fact, the numerical MCLM method for the evaluation of T > 0 properties [7,8], based on the microcanonical states and the Lanczos procedure, contains such a choice achieved by random sampling. Hence, a random microcanonical state in a large MBQ system would mostly obey the thermalization process. Still, this is not at all the case for particular states as, e.g., reached by (strong) quenching in an I system, but as well not in a generic system [13,17] since the initial state after the quench is not necessarily the microcanonical one with small δE. Analyzing the extent of the validity of the ETH and thermalization in a finite-size MBQ system, we find effectively that perturbed I systems beyond the crossover length L * behave as generic NI ones. Since in a "normal" spin system only total spin and energy are conserved, one can design two relevant diffusion scales and plausibly the largest would determine L * , which then appears to dominate the scaling of all quantities, as shown in Fig. 3. The understanding and the determination of L * is evidently an important theoretical goal, relevant also for experiments dealing with systems close to integrability [3,27]. The ETH addresses thermalization and statistical description of static quantities in MBQ systems, with the behavior determined by diagonal ME. On the other hand, d.c. transport quantities and low-ω dynamics involve only off-diagonal ME. We note that in a generic system, properties analogous to the ETH can be defined for off-diagonal ME close in energy, in particular obeying the Gaussian distribution and exponential dependence on size. 
Also, the relation between diagonal and off-diagonal ME is independent of size L, but still the ratio is not universal. In this sense, our results show that for such considerations the generalization of the ETH is needed but also is straightforward, and it can include the response to weak external fields and dissipation phenomena in MBQ systems. This research was supported by the RTN-LOTHERM project and the Slovenian Agency grant No. P1-0044.
2013-01-16T15:59:16.000Z
2012-08-30T00:00:00.000
{ "year": 2012, "sha1": "c0db2797737938454344fa67083979e60268c66e", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1208.6143", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "c0db2797737938454344fa67083979e60268c66e", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics", "Medicine" ] }
1623
pes2o/s2orc
v3-fos-license
Error-Driven Pruning of Treebank Grammars for Base Noun Phrase Identification Finding simple, non-recursive, base noun phrases is an important subtask for many natural language processing applications. While previous empirical methods for base NP identification have been rather complex, this paper instead proposes a very simple algorithm that is tailored to the relative simplicity of the task. In particular, we present a corpus-based approach for finding base NPs by matching part-of-speech tag sequences. The training phase of the algorithm is based on two successful techniques: first the base NP grammar is read from a ``treebank'' corpus; then the grammar is improved by selecting rules with high ``benefit'' scores. Using this simple algorithm with a naive heuristic for matching rules, we achieve surprising accuracy in an evaluation on the Penn Treebank Wall Street Journal. Introduction Finding base noun phrases is a sensible first step for many natural language processing (NLP) tasks: Accurate identification of base noun phrases is arguably the most critical component of any partial parser; in addition, information retrieval systems rely on base noun phrases as the main source of multi-word indexing terms; furthermore, the psycholinguistic studies of Gee and Grosjean (1983) indicate that text chunks like base noun phrases play an important role in human language processing. In this work we define base NPs to be simple, nonrecursive noun phrases --noun phrases that do not contain other noun phrase descendants. The bracketed portions of Figure 1, for example, show the base NPs in one sentence from the Penn Treebank Wall Street Journal (WSJ) corpus (Marcus et al., 1993). Thus, the string the sunny confines of resort towns like Boca Raton and Hot Springs is too complex to be a base NP; instead, it contains four simpler noun phrases, each of which is considered a base NP: the sunny confines, resort towns, Boca Raton, and Hot Springs. Previous empirical research has addressed the problem of base NP identification. Several algorithms identify "terminological phrases" --certain base noun phrases with initial determiners and modifiers removed: Justeson & Katz (1995) look for repeated phrases; Bourigault (1992) uses a handcrafted noun phrase grammar in conjunction with heuristics for finding maximal length noun phrases; Voutilainen's NPTool (1993) uses a handcrafted lexicon and constraint grammar to find terminological noun phrases that include phrase-final prepositional phrases. Church's PARTS program (1988), on the other hand, uses a probabilistic model automatically trained on the Brown corpus to locate core noun phrases as well as to assign parts of speech. More recently, Ramshaw & Marcus (In press) apply transformation-based learning (Brill, 1995) to the problem. Unfortunately, it is difficult to directly compare approaches. Each method uses a slightly different definition of base NP. Each is evaluated on a different corpus. Most approaches have been evaluated by hand on a small test set rather than by automatic comparison to a large test corpus annotated by an impartial third party. A notable exception is the Ramshaw & Marcus work, which evaluates their transformation-based learning approach on a base NP corpus derived from the Penn Treebank WSJ, and achieves precision and recall levels of approximately 93%. This paper presents a new algorithm for identifying base NPs in an arbitrary text. Like some of the earlier work on base NP identification, ours is a trainable, corpus-based algorithm. 
In contrast to other corpus-based approaches, however, we hypothesized that the relatively simple nature of base NPs would permit their accurate identification using correspondingly simple methods. Assume, for example, that we use the annotated text of Figure 1 as our training corpus. To identify base NPs in an unseen text, we could simply search for all occurrences of the base NPs seen during training -it, time, their biannual powwow, ..., Hot Springs --and mark them as base NPs in the new text. However, this method would certainly suffer from data sparseness. Instead, we use a similar approach, but back off from lexical items to parts of speech: we identify as a base NP any string having the same part-of-speech tag sequence as a base NP from the training corpus. The training phase of the algorithm employs two previously successful techniques: like Charniak's (1996) statistical parser, our initial base NP grammar is read from a "treebank" corpus; then the grammar is improved by selecting rules with high "benefit" scores. Our benefit measure is identical to that used in transformation-based learning to select an ordered set of useful transformations (Brill, 1995). Using this simple algorithm with a naive heuristic for matching rules, we achieve surprising accuracy in an evaluation on two base NP corpora of varying complexity, both derived from the Penn Treebank WSJ. The first base NP corpus is that used in the Ramshaw & Marcus work. The second espouses a slightly simpler definition of base NP that conforms to the base NPs used in our Empire sentence analyzer. These simpler phrases appear to be a good starting point for partial parsers that purposely delay all complex attachment decisions to later phases of processing. Overall results for the approach are promising. For the Empire corpus, our base NP finder achieves 94% precision and recall; for the Ramshaw & Marcus corpus, it obtains 91% precision and recall, which is 2% less than the best published results. Ramshaw & Marcus, however, provide the learning algorithm with word-level information in addition to the partof-speech information used in our base NP finder. By controlling for this disparity in available knowledge sources, we find that our base NP algorithm performs comparably, achieving slightly worse precision (-1.1%) and slightly better recall (+0.2%) than the Ramshaw & Marcus approach. Moreover, our approach offers many important advantages that make it appropriate for many NLP tasks: * Training is exceedingly simple. . The base NP bracketer is very fast, operating in time linear in the length of the text. . The accuracy of the treebank approach is good for applications that require or prefer fairly simple base NPs. . The learned grammar is easily modified for use with corpora that differ from the training texts. Rules can be selectively added to or deleted from the grammar without worrying about ordering effects. * Finally, our benefit-based training phase offers a simple, general approach for extracting grammars other than noun phrase grammars from annotated text. Note also that the treebank approach to base NP identification obtains good results in spite of a very simple algorithm for "parsing" base NPs. This is extremely encouraging, and our evaluation suggests at least two areas for immediate improvement. First, by replacing the naive match heuristic with a probabilistic base NP parser that incorporates lexical preferences, we would expect a nontrivial increase in recall and precision. 
Second, many of the remaining base NP errors tend to follow simple patterns; these might be corrected using localized, learnable repair rules. The remainder of the paper describes the specifics of the approach and its evaluation. The next section presents the training and application phases of the treebank approach to base NP identification in more detail. Section 3 describes our general approach for pruning the base NP grammar as well as two instantiations of that approach. The evaluation and a discussion of the results appear in Section 4, along with techniques for reducing training time and an initial investigation into the use of local repair heuristics. 2 The Treebank Approach Figure 2 depicts the treebank approach to base NP identification. For training, the algorithm requires a corpus that has been annotated with base NPs. More specifically, we assume that the training corpus is a sequence of words wl, w2,..., along with a set of base NP annotations b indicates that the NP brackets words i through j: [NP Wi, ..., W j]. The goal of the training phase is to create a base NP grammar from this training corpus: 1. Using any available part-of-speech tagger, assign a part-of-speech tag ti to each word wi in the training corpus. 2. Extract from each base noun phrase b(ij) in the training corpus its sequence of part-of-speech tags tl .... ,tj to form base NP rules, one rule per base NP. The resulting "grammar" can then be used to identify base NPs in a novel text. 1. 2. Assign part-of-speech tags tl, t2,.., to the input words wl, w2, • • • Proceed through the tagged text from left to right, at each point matching the NP rules against the remaining part-of-speech tags ti,ti+l,.., in the text. With the rules stored in an appropriate data structure, this greedy "parsing" of base NPs is very fast. In our implementation, for example, we store the rules in a decision tree, which permits base NP identification in time linear in the length of the tagged input text when using the longest match heuristic. Unfortunately, there is an obvious problem with the algorithm described above. There will be many unhelpful rules in the rule set extracted from the training corpus. These "bad" rules arise from four sources: bracketing errors in the corpus; tagging errors; unusual or irregular linguistic constructs (such as parenthetical expressions); and inherent ambiguities in the base NPs --in spite of their simplicity. For example, the rule (VBG NNS), which was extracted from manufacturing/VBG titans/NNS in the example text, is ambiguous, and will cause erroneous bracketing in sentences such as The execs squeezed in a few meetings before [boarding/VBG buses/NNS~ again. In order to have a viable mechanism for identifying base NPs using this algorithm, the grammar must be improved by removing problematic rules. The next section presents two such methods for automatically pruning the base NP grammar. 3 Pruning the Base NP Grammar As described above, our goal is to use the base NP corpus to extract and select a set of noun phrase rules that can be used to accurately identify base NPs in novel text. Our general pruning procedure is shown in Figure 3. First, we divide the base NP corpus into two parts: a training corpus and a pruning corpus. The initial base NP grammar is extracted from the training corpus as described in Section 2. Next, the pruning corpus is used to evaluate the set of rules and produce a ranking of the rules in terms of their utility in identifying base NPs. 
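The training and application phases described above, together with the per-rule bookkeeping that feeds the benefit-based ranking discussed next, fit in a few dozen lines. The sketch below is a simplified re-creation rather than the original implementation: rules are kept in a plain set instead of a decision tree, input sentences are assumed to be already tagged, and every erroneous bracket is blamed on the rule that proposed it, without the responsibility sharing used in the paper's scoring example.

```python
from collections import defaultdict

def extract_rules(treebank):
    """Training: one rule (a POS-tag tuple) per annotated base NP."""
    rules = set()
    for tags, nps in treebank:                 # nps = list of (start, end) index pairs
        for s, e in nps:
            rules.add(tuple(tags[s:e + 1]))
    return rules

def bracket(tags, rules):
    """Application: greedy left-to-right, longest-match bracketing of tag sequences."""
    max_len = max((len(r) for r in rules), default=0)
    spans, i = [], 0
    while i < len(tags):
        match = None
        for n in range(min(max_len, len(tags) - i), 0, -1):
            cand = tuple(tags[i:i + n])
            if cand in rules:
                match = (i, i + n - 1, cand)
                break
        if match:
            spans.append(match)
            i = match[1] + 1
        else:
            i += 1
    return spans

def benefit_scores(pruning_corpus, rules):
    """Per-rule benefit = correct brackets minus erroneous brackets on the pruning corpus."""
    correct, errors = defaultdict(int), defaultdict(int)
    for tags, gold in pruning_corpus:
        gold_set = set(gold)
        for s, e, rule in bracket(tags, rules):
            if (s, e) in gold_set:
                correct[rule] += 1
            else:
                errors[rule] += 1              # simplified: no "responsibility" sharing
    return {r: correct[r] - errors[r] for r in rules}

# toy example: "resort/NN towns/NNS like/IN Boca/NNP Raton/NNP"
corpus = [(["NN", "NNS", "IN", "NNP", "NNP"], [(0, 1), (3, 4)])]
rules = extract_rules(corpus)
print(bracket(["DT", "NN", "NNS", "IN", "NNP", "NNP"], rules))
```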
More specifically, we use the rule set and the longest match heuristic to find all base NPs in the pruning corpus. Performance of the rule set is measured in terms of labeled precision (P): P = (# of correct proposed NPs) / (# of proposed NPs). We then assign to each rule a score that denotes the "net benefit" achieved by using the rule during NP parsing of the improvement corpus. The benefit of rule r is given by B r = C r − E r , where C r is the number of NPs correctly identified by rule r and E r is the number of precision errors for which rule r is responsible. Consider, for example, a fragment whose correct base NPs are Boca Raton, Hot Springs, and Palm Beach. The rule (NNP NNP , NNP) incorrectly identifies Boca Raton , Hot as a noun phrase, so its score is -1. Rule (NNP) incorrectly identifies Springs, but it is not held responsible for the error because of the previous error by (NNP NNP , NNP) on the same original NP Hot Springs: so its score is 0. Finally, rule (NNP NNP) receives a score of 1 for correctly identifying Palm Beach as a base NP. The benefit scores from evaluation on the pruning corpus are used to rank the rules in the grammar. With such a ranking, we can improve the rule set by discarding the worst rules. Thus far, we have investigated two iterative approaches for discarding rules, a thresholding approach and an incremental approach. We describe each, in turn, in the subsections below. (This same benefit measure is also used in the R&M study, but it is used to rank transformations rather than to rank NP rules.) Threshold Pruning Given a ranking on the rule set, the threshold algorithm simply discards rules whose score is less than a predefined threshold R. For all of our experiments, we set R = 1 to select rules that propose more correct bracketings than incorrect. The process of evaluating, ranking, and discarding rules is repeated until no rules have a score less than R. For our evaluation on the WSJ corpus, this typically requires only four to five iterations. Incremental Pruning Thresholding provides a very coarse mechanism for pruning the NP grammar. In particular, because of interactions between the rules during bracketing, thresholding discards rules whose score might increase in the absence of other rules that are also being discarded. Consider, for example, the Boca Raton fragments given earlier. In the absence of (NNP NNP , NNP), the rule (NNP NNP) would have received a score of three for correctly identifying all three NPs. As a result, we explored a more fine-grained method of discarding rules: Each iteration of incremental pruning discards the N worst rules, rather than all rules whose rank is less than some threshold. In all of our experiments, we set N = 10. As with thresholding, the process of evaluating, ranking, and discarding rules is repeated, this time until precision of the current rule set on the pruning corpus begins to drop. The rule set that maximized precision becomes the final rule set. Human Review In the experiments below, we compare the thresholding and incremental methods for pruning the NP grammar to a rule set that was pruned by hand. When the training corpus is large, exhaustive review of the extracted rules is not practical. This is the case for our initial rule set, culled from the WSJ corpus, which contains approximately 4500 base NP rules. Rather than identifying and discarding individual problematic rules, our reviewer identified problematic classes of rules that could be removed from the grammar automatically. In particular, the goal of the human reviewer was to discard rules that introduced ambiguity or corresponded to overly complex base NPs. Within our partial parsing framework, these NPs are better identified by more informed components of the NLP system.
Our reviewer identified the following classes of rules as possibly troublesome: rules that contain a preposition, period, or colon; rules that contain WH tags; rules that begin/end with a verb or adverb; rules that contain pronouns with any other tags; rules that contain misplaced commas or quotes; rules that end with adjectives. Rules covered under any of these classes were omitted from the human-pruned rule sets used in the experiments of Section 4. 4 Evaluation To evaluate the treebank approach to base NP identification, we created two base NP corpora. Each is derived from the Penn Treebank WSJ. The first corpus attempts to duplicate the base NPs used in the Ramshaw & Marcus (R&M) study. The second corpus contains slightly less complicated base NPs -- base NPs that are better suited for use with our sentence analyzer, Empire (see Footnote 2). By evaluating on both corpora, we can measure the effect of noun phrase complexity on the treebank approach to base NP identification. In particular, we hypothesize that the treebank approach will be most appropriate when the base NPs are sufficiently simple. For all experiments, we derived the training, pruning, and testing sets from the 25 sections of the Wall Street Journal distributed with the Penn Treebank II. All experiments employ 5-fold cross validation. More specifically, in each of five runs, a different fold is used for testing the final, pruned rule set; three of the remaining folds comprise the training corpus (to create the initial rule set); and the final partition is the pruning corpus (to prune bad rules from the initial rule set). All results are averages across the five folds. Performance is measured in terms of precision and recall. Precision was described earlier -- it is a standard measure of accuracy. Recall, on the other hand, is an attempt to measure coverage: P = (# of correct proposed NPs) / (# of proposed NPs); R = (# of correct proposed NPs) / (# of NPs in the annotated text). Table 1 summarizes the performance of the treebank approach to base NP identification on the R&M and Empire corpora using the initial and pruned rule sets. The first column of results shows the performance of the initial, unpruned base NP grammar. The next two columns show the performance of the automatically pruned rule sets. The final column indicates the performance of rule sets that had been pruned using the handcrafted pruning heuristics. As expected, the initial rule set performs quite poorly. Both automated approaches provide significant increases in both recall and precision. In addition, they outperform the rule set pruned using handcrafted pruning heuristics. (Footnote 2: Very briefly, the Empire sentence analyzer relies on partial parsing to find simple constituents like base NPs and verb groups. Machine learning algorithms then operate on the output of the partial parser to perform all attachment decisions. The ultimate output of the parser is a semantic case frame representation of the functional structure of the input sentence.) Throughout the table, we see the effects of base NP complexity -- the base NPs of the R&M corpus are substantially more difficult for our approach to identify than the simpler NPs of the Empire corpus. For the R&M corpus, we lag the best published results (93.1P/93.5R) by approximately 3%. This straightforward comparison, however, is not entirely appropriate. Ramshaw & Marcus allow their learning algorithm to access word-level information in addition to part-of-speech tags. The treebank approach, on the other hand, makes use only of part-of-speech tags.
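For reference, the two scores can be computed directly from the proposed and annotated bracketings. This small helper is our own illustration (not part of the original evaluation code); it treats each base NP as a (sentence, start, end) triple.

```python
def precision_recall(proposed, gold):
    """proposed, gold: sets of (sentence_id, start, end) base NP triples."""
    correct = len(proposed & gold)
    precision = correct / len(proposed) if proposed else 0.0
    recall = correct / len(gold) if gold else 0.0
    return precision, recall

# Toy usage: one of two proposed NPs matches one of two gold NPs.
p, r = precision_recall({(0, 0, 1), (0, 3, 5)}, {(0, 0, 1), (0, 3, 4)})
print(f"P = {p:.2f}, R = {r:.2f}")   # P = 0.50, R = 0.50
```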
Table 2 compares Ramshaw & Marcus' (In press) results with and without lexical knowledge. The first column reports their performance when using lexical templates; the second when lexical templates are not used; the third again shows the treebank approach using incremental pruning. The treebank approach and the R&M approach without lexical templates are shown to perform comparably (-1.1P/+0.2R). Lexicalization of our base NP finder will be addressed in Section 4.1. Finally, note the relatively small difference between the threshold and incremental pruning methods in Table 1. For some applications, this minor drop in performance may be worth the decrease in training time. Another effective technique to speed up training is motivated by Charniak's (1996) observation that the benefit of using rules that only occurred once in training is marginal. By discarding these rules before pruning, we reduce the size of the initial grammar -- and the time for incremental pruning -- by 60%, with a performance drop of only -0.3P/-0.1R. Errors and Local Repair Heuristics It is informative to consider the kinds of errors made by the treebank approach to bracketing. In particular, the errors may indicate options for incorporating lexical information into the base NP finder. Given the increases in performance achieved by Ramshaw & Marcus by including word-level cues, we would hope to see similar improvements by exploiting lexical information in the treebank approach. For each corpus we examined the first 100 or so errors and found that certain linguistic constructs consistently cause trouble. In response, we designed three local repair heuristics; they represent an initial exploration into the effectiveness of employing lexical information in a post-processing phase rather than during grammar induction and bracketing. While we are investigating the latter in current work, local repair heuristics have the advantage of keeping the training and bracketing algorithms both simple and fast. The effect of these heuristics on recall and precision is shown in Table 3. We see consistent improvements for both corpora and both pruning methods, achieving approximately 94P/R for the Empire corpus and approximately 91P/R for the R&M corpus. Note that these are the final results reported in the introduction and conclusion. Although these experiments represent only an initial investigation into the usefulness of local repair heuristics, we are very encouraged by the results. The heuristics uniformly boost precision without harming recall; they help the R&M corpus even though they were designed in response to errors in the Empire corpus. In addition, these three heuristics alone recover 1/2 to 1/3 of the improvements we can expect to obtain from lexicalization based on the R&M results. Conclusions This paper presented a new method for identifying base NPs. Our treebank approach uses the simple technique of matching part-of-speech tag sequences, with the intention of capturing the simplicity of the corresponding syntactic structure. It employs two existing corpus-based techniques: the initial noun phrase grammar is extracted directly from an annotated corpus; and a benefit score calculated from errors on an improvement corpus selects the best subset of rules via a coarse- or fine-grained pruning algorithm. The overall results are surprisingly good, especially considering the simplicity of the method. It achieves 94% precision and recall on simple base NPs. It achieves 91% precision and recall on the more complex NPs of the Ramshaw & Marcus corpus.
We believe, however, that the base NP finder can be improved further. First, the longest-match heuristic of the noun phrase bracketer could be replaced by more sophisticated parsing methods that account for lexical preferences. Rule application, for example, could be disambiguated statistically using distributions induced during training. We are currently investigating such extensions. One approach closely related to ours -- weighted finite-state transducers (e.g. Pereira and Riley, 1997) -- might provide a principled way to do this. We could then consider applying our error-driven pruning strategy to rules encoded as transducers. Second, we have only recently begun to explore the use of local repair heuristics. While initial results are promising, the full impact of such heuristics on overall performance can be determined only if they are systematically learned and tested using available training data. Future work will concentrate on the corpus-based acquisition of local repair heuristics. In conclusion, the treebank approach to base NPs provides an accurate and fast bracketing method, running in time linear in the length of the tagged text. The approach is simple to understand, implement, and train. The learned grammar is easily modified for use with new corpora, as rules can be added or deleted with minimal interaction problems. Finally, the approach provides a general framework for developing other treebank grammars (e.g., for subject/verb/object identification) in addition to the one for base NPs.
1998-08-26T10:39:07.000Z
1998-08-10T00:00:00.000
{ "year": 1998, "sha1": "f5bb34e38e3403054d4396fc48882f02eae1ffcc", "oa_license": null, "oa_url": "http://dl.acm.org/ft_gateway.cfm?id=980881&type=pdf", "oa_status": "BRONZE", "pdf_src": "ACL", "pdf_hash": "b8784cec7219b8caec1671ca7a145badc7f1128e", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
58919593
pes2o/s2orc
v3-fos-license
Constraints on intrinsic alignment contamination of weak lensing surveys using the MegaZ-LRG sample Correlations between the intrinsic shapes of galaxies and the large-scale galaxy density field provide an important tool to investigate galaxy intrinsic alignments, which constitute a major astrophysical systematic in cosmological weak lensing (cosmic shear) surveys, but also yield insight into the formation and evolution of galaxies. We measure galaxy position-shape correlations in the MegaZ-LRG sample for more than 800,000 luminous red galaxies, making the first such measurement with a photometric redshift sample. In combination with a re-analysis of several spectroscopic SDSS samples, we constrain an intrinsic alignment model for early-type galaxies over long baselines in redshift (z ~ 0.7) and luminosity (4mag). We develop and test the formalism to incorporate photometric redshift scatter in the modelling. For r_p>6 Mpc/h, the fits to galaxy position-shape correlation functions are consistent with the scaling with r_p and redshift of a revised, nonlinear version of the linear alignment model for all samples. An extra redshift dependence proportional to (1+z)^n is constrained to n=-0.3+/-0.8 (1sigma). To obtain consistent amplitudes for all data, an additional dependence on galaxy luminosity proportional to L^b with b=1.1+0.3-0.2 is required. The normalisation of the intrinsic alignment power spectrum is found to be (0.077 +/- 0.008)/rho_{cr} for galaxies at redshift 0.3 and r band magnitude of -22 (k- and evolution-corrected to z=0). Assuming zero intrinsic alignments for blue galaxies, we assess the bias on cosmological parameters for a tomographic CFHTLS-like lensing survey. Both the resulting mean bias and its uncertainty are smaller than the 1sigma statistical errors when using the constraints from all samples combined. The addition of MegaZ-LRG data reduces the uncertainty in intrinsic alignment bias on cosmological parameters by factors of three to seven. (abridged) Introduction Within the past decade, cosmic shear, the weak gravitational lensing effect by the large-scale structure of the Universe, has evolved from its first detections (Bacon et al. 2000;Kaiser et al. 2000;van Waerbeke et al. 2000;Wittman et al. 2000) into a wellestablished and regularly used cosmological probe (see for instance Benjamin et al. 2007;Fu et al. 2008;Schrabback et al. 2010 for recent results). Cosmic shear is considered one of the most promising techniques to unravel the properties of dark matter, dark energy and new gravitational physics (e.g. Albrecht et al. 2006;Peacock et al. 2006;Kitching et al. 2008;Thomas et al. 2009;Daniel et al. 2010) and acts as a major science driver for ongoing, upcoming and future ambitious surveys like Pan-STARRS 1 , DES 2 , LSST 3 , JDEM 4 , and Euclid 5 . The strength of weak lensing lies in its sensitivity to both the geometry of the Universe and the growth of structure, as well as its 'clean' relation to the underlying theory which is, except for the smallest scales, solely determined by gravitational interaction. Hence, most efforts in the past years to prepare the analysis of cosmic shear data for high-precision measurements have concentrated on technical issues, in particular the unbiased extraction of shapes from galaxy images (e.g. Bridle et al. 2010;Bernstein 2010;Cypriano et al. 2010;Voigt & Bridle 2010) and the estimation of galaxy redshifts from multi-colour photometry (e.g. Abdalla et al. 2007Abdalla et al. , 2008Zhang et al. 2010;Bernstein & Huterer 2010). 
Most astrophysical sources of systematic errors such as the limited accuracy of N-body simulations (e.g. Hilbert et al. 2009) and baryonic effects on structure growth (e.g. White 2004) are currently subdominant and can, like the technical systematics, largely be reduced by more powerful simulations, specifically designed instrumentation, and optimised analysis techniques. However, this statement does not apply to the intrinsic alignment of galaxies, which is a major astrophysical systematic affecting all two-point and higher-order weak lensing statistics. The estimation of gravitational shear correlations from the observed ellipticities of galaxy images is simple if one assumes that the intrinsic ellipticity of one galaxy is not correlated with either the intrinsic ellipticity of or the gravitational shear acting on another galaxy. Both of these assumptions are incorrect in the presence of intrinsic alignments which are e.g. caused by tidal torquing and stretching of galaxies by the large-scale gravitational field. These effects depend intricately on galaxy formation and evolution including baryonic physics, thus hampering modelling via analytical calculations or N-body simulations. The bias on cosmological parameters when ignoring intrinsic alignments can be severe; e.g. for the dark energy equation of state parameter, Bridle & King (2007) find a deviation of 50 % from a fiducial value of −1 for a Euclid-like survey if intrinsic alignment contamination is not accounted for in the analysis. Correlations between the intrinsic ellipticities of two galaxies (hereafter II) can occur if the galaxies are subject to the tidal forces of the same local or large-scale matter structures. Thus, a pair of galaxies with II correlations must be physically close, i.e. at small separation both in redshift and on the sky, which allows for a relatively straightforward removal of the II signal from cosmic shear data if photometric redshifts are sufficiently accurate (King & Schneider 2002Heymans & Heavens 2003;Takada & White 2004). The mutual alignment of halo shapes and spins has been studied extensively in N-body simulations (Splinter et al. 1997;Onuora & Thomas 2000;Faltenbacher et al. 2002;Hopkins et al. 2005;Faltenbacher et al. 2007Faltenbacher et al. , 2008; Lee et al. 2008), to a limited degree also including the effect of baryonic physics (van den Bosch et al. 2002;Bett et al. 2010;Hahn et al. 2010). Models for II correlations have been developed analytically or via fits to simulations (Croft & Metzler 2000;Heavens et al. 2000;Catelan et al. 2001;Crittenden et al. 2001;Jing 2002;Mackey et al. 2002;, with widely varying results. The II signal can either be observed using galaxy ellipticity correlations at very low redshift where cosmic shear is negligible (Brown et al. 2002 in SuperCOSMOS) or by selecting galaxy pairs with small physical separation (Mandelbaum et al. 2006;Hirata et al. 2007;Brainerd et al. 2009 in various SDSS samples). identified gravitational shearintrinsic ellipticity cross-correlations (hereafter GI) as a further, potentially more serious contaminant of cosmic shear surveys. GI correlations are generated by matter that aligns a nearby foreground galaxy and at the same time contributes to the gravitational lensing signal of a background galaxy. Thus, GI correlations are not restricted to close (along the line-of-sight) pairs. 
This type of intrinsic alignment has a redshift dependence that is very similar to that of lensing, so that GI correlations could be particularly prominent in the deep data of future cosmic shear surveys. The underlying correlations between halo shapes or orientations and the surrounding matter structure have been detected on a large range of mass scales in simulations (Bailin & Steinmetz 2005;Altay et al. 2006;Heymans et al. 2006;Kuhlen et al. 2007). However, as for II correlations, modelling attempts (Hui & Zhang 2002;Heymans et al. 2006; do not currently yield robust predictions. In the absence of a compelling model for GI correlations, methods with a weak dependence on the intrinsic alignment model are required to eliminate biases on cosmology in cosmic shear analyses. Joachimi & Schneider (2008) made use of the characteristic redshift dependence of the GI signal to null it in a fully model-independent way, while King (2005), Bridle & King (2007), Bernstein (2009), and Joachimi & Bridle (2010) introduced very general parametrisations of the intrinsic alignment contributions containing parameters that are then marginalised over (see Kirk et al. 2010 for an application to data). Both approaches feature a considerable loss of information, substantially weakening constraints on cosmological model parameters. Observational constraints on the GI contribution are thus crucial, as they can tighten priors on the intrinsic alignment parameters that need to be marginalised over, and shed light on the underlying physical processes by constraining intrinsic alignment models. To study the GI signal, one must measure the correlations between the matter distribution and the intrinsic shear, i.e. the correlated part of the intrinsic galaxy shapes (see e.g. . Assuming linear biasing, and constructing correlation functions with negligible gravitational lensing contributions, this measurement involves cross-correlating galaxy number densities and ellipticities (Mandelbaum et al. 2006;Hirata et al. 2007;Mandelbaum et al. 2010). These galaxy number density-intrinsic shear correlations were detected by Mandelbaum et al. (2006) in SDSS spectroscopic data at low redshift (z ∼ 0.1). Hirata et al. (2007) extended the analysis to SDSS Luminous Red Galaxies (LRGs) at slightly higher redshift (0.15 < z < 0.35), finding evidence for a dependence on galaxy type and for an increase of the intrinsic alignment signal of early-type galaxies with luminosity. They also considered a small sample of galaxies at intermediate redshifts (z ∼ 0.5) from the 2dF-SDSS LRG and Quasar Survey (2SLAQ, Cannon et al. 2006), but only marginally detected a signal in a bright subsample due to poor statistics. Recently, Mandelbaum et al. (2010) increased the redshift baseline for position-shape correlation measurements of blue galaxies out to z ∼ 0.7 by incorporating a large number of spectroscopic redshifts from the WiggleZ Survey (Drinkwater et al. 2010), reporting a null detection for all subsamples. Analogous to Mandelbaum et al. (2010), this work presents observational constraints for early-type galaxies (out to z 0.7), using SDSS shape measurements together with photometric redshift information from the MegaZ-LRG catalogue Abdalla et al. 2008). In combination with previously analysed red galaxy samples, we cover a wide range of redshifts and galaxy luminosities with high statistical precision, which allows us to narrow down the redshift and luminosity evolution of galaxy number density-intrinsic shear correlations. 
The longer baselines in redshift and luminosity enable a meaningful extrapolation to typical parameter values found for galaxies in cosmic shear surveys, so that we can estimate the contamination due to intrinsic alignments for present-day surveys. For the first time, we include a galaxy sample that only has photometric redshift information into the analysis, and we develop the corresponding formalism. In particular, we account for the spread of the number density-intrinsic shear correlations along the line of sight due to photometric redshift scatter, and determine the importance of other signals such as galaxy-galaxy lensing in the observables. This paper is structured as follows: In Sect. 2 we provide an overview of the galaxy samples used in the analysis. Section 3 contains the methodology for the correlation function measurement. We develop the modelling of photometric redshift number density-intrinsic shear correlations in Sect. 4, deferring the technical aspects of the derivation, as well as a revision of the redshift dependence of the linear alignment model, to two appendices. In Sect. 5 the results of our analysis are presented, including constraints on an intrinsic alignment model and a discussion of systematic tests. These findings are then applied in Sect. 6 to a prediction of the intrinsic alignment contamination of a generic present-day tomographic cosmic shear survey, before we summarise and conclude in Sect. 7. Where appropriate, we follow the notation of Mandelbaum et al. (2006), Hirata et al. (2007), and Mandelbaum et al. (2010). Since in contrast to these foregoing works we have to consider various contributing signals, we denote the observable by 'galaxy position-shape' correlations and otherwise follow the indexing scheme of Joachimi & Bridle (2010). Throughout we will assume as our cosmological model a spatially flat ΛCDM universe with parameters Ω m = 0.25, Ω b = 0.05, σ 8 = 0.8, h = 0.7, and n s = 1.0. While h = 0.7 is used for e.g. power spectrum calculations, all distances etc. will be given in units that are independent of h, i.e. in h −1 Mpc. We use the AB magnitude system and specify luminosities of the galaxy samples under consideration in the SDSS r filter. Absolute magnitudes are consistently given in terms of h = 1 and typically k + e-corrected to z = 0 unless stated otherwise. Data All data used in this paper come from the Sloan Digital Sky Survey (SDSS). The SDSS (York et al. 2000) imaged roughly π steradians of the sky, and followed up approximately one million of the detected objects spectroscopically (Eisenstein et al. 2001;Richards et al. 2002;Strauss et al. 2002). The imaging was carried out by drift-scanning the sky in photometric conditions (Hogg et al. 2001;Ivezić et al. 2004), in five bands (ugriz) (Fukugita et al. 1996;Smith et al. 2002) using a speciallydesigned wide-field camera (Gunn et al. 1998). These imaging data were used to create the galaxy shape measurements that we employ in this paper. All of the data were processed by completely automated pipelines that detect and measure photometric properties of objects, and astrometrically calibrate the data (Lupton et al. 2001;Tucker et al. 2006). The SDSS has had seven major data releases, and is now complete (Stoughton et al. 2002;Abazajian et al. 2003Abazajian et al. , 2004Abazajian et al. , 2005Finkbeiner et al. 2004;Adelman-McCarthy et al. 2006, 2007Abazajian et al. 2009). MegaZ-LRG The MegaZ-LRG sample ) is based on SDSS five-band (ugriz) imaging data. 
It contains more than a million luminous red galaxies at intermediate redshifts between 0.4 and 0.7, i.e. beyond the redshifts of the LRGs already targeted with the SDSS spectrograph (z ≲ 0.45, Eisenstein et al. 2001). While the original catalogue was selected from the 4th SDSS data release, we use the updated version presented in Abdalla et al. (2008) based on data release 6 (Adelman-McCarthy et al. 2008). The photometry in five bands is used to determine photometric redshifts for the MegaZ-LRG sample. For a subset of the galaxies, spectroscopic redshift information is required for calibration and cross-checking, which is provided by the 2SLAQ survey (Cannon et al. 2006). Consequently, the selection criteria of MegaZ-LRG have been designed to match those of 2SLAQ, using a series of magnitude and colour cuts (for details see Cannon et al. 2006; Collister et al. 2007). These criteria have an efficiency of 95 % in detecting LRGs in the redshift range 0.4 ≤ z ≤ 0.7, the failures being almost entirely due to M-type stars. In Sect. 3 we will describe how we account for this contamination in our analysis. The 2SLAQ selection criteria fluctuated a little at the beginning of the survey. Specifically, the faint limit of the i band magnitude i deV (the total magnitude estimated using a de Vaucouleurs profile), and the minimum value of d ⊥ = (r − i) − (g − r)/8 (a colour variable used to select LRGs), were varied slightly. [From the caption of Fig. 1: note that the MegaZ-LRG histograms rely on photometric redshift estimates, and that they have been downscaled by a factor of 20 to facilitate the comparison with the other samples.] For the majority of the 2SLAQ survey, the criteria i deV ≤ 19.8 and d ⊥ ≥ 0.55 were used. For further details on this see Cannon et al. (2006). However, for the full MegaZ-LRG sample described in Collister et al. (2007), the flux limit is i deV ≤ 20, which means that roughly 1/3 of the sample is fainter than the 2SLAQ flux limit. Details about the photometric redshift estimation are provided in Sect. 2.4. As summarised in Table 1, we use about 860,000 galaxies with a mean redshift of 0.54 in the full MegaZ-LRG sample to compute galaxy number densities. [Notes to Table 1: the columns give the sample name, the survey area covered, the number of galaxies N gal used to compute both the galaxy number densities and the shapes, the mean redshift ⟨z⟩, and the mean luminosity ⟨L⟩/L 0 in terms of a fiducial luminosity L 0 that corresponds to a k + e-corrected absolute r band magnitude of −22.] The total number of galaxies is less than that of the full MegaZ-LRG catalogue by Collister et al. (2007) because a fraction of the area comes from imaging data that was not yet processed by the shape measurement pipeline used to estimate galaxy shapes. As will be discussed in Sect. 2.5, accurate shape measurements could be obtained for about 50 % of the MegaZ-LRG galaxies, which are then used to trace the intrinsic shear field. We also compute r band luminosities, taking into account galactic dust extinction (using reddening maps from Schlegel et al. 1998 and extinction-to-reddening ratios from Stoughton et al. 2002), and k+e-correcting the model magnitudes to z = 0 using the same templates as in Wake et al. (2006). Luminosities are given in terms of a fiducial luminosity L 0 , corresponding to a k+e-corrected absolute r band magnitude of M 0 = −22 mag. Due to this procedure, the corrected luminosity acts as a tracer for the total stellar mass, and consequently also for the total mass, of the galaxy. For reference, Blanton et al.
(2003b) obtain a restframe magnitude at z = 0.1 of M * r = −20.44 for a purely fluxlimited sample of SDSS spectroscopic galaxies at z 0.2. Using Wake et al. (2006) to k + e-correct this value to z = 0, we find M * r = −20.32 or L * = 0.21L 0 . Due to a magnitude-dependent redshift success rate and a 0.2 magnitude difference in the i deV cuts (see above), the MegaZ-LRG sample is somewhat fainter in absolute magnitude than the 2SLAQ sample. Thus, we define the z = 0 absolute magnitudes using the MegaZ-LRG photometric redshift values, but we also multiply them by a correction factor that is derived from 2SLAQ. In 2SLAQ, we find that if we use the photometric redshift to define the luminosity rather than the spectroscopic redshift, the mean sample luminosity appears to be 5% too low. Possible sources of this bias are discussed in Sect. 2.4. Thus, we correct the mean L /L 0 for the MegaZ-LRG sample, as defined using photometric redshifts, upwards by 5% to account for the impact of photometric redshift errors, the corrected values being given in Table 1. In addition, with a cut at photometric redshift z = 0.529, we split the sample into two redshift bins, each containing roughly the same number of galaxies. We show the MegaZ-LRG photometric redshift distribution, as well as the luminosity distributions of both MegaZ-LRG subsamples in Fig. 1, where the latter are also based on photometric redshift estimates. As is evident from the figure, the redshift cut for the MegaZ-LRG sample also segregates the galaxies in luminosity since the sample is magnitude limited. Spectroscopic LRGs We also use the SDSS DR4 spectroscopic LRG sample (Eisenstein et al. 2001), for which measurements of galaxy position-shape correlations were presented in Hirata et al. (2007). No new measurements of this sample are made for the current work; instead, we combine the previous measurements with the new MegaZ-LRG data to provide constraints over a longer redshift baseline, see Fig. 1 and Table 1. The SDSS spectroscopic LRG sample has a flux limit of r < 19.2 and colour cuts to isolate LRGs. We include these galaxies in the redshift range 0.16 < z < 0.35, for which the sample is approximately volume-limited, and includes 36 278 galaxies total. In order to study variation within this sample, we use cuts on several parameters. First, we construct luminosities using the r band model magnitudes in the same way as for the MegaZ-LRG sample (Sect. 2.1), and define three luminosity subsamples with M r < −22.6 ('bright'), −22.6 ≤ M r < −22.3 ('medium'), and M r ≥ −22.3 ('faint'), see Table 1. The absolute magnitude cuts are defined in terms of h = H 0 /(100 km s −1 Mpc −1 ) such that one can implement the cuts without specifying the value of H 0 . The magnitudes have been corrected for galactic extinction, and are k + e-corrected to z = 0 using the same templates as in Wake et al. (2006). Each luminosity subsample is in addition split at z = 0.27 into a high-and a low-redshift bin. Main sample Furthermore we consider galaxies at a more typical luminosity, the same subsamples of the DR4 SDSS Main spectroscopic sample as in Mandelbaum et al. (2006), divided by luminosity and other properties. For this work, we use the red galaxies in L3 (from roughly 1.5 to 0.5 magnitudes fainter than typical L * galaxies) and L4 (from 0.5 magnitudes fainter to 0.5 magnitudes brighter than L * ) 6 . 
The sample properties were described in full in that paper; for this work, we mention only that the luminosities described to select the sample are Petrosian magnitudes, extinction-corrected using reddening maps from Schlegel et al. (1998) with the extinction-to-reddening ratios given in Stoughton et al. (2002), and k-corrected to z = 0.1 using kcorrect v3_2 software as described by Blanton et al. (2003a). For consistency with previous work, L3 and L4 were initially selected with respect to this type of absolute magnitude. However, in order to compare this sample with the others, we must also 6 We refrain from including the L5 and L6 samples as well because of their significant overlap with the spectroscopic LRG sample. compute absolute magnitudes in the same way as for those (model instead of Petrosian magnitudes, k + e-corrected to z = 0 instead of 0.1 using specifically red galaxy templates). To use the k + e-corrections as in the previous sections, we must first ensure that we are selecting a properly red galaxy sample for which they are appropriate. Hirata et al. (2007) used an empirically-determined redshiftdependent colour separator of u − r = 2.1 + 4.2z, which uses observer-frame rather than rest-frame colours; within these luminosity bins, the fractions of red galaxies were 0.40 (L3), 0.52 (L4), 0.64 (L5), and 0.80 (L6). For the following analysis, we want to define a red-sequence sample that is comparable to the higher-redshift samples defined previously (which are typically more strictly defined, though see Wake et al. 2006 and Sect. 5.4 for a discussion of the differences between MegaZ-LRG and SDSS LRGs). Thus, for this work we use a different colour separator and recompute the GI correlations for the new 'red' samples. To generate the new colour separator, we first k-correct the extinction-corrected model magnitudes using kcorrect to z = 0. The reason for beginning in this fashion, rather than using the Wake et al. (2006) templates as for the SDSS LRG and MegaZ-LRG samples, is that those k + e-corrections are not actually applicable to the majority of the Main sample since it is fluxlimited and therefore has a significant fraction of blue galaxies. Thus, we begin with kcorrect which imposes a templatedependent k-correction. Then, we place a cut in the distribution of rest-frame g − r, to strictly isolate the reddest galaxies. The cut value was motivated by Padmanabhan et al. (2004), which shows typical values for pressure-supported elliptical galaxies in the Main sample; however the actual cut value is shifted because the k-correction in that paper is to z = 0.1 rather than 0. The new red fractions in L3 and L4 are 22 % and 26 %, respectively. For the galaxies satisfying this cut, we then get absolute magnitudes using the Wake et al. (2006) k + e-corrections, which we find are typically 0.2 magnitudes brighter than the original Petrosian z = 0.1 magnitudes used to define the L3 and L4 samples. The detailed issue of consistency between the different samples in this paper (MegaZ-LRG, spectroscopic LRGs, and Main L3 and L4 red samples) is irrelevant to any attempts to simply fit each sample to an intrinsic alignment amplitude. However, before attempting to combine the measurements in the different samples, we must address this issue. Thus, a detailed comparison will be given in Sect. 5.4. 
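As a compact illustration of the sample definitions above, the helper below (our own sketch, not the pipeline used for the analysis) applies the observer-frame colour separator u − r = 2.1 + 4.2 z quoted from Hirata et al. (2007), a rest-frame g − r cut, and the conversion from absolute magnitude to luminosity in units of the fiducial L 0 (M 0 = −22). The numerical value of the rest-frame colour cut is a placeholder: the text motivates the cut from Padmanabhan et al. (2004) but does not quote the shifted value.

```python
import numpy as np

def is_red_hirata(u, r, z):
    """Observer-frame colour separator of Hirata et al. (2007)."""
    return (u - r) > 2.1 + 4.2 * z

def is_red_restframe(g_minus_r_rest, cut=0.8):
    """Rest-frame g-r cut at z = 0.  The cut value here is a placeholder;
    the paper motivates it from Padmanabhan et al. (2004) but does not
    quote the shifted number in this section."""
    return g_minus_r_rest > cut

def lum_over_L0(M_r, M0=-22.0):
    """Luminosity relative to the fiducial L0 (k+e-corrected M0 = -22)."""
    return 10.0 ** (-0.4 * (np.asarray(M_r) - M0))

print(lum_over_L0(-20.32))   # ~0.21, the L* value quoted above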
Photometric redshifts The MegaZ-LRG catalogue is selected from the SDSS imaging database using a series of colour and magnitude cuts ) that match the default selection criteria of the 2SLAQ survey (Cannon et al. 2006) except for going 0.2 magnitudes fainter. The spectroscopic redshifts available from 2SLAQ were used to train and test the photometric redshift code, which was applied to the entire set of LRGs selected from the SDSS imaging database. Around 13,000 spectroscopic objects were obtained in the 2SLAQ survey, of which about 8000 were used to train a neural network (Collister & Lahav 2004) to produce photometric redshifts, leaving approximately 5100 galaxies to verify the estimates Abdalla et al. 2008). The quality of the photometric redshifts is very good because of the large 4000 Angstrom break present in luminous red galaxies. Accurate photometric redshifts are paramount for our analysis in order to ob- tain good signal-to-noise, and to ensure that the matter-intrinsic correlations dominate the observed signals, see Sect. 4.2. We plot in Fig. 2 the quality of the photometric redshifts that can be tested with the verification sample of ∼ 5000 2SLAQ spectra. We find that the distribution of the differences between photometric redshift estimate and spectroscopic redshift has a mean of zero. As a consequence, the mean trend of the photometric redshift distribution, given a spectroscopic redshift, is indistinguishable from a one-to-one relation. The number of galaxies with a difference between photometric redshiftz and spectroscopic redshift z largely exceeding the typical scatter (5σ or more) is always less than 3 % for any photometric redshift bin in the rangez < 0.65. The distribution of differences between photometric and spectroscopic redshift is well fit by a Gaussian with width 0.024(1 + z) (corresponding to the scatter shown in the figure), in good agreement with the results by Collister et al. (2007) who find a very similar scatter in the range 0.45 <z < 0.50 in which most galaxies of our sample reside. Their scatter increases by up to 50 % for higher photometric redshifts in the range 0.60 <z < 0.65. While the distribution of spectroscopic redshifts given a photometric redshift, i.e. p(z|z) is unbiased on average, the distributions p(z|z) have significant systematic offsets, as is evident from Fig. 2. Photometric redshift estimates for galaxies at low redshift are biased high, those for galaxies at high redshift are biased low, leading to a distribution of photometric redshifts that is more compact than the corresponding spectroscopic distribution. Since faint galaxies are preferentially found at low redshift, and bright galaxies at high redshift, the luminosity distribution of MegaZ-LRG galaxies based on photometric redshifts as displayed in Fig. 1 also appears more compact than the true distribution. We are not directly affected by this change in the shape of the distribution because we base our analysis on mean sample luminosities. However, as was discussed in Sect. 2.1, the redistribution of galaxies due to photometric redshifts also leads to a small change in the mean luminosity of the MegaZ-LRG MegaZ-LRG shape sample MegaZ-LRG shape sample + colour cut histogram for full MegaZ-LRG sample Fig. 3. 
Fraction of galaxies in the MegaZ-LRG sample with high-quality shape measurements, as a function of photometric redshift (top left panel), rest-frame absolute r band magnitude M r (top right panel), i band de Vaucouleurs magnitude i deV which was used as a selection criterion for the MegaZ-LRG catalogue (bottom left panel), and apparent observer-frame r band magnitude used to impose a magnitude limit on the shape catalogue (bottom right panel). The red histograms show the match fraction for the full MegaZ-LRG shape sample, and the blue histograms for shape sample with the additional colour cut that will be discussed in Sect. 5.4. For reference we have added to each panel the histogram of the full MegaZ-LRG sample with arbitrary normalisation as black dotted lines. Note that the fraction of galaxies with shape information does not depend strongly on redshift and r band magnitude. samples which we correct for via the photometric versus spectroscopic redshift relation from 2SLAQ. Note that we employ the photometric versus spectroscopic redshift histogram of Fig. 2 directly in the correlation function models as an approximation to p(z|z), despite the 0.2 magnitudes fainter catalogue used in this work. We will show in Sect. 5.1 that this is not a significant cause of systematic error in our analysis, as expected because the photometric redshift properties are only slightly extrapolated to i deV = 20. We note furthermore that, due to the small field of the 2SLAQ survey, cosmic variance limits the accuracy of using this photometric versus spectroscopic redshift relation for MegaZ-LRG as well. Galaxy shape measurements To measure the intrinsic shears, we use PSF-corrected shape measurements from SDSS, specifically the galaxy ellipticity measurements by Mandelbaum et al. (2005), who obtained shapes for more than 30 million galaxies in the SDSS imaging data down to extinction-corrected model magnitude r = 21.8 using the Reglens pipeline. We refer the interested reader to Hirata & Seljak (2003) for an outline of the PSF correction technique (re-Gaussianisation) and to Mandelbaum et al. (2005) for all details of the shape measurement. The two main criteria for the shape measurement to be considered high quality are that galaxies must (a) have extinction-corrected r band model magnitude r < 21.8, and (b) be well-resolved compared to the PSF size in both r and i bands. The fraction of MegaZ-LRG galaxies with high-quality shape measurements is 50 %, nearly independent of photometric redshift and r band magnitude, see Fig. 3. We also plot this match fraction as a function of the i band de Vaucouleurs magnitude, where i deV ≤ 20 was used as the magnitude cut for the MegaZ-LRG catalogue, and do not find a significant evolution either. It is interesting to note that the observer-frame r − i deV strongly increases with redshift, exceeding unity around z = 0.5, i.e. approximately at the peak of the MegaZ-LRG redshift distribution. Therefore all galaxies close to the r band magnitude limit have to be located at high redshift, given the limit on i deV for MegaZ-LRG, so that any evolution of the match fraction with redshift should imply an evolution with r. Since both are roughly constant in spite of the size cuts in galaxy shape measurements, the decrease in apparent galaxy size due to the larger distance at higher redshift has to be balanced by an increase of the physical dimensions of these galaxies. 
This can be explained by the positive correlation between galaxy size and absolute magnitude, where for a flux-limited survey the galaxies at higher redshift are intrinsically brighter on average. The match fraction as a function of absolute rest-frame r band magnitude M r displays a moderate decrease towards intrinsically fainter galaxies. Since in this case the effects of different galaxy distances have been removed, the galaxy size-luminosity relation causes the more luminous galaxies to have generally a higher share of good shape measurements. Note that in Fig. 3 we also show match fractions for the MegaZ-LRG sample with an additional colour cut that removes the bluest galaxies from the sample, see Sect. 5.4. This cut predominantly removes galaxies which are faint in the r and i deV bands, without a significant dependence on redshift. Moreover we find that the fraction of galaxies with highquality shape measurements exhibits a considerable dependence on observing conditions due to the resolution cut. We have generated mock catalogues that account for all of these variations in the ability to measure galaxy shapes with magnitude and observing conditions. Measurement of correlation functions The software for the computation of galaxy position-shape correlation functions is one of the codes used in Mandelbaum et al. (2006), Hirata et al. (2007), and Mandelbaum et al. (2010). Here we summarise the methodology, and refer the reader to Mandelbaum et al. (2010) for details. This software finds pairs of galaxies (with one belonging to the shape-selected sample used to trace the intrinsic shear field, and the other belonging to the full sample used to trace the density field) using the SDSSpix package 7 . We measure the correlations as a function of comoving transverse separation r p and comoving line-of-sight separation Π of the galaxy pairs, over the complete range of redshifts covered by a sample. The correlation functions must be computed using a large range of Π because of photometric redshift error (see Sect. 4); we divide this range into bins of size ∆Π = 10 h −1 Mpc. We adopt a variant of the estimator presented in Mandelbaum et al. (2006), which is given bŷ where S + D stands for the correlation between all galaxies in the full MegaZ-LRG catalogue, tracing the density field, and those from the subset with shape information, Here, e + ( j|i) denotes the radial component of the ellipticity of galaxy j measured with respect to the direction towards galaxy i out of the number density sample. A similar equation holds for S + R, but in this case the galaxies of the number density sample are taken from a random catalogue compliant with the selection criteria of the MegaZ-LRG data. The subtraction of S + R is meant to remove any spurious shear component, as described in previous works. The shear responsitivity R represents the response of our particular ellipticity definition to a shear, and for an ensemble with rms ellipticity per component of e rms , is roughly 1 − e 2 rms ≈ 0.85. For more details on the random catalogue generation and treatment and the shear responsitivity R see Mandelbaum et al. (2006). The normalisation of (1) is given by the number of pairs D S R with one real galaxy from the shape subsample D S and one random galaxy from the full random catalogue R. All these pair counts are done for galaxies with transverse comoving separation r p and comoving line-of-sight separation Π, for all redshifts z in the sample. 
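The estimator just described can be written down compactly. The following brute-force sketch is our own illustration for small catalogues (the actual measurement uses the SDSSpix-based pair search); it assumes positions already converted to comoving coordinates in a small-patch approximation, adopts the sign convention that radial alignment gives positive e_+, applies a 1/(2R) ellipticity-to-shear conversion with the responsivity R ≈ 0.85 quoted above, and normalises by the D_S R pair counts.

```python
import numpy as np

def xi_gplus(dens_pos, shape_pos, e1, e2, rand_pos,
             rp_edges, pi_edges, responsivity=0.85):
    """Brute-force sketch of the xi_g+ estimator, (S+D - S+R) / (D_S R).

    Positions are (x, y, chi) comoving coordinates in Mpc/h in a small-patch
    approximation; e1, e2 are the ellipticity components of the shape sample,
    with the convention that radial alignment gives positive e_+ (as in the text).
    """
    def splus(density_like):
        s = np.zeros((len(rp_edges) - 1, len(pi_edges) - 1))
        for xs, ys, chis, g1, g2 in zip(shape_pos[:, 0], shape_pos[:, 1],
                                        shape_pos[:, 2], e1, e2):
            dx = density_like[:, 0] - xs          # direction towards the density tracer
            dy = density_like[:, 1] - ys
            pi = density_like[:, 2] - chis
            rp = np.hypot(dx, dy)
            phi = np.arctan2(dy, dx)
            eplus = g1 * np.cos(2 * phi) + g2 * np.sin(2 * phi)
            h, _, _ = np.histogram2d(rp, pi, bins=[rp_edges, pi_edges],
                                     weights=eplus)
            s += h
        return s / (2.0 * responsivity)           # ellipticity -> shear conversion

    def paircount(a, b):
        c = np.zeros((len(rp_edges) - 1, len(pi_edges) - 1))
        for xs, ys, chis in a:
            rp = np.hypot(b[:, 0] - xs, b[:, 1] - ys)
            pi = b[:, 2] - chis
            h, _, _ = np.histogram2d(rp, pi, bins=[rp_edges, pi_edges])
            c += h
        return c

    SplusD = splus(dens_pos)
    SplusR = splus(rand_pos)
    DsR = paircount(shape_pos, rand_pos)
    return (SplusD - SplusR) / np.clip(DsR, 1, None)   # guard against empty bins
```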
To deduce matter-intrinsic shear correlations from ξ g+ , we also measure galaxy clustering for MegaZ-LRG viâ where D S D denotes the number of galaxy pairs between the full MegaZ-LRG catalogue (used to trace the density field) and the intrinsic shear sample, and D S R is the number of pairs with one real galaxy in the sample used to trace the intrinsic shear and one in the (full) random catalogue representing the density field sample. Again, all these galaxies are selected from bins in r p and Π over all redshifts in the sample. By cross-correlating the MegaZ-LRG full and shape samples in (3), we intend to mitigate the effect of a residual star contamination f contam = 0.05 in the sample (see Sect. 2.1) which should enter (3) only linearly, as will be detailed in Sect. 5.2. Determining galaxy clustering from auto-correlations of the full galaxy sample would result in a contamination which is quadratic in f contam to leading order. The projected correlation function is then computed by summation of the correlation functions in (1) and (3) along the line of sight, multiplied by ∆Π. This calculation is done in N bin = 10 logarithmically spaced bins in transverse separation in the range 0.3 < r p < 60 h −1 Mpc. We re-bin the existing correlation functions in r p for the SDSS LRG and Main samples accordingly. For the spectroscopic samples, bins in line-of-sight separation have a width of ∆Π = 4 h −1 Mpc. The cut-off in this stacking process is Π max = 60 h −1 Mpc for the spectroscopic data sets, capturing virtually all of the signal (Mandelbaum et al. 2006;Hirata et al. 2007;Padmanabhan et al. 2007;Mandelbaum et al. 2010). MegaZ-LRG correlation functions are computed for Π max = 90 h −1 Mpc and Π max = 180 h −1 Mpc, where we will investigate the effect of these truncations in more detail in Sect. 5.1. Note that these values of Π max were chosen to roughly coincide with the 1σ and 2σ photometric redshift scatter. Covariance matrices for MegaZ-LRG are determined using a jackknife with 256 regions, in order to account properly for shape noise, shape measurement errors, and cosmic variance. For the total survey area probed by MegaZ-LRG the maximum comoving transverse separations contained in each jackknife region are well above 100 h −1 Mpc at z = 0.5, and thus considerably larger than the maximum r p used for the analysis, so that the jackknife samples are independent. 50 jackknife regions were used to obtain covariances for the SDSS LRG and Main samples (Hirata et al. 2007), where the smaller number of regions is a consequence of the lower mean redshift and the smaller survey area covered. We address the issue of noise in the covariance matrices below in Sect. 4.4. To compute correlation functions as a function of the comoving separations r p and Π, one needs to assume a fiducial cosmology to transform the observable galaxy redshifts and angular separations. The cosmological model for this conversion differs from our fiducial cosmology in the value Ω m = 0.3 for all samples under consideration, in order to maintain consistency with the signals computed for the SDSS LRG sample from Hirata et al. (2007). Since we use our default value of Ω m = 0.25 for all model calculations, this discrepancy could bias our results. We evaluate the effect of the difference in Ω m for the MegaZ-LRG high-redshift bin which is the sample at the highest redshift and should thus be affected most. The change in Π can be safely neglected while r p changes by 2 % at z = 0.59. 
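Projecting along the line of sight and estimating the jackknife covariance described above are then straightforward. In this sketch (ours, for illustration), measure_xi(k) is an assumed helper that returns the binned ξ g+ (r_p, Π) grid with jackknife region k omitted; the projection simply sums the Π bins with weight ΔΠ.

```python
import numpy as np

def project(xi_rp_pi, delta_pi):
    """w_g+(r_p) = sum over line-of-sight bins of xi(r_p, Pi) * dPi."""
    return xi_rp_pi.sum(axis=1) * delta_pi

def jackknife_covariance(measure_xi, n_regions, delta_pi):
    """Delete-one jackknife over sky regions (256 for MegaZ-LRG in the text).

    measure_xi(k) is an assumed helper returning the xi_g+(r_p, Pi) grid
    estimated with jackknife region k removed.
    """
    w = np.array([project(measure_xi(k), delta_pi) for k in range(n_regions)])
    mean = w.mean(axis=0)
    diff = w - mean
    # standard delete-one jackknife prefactor (N - 1) / N
    return (n_regions - 1.0) / n_regions * diff.T @ diff
```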
Converting the transverse separation of the observed correlation functions from Ω m = 0.3 to Ω m = 0.25, we find changes in the fit results for the galaxy bias and the intrinsic alignment model amplitude below 1 % each, and thus conclude that the discrepancy in Ω m can be neglected in our analysis. Modelling While the methodology for the analysis of spectroscopic galaxy samples is already well established (Mandelbaum et al. 2006;Hirata et al. 2007), we consider for the first time a sample with only photometric redshift information, obtained from the MegaZ-LRG catalogue. In this section and in Appendix A we derive the models that are later compared to the observational data, incorporating photometric redshift uncertainty. The photometric redshift scatter has two major effects on our measurements: First, the truncation of the observed correlation function at large line-of-sight separations Π has to be taken into account explicitly in the model. Second, we must assess the importance of contributions other than galaxy number density-intrinsic shear correlations to the observations. For reasons of optimum signalto-noise and a simplified physical interpretation, the observations are expressed as line-of-sight projected correlation functions as a function of comoving transverse separation between galaxy pairs, and we transform the model accordingly. The number density-intrinsic shear correlation function As a first step, we compute the correlation function between galaxy number density and intrinsic shear (hereafter gI), ξ phot gI (r p ,Π,z m ). As before,r p denotes the comoving transverse separation, andΠ the comoving line-of-sight separation of galaxy pairs which are located at a mean redshiftz m . Note that we assign a bar to all quantities derived from photometric redshift estimates. In the following we will refer to correlations of the form ξ(r p , Π, z) as the three-dimensional correlation function. We follow the notation of Joachimi & Bridle (2010) in denoting the different signals that contribute to the observations 8 . As a reminder, for each of the galaxy samples consid-ered, the galaxy pairs used to compute ξ phot gI (r p ,Π,z m ) consist of one galaxy out of the density sample tracing the galaxy distribution and one galaxy out of the subsample with shape information used to trace the intrinsic shear. In Appendix A we derive an approximate procedure to model ξ phot gI (r p ,Π,z m ) to good accuracy. As we show there, ξ phot gI (r p ,Π,z m ) can be obtained via a simple coordinate transformation, see (A.10), from the angular correlation function where the angular gI power spectrum C gI is given in terms of the underlying three-dimensional power spectrum P gI by the Limber equation Here, χ denotes comoving distance, integrated up to the comoving horizon distance χ hor . We have introduced the probability distributions of comoving distances p n for the galaxy density sample and p ǫ for the galaxy shape sample. They are related via p x (χ|χ(z i )) = p x (z|z i ) dz/dχ to the probability of a redshift z given the photometric redshift estimatez i . The latter can be extracted from the histogram in Fig. 2 by a vertical section atz i . Note that we use these distributions, extracted from the 2SLAQ photometric versus spectroscopic relation and linearly interpolated, for both the shape-selected and the full number density sample as we find that their redshift distributions agree to good accuracy. 
This agreement is not obvious because the images of the galaxies selected for shape measurement need to have a certain minimum angular size. We trace the agreement of the two redshift distributions for the MegaZ-LRG sample back to a rough balance between the effect that galaxies at higher redshift, i.e. at larger distance, appear smaller due to the larger angular diameter distance, and the counteracting effect that for a given range of apparent magnitudes, galaxies at higher redshift are on average intrinsically brighter and thus have larger intrinsic physical sizes, see also Sect. 2.5. Throughout, we will assume that the galaxy bias b g is scaleindependent. Then P gI can be related to the matter-intrinsic power spectrum via P gI (k, z) = b g P δI (k, z), where we calculate P δI according to the linear alignment model (Catelan et al. 2001; where D(z) denotes the linear growth factor, normalised to unity today. Equation (6) differs from the result derived by in the dependence on redshift, with the latter expression featuring an additional term (1 + z) 2 . In Appendix B we derive the correct scaling with redshift shown in (6); see also Hirata & Seljak (2010). We absorb the dimensions of the normalisation into the constant C 1 which is set to C 1 ρ cr ≈ 0.0134, following and Bridle & King (2007) who matched the amplitude of the linear alignment model to SuperCOSMOS observations at low redshift. This choice is (2010) in that these works used the term 'GI' for both the measured galaxy number density-ellipticity cross-correlations (gI in this paper) and the derived GI intrinsic alignment term. Fig. 4. Three-dimensional correlation function models of a sample with the MegaZ-LRG photometric redshift error as a function of comoving line-of-sight separation Π and comoving transverse separation r p at z m ≈ 0.5. The galaxy bias has been set to 1.9 in all panels. Top panel: Galaxy clustering correlation (gg). Contours are logarithmically spaced between 1 (yellow shading) and 10 −5 (violet shading) with three lines per decade. Centre panel: Galaxy number density-intrinsic shear correlations (gI). Contours are logarithmically spaced between 10 −3 (yellow shading) and 10 −6 (violet shading) with three lines per decade. Bottom panel: Galaxy-galaxy lensing (gG). For ease of direct comparison, the contours are encoded exactly like in the centre panel. Note that the galaxy-galaxy lensing signal is not symmetric around Π = 0, in contrast to the gg and gI terms. Also, it is negative, so that the modulus is plotted. For an illustration of the effect of photometric redshift scatter see common but of no particular relevance for our study, and the normalisation is in principle arbitrary. Thus, we introduce a dimensionless amplitude parameter A which is free to vary. If not stated otherwise, we use A = 1 to demonstrate our model in the following subsections. The original derivation of the intrinsic alignment model requires the linear matter power spectrum in (6), see e.g. Appendix B, but following Bridle & King (2007) we use the full P δ with nonlinear corrections, which provides satisfactory fits to existing data. Thus we refer to (6) as the nonlinear version of the linear alignment (NLA) model henceforth. The matter power spectrum is computed for our default cosmological model, a spatially flat ΛCDM universe with parameters Ω m = 0.25, σ 8 = 0.8, h = 0.7, and n s = 1.0. 
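In code, the NLA prescription amounts to rescaling the (nonlinear) matter power spectrum by an amplitude- and redshift-dependent factor. The sketch below encodes our reading of the scaling discussed around (6), P δI (k, z) = −A C 1 ρ cr Ω m / D(z) × P δ (k, z), together with the linear bias relation P gI = b g P δI ; the growth factor (normalised to unity today) and the matter power spectrum are assumed to be supplied by external code, and the numerical normalisation is the C 1 ρ cr ≈ 0.0134 quoted above.

```python
C1_RHOCR = 0.0134   # C1 * rho_cr, the SuperCOSMOS-matched normalisation quoted in the text

def p_delta_I(k, z, A, omega_m, growth, p_delta):
    """NLA matter-intrinsic power spectrum (sketch of our reading of eq. 6):
        P_dI(k, z) = -A * C1 * rho_cr * Omega_m / D(z) * P_delta(k, z)

    growth(z):     linear growth factor, normalised to D(0) = 1 (external code)
    p_delta(k, z): nonlinear matter power spectrum (external code, e.g. a
                   transfer function plus halofit-type corrections)
    """
    return -A * C1_RHOCR * omega_m / growth(z) * p_delta(k, z)

def p_gI(k, z, b_g, **kw):
    """Galaxy-intrinsic power spectrum under linear, scale-independent bias."""
    return b_g * p_delta_I(k, z, **kw)
```

Setting A = 0 switches the intrinsic alignment contribution off entirely, which is how the amplitude parameter enters the later fits.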
The transfer function is calculated according to Eisenstein & Hu (1998) using Ω b = 0.05, and nonlinear corrections are included following Smith et al. (2003). For illustration, in Fig. 4, centre panel, the predicted gI correlation function for the MegaZ-LRG sample (assuming alignments consistent with those in SuperCOSMOS, and including photometric redshift errors) is plotted, for z m ≈ 0.5 and assuming b g = 1.9 which is roughly in agreement with the results obtained for the full MegaZ-LRG sample by Blake et al. (2007) and close to the value we determine, see Sect. 5.2. The signal is strongest around Π = 0, but extends far out along the lineof-sight direction due to the photometric redshift scatter. The correlations have a maximum at r p ∼ 0.5 h −1 Mpc and decrease for larger r p due to the diminishing physical interaction between galaxies at large separation, and for small r p since the separation vector between pairs of galaxies gets close to the line-of-sight direction, see also Fig Note that throughout this work we do not include redshiftspace distortions in our modelling. Since for both spectroscopic and photometric data we integrate the correlation functions over the line-of-sight separation out to at least 60 h −1 Mpc and 90 h −1 Mpc, respectively, redshift-space distortions should have a negligible influence on the integrated signals (see also the discussions in Padmanabhan et al. 2007;Blake et al. 2007;Mandelbaum et al. 2010). In addition, in the latter case, the redshift space distortions should be subdominant compared to the size of the photometric redshift errors. Contribution of other signals Due to the photometric redshift scatter, contributions to galaxy position-shape correlations other than the gI term may become important. In the weak lensing limit, the measured ellipticity of a galaxy image is the sum of the intrinsic ellipticity and the gravitational shear, while the number density is determined by an intrinsic term (whose two-point correlation is the usual galaxy clustering) plus modifications by lensing magnification effects. Hence, in terms of angular power spectra one can write (for details see Bernstein 2009;Joachimi & Bridle 2010) for each set of galaxy samples that is correlated. Apart from the gI signal, contributions from galaxy-galaxy lensing (gG), magnification-shear correlations (mG), and magnificationintrinsic correlations (mI) occur. If z 1 ≈ z 2 , the gI term is expected to dominate 9 , whereas galaxy-galaxy lensing is the dominant term in C nǫ (ℓ; z 1 , z 2 ) if a number density and a shape sample at significantly different z 1 < z 2 are correlated. In addition, correlations between lensing magnification and gravitational shear can have a contribution, e.g. if a matter overdensity causes both tangential shear alignment and an apparent boost in the number density of background galaxies. Likewise this overdensity could tidally align surrounding galaxies and thus create correlations between magnification and the intrinsic galaxy shapes. All these additional signals are related to the threedimensional correlation function via relations of the form (4), so that, to assess the importance of their contributions, it is sufficient to compare the angular power spectra. We restrict the consideration to power spectra that correlate galaxy shapes and number densities at the same redshift (redshift autocorrelations), and that hence are representative of correlation functions at small line-of-sight separations. 
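The angular power spectra entering this comparison are all obtained from Limber projections of the form of (5) and its analogues. A minimal numerical version is sketched below (our own illustration); it assumes the distance probability distributions are tabulated on a common χ grid that starts at χ > 0, takes the three-dimensional power spectrum as a callable, and uses the (ℓ + 1/2)/χ evaluation point as one common convention.

```python
import numpy as np

def limber_projection(ells, chi, weight1, weight2, power3d):
    """Limber-type projection:
        C(l) = int dchi weight1(chi) * weight2(chi) / chi^2
                        * P(k = (l + 1/2)/chi, chi)

    chi, weight1, weight2: 1-d arrays on a common comoving-distance grid
        (for the gI term these are the p_n and p_eps distributions;
         lensing terms would use the lensing weight instead).
    power3d(k, chi): callable returning the 3-d power spectrum.
    chi should start at a small positive value to avoid the chi = 0 singularity.
    """
    chi = np.asarray(chi, dtype=float)
    cl = np.empty(len(ells))
    for i, ell in enumerate(ells):
        k = (ell + 0.5) / chi
        integrand = weight1 * weight2 / chi**2 * power3d(k, chi)
        cl[i] = np.trapz(integrand, chi)
    return cl
```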
Note that for larger values of |Π|, the scaling of the different signals with redshift becomes important, so that e.g. the amplitudes of the gG and mG signals relative to the gI term increase if bigger values for Π_max are chosen.

The corresponding Limber equations of the additional signals read (e.g. Joachimi & Bridle 2010)

C_gG(ℓ; z_1, z_2) = b_g ∫_0^χ_hor dχ p(χ, z_1) q(χ, z_2) / χ² P_δ(ℓ/χ, χ) ,
C_mG(ℓ; z_1, z_2) = 2(α − 1) ∫_0^χ_hor dχ q(χ, z_1) q(χ, z_2) / χ² P_δ(ℓ/χ, χ) ,   (8)
C_mI(ℓ; z_1, z_2) = 2(α − 1) ∫_0^χ_hor dχ q(χ, z_1) p(χ, z_2) / χ² P_δI(ℓ/χ, χ) ,

where α = −d log N(> L)/d log L is the logarithmic slope of the cumulative galaxy luminosity function of the density sample (e.g. Narayan 1989; Bartelmann & Schneider 2001). Moreover we have defined the lensing weight function

q(χ, z) = 3 H_0² Ω_m / (2 c²) × χ/a(χ) ∫_χ^χ_hor dχ' p(χ', z) (χ' − χ)/χ' ,   (9)

where a denotes the cosmic scale factor.

To determine α, we calculate the cumulative distribution of i band de Vaucouleurs magnitudes used for selection of the MegaZ-LRG samples, and fit the slopes s = d log N(< i_deV)/d i_deV at the faint end of each distribution. The slope of the cumulative galaxy luminosity function is then given by α = 2.5 s (e.g. van Waerbeke 2010). We find α = 2.26 for the full MegaZ-LRG sample, α = 1.29 for the low-redshift sample, and α = 3.04 for the high-redshift sample.

In this comparison, we assume that the intrinsic alignment signal follows the corrected NLA model with the normalisation from SuperCOSMOS. Since the strength of the intrinsic alignment signals for LRGs is expected to be significantly higher, see e.g. the results for SDSS LRGs by Hirata et al. (2007), this should be a conservative assumption. Note that the SuperCOSMOS normalisation employed in foregoing work was based on the redshift scaling of the II term given in Hirata & Seljak (2004). Since SuperCOSMOS has a mean redshift of 0.1, the amplitudes of the II signals in the original and corrected version of the linear alignment model should differ by about a factor of 1.1^4 (see also Appendix B). Thus we retain the normalisation to SuperCOSMOS for the corrected version by setting A = 1.1^2 = 1.21. In addition, we choose a galaxy bias b_g = 1.9, where the latter value is close to the actual fit results for MegaZ-LRG, see Sect. 5.2.

The resulting angular power spectra for all four contributions to (7) are shown in Fig. 5. We plot the redshift auto-correlation power spectra, using z_1 = z_2 = 0.5 and z_1 = z_2 = 0.59, corresponding approximately to the mean redshifts of the two MegaZ-LRG redshift-binned samples. For the photometric redshift accuracy of the MegaZ-LRG sample, the gI signal still clearly dominates the position-shape correlations. It has a slightly lower amplitude at z = 0.59 than at z = 0.5 due to the broader redshift distribution at the higher photometric redshift. To verify that it is indeed the width of the distribution, and not the shift of its mean redshift, that causes the depletion, we shift the redshift distribution at z = 0.4525 to a mean of 0.6 and re-compute the gI signal, which then has a similar, slightly higher amplitude compared to the gI correlations at z = 0.4525. The other signals are less affected by the width of the contributing redshift distributions since they depend on lensing and thus have a much broader kernel in the line-of-sight integration, see (8). The mI signal never attains more than a few per mil of the gI term and is hence irrelevant for our purposes.

Fig. 5 (caption, in part): Ratio of the aforementioned signals over the gI correlations, with the same coding of the curves as above. The grey region covers angular scales that do not contribute significantly to the correlation functions. Galaxy-galaxy lensing and possibly mG correlations yield a relevant contribution to number density-shape correlations besides the gI signal.
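The structure of the Limber integrals in (8) can be made concrete with a short numerical sketch. The Python fragment below is only schematic: the weight functions and power spectra are stand-ins, and a realistic calculation would use tabulated comoving-distance distributions p(χ), the lensing weight of (9), and a proper matter power spectrum.

```python
import numpy as np

def limber_cl(ell, chi, kernel1, kernel2, power, prefac=1.0):
    """Generic Limber integral: C(ell) = prefac * int dchi kernel1(chi)*kernel2(chi)/chi^2 * P(ell/chi, chi).

    chi              : array of comoving distances (integration grid, h^-1 Mpc)
    kernel1, kernel2 : arrays with the two weight functions evaluated on chi
    power            : callable power(k, chi) for the relevant 3D power spectrum
    """
    integrand = kernel1 * kernel2 / chi**2 * power(ell / chi, chi)
    return prefac * np.trapz(integrand, chi)

# Illustrative usage with toy weights and a toy power spectrum (all placeholders):
chi = np.linspace(50.0, 3000.0, 400)                       # h^-1 Mpc
p_of_chi = np.exp(-0.5 * ((chi - 1300.0) / 150.0) ** 2)    # stand-in comoving-distance distribution
p_of_chi /= np.trapz(p_of_chi, chi)
q_of_chi = 1e-4 * chi * (1.0 - chi / 3000.0).clip(0.0)     # stand-in for the lensing weight of Eq. (9)
P_delta = lambda k, chi: 1e4 * k / (1.0 + (k / 0.02) ** 3)
P_deltaI = lambda k, chi: -0.0134 * 0.25 * P_delta(k, chi)  # crude NLA-like scaling, growth factor omitted

b_g, alpha = 1.9, 2.26
ell = 100.0
C_gG = limber_cl(ell, chi, p_of_chi, q_of_chi, P_delta, prefac=b_g)
C_mG = limber_cl(ell, chi, q_of_chi, q_of_chi, P_delta, prefac=2.0 * (alpha - 1.0))
C_mI = limber_cl(ell, chi, q_of_chi, p_of_chi, P_deltaI, prefac=2.0 * (alpha - 1.0))
print(C_gG, C_mG, C_mI)
```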
Due to the steep slopes of the galaxy luminosity functions, magnification-shear correlations can contribute up to 20 per cent of the gI term over a wide range of angular frequencies, and even become the dominant contamination of the gI signal at small ℓ. However, due to the Bessel function J 2 in the kernel of (4), contributions from small ℓ are largely suppressed in ξ phot gI (r p ,Π,z m ). The first maximum of J 2 is at ℓθ ∼ 3, and the maximum angle probed by our analysis is 3.1 deg corresponding to r p = 60 h −1 Mpc at z = 0.4, so that only angular frequencies ℓ 60 are important, as indicated by the grey region in Fig. 5. Still, the mG term could add of the order 10 % to the total signal under pessimistic assumptions, which is in the regime of the expected model parameter errors, so that we include the mG signal into our modelling. We note that this contribution is likely to be even more relevant for future intrinsic alignment analyses of surveys with higher statistical power and/or less accurate photometric redshifts, provided that the faint-end slope of the luminosity function has similar steepness. Since we assume the NLA model, for which P δI has the same angular dependence as the matter power spectrum, the differences between the signals shown in Fig. 5 can only be due to the weights in the Limber equations (8). The gI, mI, and gG signals all have at least one term p(χ) in the kernel which is thus relatively compact. Only the weight of the mG correlations features a product of lensing efficiencies (9) which smears out the information over comoving distance due to its width and in addition shifts the features in the projected power spectrum because q(χ) peaks at half the source distance, see (9). Therefore the mG signal has a different angular dependence, causing the waviness in the ratio mG/gI. Galaxy-galaxy lensing has a scale dependence that is similar to the gI term, thereby yielding a nearly constant contribution of about 30 %. Therefore we need to incorporate the gG term into our model, mainly affecting the amplitude of the model correlation function due to the almost constant ratio gG/gI. Note that contrary to the usual approach to galaxy-galaxy lensing studies, we have defined the correlation function such that radial alignment produces a positive signal. Hence, the inclusion of the gG term into the model will increase the measured gI amplitude. The modulus of the three-dimensional gG correlation function is shown in Fig. 4, bottom panel. Due to the lensing contribution, the gG correlation is not symmetric with respect to Π = 0, even if the redshift distributions of the galaxy shape and density samples are identical. We note in passing that if the measurements included correlations between galaxies at largely different redshifts to which galaxy-galaxy lensing would be the dominant contribution, it would be possible to simultaneously measure the gG and gI signals, see e.g. Joachimi & Bridle (2010). However, such joint analyses are beyond the scope of the present work. The above statements hold only if the amplitude of the intrinsic alignment signal is of the order found in the SuperCOSMOS survey, because we have used A ∼ 1 in (6). If the contribution by intrinsic alignments were weaker, the importance of the gG and mG signals would further increase. However, Hirata et al. (2007) have demonstrated that LRGs show a strong intrinsic alignment signal at z ∼ 0.3, so unless we find a strong decline of intrinsic alignments with redshift, A ∼ 1 should be a conservative assumption. 
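The angular-frequency bound quoted above follows from a one-line estimate; the comoving distance χ(z = 0.4) ≈ 1.1 × 10³ h^-1 Mpc used below is an approximate value for the adopted cosmology, quoted only to make the arithmetic explicit.

```latex
\theta_{\rm max} \simeq \frac{r_{p,{\rm max}}}{\chi(z=0.4)}
\approx \frac{60\,h^{-1}{\rm Mpc}}{1.1\times 10^{3}\,h^{-1}{\rm Mpc}}
\approx 0.054\,{\rm rad} \approx 3.1\,{\rm deg}\,,
\qquad
\ell_{\rm min} \sim \frac{3}{\theta_{\rm max}} \approx 55\,,
```

so that only multipoles ℓ ≳ 60 contribute appreciably to the correlation functions on the transverse scales used in this analysis, as indicated by the grey region in Fig. 5.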
We also consider galaxy clustering (hereafter gg), which will be used to determine the galaxy bias of the different samples. Since in the case of MegaZ-LRG the gg signal is affected in the same way by the photometric redshift scatter as the number density-shape cross-correlations, we proceed in exact analogy and compute the three-dimensional correlation function ξ^phot_gg(r_p, Π, z_m) from the angular clustering power spectrum via the analogue of (4), with the Bessel function J_2 replaced by J_0 (e.g. Hu & Jain 2004),

ξ^phot_gg(r_p, Π, z_m) = ∫_0^∞ dℓ ℓ/(2π) J_0(ℓ r_p/χ(z_m)) C_gg(ℓ; z_1, z_2) ,   (10)

again by means of (A.11), where the angular power spectrum is related to the matter power spectrum via

C_gg(ℓ; z_1, z_2) = b_g² ∫_0^χ_hor dχ p(χ, z_1) p(χ, z_2) / χ² P_δ(ℓ/χ, χ) .   (11)

Note that one of the redshift probability distributions is determined from the shape sample because we use cross-correlations between the full and shape samples to measure galaxy clustering in this analysis, see Sect. 3.

We show the three-dimensional correlation function of galaxy clustering in the top panel of Fig. 4. The strong spread of the gg signal along the line of sight demonstrates that, in the case of the MegaZ-LRG sample, photometric redshift scatter and the corresponding effect of a truncation at large Π when computing the projected correlation function have to be modelled with similar care as for the gI term. Since galaxy clustering produces a strong signal, we can safely neglect potential contributions by lensing magnification effects in this case.

Projection along the line of sight

As in the spectroscopic case, the quantity that is actually compared to the data is the projected galaxy position-shape correlation function w_g+, obtained by integrating the three-dimensional correlation function ξ^phot_gI(r_p, Π, z_m) over Π. In addition we take the average over a range of photometric redshifts z_m, which e.g. corresponds to the two redshift bins defined for the MegaZ-LRG sample, resulting in

w_g+(r_p) = ∫ dz_m W(z_m) ∫_−Π_max^+Π_max dΠ ξ^phot_gI(r_p, Π, z_m) ,   (12)

where the truncation at Π_max, taken to be the same as for the data, has now been written explicitly. The average over z_m contains the weighting W(z) over redshifts as derived by Mandelbaum et al. (2010), which is given by

W(z) = p²(z) / [χ²(z) χ'(z)] × { ∫ dz' p²(z') / [χ²(z') χ'(z')] }^-1 ,   (13)

where p(z) is in this case the unconditional probability distribution of photometric redshifts determined from the MegaZ-LRG sample (or its redshift-binned subsamples); using the photometric redshift distribution obtained from the 2SLAQ verification sample instead has no significant influence on the models. Here, χ'(z) denotes the derivative of comoving distance with respect to redshift. Note that the denominator χ²(z) χ'(z) in (13) is proportional to the derivative of the comoving volume V_com with respect to redshift. Equation (13) can be illustrated by considering a volume-limited sample for which p(z) = dV_com/dz holds. Then W(z) = p(z), as expected for a simple average over redshift in (12). A flux-limited sample like MegaZ-LRG misses faint galaxies at high redshifts compared to a volume-complete sample, so these redshifts are downweighted accordingly by (13) in the averaging process. Note that this procedure of computing the three-dimensional correlation function for each redshift and subsequently averaging over redshift with the weighting (13) is equivalent to our treatment of the data, where the correlation function was computed as a function of r_p and Π for all redshifts, thereby obtaining an average over the full range of redshifts covered by the sample. In the case of galaxy clustering, the projected correlation function w_gg(r_p) is determined in exact analogy to (12).
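A minimal sketch of the projection (12) with the weighting (13) is given below. The redshift distribution, distance-redshift relation, and three-dimensional correlation function are toy placeholders, so the numbers produced are meaningless and only the call structure is of interest.

```python
import numpy as np

def redshift_weight(z, p_z, chi, dchi_dz):
    """Weighting W(z) of Eq. (13): p^2(z) / (chi^2 chi'), normalised to unit integral."""
    w = p_z**2 / (chi**2 * dchi_dz)
    return w / np.trapz(w, z)

def project_w(r_p, xi_3d, z, p_z, chi, dchi_dz, pi_max=90.0, n_pi=181):
    """Projected correlation function, Eq. (12): integrate xi over |Pi| < pi_max, then average over z with W(z).

    xi_3d : callable xi_3d(r_p, Pi, z) for the three-dimensional correlation function
    """
    W = redshift_weight(z, p_z, chi, dchi_dz)
    Pi = np.linspace(-pi_max, pi_max, n_pi)                       # h^-1 Mpc
    w_of_z = np.array([np.trapz(xi_3d(r_p, Pi, zi), Pi) for zi in z])
    return np.trapz(W * w_of_z, z)

# Toy example with a made-up correlation function, purely to show the call pattern:
z = np.linspace(0.4, 0.7, 31)
chi = 2998.0 * z * (1.0 - 0.35 * z)          # rough comoving distance in h^-1 Mpc (illustrative only)
dchi_dz = np.gradient(chi, z)
p_z = np.exp(-0.5 * ((z - 0.55) / 0.05) ** 2)
p_z /= np.trapz(p_z, z)
xi_toy = lambda rp, Pi, zz: 0.1 * np.exp(-(rp**2 + Pi**2) / 50.0**2)
print(project_w(6.0, xi_toy, z, p_z, chi, dchi_dz))
```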
In principle the integral over Π in (12) should extend to infinity, see (A.6), but it can be truncated at some maximum value Π_max because correlations between pairs of galaxies do not extend to infinite separation. However, the photometric redshift scatter for the MegaZ-LRG sample causes significant contributions to the line-of-sight integral in (12) out to Π ∼ 300 h^-1 Mpc, although signal-to-noise requirements enforce relatively small values of Π_max. For a consistency check on the modelling of the line-of-sight truncation of the correlation functions in the case of the MegaZ-LRG sample, we compute the model and the observed correlation functions for Π_max = 90 h^-1 Mpc and Π_max = 180 h^-1 Mpc. For the number density-shape correlations as well as the galaxy clustering signal, we compare the ratios of the correlation functions with these two cut-offs in Sect. 5.1, finding good agreement between model and observational data. We use cut-offs in the signal integration along the line of sight at either 180 h^-1 Mpc or 90 h^-1 Mpc for the fits to the full MegaZ-LRG sample and find consistent results for the intrinsic alignment fit parameters, with errors of the same order, see Sect. 5.3. The signals for w_gg and w_g+ both have similar signal-to-noise when truncating at these two values of Π_max. The correlation functions for the two MegaZ-LRG redshift-binned samples have been cut off at Π_max = 90 h^-1 Mpc throughout.

In addition to the photometric MegaZ-LRG sample, we will also reconsider spectroscopic samples from SDSS. As discussed before, the line-of-sight truncation can be ignored in the case of spectroscopic data, so that the projected correlation function is simply given by

w_g+(r_p) = −∫ dz W(z) ∫_0^∞ dk_⊥ k_⊥/(2π) J_2(k_⊥ r_p) P_gI(k_⊥, z) ,   (14)

where (A.6) and the same redshift averaging procedure as in (12) were used. Similarly, one obtains for the spectroscopic galaxy clustering signal (e.g. Hirata et al. 2007)

w_gg(r_p) = ∫ dz W(z) ∫_0^∞ dk_⊥ k_⊥/(2π) J_0(k_⊥ r_p) P_gg(k_⊥, z) .   (15)

Fitting method

We perform the fits to the data via weighted least squares minimisation, using the reduced χ² at the minimum to quantify the goodness of fit. The data are compared to signals computed for the NLA model (6). Since we have noisy jackknife covariances obtained from a finite number of realisations, the inverse of these covariances, required for the χ², is biased (Hartlap et al. 2007). We employ the corrected estimator for the inverse covariance presented in Hartlap et al. (2007),

Ĉ^-1 = (n − d − 2)/(n − 1) Ĉ_*^-1 ≡ F Ĉ_*^-1 ,   (16)

where Ĉ_* denotes the jackknife covariance estimate, d is the dimension of the data vector, and n the number of realisations used to estimate the covariance. Equation (16) was derived under the assumption of statistically independent data vectors with Gaussian errors. For the SDSS samples (d = 10, n = 50) we find F ≈ 0.776, and for MegaZ-LRG (d = 10, n = 256) F ≈ 0.957, the latter result being in excellent agreement with results obtained from simulations of the covariance estimation.

To study the characteristics of the covariances, we compute the correlation coefficient between different transverse separation bins,

r_corr(r_p,i, r_p,j) = C_ij / √(C_ii C_jj) .   (17)

In Fig. 6 we have plotted r_corr of both w_gg and w_g+ for the two MegaZ-LRG samples at low and high redshift. While w_g+ decorrelates quickly, with only moderate correlation between neighbouring bins on the largest scales, w_gg features strong positive, long-range correlation, particularly on the larger scales that are used for the fits. The correlation coefficients have similar values for the two redshift bins, with the z < 0.529 bin showing slightly higher correlation in most cases. For the SDSS samples we find a similar correlation structure for w_g+, but much weaker correlations in w_gg.
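For reference, the covariance treatment described above amounts to only a few lines of code. The sketch below implements the debiasing factor of (16) and the correlation coefficients of (17), and reproduces the quoted values of F.

```python
import numpy as np

def hartlap_factor(n_real, n_data):
    """Debiasing factor F of Eq. (16) for an inverse covariance estimated from n_real realisations."""
    return (n_real - n_data - 2.0) / (n_real - 1.0)

def corrected_inverse(cov, n_real):
    """Hartlap-corrected inverse covariance."""
    return hartlap_factor(n_real, cov.shape[0]) * np.linalg.inv(cov)

def correlation_matrix(cov):
    """Correlation coefficients r_corr of Eq. (17): C_ij / sqrt(C_ii C_jj)."""
    d = np.sqrt(np.diag(cov))
    return cov / np.outer(d, d)

# The factors quoted in the text follow directly:
print(hartlap_factor(50, 10))    # SDSS jackknife:  ~0.776
print(hartlap_factor(256, 10))   # MegaZ-LRG:       ~0.957
```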
The difference in correlation length between w_gg and w_g+ is caused by the different kernels in the Hankel transformation between power spectrum and correlation function, see (4) and (10). Since J_2(x) decreases faster than J_0(x) for increasing x, we expect w_gg to generally feature stronger correlations. A given transverse separation r_p between galaxies is observed under a smaller angle if these galaxy pairs are located at higher redshift, and it is this angle which enters the argument of the Bessel functions. Therefore the correlation present in w_gg is more pronounced in the MegaZ-LRG samples, which are at considerably higher redshift than the other SDSS samples.

Scaling with line-of-sight truncation

To test whether the data and the model show the same behaviour when varying Π_max, we compute both w_g+ and w_gg for the MegaZ-LRG sample according to (12), for Π_max = 90 h^-1 Mpc and Π_max = 180 h^-1 Mpc. Then we compare the ratio of w_g+ with cut-off Π_max = 90 h^-1 Mpc over w_g+ with cut-off Π_max = 180 h^-1 Mpc (and likewise for w_gg) obtained from the model to the corresponding ratio computed from the observations. Since the projected correlation functions with the different cut-offs are strongly correlated, we compute the errors on the ratio again via jackknifing. Note that due to these correlations the actual errors on the ratio are significantly smaller than if one assumed them to be independent. Note furthermore that the ratio also inherits a significant correlation between different transverse separations from the individual projected correlation functions, in particular for w_gg, see Fig. 6.

In Fig. 7 we have plotted the ratios of the projected correlation functions with the different cut-offs. The model prediction for the ratio of the galaxy clustering signals is in fair agreement with the data, yielding a loss of about 40 % when reducing Π_max from 180 h^-1 Mpc to 90 h^-1 Mpc. On scales where the galaxy bias becomes nonlinear, indicated by the grey shaded region, bias effects may not cancel in the ratio anymore, so that deviations from the model prediction are expected. On scales r_p ≳ 20 h^-1 Mpc one observes an apparently significant trend in the data to fall below the model prediction. However, note that w_gg at large transverse separations features very strong cross-correlations between the data points, and this property is inherited by the ratio.

Fig. 7. Effect of the cut-off in Π in the projection of the three-dimensional correlation functions along the line of sight. Shown is the ratio of the projected correlation function computed for Π_max = 90 h^-1 Mpc over the correlation function obtained with Π_max = 180 h^-1 Mpc, for both the galaxy clustering signal (gg, in black) and number density-shape correlations (g+, in red). Points are computed from the MegaZ-LRG data, using the full range in redshifts. Note that the black points have been slightly offset horizontally for clarity. The black line is obtained from the model for w_gg. The red hatched region comprises the range of ratios for w_g+ with different relative strengths of the galaxy-galaxy lensing contribution. The red solid line indicates the ratio resulting for the best-fit intrinsic alignment amplitude determined in Sect. 5.3. Note that the error bars at different transverse separations are correlated, in particular for w_gg at large r_p, see Fig. 6.
A fit of the model ratio to the observed ratio for the five data points with r p ≥ 6 h −1 Mpc yields a reduced χ 2 of 1.75, corresponding to a p-value of 0.12, and hence data and model are still marginally consistent. The prediction for the ratio of the position-shape correlation functions with cut-offs at 90 h −1 Mpc and 180 h −1 Mpc, respectively, depends on the relative strength of the galaxy-galaxy lensing contribution with respect to the gI signal (we neglect the mG contribution whose effect should be very small in this case). We shade the possible range of ratios in Fig. 7 between the lowest ratio resulting for a negligible gG term which almost coincides with the ratio for galaxy clustering, and the highest ratio resulting for the set of parameters we used in Sect. 4.2 which we take as a conservative lower limit on the intrinsic alignment signal. We also show the curve obtained with the best-fit intrinsic alignment amplitude from the fits in Sect. 5.3 below, which is in good agreement with the data, yielding a ratio of 0.64. In this case the least-squares analysis results in a reduced χ 2 of 0.77 and a p-value of 0.57. Note furthermore that both models and data are consistent with the fact that the loss of signal due to the smaller cut-off in Π is roughly constant in transverse separation. The general agreement of the observed and modelled ratios confirms that we model the effect of Π max on photometric redshift data correctly. Besides, it supports our use of the 2SLAQ photometric redshift error distribution despite the slightly different apparent magnitude limits of the sample, as discussed in Sect. 2.4. Galaxy bias To relate the observed galaxy number density-intrinsic correlations (plus the corrections due to galaxy-galaxy lensing and magnification-shear correlations) to the matter-intrinsic correlations that generate intrinsic alignments, the galaxy bias b g for the density tracer sample needs to be measured. As described in Sect. 3, we compute a galaxy clustering signal that represents the cross-correlation between the density tracer sample and the sample used to trace the intrinsic shear. Given that the latter is a subset of the former with nearly the same properties (redshift and luminosity distributions, see Fig. 3), we assume that the two have the same galaxy bias. Thus, we compute b g from this galaxy clustering signal, assuming a linear bias model, but using the full matter power spectrum which should extend the validity of the fits into the quasi-linear regime (see also Hirata et al. 2007, who test several methods to determine the galaxy bias in a similar context). Note that all our considerations rely on the hypothesis that we have assumed the correct cosmological model, in particular σ 8 = 0.8. The redshift averaging and the projection along the line of sight of w gg is performed according to (12) for the photometric MegaZ-LRG samples and following (15) for the SDSS LRG samples. We do not repeat the bias measurement for the SDSS L3 and L4 samples but adopt the values determined by Hirata et al. (2007), rescaled to our value of σ 8 by employing b g ∝ σ −1 8 , which results in b g = 1.04 and b g = 1.01, respectively. Note that the assumption of that same bias, despite the use of different colour cuts for the intrinsic shear tracers, is acceptable because the bias we need is that of the density tracer sample, which has not changed. 
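The rescaling with σ_8 used above is a simple proportionality, b_g ∝ σ_8^-1. A two-line helper makes the operation explicit; the numbers in the example are hypothetical, not the values adopted in this work.

```python
def rescale_bias(b_g, sigma8_from, sigma8_to):
    """Rescale a linear galaxy bias measured assuming sigma8_from to a cosmology with sigma8_to,
    using b_g proportional to 1/sigma8 (i.e. b_g * sigma8 is held fixed by the clustering amplitude)."""
    return b_g * sigma8_from / sigma8_to

# Hypothetical example: a bias of 2.0 quoted for sigma8 = 0.751, expressed at sigma8 = 0.8.
print(rescale_bias(2.0, 0.751, 0.8))   # ~1.88
```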
To all model projected correlation functions w gg , we add a constant C as a further fit parameter to account for the undetermined integral constraint on the numerator of the estimator (3) due to the unknown mean galaxy number density (Landy & Szalay 1993; see also Hirata et al. 2007 where w gg is given by (15) or the analogue of (12) in the case of the MegaZ-LRG samples. Note that we have made the dependence on the galaxy bias explicit in the foregoing equation. For the fit we discard scales r p < 6 h −1 Mpc, i.e. the five data points at the smallest r p where the assumption of a linear bias is expected to break down (Tasitsiomi et al. 2004;Mandelbaum et al. 2006). For the MegaZ-LRG sample, there is one additional nuisance in this modelling, which is the 5 % stellar (M star) contamination fraction. As shown in Mandelbaum et al. (2008), the imposition of the apparent size cut that is needed for a robust galaxy shape measurement is sufficient to remove this contamination to within 1%. Thus, the galaxy clustering signal as defined (a cross-correlation between the shape and density samples) is diminished by a single power of the contamination fraction f contam = 0.05. The bias determined by the fits is actually 1 − f contam b g , so we must correct it upwards to account for that. Then, since w g+ is reduced by a factor of 1 − f contam , instead of dividing by just b g to get to w δ+ , we divide by (1 − f contam ) b g . In Fig. 8, we show the projected correlation functions w gg for the two MegaZ-LRG redshift bins and the two SDSS LRG redshift bins. Note that the SDSS LRG samples have not been split further into luminosity bins because the full SDSS LRG sample, divided into the two redshift bins, is used to trace the galaxy number density field. In each case, we also plot the best-fit models, indicating that the model is a good description of the data on scales where nonlinear bias is not important. At smaller transverse separations (the grey region), which have been excluded from the fits, the data have increasingly larger positive offsets with respect to the model, caused by nonlinear clustering effects. The best-fit values for b g , marginalised over C, are summarised in Table 2. We find good agreement within the 1σ limits for the best-fit galaxy bias, determined for the different Π max in the MegaZ-LRG data, again confirming that we are correctly modelling the truncation in the line-of-sight projection. Splitting the MegaZ-LRG data into two redshift bins at z = 0.529, we obtain a stronger bias for the bin at higher redshift. This finding is expected, as the bin with z > 0.529 contains on average significantly more luminous galaxies that are more strongly biased, see Fig. 1. Only the high-redshift MegaZ-LRG sample yields a reduced χ 2 that significantly exceeds unity which we trace back to the strong correlations between errors as the plot in Fig. 8 suggests a good fit. Indeed the reduced χ 2 drops well below unity if we repeat the fit ignoring the off-diagonal elements in the covariance. We compare our results with the galaxy bias obtained by Blake et al. (2007) who also studied MegaZ-LRG, albeit with slightly different selection criteria. They used the cuts i deV ≤ 19.8 and d ⊥ ≥ 0.55 throughout, as well as additional stargalaxy separation criteria that reduced the stellar contamination to 1.5 %. The different selection criteria hinder direct comparison, e.g. the Blake et al. (2007) criteria (driven mostly by the i deV cut) shift the r band absolute magnitude range 0.15 mag brighter. 
Hence, we expect that the galaxy bias obtained from our analysis should be smaller, and indeed, after rescaling the bias given in Table 2 of Blake et al. (2007) to σ_8 = 0.8, we find b_g = 1.89 and b_g = 2.10 for the two redshift slices roughly coinciding with our MegaZ-LRG low-redshift sample, and b_g = 2.18 and b_g = 2.44 for the two redshift slices closer to our high-redshift sample.

The SDSS LRG samples yield a similar galaxy bias compared to the full MegaZ-LRG sample, with no significant evolution in redshift. Given that the SDSS LRG galaxies have on average a higher luminosity and are located at considerably lower redshift, this finding hints at a stronger bias in the past for galaxies at fixed luminosity. Using again the fact that the bias scales as b_g ∝ σ_8^-1, our findings for the SDSS LRG samples can be compared to the results for the equivalent bias model in Hirata et al. (2007), who use σ_8 = 0.751. Rescaling the values of Table 2 to this value of σ_8, we get b_g = 2.00 ± 0.11 for the low-redshift sample and b_g = 2.01 ± 0.07 for the high-redshift sample. These values agree (within 1σ) with b_g = 2.01 ± 0.12 for z < 0.27 and b_g = 1.97 ± 0.07 for z > 0.27 as found by Hirata et al. (2007). Note that the latter analysis used a narrower range in transverse separation, with r_p = 7.5 − 47 h^-1 Mpc compared to r_p = 6 − 60 h^-1 Mpc considered in this work.

Fig. 8 (caption, in part): Projected correlation functions w_gg for the MegaZ-LRG sample with photometric redshifts smaller than 0.529 (black) and with photometric redshifts larger than 0.529 (red). Note that the red points have been slightly offset horizontally for clarity, and that the error bars are strongly correlated. In addition we show the best-fit models as black and red curves, respectively. Only the data points outside the grey region have been used for the fits, to avoid the regime of nonlinear bias.

Intrinsic alignment model fits to individual samples

With the galaxy bias in hand, we can now proceed to fit models of intrinsic alignments to w_g+. The NLA model features a single free parameter for the amplitude, A. Within the physical picture of this model, the amplitude quantifies how the shape of a galaxy responds to the presence of a tidal gravitational field. It is likely that this response depends on the galaxy population under consideration, and thus possibly features an additional evolution with time and hence redshift dependence (on top of the one inherent to the NLA model), and a variation with galaxy luminosity. Therefore we will investigate a more flexible prescription for the gI power spectrum in Sect. 5.5. In this section we use (6) as the intrinsic alignment model, with the single fit parameter A. We keep the original SuperCOSMOS normalisation, i.e. C_1 ρ_cr ≈ 0.0134. To allow for a comparison with foregoing work, we also present some fits with models based on the NLA version with the redshift dependence given in Hirata & Seljak (2004). Note that all intrinsic alignment models applied in this work have a fixed dependence on transverse scales. Since the assumption of a linear bias also enters the model, we again limit the parameter estimation to scales r_p > 6 h^-1 Mpc. Note that we do not explicitly propagate the errors on the galaxy bias determined in the foregoing section through to the uncertainty on intrinsic alignment parameters, as they are marginal compared to the measurement error in w_g+ (which is dominated by shape noise). In Fig. 9, the projected correlation functions for the full MegaZ-LRG sample as well as for the two MegaZ-LRG redshift bins, split at z = 0.529, are plotted.
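Because the model for w_g+ is linear in the amplitude A (the gG and mG contributions do not scale with A), the single-parameter fits described in this section reduce to weighted linear least squares with an analytic solution. The sketch below illustrates this under that assumption; all numbers are placeholders rather than measured values, and the inverse covariance is assumed to include the Hartlap correction of (16).

```python
import numpy as np

def fit_amplitude(data, template_gI, template_other, inv_cov):
    """Best-fit amplitude A and 1-sigma error for the model  w_model = A * template_gI + template_other,
    minimising chi^2 = (d - m)^T C^-1 (d - m)."""
    resid = data - template_other
    denom = template_gI @ inv_cov @ template_gI
    A_hat = (template_gI @ inv_cov @ resid) / denom
    sigma_A = 1.0 / np.sqrt(denom)
    model = A_hat * template_gI + template_other
    chi2 = (data - model) @ inv_cov @ (data - model)
    return A_hat, sigma_A, chi2

# Toy usage with five large-scale r_p bins (all values are placeholders, not measurements):
w_obs = np.array([0.30, 0.22, 0.15, 0.10, 0.06])
w_gI_A1 = np.array([0.10, 0.07, 0.05, 0.03, 0.02])     # model gI signal for A = 1
w_gG_mG = np.array([0.02, 0.015, 0.01, 0.007, 0.004])  # amplitude-independent gG + mG contribution
inv_cov = np.linalg.inv(np.diag([0.05, 0.04, 0.03, 0.02, 0.015]) ** 2)
print(fit_amplitude(w_obs, w_gI_A1, w_gG_mG, inv_cov))
```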
Table 3 (notes): Top section: fit results for all samples considered in this work. For reference, we give the mean values of z and L/L_0 for each sample in the second and third columns. The fourth to sixth columns contain the best-fit intrinsic alignment amplitude and the corresponding reduced χ² and p-values of the fit for 4 degrees of freedom. To facilitate comparisons with preceding work, we also fit the amplitude of the version of the NLA model based on Hirata & Seljak (2004), shown in the last column; these fits produce the same χ² as the foregoing ones. We also show the fit results for MegaZ-LRG samples with a colour cut as introduced in Sect. 5.4. The numbers in parentheses indicate Π_max in units of h^-1 Mpc for the MegaZ-LRG samples. Bottom section: amplitude fits to MegaZ-LRG samples neglecting the contributions by galaxy-galaxy lensing (gG) and magnification-shear cross-correlations (mG). The rightmost column contains best-fit amplitudes when including the gG but not the mG signal. Due to the very similar r_p-dependence of the gI, gG, and mG signals, the χ²-values of these fits are almost identical.

The fit results for A are presented in Table 3. On the scales usable for the fit, the best-fit gI model, which is also plotted in Fig. 9 for each case, traces the data points well, with reduced χ²-values below one, whereas for r_p ≲ 1 h^-1 Mpc points lie several σ above and below the model curve, possibly indicating strongly nonlinear effects. The nature of these deviations is unknown, but since they occur on scales near the virial radius of LRGs, one may hypothesise that at these ranges of r_p, complicated dependencies on the tidal field or a change in the intrinsic alignment mechanism play a role. Moreover we find very good agreement between the best-fit amplitudes obtained for the full MegaZ-LRG sample with different values of Π_max.

In addition, we show in Fig. 9 models for w_g+ that have been calculated using the linear matter power spectrum instead of the nonlinear one in (6), holding all other model parameters fixed. As expected, the signals for linear and nonlinear power spectrum coincide on the largest scales, but already at r_p ∼ 10 h^-1 Mpc, w_g+ computed from linear theory drops below the correlation function that includes nonlinear clustering and yields a worse fit to the data in the case of the full and the high-redshift MegaZ-LRG samples. Thus, although our analysis is still restricted to relatively large scales, nonlinear effects in the intrinsic alignment of galaxies must be taken into account.

We also perform the analysis on w_g+ for the SDSS LRG data, which is divided into three luminosity bins in addition to the two redshift bins split at z = 0.27, see Table 1. As redshifts are determined spectroscopically in this case, we use (14) to compute w_g+. For reasons of simplicity we use the redshift distribution for all SDSS LRG luminosities combined because we find that employing the individual distributions for the faint, medium, and bright subsamples instead leads only to sub-per cent changes in the signal on all scales. The resulting correlation functions and their best-fit models are shown in Fig. 10, and the resulting parameter constraints on A are listed in Table 3. In Table 3 we additionally present constraints on A using the NLA model with the redshift dependence as derived by Hirata & Seljak (2004).
If the redshift distribution of a galaxy sample is sufficiently compact, the amplitudes of the two NLA models considered are approximately related by a factor (1 + z ) 2 , caused by the different redshift dependencies. Our findings for the SDSS LRG samples are compatible with the results of the power-law fits by Hirata et al. (2007), yielding a maximum intrinsic alignment amplitude of close to ten times that found in SuperCOSMOS, for the bright high-redshift SDSS LRG sample. In contrast, the resulting values for A using the SDSS L3 and L4 samples are small for both NLA models, the constraints being consistent with zero at the 2σ-level. By default we include both galaxy-galaxy lensing and magnification-shear correlations in the modelling for the photometric redshift MegaZ-LRG data, but the bottom panel of Table 3 also lists results for A when dropping either the mG term only or both additional contributions. Since the gG and mG signals yield a negative contribution to w g+ , a lower amplitude A than in the case including these contributions is needed to get a good fit to the data. Dropping the mG term causes a drop in A for all samples which is below the 2 % level and hence much smaller than the 1σ error on the amplitude. Since the intrinsic alignment amplitude is about three to four times higher than assumed in the prediction of Sect. 4.2, which yielded a maximum mG contribution of the order 10 % on relevant scales, effects at the per-cent level by the mG signal are indeed expected. The change in amplitude when discarding all additional signals ranges between 7 % for the low-redshift MegaZ-LRG sample and 13 % for the high-redshift sample. As Fig. 5 suggests, the relative contribution of the gG to the gI signal is approximately constant as a function of redshift for MegaZ-LRG if the intrinsic alignment amplitude is fixed. Since we obtain a larger A for the low-redshift than for the high-redshift sample, the contribution by galaxy-galaxy lensing is correspondingly smaller for z < 0.529. In Sect. 4.2 we calculated a 30 % gG/gI ratio for A = 1.21, which is again in good agreement with our fit results. It is interesting to note that our default analysis yields nearly perfect agreement between the best-fit values for A obtained from the full MegaZ-LRG samples with cut-offs at 90 h −1 Mpc and 180 h −1 Mpc, respectively, but that, when excluding galaxygalaxy lensing, one observes a moderate discrepancy in the fit results. The galaxy-galaxy lensing increases for larger line-ofsight separation of the galaxy pairs correlated, whereas the gI term diminishes, so that the subsample with Π max = 180 h −1 Mpc is affected more strongly. This finding again confirms that we are modelling the effect of photometric redshift scatter and the trun- cation of signals at large Π correctly. Besides, both additional signals that contribute to galaxy position-shape correlations have a dependence on transverse separation that is very similar to the one of the gI term. Thus only the amplitude A is affected while the goodness of fit remains almost unchanged when the gG and mG terms are included. In general, the dependence on r p given by the NLA model describes the data reasonably well, yielding reduced χ 2 values of order unity (or sometimes below). Only the high-redshift SDSS LRG samples tend to a χ 2 that significantly exceeds unity, which is caused by an excess signal around r p = 10 h −1 Mpc of unknown origin, as can be seen in Fig. 10, bottom panel. 
The p-values remain above, but are close to, the significance level of 0.05. In all cases except the two SDSS Main samples the intrinsic alignment amplitude is higher than for the original SuperCOSMOS normalisation, which would correspond to A = 1.21 in the corrected linear alignment model and A = 1 when fitting the original version. The SDSS Main samples are at similar redshift to SuperCOSMOS, and consistent with the results by Brown et al. (2002), although only at the 2σ level in the case of the L3 sample, which prefers a higher amplitude (but note the possible excess signal at r_p ∼ 10 h^-1 Mpc in the top panel of Fig. 10). As is obvious from the compact and mutually inconsistent posterior probabilities for A shown in Fig. 11, the different galaxy samples are inconsistent with an intrinsic alignment model that has only A as a free parameter. The SDSS LRGs span a very similar and relatively short range in redshifts, so that no strong evolution with redshift is expected in these subsamples. Then it is evident from the fit results in Table 3 that the intrinsic alignment amplitude increases with galaxy luminosity, with the brightest sample attaining a high amplitude of A ≈ 16. Moreover, despite a mean luminosity that is 20 % higher than that of the low-redshift MegaZ-LRG sample, the high-redshift MegaZ-LRG sample has a smaller amplitude parameter A, indicative of a decrease of the intrinsic alignment amplitude with redshift beyond the redshift dependence inherent to the NLA model. However, note that the amplitudes of the two samples are still consistent with each other at about the 1σ level, so that the NLA model is still in agreement with these findings for MegaZ-LRG.

Fig. 12. Redshift, colour, and magnitude properties of all samples used in this paper. Top panel: k + e-corrected (to z = 0) M_r, used to define the luminosities relative to L_0, versus redshift z. Galaxies from MegaZ-LRG are shown in green, those from the SDSS LRG samples in blue, and galaxies from the red SDSS Main L4 (L3) sample in red (black). To avoid confusion, only a subset of the points from each sample is plotted. Bottom panel: the colour-redshift relation, with the same colour coding as above. As shown, the full MegaZ-LRG sample is the only one with significant contributions below 0.2(g − i) = 1.7. Thus, we define the cut sample (pink points) using all MegaZ-LRG galaxies that are redder than this value, for consistency with the other samples.

Compatibility of the different samples

As has been shown in Hirata et al. (2007) and many other papers, red and blue galaxies behave differently with respect to intrinsic alignments. Thus, in order to obtain combined constraints from these samples and to understand their results in some unified way, we now address the sample definitions as regards the resulting colour properties in greater detail. Wake et al. (2006) performed a comparison of the SDSS spectroscopic LRG and the MegaZ-LRG sample definitions, with the goal of creating a subset of the MegaZ-LRG sample that would pass the SDSS LRG joint colour-magnitude cuts if it were shifted to redshift z = 0.2. The reason for this choice of comparison redshift is that, with the MegaZ-LRG sample concentrated at z = 0.55, this difference in redshift corresponds to shifting the SDSS filters over by one, and thus the k-corrections are less prone to systematic error.
As shown in their comparison, when using the g − i colour shifted to z = 0.2, only ∼ 30 % of the MegaZ-LRG galaxies would pass the joint colour-magnitude cut of the SDSS spectroscopic LRGs. However, this cut is not what we want to impose on our sample. The reason for this is that, as shown in Wake et al. (2006), the joint colour-magnitude cuts applied at these higher redshifts exclude more and more of the red sequence. In contrast, for this paper, we want to keep all of the red sequence without regard for matching the luminosity ranges (indeed, we would like to study samples on a wide luminosity baseline in order to measure the variation of intrinsic alignments with luminosity). Thus, we wish to define a minimum 0.2(g − i) that corresponds roughly to that for the SDSS spectroscopic LRGs and our revised definition of the SDSS Main L3 and L4 red samples. The best choice in this context appears to be a cut at 0.2(g − i) > 1.7, which should ensure consistency with the other samples within the limits of our uncertainty in k + e-corrections.

To illustrate this cut, we present Fig. 12, which shows two-dimensional projections of the relationship between redshift, colour, and absolute rest-frame magnitude of the samples. As shown, they span a wide range of redshifts (0.05 < z < 0.7) and of luminosities (four magnitudes), and with the imposition of this new colour cut, the colour ranges are quite similar. MegaZ-LRG shows the largest scatter to redder colours; however, this is expected given that, as the highest-redshift sample, these galaxies have the largest photometric errors, which significantly widen the colour distribution at the red end where the g band flux is often only weakly detected, especially once the 4000 Å break moves from the g to the r band. The result of this cut is to reduce the MegaZ-LRG sample to 70 % of its original size. The typical sample redshift does not change, and the mean luminosity increases marginally, by 2 %, compared to the values of the full MegaZ-LRG sample, see Table 3.

We repeat the intrinsic alignment amplitude fits of Sect. 5.3 for the cut MegaZ-LRG samples, including the contributions by galaxy-galaxy lensing and magnification-shear correlations. We continue to use the relation between photometric and spectroscopic redshifts from the 2SLAQ verification sample of Sect. 2.4 because we do not observe any significant effect of the colour cut on this relation, e.g. neither the mean nor the scatter of the distribution of differences between photometric and spectroscopic redshifts changes beyond the 1σ level. The resulting correlation functions with the best-fit models are plotted in Fig. 13, and the corresponding best-fit values for A are listed in Table 3. For comparison we also show the best-fit models to the uncut MegaZ-LRG samples in the figure. Since the cut and uncut samples have the same redshifts and luminosities to high accuracy, we can ascribe any difference in the signals to a dependence of intrinsic alignments on galaxy colour.

Fig. 13 (caption, in part; panels analogous to Fig. 9): Bottom panel: same as above, but for the cut MegaZ-LRG sample split into the two photometric redshift bins, where results for z < 0.529 are shown in black, and for z > 0.529 in red. Dotted lines again indicate the best-fit models for the full MegaZ-LRG samples, respectively. Note that the red points have been slightly offset horizontally for clarity, and that the error bars are correlated. Only the data points outside the grey region have been used for the fits.
For both the high- and low-redshift MegaZ-LRG samples, as well as the full sample, we find an increase in A for the cut sample, which has higher 0.2(g − i), in line with the expectation that redder galaxies have stronger intrinsic alignments. The increase amounts to 11 % for the low-redshift sample and 28 % for the high-redshift sample (the corresponding error bars feature a similar increase), suggesting a stronger colour dependence at higher redshift. However, it should be noted that all observed changes due to the colour cut in g − i remain within the 1σ errors and are therefore not statistically significant.

Intrinsic alignment model fits to combined samples

Having addressed the question of the compatibility of the samples, we repeat the fits to w_g+ for different combinations of galaxy samples, now allowing for an additional redshift and luminosity dependence according to the extended model

P̃_δI(k, z, L) = A ((1 + z)/(1 + z_0))^η_other (L/L_0)^β P_δI(k, z) ,   (19)

where z_0 = 0.3 is an arbitrary pivot redshift, and L_0 is the pivot luminosity corresponding to an absolute r band magnitude of −22 (passively evolved to z = 0). The matter-intrinsic power spectrum P_δI is given by the NLA model with the modified redshift dependence discussed in Appendix B, including the normalisation to SuperCOSMOS. This model contains three free parameters {A, β, η_other} and, as before, has a fixed dependence on transverse scales. The amplitude parameter A and the luminosity term can be taken out of all integrations leading to w_g+ because they depend neither on redshift nor on comoving distance, so that the parameters A and β can be varied in the likelihood analysis with low computational cost. The extra redshift term containing η_other does depend on the integration variable in (5), though. To facilitate the likelihood analysis for the MegaZ-LRG samples with photometric redshifts, we assume that this term can be taken out of the integration and is evaluated at the mean z_m = (z_1 + z_2)/2 of the two redshifts entering (5). This approximation holds to fair accuracy because the corresponding redshift probability distributions are sufficiently narrow, given the small photometric redshift uncertainty for the MegaZ-LRG sample. The additional redshift dependence is then integrated over in the averaging process in (12) and (14) for photometric and spectroscopic samples, respectively. As the luminosity distributions of the galaxy samples under consideration are also compact and narrow, we use the mean luminosity in the luminosity term in (19) instead of integrating (L/L_0)^β over the full distribution. This is a good approximation even for the full MegaZ-LRG sample, which features the broadest luminosity distribution, the deviation being below 2 % close to the best-fit values for β that we determine below.

We consider joint fits to several combinations of the six SDSS LRG subsamples, the two MegaZ-LRG low- and high-redshift samples with the colour cut, and the two SDSS Main L4 and L3 samples. The resulting two-dimensional marginalised confidence contours and marginal one-dimensional posterior distributions for the parameter set {A, β, η_other} are shown in Fig. 14. The corresponding marginal 1σ errors on these parameters and the goodness of fit are given in Table 4. In the computation of marginalised constraints we assumed by default flat priors in the ranges A ∈ [0; 20], η_other ∈ [−10; 10], and β ∈ [−5; 5]. For the combination of the MegaZ-LRG and SDSS Main samples, which yields weak and degenerate constraints, we extend the prior range of η_other to ±20.
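To make the role of the two extra parameters in (19) transparent, the short sketch below evaluates the prefactor A ((1 + z)/(1 + z_0))^η_other (L/L_0)^β that rescales the NLA spectrum. The parameter values in the example are hypothetical and chosen only to show the luminosity scaling.

```python
def ia_amplitude(A, beta, eta_other, z, L_over_L0, z0=0.3):
    """Effective intrinsic alignment amplitude of the extended model, Eq. (19):
    A * ((1+z)/(1+z0))**eta_other * (L/L0)**beta, multiplying the (SuperCOSMOS-normalised) NLA spectrum."""
    return A * ((1.0 + z) / (1.0 + z0)) ** eta_other * L_over_L0 ** beta

# Hypothetical parameter values, purely to illustrate how the amplitude grows with luminosity:
for L in (0.2, 1.0, 3.0):
    print(L, ia_amplitude(A=5.0, beta=1.0, eta_other=0.0, z=0.5, L_over_L0=L))
```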
Note that with this extended prior range the posterior has not yet decreased very close to zero at β = 5, but still we expect the influence of the β-prior on the marginal constraints to be negligible. Combining all SDSS LRG samples, we can constrain β, the power-law slope of the luminosity dependence of the intrinsic alignment signal, well.

It is not a priori clear that the intrinsic alignment model determined for the LRG samples also holds for the fainter, non-LRG SDSS Main samples. The validity of this model for galaxies with luminosities around and below L_* is paramount for its applicability to intrinsic alignments in cosmic shear data, which contain many more faint red galaxies than LRGs. To affirm consistency, we read off the combination of intrinsic alignment parameters that yields the minimum χ² for the joint fit to the SDSS LRG and MegaZ-LRG samples. Then we compare the χ² of this parameter combination for the fit to the combined L3 and L4 samples to the minimum χ² obtained for fitting to this sample combination. We find a difference Δχ² = 2.01 to the minimum χ² of the latter samples in the parameter range A ∈ [0; 20], η_other ∈ [−10; 10], and β ∈ [−5; 5], corresponding to a p-value of 0.57. Thus the SDSS Main samples are fully consistent with the LRG data sets, but note that the L4 and L3 data are noisy and hence not very conclusive, e.g. we also compute a reduced χ² of 1.19 (L3) and 0.89 (L4) for a fit to zero. For illustration, we have plotted both the best-fit models for a fit to L3 and L4 only, and for the fit to the SDSS LRG and MegaZ-LRG samples, in Fig. 15.

The joint fit of all considered samples clearly favours an increase of the intrinsic alignment signal with galaxy luminosity; indeed we find that β < 0.5 can be excluded at more than the 4σ level. The data are perfectly consistent with η_other = 0, i.e. with the redshift evolution inherent to the corrected NLA model as discussed in Appendix B. The combination of MegaZ-LRG and SDSS LRG samples is the main driver for these constraints on the redshift dependence. The corresponding intrinsic alignment normalisation is A = 5.8 ± 0.6, which e.g. translates into an amplitude of 90 % of the standard NLA model with corrected redshift dependence and SuperCOSMOS normalisation for a typical red galaxy with L = 0.2 L_0 ≈ L_* at redshift z = 0.5 (see Appendix C for a justification of these values). Using the NLA model with the redshift dependence of Hirata & Seljak (2004), which has been employed in most weak lensing studies hitherto, yields an amplitude of about 40 % of the SuperCOSMOS normalisation.

Our findings for the marginalised parameter constraints on β from the fits to the SDSS LRG samples are in good agreement with the results presented in Hirata et al. (2007), despite a different binning in transverse separation and a fit of a pure power-law dependence on z and r_p instead. The latter yields a scaling with r_p which is comparable to the NLA model. Due to these differences in modelling, the intrinsic alignment amplitude parameters are not directly compatible, while the Hirata et al. (2007) power-law index for the redshift term should roughly correspond to η_other − 1, see (B.7). The results for the redshift dependence are also consistent at the 1σ level, albeit with large error bars. In addition, Hirata et al. (2007) considered joint constraints from SDSS LRG and 2SLAQ samples, but did not impose colour cuts on 2SLAQ, so that a quantitative comparison is difficult.
However, we observe similar tendencies in the bestfit values of the intrinsic alignment parameters when adding the 2SLAQ and MegaZ-LRG samples, respectively. Besides, parameter errors from the joint analysis of SDSS LRG and 2SLAQ, or SDSS LRG and MegaZ-LRG samples are of the same order of magnitude. Dependence on k + e-corrections We have specified our intrinsic alignment model in terms of the physically meaningful rest-frame r band luminosity (passively evolved to z = 0), which renders our fit results dependent on the k + e-corrections employed. Recently, Banerji et al. (2010) have utilised new corrections based on improved stellar population synthesis models (Maraston et al. 2009) that perform better than those of Wake et al. (2006) at describing observed colours of LRGs. We compare these different k + e-corrections in the r band in Fig. 16, finding significant differences in particular at high redshift. The difference between the results from Wake et al. (2006) and Banerji et al. (2010) has a negligible effect on the luminosities computed for the SDSS Main samples at low redshift, but amounts to approximately 0.1 mag for the SDSS LRG samples and, the details depending on the population synthesis model used, to about 0.2 mag at the mean redshift of the MegaZ-LRG sample. Hence, switching to luminosities calculated according to Banerji et al. (2010) would imply a 10 % increase in the intrinsic alignment amplitude A for the fits to the combined SDSS LRG samples, using the corresponding best-fit value for β. While this change is subdominant to the 1σ error, a change by 0.2 mag for the MegaZ-LRG sample leads to a shift by about 20 % in the term (L/L 0 ) β , which clearly exceeds the 1σ errors e.g. of the joint fit by all galaxy samples. Besides, the effect is redshiftdependent, so that all three fit parameters A, β, and η other would be affected in that case. However, in principle the definition of luminosity used for (19) is arbitrary, as long as it is consistent for all samples. This is the case since we employ the k+e-corrections according to Wake et al. (2006) throughout. As demonstrated above, the transformation of our intrinsic alignment model to a different convention for galaxy luminosity has a considerable effect on all parameters and needs to be executed with care. The actual quantity of interest is not the power spectrum (19) but rather the observable intrinsic alignment signals, in particular those contaminating cosmic shear surveys. Changes in the values of the intrinsic alignment fit parameters are only meaningful in how they modify these observables. The observable intrinsic alignment signals are determined by observer-frame apparent magnitude limits of a survey and can consistently be calculated from (19) if the same k + e-corrections as used in the computation of galaxy luminosities for the model are applied. In Sect. 6 we will use such a procedure to predict the contamination of a cosmic shear survey by intrinsic alignment signals derived from our best-fit model. As will be detailed in there, there are sources of uncertainty, especially concerning the choice of luminosity functions, that are likely to be more important than the effect of k + e-corrections. Besides the luminosities of the galaxy samples, their 0.2 (g−i) colours used to assess the compatibility of the samples are expected to depend on the k + e-corrections as well. Again, we have made sure that identical k + e-corrections were employed to produce Fig. 
12, so that the MegaZ-LRG samples are consistent with the other data also in their colour cuts. Note however that, contrary to the luminosity dependence, our results cannot readily be reformulated for galaxy colours obtained from other versions of k+e-corrections due to the imposed colour cut which is dependent on the Wake et al. (2006) templates. As shown in Fig. 16, we have compared the differences between the k + e-corrections in the g and i bands obtained from Wake et al. (2006) and Maraston et al. (2009, see Banerji et al. 2010 for details) as a function of redshift. We find that the latter are in good agreement with the two variants of colour corrections derived from Wake et al. (2006). The Maraston et al. (2009) k + e-corrections display a slightly steeper increase with redshift, yielding g − i corrections that are 0.03 lower at the redshift of the SDSS Main samples and about 0.1 higher at the mean redshift of the full MegaZ-LRG sample compared to the mean of the Wake et al. (2006) models. Consequently, the MegaZ-LRG samples would on average be bluer with respect to the other samples if we had used the k+e-corrections based on Maraston et al. (2009) instead. While a difference of 0.1 in the colour appears to be substantial compared to the typical spread of the MegaZ-LRG sample in 0.2 (g − i) of 0.5, the uncertainty in the colour cut due to the fuzziness in the lower g − i limit of the SDSS LRG and Main samples is of the same order, see Fig. 12. Besides, as Fig. 16 top panel suggests, this level of uncertainty due to the differences between the templates by Wake et al. (2006) and Maraston et al. (2009) is of the same order as the uncertainty in the evolutionary model chosen for a galaxy within a given set of templates. Systematics tests As in Mandelbaum et al. (2006), Hirata et al. (2007), and Mandelbaum et al. (2010) we perform several tests to ensure that our results from the MegaZ-LRG data are not contaminated by instrumental or other effects. We also repeat the systematics tests for the re-defined red SDSS Main L3 and L4 samples. No signatures of systematics were found during the previous analyses of the SDSS LRG and the original SDSS Main samples (see the aforementioned references). First, we compute the correlation function w g× using the cross-component of the shear instead of the radial component. The cross-component and thus w g× change sign under parity transformations, measuring a net curl of the galaxy shape distribution. Since we do not expect that galaxy formation and evolution violates parity symmetry, any non-vanishing w g× serves as an indicator for systematics, such as residual PSF distortions. The resulting signals for the full MegaZ-LRG sample, as well as for the two MegaZ-LRG redshift bins, are shown in the centre panel of Fig. 17. All signals are comfortably consistent with zero, and fits of a zero line to the data in the full range of r p yield reduced χ 2 values well below unity and correspondingly large p-values, as shown in Table 5. We repeat this analysis for the same MegaZ-LRG samples but with the cut in g − i imposed, again finding no evidence for systematics, see the bottom panel of Fig. 17. Furthermore we consider w g+ computed only for large lineof-sight separations Π at which one does not expect astrophysical correlations anymore. A non-zero signal in this measure could for instance be caused by an artificial alignment of galaxy images in the telescope optics. 
Due to the photometric redshift scatter in MegaZ-LRG data, we use much larger values of |Π| for this systematics test than preceding works, integrating the three-dimensional correlation function along the line of sight for 270 < |Π|/[h^-1 Mpc] < 315. Still, gI correlations might not be completely negligible. We estimate the signal from the best-fit intrinsic alignment model obtained in the foregoing section, finding amplitudes below 0.004 for r_p > 6 h^-1 Mpc and below 0.018 for all r_p scales considered. Thus, residual gI correlations should be negligible in this case, and indeed the resulting w_g+ is consistent with zero. Note that we cannot apply this test to w_gg because the much stronger galaxy clustering signal is not negligible even in the extreme range of Π that we have chosen.

We also compute w_g×, as well as w_g+ in the range 100 < |Π|/[h^-1 Mpc] < 150, for the SDSS L3 and L4 samples with the new colour cut to isolate red galaxies, as shown in the top panel of Fig. 17. Note that for these spectroscopic samples we can resort to much smaller values of |Π| to obtain a range which is not expected to contain physical correlations anymore. All systematics tests for the SDSS Main L3 and L4 samples are consistent with zero, see Table 5. Although some points deviate from zero clearly outside the 1σ limit, and reduced χ² values slightly exceed unity, the p-values indicate that there is no significant signal in the data. As expected, we find that error bars on w_g× are of similar size as for w_g+ when calculated for the same range in line-of-sight separation; compare e.g. the centre panel of Fig. 17 to Fig. 9 (note the different scaling of the ordinate axes).

One additional type of systematic is the calibration of the shear. A multiplicative calibration offset would manifest directly as a multiplicative factor in w_g+. Aside from any impact on the best-fitting intrinsic alignment amplitude A from the combined samples, there could also be some effect that is a function of galaxy properties (typically apparent size and S/N, see Massey et al. 2007a), which would manifest as a difference between the SDSS Main samples (very bright apparent magnitudes and well-resolved), the spectroscopic LRGs (moderately bright and well-resolved), and MegaZ-LRG (significantly fainter and less well-resolved). Because the samples occupy different places in both redshift and luminosity space, the effect of such a bias cannot be estimated in a straightforward way. However, the shape measurements used for this work were subjected to significant systematics tests in Mandelbaum et al. (2005), including tests for calibration offsets between different samples, and thus we do not anticipate that shear calibration is a significant systematic relative to others (such as photometric redshift error uncertainties, or k + e-corrections) or relative to the size of the statistical error bars on w_g+ measurements.

Constraints on intrinsic alignment contamination of cosmic shear surveys

Intrinsic alignments constitute potentially the major astrophysical source of systematic uncertainties for cosmic shear surveys. If left untreated, they can severely bias cosmological parameter estimates (e.g. Bridle & King 2007).
If the contamination by intrinsic alignments is well known, it can ideally be incorporated into the modelling by subtracting the mean intrinsic alignment signal from the lensing term and accounting for the residual uncertainty in the systematic by introducing nuisance parameters over which one can then marginalise. To elucidate the implications for cosmological constraints from cosmic shear surveys by our constraints on intrinsic alignments, we will optimistically assume that the mean systematic signal is indeed given by our best-fit model. In this approach, the decisive quantity is not the mean value of the bias on cosmological parameters, which can be easily corrected for by subtracting the mean intrinsic alignment signal, but the uncertainty in the bias due to uncertainty in intrinsic alignment model parameters, which directly affects the accuracy with which the cosmological model can be constrained when taking the systematics into account. We do not address uncertainty due to adoption of the generalised NLA model (19), i.e. the possibility that the underlying intrinsic alignments model is different, because we see no tension between the model and our data. We assess the range of possible biases on cosmological parameters that originate from intrinsic alignment signals using the constraints obtained from the foregoing investigation. Since the SDSS Main L3 and L4 samples proved to be fully consistent with the results for the two LRG samples, we will assume in the following that our intrinsic alignment model also holds for typical, less luminous early-type galaxies predominantly found in a cosmic shear survey. We emphasise that this study requires the extrapolation of the best-fit intrinsic alignment model to combinations of galaxy redshifts and luminosities that have not been probed directly by any of the galaxy samples analysed in this work. However, that extrapolation is less worrisome now that we have galaxy samples at z ∼ 0.6, given that the lower redshift galaxies in a cosmic shear survey will tend to be the greatest culprits in causing GI contamination due to the scaling of the effect with redshift separations of galaxy pairs. By means of a Fisher matrix analysis we compute the effect on a present-day, fully tomographic (i.e. including all independent combinations of redshift bins) cosmic shear survey, roughly following CFHTLS parameters (Hoekstra et al. 2006;Semboloni et al. 2006;Fu et al. 2008). To calculate the matter power spectrum, we use the same cosmology, transfer function, and nonlinear correction as outlined in Sect. 4.1. For computational simplicity we use the convergence power spectrum (the GG signal henceforth) as the observable cosmic shear two-point statistic, using the following Limber equation, where q (i) ǫ again denotes the lensing weight. Instead of specifying a photometric redshift for a redshift probability distribution, we switch here to the usual notation of using an index i that characterises a (broad) distribution p (i) (z) entering (9). The corresponding Limber equations for the GI and II signals can be readily formulated accordingly, yielding (e.g. ) where we assume that the intrinsic shear power spectrum can be described by an extension of our intrinsic alignment model analogous to (19), where P II (k, z) is given by the NLA model, i.e. (B.6) with the linear matter power spectrum replaced by the full, nonlinear one. Writing (23) with the square of the extra redshift and luminosity terms (again similar to Kirk et al. 
2010) involves the assumption that the galaxies correlated are located at similar redshifts, which is valid because the II signal is restricted to physically close pairs; see also the narrow kernel in (21). Then the two galaxies of a pair also underlie the same luminosity distribution, and since we average (23) over this distribution, one can write L 2β to good approximation although the luminosities of the galaxies in an individual pair may be largely different. Note that we have not measured intrinsic ellipticity correlations in this work, so that the computation of the II signal is entirely based on the validity of the NLA model. However, the contribution of the II signal to the bias on cosmology will be smaller than the one by GI correlations in a CFHTLS-like survey, so that this assumption does not affect our conclusions substantially. We employ an overall redshift distribution according to Smail et al. (1994), with parameters α Smail = 0.836, β Smail = 3.425, and z Smail = 1.171 yielding a median redshift of 0.78 (Benjamin et al. 2007). We slice this distribution into 10 'photometric' redshift bins such that every bin contains the same number of galaxies. The corresponding redshift distribution for each bin is then computed via the formalism detailed in Joachimi & Schneider (2009), assuming a Gaussian photometric redshift scatter of width 0.05(1 + z) around every spectroscopic redshift. We compute Gaussian covariances for the power spectra including cosmic variance and shape noise (for details see , assuming a survey size of A survey = 100 deg 2 . Shape noise is incorporated with an overall galaxy number density of n Ω = 12 arcmin −2 and a dispersion of the absolute value of the complex intrinsic ellipticity of 0.3 (Hoekstra et al. 2006). We consider a parameter vector p = {Ω m , σ 8 , h, n s , Ω b , w 0 } for the cosmological analysis, for a flat universe with constant dark energy equation-of-state parameter w 0 . Assuming that the covariance is not dependent on these parameters (see Eifler et al. 2009), one obtains the Fisher matrix (Tegmark et al. 1997) where we use 40 logarithmically spaced angular frequency bins between ℓ = 10 and ℓ = 3000. With the Fisher matrix, one can calculate the bias on a cosmological parameter via (e.g. Kim et al. 2004;Huterer & Takada 2005;Huterer et al. 2006;Taylor et al. 2007;Amara & Réfrégier 2008;Kitching et al. 2009;Joachimi & Schneider 2009 where the systematic is given by the sum of II and GI power spectra. Note that the parameter bias is independent of the survey size while the statistical errors obtained from F µν are proportional to 1/ A survey . The intrinsic alignment analysis presented above only dealt with red galaxies, whereas a typical galaxy population in cosmic shear surveys is dominated by blue galaxies for which Mandelbaum et al. (2010) reported a null detection for a galaxy sample spanning a similar range of redshifts as in this paper. Thus we assume that only the red fraction f r of galaxies in the survey carries an intrinsic alignment signal. Consequently the II power spectrum is multiplied by a factor f 2 r , and the GI power spectrum by f r , resulting in the same model as used by Kirk et al. (2010). Note that this approach is overly simplistic in splitting the galaxy population into two disjoint groups with largely different intrinsic alignment properties although one expects the intrinsic alignment parameters to vary in a more continuous manner with galaxy colour. 
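The Fisher forecast and the bias computation referenced above, equations (25) and (26), follow the standard prescription: the bias on parameter p_µ is b(p_µ) = Σ_ν (F −1 )_µν Σ_ℓ (∂C_GG/∂p_ν)^T Cov −1 C_sys(ℓ), with the systematic given by the sum of GI and II spectra. A minimal numpy sketch of this computation is given below; the array shapes, names, and the assumption of a block-diagonal (per-ℓ) covariance are illustrative rather than a reproduction of the actual analysis code.

```python
import numpy as np

def fisher_and_bias(dC_dp, cov_inv, C_sys):
    """Fisher matrix, marginalised errors, and systematic-induced parameter biases.

    dC_dp   : array (n_par, n_ell, n_pair)   derivatives of the GG data vector
    cov_inv : array (n_ell, n_pair, n_pair)  inverse covariance per angular frequency bin
    C_sys   : array (n_ell, n_pair)          systematic (GI + II) contribution
    """
    n_par = dC_dp.shape[0]
    F = np.zeros((n_par, n_par))
    v = np.zeros(n_par)
    for ell in range(dC_dp.shape[1]):
        ci = cov_inv[ell]
        for mu in range(n_par):
            v[mu] += dC_dp[mu, ell] @ ci @ C_sys[ell]
            for nu in range(mu, n_par):
                F[mu, nu] += dC_dp[mu, ell] @ ci @ dC_dp[nu, ell]
                F[nu, mu] = F[mu, nu]
    bias = np.linalg.solve(F, v)                      # b(p_mu) = sum_nu (F^-1)_{mu nu} v_nu
    sigma_stat = np.sqrt(np.diag(np.linalg.inv(F)))   # marginalised 1-sigma errors
    return F, bias, sigma_stat
```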
Moreover it is important to note that we ignore any uncertainty in the null measurement of blue galaxy intrinsic alignment which would add to the scatter of systematic errors on the cosmological parameters. In principle it should be feasible to take into account finite constraints on the intrinsic alignment parameters determined from blue galaxy samples, such as those studied by Mandelbaum et al. (2010). However, before incorporating them into this formalism, these samples would have to undergo the same compatibility tests as performed in this work for red galaxy samples, and then be combined to yield joint fits. These steps are beyond the scope of this analysis and left to future investigation. We expect that both f r and the distribution of luminosities of red galaxies depend on redshift and thus have different values in each photometric redshift bin of our fictitious cosmic shear survey. To estimate realistic values for these parameters, we make use of the luminosity functions provided by Faber et al. (2007). They fit Schechter functions φ(L, z) jointly to samples from SDSS, 2dF, COMBO-17, and DEEP2 in redshift bins out to z ∼ 1, considering early-and late-type galaxies individually. The criteria used by Faber et al. (2007) to separate red and blue galaxies differ from the ones employed in this work, but still we consider their samples as representative for differentiating between blue galaxies with negligible intrinsic alignments and red galaxies with an intrinsic alignment signal consistent with our best-fit model. We defer the technical details and also further discussion of this approach to Appendix C. In combination with a minimum galaxy luminosity L min (z, r lim ), computed from the apparent magnitude limit at each redshift, a set of luminosity functions also specifies the redshift probability distribution p tot (z). However, the luminosity functions by Faber et al. (2007) are unlikely to reproduce exactly the redshift probability distribution of CFHTLS because weak lensing surveys have additional galaxy angular size cuts and thus deviate from a purely flux-limited sample. Also, we must extrapolate to fainter magnitudes than the sample used to determine the luminosity function, which can be a source of significant uncertainty, particularly for blue galaxies given their steep faint-end slope. Besides, the blue galaxy sample of Faber et al. (2007) is composed of several galaxy types, so that it is unclear which k-corrections and filter conversions are applicable. Hence, we make the assumption that the red galaxy luminosity functions from Faber et al. (2007) are compatible with the selection criteria of a weak lensing survey and derive the total comoving volume density of galaxies from (24) by noting that n V = dN/dV com = dN/dz (dV com /dz) −1 . Then the fraction of red galaxies reads where the lower limit of the integration L min is determined by the magnitude limit of the survey which we assume to be r lim = 25, roughly compatible with CFHTLS (i lim = 24.5; Hoekstra et al. 2006). The redshift dependence of L min is introduced by the conversion of r lim to absolute magnitude and by the k + e-correction. The latter is again computed in the r band by means of the templates used in Wake et al. (2006) for early-type galaxies (specifically, by a k-correction that is between that of the two very closely related templates used in that paper, see also Fig. 16). 
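For a given photometric redshift bin, the model described above reduces to a pair of multiplicative factors applied to unit-amplitude NLA GI and II templates: the amplitude and power-law terms once (and the red fraction f_r) for GI, and their squares (and f_r squared) for II, with the luminosity term evaluated per bin as described in the following paragraphs. The sketch below is an illustration of that bookkeeping only; the function and argument names are hypothetical, and the pivot redshift and reference luminosity L_0 are those adopted in the fits.

```python
def ia_scalings(z, L_term_over_L0, f_red, A, beta, eta_other, z_pivot):
    """Factors rescaling unit-amplitude NLA GI and II templates in one redshift bin.

    L_term_over_L0 : luminosity term (L/L0)**beta averaged over the bin, divided by L0**beta
    f_red          : fraction of red (aligned) galaxies in the bin
    """
    base = A * ((1.0 + z) / (1.0 + z_pivot)) ** eta_other * L_term_over_L0
    gi_scale = f_red * base            # GI: one power of the IA amplitude terms
    ii_scale = f_red ** 2 * base ** 2  # II: squared amplitude terms and f_red**2
    return gi_scale, ii_scale
```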
In contrast to the various SDSS samples in which we measured intrinsic alignments, the distribution of galaxy luminosities in each photometric redshift bin of our mock cosmic shear survey is wide, so that our approximation that we can replace L by the mean luminosity in (19) and (23) is not sufficiently accurate anymore, in particular in those regions of the intrinsic alignment parameter space where β deviates significantly from unity. Instead, we have to average (19) and (23) over the luminosity function, which reduces to evaluating L β (z, r lim ) where we have taken into account that the rest-frame reference luminosity L 0 must be e-corrected back to redshift z, which we do via the redshift dependence of M * given in Faber et al. (2007); see again Appendix C for details. For every photometric redshift bin of the cosmic shear survey, we use the values of (L/L 0 ) β and f r at the median redshift of the underlying redshift distributions, which is a good assumption if the redshift distributions corresponding to the photometric redshift bins are sufficiently narrow. We further assess the accuracy and limitations of this ansatz in Appendix C by comparing results from sets of luminosity functions other than those from Faber et al. (2007). We also provide volume densities n V,red and mean luminosities for a range of redshifts and limiting r band magnitudes, which can be employed together with our best-fit intrinsic alignment model parameters to estimate the expected intrinsic alignment contamination of other surveys. The GI and II signals are then computed via (19) and (23), where the free intrinsic alignment parameters {A, β, η other } are determined as follows. We overlay the three-dimensional 1σ and 2σ volumes of the intrinsic alignment fits to four of the sample combinations shown in Table 4 and Fig. 14 with a square grid, containing N nodes in total. For the combination of {A, β, η other } on each grid node we compute the projected intrinsic alignment power spectra according to (21) and (22) and subsequently the parameter biases via (26). This way we obtain a bias vector b = b(p 1 ), .. , b(p N D ) τ , where N D is the number of cosmological parameters under consideration, in cosmological parameter space for every grid node within the 1σ or 2σ confidence volume in intrinsic alignment parameter space. We convert the ensemble of N parameter bias vectors {b 1 , .. , b N } into a distribution of bias values via Gaussian kernel density estimation, i.e. we approximate this distribution by where we use N D = 2 when considering the distribution in a twodimensional parameter plane, and N D = 1 when computing the one-dimensional distributions. The widths ∆ of the Gaussians in every dimension of cosmological parameter space are free parameters, and we choose them to take the minimum values which still produce a smooth distribution (except for small wiggles in some of the sparsely sampled regions). While we use six cosmological parameters to compute the biases on cosmology, we focus in our presentation of the uncertainty in the biases on a subset with three cosmological parameters of particular interest in cosmic shear analyses, {Ω m , σ 8 , w 0 }. For the tightly constrained parameters Ω m and σ 8 we use ∆ = 0.001 and in the dimension corresponding to w 0 we set ∆ = 0.005. Note that we use the same widths for all sample combinations considered in order not to distort the comparison between the resulting bias distributions. In Fig. 
18 we show the contours comprising 99 % of the distribution (29) in the two-dimensional parameter planes spanned by all pair combinations in the set {Ω m , σ 8 , w 0 }, sampling from the posteriors of the intrinsic alignment parameters obtained for SDSS LRGs alone, SDSS LRGs and SDSS Main samples combined, SDSS LRGs and MegaZ-LRG combined, and the joint analysis of MegaZ-LRG, SDSS LRG and SDSS Main samples. In this figure we have given the parameter biases (and not the parameter values) on the axes such that in the absence of any intrinsic alignment contamination, the contours should be centred around (0; 0) in each panel. The general direction of parameter biases, for instance along the Ω m − σ 8 degeneracy, is in agreement with other predictions on biases due to intrinsic alignments (see for instance Joachimi & Schneider 2009). As is evident from (26), if the GI term dominates, which is expected for deep cosmic shear surveys, the bias is proportional to the amplitude parameter A of the intrinsic alignment model. Thus the remaining uncertainty in A explains the strong elongation of the contours, pointing approximately radially away from (0; 0).

The large errors on intrinsic alignment parameters, in particular on η other , in the case of using the SDSS LRG samples alone allow for a vast region of possible parameter biases. Adding the SDSS L4 and L3 samples slightly narrows the contours, but does not reduce their radial extent. The contours tighten dramatically when adding in the MegaZ-LRG data which allowed us to fix the redshift dependence to good accuracy. The additional information provided by the SDSS Main samples constrains the total amplitude of the intrinsic alignment signal still better, thereby further reducing e.g. the extent of the 2σ contours by about a factor of two along the degeneracy direction.

[Caption of Fig. 18: parameters of the intrinsic alignment model are sampled from the 1σ confidence region (thick lines) and the 2σ confidence region (thin lines) of our fits. The contours resulting from the SDSS LRG constraints are shown in red, the ones from the SDSS LRG + SDSS Main (L3 and L4) constraints in green, the ones from the MegaZ-LRG + SDSS LRG constraints in blue, and the contours from the joint constraints by the MegaZ-LRG, SDSS LRG, and SDSS Main samples in black. The grey regions indicate the 1σ confidence regions of the constraints on cosmological parameters. For this analysis the 6 parameters {Ω m , σ 8 , h, n s , Ω b , w 0 } were varied. The fraction f r of red galaxies and the distribution of luminosities of red galaxies were determined from the luminosity functions provided by Faber et al. (2007). Note that the contours corresponding to the 2σ confidence region for the SDSS LRG-only and SDSS LRG + SDSS Main constraints extend far beyond the plot boundaries.]

Note that, since the 2σ regions of the intrinsic alignment model fits to SDSS LRGs only and SDSS LRG and MegaZ-LRG samples combined do not completely overlap (see Fig. 14), the corresponding bias distributions partly cover different regions in cosmological parameter space. In Table 6 we list the 1σ marginalised statistical errors for the three cosmological parameters of interest, obtained from (25) via σ stat (p µ ) = [(F −1 ) µµ ]^(1/2). Moreover we give the size of the interval that contains 99 % of the one-dimensional distribution (29) when sampling from the 1σ and 2σ confidence volumes of the intrinsic alignment parameter fits, again as a measure for the spread of biases on cosmology.
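The distribution (29) from which these contours and intervals are derived is a plain fixed-width Gaussian kernel density estimate over the ensemble of bias vectors, with widths Δ = 0.001 for Ω m and σ 8 and Δ = 0.005 for w 0 as stated above. A minimal sketch, with grid and sample arrays as placeholders:

```python
import numpy as np

def bias_density(grid_points, bias_samples, widths):
    """Fixed-width Gaussian KDE of the distribution of parameter biases.

    grid_points  : array (n_grid, n_dim)    points at which the density is evaluated
    bias_samples : array (n_samples, n_dim) bias vectors from the IA-parameter grid nodes
    widths       : array (n_dim,)           kernel widths Delta per cosmological dimension
    """
    diff = grid_points[:, None, :] - bias_samples[None, :, :]
    kern = np.exp(-0.5 * (diff / widths) ** 2) / (np.sqrt(2.0 * np.pi) * widths)
    # Product over dimensions, mean over bias samples.
    return np.mean(np.prod(kern, axis=2), axis=1)
```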
In agreement with the two-dimensional plots of Fig. 18 we find that adding the MegaZ-LRG samples to the SDSS LRG and SDSS Main data considerably shrinks the range of biases, e.g. by more than a factor of three (seven) in the case of σ 8 when sampling from the 1σ (2σ) confidence volume. In combination with the SDSS Main L3 and L4 samples, the intervals decrease in size by roughly another 30 − 50 % (for the 2σ confidence volume), reaching values which are below the 1σ statistical errors. The reduction of intrinsic alignment bias to subdominant levels is also evident from the comparison with the 1σ confidence regions for constraints on the cosmological parameters plotted in Fig. 18. Hence, under the assumptions made for this prediction, and provided that the mean intrinsic alignment signal were accurately known and could be subtracted from the cosmic shear data, the uncertainty in the knowledge about the free intrinsic alignment parameters in (19) and (23) would be subdominant to the statistical errors in a CFHTLS-like survey, given the intrinsic alignment constraints from the joint fit to all galaxy samples considered in this work.

[Table 6 notes: We have listed the 1σ statistical error σ stat , resulting from the Fisher matrix analysis after marginalising over all remaining parameters, in the second column, as well as the range of biases we obtained by sampling from the 1σ and 2σ confidence regions in the parameter space spanned by {A, β, η other }. In the third to sixth columns results from the fits to four sets of galaxy samples, which are also shown in Fig. 18, are given. These sets are (1) [...]]

It is also evident from Fig. 18 that the mean bias on the parameters Ω m , σ 8 , and w 0 due to our best-fit intrinsic alignment model is in each case appreciably smaller than the 1σ statistical errors. Consequently, it is possible that cosmology would not be significantly biased even if intrinsic alignments were simply ignored. One may ask if any of the existing cosmic shear analyses would be affected more seriously if subject to an intrinsic alignment signal that follows our best-fit model. Hitherto, weak lensing surveys have not been used in combination with photometric redshifts of individual galaxies to perform cosmic shear tomography, with the exception of the space-based COSMOS survey (Massey et al. 2007b; Schrabback et al. 2010). Non-tomographic surveys have much lower contamination by intrinsic alignments than tomographic studies because for a wide redshift distribution the probability of having a close galaxy pair is smaller than for an auto-correlation of narrow redshift bins, thereby lowering the II signal. The probability of having a large line-of-sight separation of galaxies in turn is smaller than for cross-correlations of a low- and high-redshift tomographic bin, thereby rendering the GI contribution less important. Hence, we conclude that any intrinsic alignment signal close to our best-fit model is irrelevant for existing surveys, unless they are significantly shallower, which places a typical galaxy of the survey sample at lower redshift and higher luminosity (see e.g. Fig. C.2), so that the intrinsic alignment contamination becomes stronger (see also Kirk et al. 2010). The COSMOS survey is deeper than the CFHTLS-like survey analysed above (i 814 < 26.7, Schrabback et al. 2010), so that a substantial part of the cosmic shear signal stems from high redshifts. In addition, red galaxies are likely to be less luminous on average, both effects tending to decrease the amplitude of intrinsic alignments.
Besides, due to the small survey area of COSMOS, cosmological constraints are modest, further diminishing the importance of an intrinsic alignment bias. Indeed, the fully tomographic analysis by Schrabback et al. (2010) did not detect any effects due to intrinsic alignments by excluding autocorrelations of tomographic bins and bright red galaxies. Conclusions In this work we studied intrinsic alignments in the MegaZ-LRG galaxy sample, investigating for the first time an earlytype galaxy sample at intermediate redshifts up to z ∼ 0.6, and for which only photometric redshift information is available. We presented correlations between galaxy number densities and galaxy shapes as a function of the transverse comoving separation of the galaxy pairs for MegaZ-LRG and two subsamples at high and low redshift, as well as for several spectroscopic SDSS LRG and SDSS Main samples. In combination, these samples comprise wide ranges in redshift (z 0.7) and luminosity (4 mag) which have not been covered in a joint analysis before. We developed the formalism to incorporate photometric redshift scatter into the modelling of the correlation function, taking special care of the large line-of-sight spread of physical correlations and the effect of contributions by galaxy-galaxy lensing and lensing magnification-shear cross-correlations, which is introduced by photometric redshift uncertainty. Our model reproduces to good accuracy the scaling of the MegaZ-LRG data with the maximum line-of-sight separation included in the computation of the correlation functions, as well as the relative contribution by galaxy-galaxy lensing when varying this maximum separation. This supports the validity of our modelling ansatz and justifies the use of the photometric versus spectroscopic relation obtained from the 2SLAQ survey to quantify the photometric redshift scatter. Moreover we discussed a correction to the redshift dependence of the widely used linear alignment model . We then fitted the nonlinear version of this corrected linear alignment (NLA) model to all samples with a free overall amplitude. To allow for the transition from galaxyintrinsic to matter-intrinsic correlations, we also determined a linear galaxy bias from galaxy clustering signals. Due to the assumption of linear biasing and the expected breakdown of the NLA model on small scales, we limited the analysis to comoving transverse separations r p > 6 h −1 Mpc. We found that all samples are consistent with the scaling with r p that is inherent to the NLA model (which is identical to the scale dependence of the matter power spectrum), suggesting that the alignment of earlytype galaxies is indeed determined by the local tidal gravitational field. We did not test other theories of intrinsic alignments since the linear alignment model is, at least on the largest scales, physically motivated, widespread, and reasonable for elliptical galaxies whose shapes are expected to be subjected to tidal stretching by the surrounding gravitational field. The amplitudes of the intrinsic alignment signals that were obtained from the fits vary widely by more than an order of magnitude and are thus inconsistent with a one-parameter NLA model. We introduced additional power-law terms into the NLA model to account for an extra redshift and luminosity dependence, using in each case the power-law index as a further free parameter. 
With this three-parameter model, we demonstrated that the MegaZ-LRG, SDSS LRG, and SDSS Main samples under consideration are fully consistent with each other, but we add the caveat that the error bars for each sample are still quite large. We would particularly benefit from constraints from a lowluminosity sample that probes a larger volume since the statistics for the SDSS Main sample are relatively poor and thus the bright galaxies from the LRG samples dominate the fits. The joint fit to all ten samples strongly suggests an approximately linear increase of the intrinsic alignment amplitude with galaxy luminosity and is consistent with no extra redshift evolution beyond the (corrected) NLA model. Adding in the new MegaZ-LRG data is particularly beneficial in tightening the constraints on the redshift evolution of the intrinsic alignment signal due to the higher redshift of the samples. The normalisation of the NLA model is given by 0.077 ρ −1 cr (combining the parameters C 1 and A), with a 1σ uncertainty of roughly 10 %. In the joint analysis of galaxy samples, special attention was payed to homogenising the samples as regards the determination of rest-frame magnitudes/luminosities and the range of galaxy colours. As a consequence, we re-defined the colour separator of the red SDSS Main L4 and L3 samples compared to Hirata et al. (2007) to avoid a leakage by blue-cloud galaxies. Furthermore we discarded for the joint fits about 30 % of the MegaZ-LRG galaxies with colours 0.2 (g − i) < 1.7, the resulting redder subsample producing slightly higher intrinsic alignment amplitudes. We also discussed the dependence on the employed k + e-corrections, affecting our results via the computation of luminosities and rest-frame colour cuts. A range of systematics tests was applied to all new and redefined galaxy samples, finding no evidence for systematic effects in the correlation functions. Residual sources of uncertainty, e.g. in the calibration of galaxy shears or due to the statistical error on the galaxy bias, should be clearly subdominant to the uncertainty originating from the statistical errors on the correlation function measurements that were propagated into the errors on the intrinsic alignment parameters. In the linear alignment picture, the normalisation of the intrinsic alignment signal is determined by the response of the intrinsic shape of a galaxy to the gravitational potential of that galaxy at the time of its formation. Interpreting our constraints on intrinsic alignment parameters in this framework, we obtained no evidence for an extra redshift evolution and hence a time dependence of the coupling between intrinsic galaxy shape and gravitational potential. The dependence on luminosity we found can be interpreted as an increase of this coupling with galaxy mass. Comparing our results with Mandelbaum et al. (2010) who analysed blue galaxy samples out to similar redshifts, it is evident that intrinsic alignments depend strongly on the galaxy type. Whether there is a clear dichotomy between early-and late-type galaxies or a more continuous transition with galaxy colour, which the comparison between the MegaZ-LRG samples with and without colour cut may hint at, is still to be investigated. In a Fisher matrix analysis, we predicted the bias on cosmological parameters that results from our best-fit intrinsic alignment model. To this end, we used sets of luminosity functions measured by Faber et al. 
(2007) in order to derive the fraction of early-type galaxies and their luminosity distribution at each redshift for a given apparent magnitude limit of the survey. Assuming zero intrinsic alignments for blue galaxies (without uncertainty), we then computed the expected intrinsic-ellipticity (II) and shear-intrinsic (GI) correlations, sampling the parameters of the NLA model from the confidence volume of our intrinsic alignment fits. The accuracy of this approach is limited by the substantial extrapolation of the luminosity function data to faint and high-redshift galaxies, as well as the strong scatter in luminosity function parameters obtained from different works. For a fully tomographic CFHTLS-like survey both the mean bias and the scatter in the bias due to the uncertainty in the intrinsic alignment model are smaller than the predicted 1σ cosmological parameter errors, and we similarly expect subdominant systematic effects by intrinsic alignments for other cosmic shear surveys performed hitherto. The MegaZ-LRG data were crucial in reducing this systematic uncertainty. However, if external pri-ors on cosmological parameters, e.g. from the cosmic microwave background, are employed, which is likely to be the case in practice, the significance of the bias due to intrinsic alignments will be higher. For future ambitious weak lensing surveys such as Euclid, which has roughly comparable depth to CFHTLS but considerably higher statistical power (Laureijs et al. 2009), the same intrinsic alignment signal would constitute a severe systematic, and marginalising over the uncertainty in intrinsic alignment parameters would significantly degrade constraints on cosmology. However, the gradually increasing precision requirements by planned cosmic shear surveys are likely to be matched by intrinsic alignment studies that continuously improve and consolidate the models. Hence, constraints on intrinsic alignment models as provided by this work and succeeding ones will prove most useful, for instance to define prior ranges in nuisance parametrisations of intrinsic alignment signals (e.g. Bridle & King 2007). A straightforward test of the intrinsic alignment model obtained in this work could be provided by a similar analysis of galaxy shape correlations from the same set of samples. This way one can verify whether the II signal agrees with the prediction by the linear alignment paradigm (based on the GI signal in this work), and whether the extra redshift and luminosity dependencies of the II signal are consistent with the present results. Furthermore, to obtain unbiased intrinsic alignment measurements for a wide range of galaxy colours is another important but challenging task because many central selection criteria such as shape measurement quality, photometric redshift scatter, or spectroscopic redshift failure rates depend strongly on the galaxy type. Measurements of the type presented in this paper are restricted to quasi-linear scales although both cosmic shear and galaxy evolution studies have a vital interest in intrinsic alignments in the deeply nonlinear regime. A possible way forward would be the usage of a halo model approach for both galaxy bias and intrinsic alignment signals (see e.g. . However, note that, similar to the galaxy bias, observational constraints on intrinsic alignments may ultimately be limited by an intrinsic scatter that cannot be accounted for by a deterministic model. 
The generalisation of intrinsic alignment measurements to photometric redshift data has opened up a new regime of data sets which could be exploited to constrain intrinsic alignment models. For instance, all present or upcoming cosmic shear surveys with redshift information, or at least subsamples of them with low photometric redshift scatter, could be suited, thereby automatically extending the baselines in redshift and luminosity to scales most relevant for weak lensing. The higher the photometric redshift scatter, the more important become galaxy-galaxy lensing and magnification contributions to the observed correlation functions, so that e.g. at some point cosmology will have to be varied in the intrinsic alignment analysis as well. Then it might be more fruitful to instead consider simultaneous measurements of galaxy shape and number density correlations in the manner proposed by Bernstein (2009) and investigated in Joachimi & Bridle (2010).

Real data cannot provide the correlation function for arbitrarily large line-of-sight separations, so that a truncation of the integral in (A.6) is necessary. This formula is still applicable if one can stack observations for all values of Π for which galaxy pairs carry a signal. While this can easily be achieved for spectroscopic observations, photometric redshift scatter smears the signal in Π such that a cut-off Π max needs to be taken into account explicitly in the modelling. Of course it would be possible to compute the observed correlations out to very large Π max , but this way many uncorrelated galaxy pairs would enter the correlation function, thereby decreasing the signal-to-noise dramatically. Instead, we proceed from (A.5) by assuming that ξ gI is a real function, and write the result as equation (A.7). As can be seen from this equation, ξ gI is an even function in both r p and Π, so that it is sufficient to compute just one quadrant. Note that by definition r p ≥ 0, whereas Π can also attain negative values.

Equation (A.7) yields the three-dimensional gI correlation function for exact or, to good approximation, spectroscopic redshifts. For the model described in Sect. 4 with SuperCOSMOS normalisation and b g = 1, we plot ξ gI (r p , Π, z) for z ≈ 0.5 in Fig. A.1, bottom panel. As expected, the correlation is strongest for small separation, in particular for |Π| close to zero. If spectroscopic data are available, essentially all information is captured when a cut-off Π max = 60 h −1 Mpc is used in the integration (A.6), as e.g. in Mandelbaum et al. (2010). Due to the definition (A.1), the gI correlation function measures the radial alignment of the galaxy shape with respect to the separation vector of the galaxy pair considered. Therefore the correlation function vanishes for all Π at r p = 0 since then the separation vector points along the line of sight. Note that the contours do not approach the Π = 0-axis asymptotically, but cross this line at some value of r p , as expected for a differentiable correlation function. Throughout these considerations we have not taken the effect of redshift-space distortions into account.

A.2. Incorporating photometric redshifts

Photometric redshift errors cause the observed correlation function to be a 'smeared' version of (A.7), introducing a spread especially along the line of sight but to a lesser extent also in transverse separation (because an uncertain redshift is used to convert angular separation to physical separation).
If we denote quantities determined via photometric redshifts by a bar, the actually measured three-dimensional correlation function reads

ξ̄ gI (r̄ p , Π̄, z̄ m ) = ∫ dr p ∫ dΠ ∫ dz m p(r p , Π, z m | r̄ p , Π̄, z̄ m ) ξ gI (r p , Π, z m ) , (A.8)

where z m denotes the mean redshift of the galaxy samples used for the number density and the shape measurement. Here, p is the probability distribution of the true values of r p , Π, and z m , given photometric redshift estimates of these quantities. In words, (A.8) means that in order to obtain the observed correlation function, we integrate over ξ gI as given in (A.7), weighted by the probability that the true values for separations and redshift actually correspond to the estimates based on photometric redshifts.

[Fig. A.1 caption fragment: (6) with SuperCOSMOS normalisation has been used to model P δI in both cases. Redshift-space distortions have not been taken into account.]

[...] In the second step it was assumed that the probability distributions of z 1 , z 2 , and θ are mutually independent, and that θ is exactly known, i.e. p(θ | θ̄) = δ D (θ − θ̄). We have introduced different redshift probability distributions for the galaxy sample with number density information p n and the one with shape information p ǫ . All quantities related to photometric redshifts have been marked with a bar. [...] an additional term (1 + z)^2 in the numerator of (B.7). These modifications would correspond to a shift by −2 in η other in our models (19) and (23).

Appendix C: Volume density and luminosities of red galaxies

To make realistic predictions for the intrinsic alignment contamination of cosmic shear surveys, we must specify, at each redshift, the distribution of galaxy luminosities that enter (19). Since this intrinsic alignment model only holds for red galaxies, we additionally must estimate the fraction of early-type galaxies in the total weak lensing population as a function of redshift. In this paper, both quantities are determined using fits to the observed luminosity functions given in Faber et al. (2007). In this appendix, we present technical details about these calculations, assess the sensitivity of our results to this particular luminosity function choice, and provide data that can be used to forecast the intrinsic alignment contamination of other cosmic shear surveys (with different limiting magnitudes) besides that discussed in the main text. In all cases, we are extrapolating the luminosity functions to fainter magnitudes at a given redshift relative to the samples used to determine the luminosity function.

We employ the Schechter luminosity function parameters for red galaxies from Faber et al. (2007), where φ * and M * are given as a function of redshift, and where the faint-end slope is fixed at α = −0.5. While we consistently use magnitudes in the r band, Faber et al. (2007) determine M * in the B band. Therefore we estimate the rest-frame B − r colour from the tables provided in Fukugita et al. (1995), finding B − r = 1.32 for ellipticals. This conversion from B to r takes into account that Faber et al. (2007) give B band magnitudes in the Vega-based system, whereas this work uses AB magnitudes throughout. Furthermore, we have assumed r ≈ r ′ , where r ′ is the filter listed by Fukugita et al. (1995). This assumption should hold to good accuracy (footnote 14) for typical colours of the galaxies in our samples, i.e. 0.2 ≲ r − i ≲ 0.6.

[Footnote 14: See http://www.sdss.org/dr6/algorithms/jeg_photometric_eq_dr1.html#usno2SDSS for the transformation equation between r and r ′ .]

For early-type galaxies, B − r shows little evolution between z = 0 and z ∼ 1 (Bruzual & Charlot 2003), so we assume the rest-frame colour to be constant in this redshift range, which we check via the following procedure.
Since the Sloan g filter covers a similar wavelength range to the B band (although the peaks of the transmission curves differ, see Fukugita et al. 1995 for details), we use the evolution of g − r as determined from the Wake et al. (2006) templates as an approximation for the redshift dependence of B − r. We find a shift of 0.15 mag from z = 0 to z = 1, which has significantly less effect on our results than employing different observational results for luminosity functions, see Fig. C.1 and the corresponding discussion below. Finally, we correct for the fact that Faber et al. (2007) have computed absolute magnitudes assuming a Hubble parameter h = 0.7 while we give absolute magnitudes in terms of h = 1. With all of these caveats, the limiting absolute B band magnitude at redshift z for a given apparent magnitude limit in the r band is given by

M min (z, r lim ) = r lim − [5 log 10 (D L (z)/1 Mpc) + 25 + k r,red (z)] + (B − r) , (C.1)

where k r,red (z) is the k-correction of red galaxies for the r band (Wake et al. 2006). In line with our convention for absolute magnitudes, the luminosity distance D L is computed with h = 1. If absolute magnitudes are given for other values of the Hubble parameter, like e.g. in Faber et al. (2007), we convert these accordingly. The limiting absolute magnitude from (C.1) can then be transformed into the minimum luminosity entering (27) and (28),

L min (z, r lim ) / L 0 (z) = 10^(−0.4 [M min (z, r lim ) − M 0 (z)]) , (C.2)

where M 0 (z) denotes the rest-frame absolute magnitude −22, evolution-corrected to redshift z using the redshift dependence of M * from Faber et al. (2007), which is given by −1.2z. Note that this dependence accounts for the redshift evolution in the B band, but since B − r is nearly constant as a function of z, we can also apply the correction (to good approximation) to r band magnitudes. Denoting the luminosity corresponding to M * by L * , we obtain for (28) the expression

⟨L^β⟩(z, r lim ) / L 0 ^β (z) = [L * (z)/L 0 (z)]^β Γ(α + β + 1, L min (z, r lim )/L * (z)) / Γ(α + 1, L min (z, r lim )/L * (z)) , (C.3)

where the incomplete Gamma function Γ(α, x) = ∫_x^∞ dy y^(α−1) e^(−y) was introduced. Analogously, we arrive at

n V,red (z, r lim ) = φ * (z) Γ(α + 1, L min (z, r lim )/L * (z)) (C.4)

for the comoving volume density of red galaxies entering (27).

In addition to the luminosity functions from Faber et al. (2007), we also consider fitted Schechter parameters presented in Giallongo et al. (2005), as well as the sets of luminosity functions published by Wolf et al. (2003), Willmer et al. (2006), and Brown et al. (2007). We determine fit functions to the redshift dependence of both M * and φ * for the latter three works because we have to extrapolate beyond the range of redshift analysed therein. We use linear functions for M * and various functional forms with two to three fit parameters for φ * , but note that since the fits rely on only five to six data points, the extrapolation has considerable uncertainty. All five references give B band luminosity functions, but the magnitude system and the convention for h vary, as well as the redshift ranges covered and the definition of red galaxies. In Fig. C.1 the red galaxy fraction f r and the mean luminosity ⟨L⟩/L 0 for r lim = 25 are plotted as a function of redshift, making use of the different luminosity functions. We find fair agreement between the results based on Faber et al. (2007) and Brown et al.
(2007), while the mean luminosities derived from Willmer et al. (2006) already deviate considerably at high z although Faber et al. (2007) and Willmer et al. (2006) partly use the same data. The Wolf et al. (2003) luminosity functions produce significantly lower f r and higher ⟨L⟩ at low redshifts, which is caused by the very different value for the faint end slope, α = +0.52. We note that one of the three fields chosen by Wolf et al. (2003) contained two massive galaxy clusters, so that the large-scale structure in this field could strongly influence the luminosity function in particular of early-type galaxies. However, small red galaxy fractions can be compensated by higher luminosities in (19), so that even the Wolf et al. (2003) luminosity functions may yield intrinsic alignment signals of similar magnitude to the results of e.g. Faber et al. (2007).

Applying the formalism to luminosity functions from Giallongo et al. (2005), we obtain very high f r at low redshift, which is clearly inconsistent with the other observations. The red galaxy sample used for the fits of Giallongo et al. (2005) is very small and contains only galaxies with z > 0.4. While the resulting fit function captures the pronounced decrease in number density for high redshifts that Giallongo et al. (2005) observe, it can obviously not be used at z ≲ 0.4. In conclusion, we find that the sets of luminosity functions by Faber et al. (2007) who jointly analyse galaxy samples from four different surveys produce reasonable red galaxy fractions and luminosities although the uncertainty in the values of f r and ⟨L⟩ at any given redshift is still large.

[Table C.1 notes: In the upper section values for ⟨L⟩/L 0 are given; in the lower section n V,r is given in units of 10^−4 Mpc^−3 . This data is plotted in Fig. C.2.]

In Fig. C.2, the comoving volume density of red galaxies n V,r and the mean luminosity of red galaxies in terms of L 0 are plotted as a function of both r lim and redshift, using the default set of luminosity functions from Faber et al. (2007). For a fixed magnitude limit, ⟨L⟩ increases with redshift while n V,r decreases strongly at high redshifts. Both tendencies are more pronounced if the apparent magnitude limit is brighter. At low redshift, e.g. for z ≲ 0.3 at r lim = 25, a large number of faint blue galaxies are above the magnitude limit and cause f r to diminish for z approaching zero (see Fig. C.1) although n V,r continues to increase. This behaviour might change when explicitly taking into account the size cuts inherent to weak lensing surveys, but in any case, galaxies at these low redshifts constitute only a small fraction of the total survey volume and are expected to have a low luminosity and hence low intrinsic alignment signal on average, so they are unlikely to affect our results severely.

The data shown in Fig. C.2, with the corresponding numbers collected in Table C.1, can be used in combination with the intrinsic alignment model fits presented in Sect. 5.5 to predict the effect of intrinsic alignments on cosmic shear surveys. The mean luminosity as a function of redshift for a given magnitude limit r lim can be inserted into the luminosity term in (19), which constitutes a fair approximation as long as values of β close to unity (which includes our best-fit value β = 1.13) are probed. Together with an overall redshift distribution p tot (z) for the cosmic shear survey under consideration, n V,r can be read off and used with (27) to compute the fraction of red galaxies as a function of redshift.
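Equations (C.2) to (C.4) can be evaluated directly with standard special functions. The sketch below illustrates the red-galaxy terms at a single redshift; the Schechter parameter values have to be supplied from the adopted fits, β = 1.13 is the best-fit value quoted above, and the function and argument names are illustrative.

```python
import numpy as np
from scipy.special import gamma, gammaincc

ALPHA_RED = -0.5   # faint-end slope for red galaxies (Faber et al. 2007)

def upper_gamma(a, x):
    """Unnormalised upper incomplete gamma function Gamma(a, x) for a > 0."""
    return gammaincc(a, x) * gamma(a)

def red_galaxy_terms(M_min, M_star, phi_star, M0, beta=1.13, alpha=ALPHA_RED):
    """Evaluate <L^beta>/L0^beta, eq. (C.3), and n_V,red, eq. (C.4).

    M_min, M_star, M0 : absolute magnitudes at the redshift of interest
    phi_star          : Schechter normalisation [Mpc^-3]
    """
    x = 10.0 ** (-0.4 * (M_min - M_star))            # L_min / L_star
    L_star_over_L0 = 10.0 ** (-0.4 * (M_star - M0))  # L_star / L_0, via eq. (C.2)
    mean_Lbeta_over_L0beta = (L_star_over_L0 ** beta *
                              upper_gamma(alpha + beta + 1.0, x) /
                              upper_gamma(alpha + 1.0, x))
    n_v_red = phi_star * upper_gamma(alpha + 1.0, x)
    return mean_Lbeta_over_L0beta, n_v_red
```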
Again, we emphasise the limitations of this approach which relies on a substantial amount of extrapolation in luminosity, especially for fainter limiting magnitudes, and which is subject to the large intrinsic uncertainty in the different sets of luminosity functions.
Arterial Transit Time-corrected Renal Blood Flow Measurement with Pulsed Continuous Arterial Spin Labeling MR Imaging

Purpose: The importance of arterial transit time (ATT) correction for arterial spin labeling MRI has been well debated in neuroimaging, but it has not been well evaluated in renal imaging. The purpose of this study was to evaluate the feasibility of pulsed continuous arterial spin labeling (pcASL) MRI with multiple post-labeling delay (PLD) acquisition for measuring ATT-corrected renal blood flow (ATC-RBF).

Materials and Methods: A total of 14 volunteers were categorized into younger (n = 8; mean age, 27.0 years) and older groups (n = 6; 64.8 years). Images of pcASL were obtained at three different PLDs (0.5, 1.0, and 1.5 s), and ATC-RBF and ATT were calculated using a single-compartment model. To validate ATC-RBF, a comparative study of effective renal plasma flow (ERPF) measured by 99mTc-MAG3 scintigraphy was performed. ATC-RBF was corrected by kidney volume (ATC-cRBF) for comparison with ERPF.

Results: The younger group showed significantly higher ATC-RBF (157.68 ± 38.37 mL/min/100 g) and shorter ATT (961.33 ± 260.87 ms) than the older group (117.42 ± 24.03 mL/min/100 g and 1227.94 ± 226.51 ms, respectively; P < 0.05). A significant correlation was evident between ATC-cRBF and ERPF (P < 0.05, r = 0.47). With suboptimal single-PLD (1.5 s) settings, there was no significant correlation between ERPF and kidney volume-corrected RBF calculated from single-PLD data.

Conclusion: Calculation of ATT and ATC-RBF by pcASL with multiple PLDs was feasible in healthy volunteers, and differences in ATT and ATC-RBF were seen between the younger and older groups. Although ATT correction by multiple PLD acquisitions may not always be necessary for RBF quantification in healthy subjects, the effect of ATT should be taken into account in renal ASL-MRI, as debated in brain imaging.

Introduction

Renal perfusion is one of the most important biological parameters for evaluating several renal diseases, including acute and chronic renal failure, kidney transplantation, and renovascular hypertension, as well as for pre-operative renal assessment. Arterial spin labeling (ASL) was developed as an MRI technique that used magnetically-labeled water in blood as an endogenous tracer instead of an externally-injected tracer, and it enabled non-invasive quantitative tissue blood flow measurements without ionizing radiation exposure and administration of contrast materials. 5 This robust technique has now become routine in neuroimaging and is used for evaluating cerebrovascular disease, brain tumors, dementia, and other central nervous system (CNS) diseases. [6][7][8] More recently, it has been applied to the kidney for several purposes, and a few studies on the quantification of RBF for the evaluation of transplanted kidneys, renal artery stenosis, and renal tumors have been reported. [9][10][11] As has also been debated in neuroimaging, since arterial transit time (ATT) affects quantitative blood flow measurements in ASL-MRI, ATT correction is necessary for precise blood flow quantification. 5,12,13 However, to the best of our knowledge, performance of ATT correction has been quite limited for renal imaging, and only a few studies have been reported so far; Cutajar et al. (2014) used a flow-sensitive alternating inversion recovery (FAIR) ASL with multiple inflow time (TI) acquisitions and measured ATT-corrected RBF (ATC-RBF) for human kidneys.
14,15 In this study, methods to measure both ATT and ATC-RBF were developed using pulsed continuous arterial spin labeling (pcASL) MRI with multiple post-labeling delay (PLD) acquisitions. Compared to FAIR-ASL, pcASL was theoretically expected to produce higher signal-to-noise ratio (SNR) images due to the longer temporal duration of the labeled bolus and higher labeled magnetization deliveries. 5 The advantage of the pcASL technique may be that it allowed for accurate quantification, especially in lower RBF patients. For a proof of principle, this method was applied to two healthy subject groups, younger and older groups. Younger healthy subjects were generally expected to show faster ATT and higher RBF than older subjects. [16][17][18] Therefore, we hypothesized that the pcASL with the multiple PLD methods would show this difference between the younger and older groups. Furthermore, to validate the RBF quantification, ATC-RBF measured by pcASL MRI was compared with effective renal plasma flow (ERPF) measured by 99m Tc-mercaptoacetyltriglycine ( 99m Tc-MAG3) renography, which was widely used to assess renal function clinically. Subjects This study was approved by the Institutional Ethics Committee. Informed consent was obtained from all subjects. A total of 14 healthy male volunteers were enrolled and categorized into younger (n = 8: age range 22-39 years, mean = 27.0 years) and older groups (n = 6: age range 53-75 years, mean = 64.8 years). Younger and older groups were defined as 20-40 years old and 50-80 years old, respectively. All volunteers had no history of renal disease, and their serum creatinine levels were measured before imaging to calculate the estimated glomerular filtration rate (eGFR). The eGFRs of all volunteers were normal: 92.09 ± 9.17 (range: 81.5-110.3) mL/min/1.73 m 2 and 79.67 ± 11.45 (range: 62.3-95.8) mL/min/1.73 m 2 for the younger and older groups, respectively. Blood pressures were 120.9/66.4 ± 9.0/12.0 (range: 104/52-133/89) mmHg and 124.7/76.3 ± 18.0/13.6 (range: 93/64-144/100) mmHg for the younger and older groups, respectively. All volunteers were fasting during the entire protocol period. pcASL MR protocols MRI was performed using a 3.0-Tesla clinical scanner (Discovery MR750, GE Healthcare, Milwaukee, WI) with an 8-channel torso coil. The scout images were scanned with a gradient echo sequence in three planes through the center of each kidney. Coronal T 2 -weighted imaging covering the whole kidney was performed for anatomical and volume evaluation using a single shot fast spin echo (SSFSE) sequence with the following parameters: TR = 1123.2 ms, TE = 79.3 ms, image matrix 352 × 224, slice thickness: 5.0 mm, interval: 0 mm, flip angle: 90°, bandwidth: 83.33 kHz, and FOV = 38 × 38 cm. ASL images were then acquired for quantitative measurements of ATT and ATC-RBF by applying pcASL with optimized background suppression and the 2D spin-echo echo planar imaging (EPI) sequence. The precise parameters and settings for pcASL were described in the previous study. 19 The number of 180° pulses for background suppression was two, with each pulse applied for 1000 and 200 ms, respectively, before the beginning of EPI acquisition. The simulation confirmed that the first four slices were suppressed by <20%; using this condition, renal cysts were reasonably revealed as low non-perfusion areas (data not shown). 
For ASL imaging, multiple coronal slices covering the whole kidney were scanned with the following parameters: TR = 5500 ms, TE = 17.6 ms, image matrix: 96 × 128, slice thickness: 8 mm, interval: 0.5 mm, flip angle: 90°, bandwidth: 62.5 kHz, and FOV = 38 × 38 cm. Arterial labeling was performed with 2.0 s duration at the axial plane 10 cm superior to the center of the kidneys, and three different PLDs (0.5, 1.0, and 1.5 s) were set. A repetition time of 5.5 s was used to allow for recovery of the blood signal and for the subjects to breathe in during the quiet period between acquisitions, and a <17 s breath-hold was performed repeatedly at each TR for image acquisition. Nine averages of label and control were acquired for a total acquisition time of 3 min with each PLD setting. Measurement of the fully relaxed magnetization signal (reference images, M 0 ) was also obtained to quantify RBF. The tag and control images acquired with ASL imaging were subtracted in a pairwise manner, and an averaged image was calculated using the script process on the MRI console to obtain perfusion images (ΔM). ATT and RBF calculation with the single-compartment model All reference images (M 0 ) and perfusion images (ΔM) acquired with multiple PLDs were transferred to the standalone workstation (iMac, OS X; Apple Computer, Cupertino, CA), and cortical ROIs were placed on the slices showing the renal hilum (4th slice from the front) using the image analysis software (OsiriX, version 5.6, http://www. osirix-viewer.com/index.html). Since the renal cortex showed a very high signal with good spatial resolution on ASL images, the cortical ROIs were drawn over the renal cortex on ASL images, and those ROIs were copied and pasted to the corresponding reference images. Samples of cortical ROIs are shown in Fig. 1A. Regarding the ASL signal model, since the tissue structure of the kidney consists of vascular-rich components in the renal cortices, we considered that the single compartment which mainly simulated the micro vascular signal would be appropriate for the kidney perfusion analysis. The measured cortical ROI values both from perfusion and reference images were then applied to the single-compartment model as described by the following formulae with the assumption that, by the time of image acquisition, all the labeled spins had left the vessel and resided in the parenchyma: where f r is RBF (i.e., ATC-RBF) measured with ASL imaging; T 1tissue is the tissue relaxation time of water (1150 ms was used for renal cortex) 20 ; λ is the tissue blood partition coefficient of water; τ is labeling duration; and labeling efficiency (α) is assumed to be 0.75. 21 Representative pcASL images at different PLD time points and the % signal change curves of the compartment analysis of younger and older subjects are shown in Fig. 1. Signal intensities and peaks were generally stronger and faster in the younger volunteers than in the older volunteers. The mean ATC-RBF of the renal cortex of all subjects was 139.10 ± 37.93 mL/min/100 g. The younger group had significantly higher ATC-RBF (157.68 ± 38.37 mL/min/100 g) and shorter ATT (961.33 ± 260.87 ms) than the older group (117.42 ± 24.03 mL/min/100 g and 1227.94 ± 226.51 ms, respectively) (Fig. 2). A significant linear correlation was seen between ATC-cRBF and ERPF in all kidneys (r = 0.47, P < 0.05) (Fig. 3A). 
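The ATC-RBF and ATT values entering these comparisons come from fitting the cortical ΔM signal at the three PLDs with the single-compartment model described above. Since the exact formulae are not reproduced in this text, the sketch below uses a common Buxton-type single-compartment formulation with the constants stated in the Methods (T1 of tissue 1150 ms, T1 of blood 1600 ms, α = 0.75, labeling duration 2.0 s) and an assumed blood-tissue partition coefficient λ = 0.9 mL/g; it is an illustration of the fitting procedure, not the study's implementation.

```python
import numpy as np
from scipy.optimize import least_squares

T1_TISSUE = 1.15   # s, renal cortex (value quoted in the text)
T1_BLOOD  = 1.60   # s (value quoted in the text)
ALPHA     = 0.75   # labeling efficiency (value quoted in the text)
LAM       = 0.9    # mL/g, blood-tissue partition coefficient (assumption)
TAU       = 2.0    # s, labeling duration (value quoted in the text)

def delta_m_model(pld, f, att, m0):
    """Buxton-type single-compartment pcASL signal at post-labeling delay `pld` (s).

    f   : perfusion in mL/100 g/min
    att : arterial transit time delta_a in s
    m0  : equilibrium tissue magnetization (arbitrary units)
    """
    t = TAU + pld                       # time since the start of labeling
    f_si = f / 6000.0                   # mL blood / mL tissue / s, assuming density ~1 g/mL
    m0b = m0 / LAM                      # arterial blood equilibrium magnetization
    pre = 2.0 * m0b * f_si * ALPHA * T1_TISSUE * np.exp(-att / T1_BLOOD)
    dm = np.zeros_like(t)
    rising = (t >= att) & (t < att + TAU)       # labeled bolus still arriving
    decay  = t >= att + TAU                     # bolus fully delivered, decaying with T1
    dm[rising] = pre * (1.0 - np.exp(-(t[rising] - att) / T1_TISSUE))
    dm[decay]  = pre * np.exp(-(t[decay] - att - TAU) / T1_TISSUE) * \
                 (1.0 - np.exp(-TAU / T1_TISSUE))
    return dm

def fit_cortex(plds, dm_measured, m0):
    """Least-squares fit of (ATC-RBF, ATT) to the mean cortical Delta-M at each PLD."""
    resid = lambda p: delta_m_model(plds, p[0], p[1], m0) - dm_measured
    fit = least_squares(resid, x0=[150.0, 1.0], bounds=([0.0, 0.2], [600.0, 3.0]))
    return fit.x   # (ATC-RBF in mL/100 g/min, ATT in s)

plds = np.array([0.5, 1.0, 1.5])   # the three post-labeling delays used here
```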
When the left and right kidneys were analyzed separately, a stronger linear correlation was observed for the right kidney (r = 0.58, P < 0.05), while no significant correlation was seen for the left kidney (r = 0.38, P = 0.20). (Fig. 3B, C) To compare the multi-PLD and single-PLD methods, cRBF with single-PLD methods (i.e., no ATC correction) was calculated assuming 1.0 s ATT from the same data sets. As shown in Fig. 4, single PLD acquisitions with 0.5 and 1.0 s PLDs showed significant linear correlations with ERPFs that were comparable to those with multiple PLD acquisitions. However, when a relatively long PLD (1.5 s) was used for the single PLD acquisitions, there was no significant correlation between ERPF and cRBF measured by single PLD acquisition. Discussion This study demonstrated the feasibility of ATT and ATTcorrected RBF measurements using pcASL with multiple PLD acquisition in healthy subjects. ATT correction of ASL-MRI relaxation time (assumed to be 1600 ms), and δ a is the transit time the labeled spin took to travel from the tagging plane to the capillary bed (i.e., ATT). Using the above formulae, RBF and transit time were calculated under the condition of minimized sum of squares deviation from each data point and the model solution using the solver function of a spreadsheet program (Excel, Microsoft Corporation, Redmond, WA). In every case, it was confirmed that the optimized values were not extreme outliers by visual inspection on the graph of a simulated line and the acquired data points were automatically plotted on the same EXCEL sheet. Figure 1B is exactly the same graph as appeared on the sheet. When a fixed δ a value was needed, it was only necessary to set the cell for f r as the variable cell and run the solver tool. RBFs calculated from single PLD data sets were also obtained using the same single-compartment model assuming 1.0 s ATT. For comparison with ERPF, ATC-RBF and RBF were corrected by kidney volume (ATC-cRBF or cRBF), since ERPF was estimated on a per kidney basis. Kidney volumes (cm 3 ) were calculated using image analysis software (Osirix) as follows: the area of the kidney on each T 2 -weighted coronal image covering the entire kidney was measured, and then the kidney areas were summed and multiplied by the slice thickness. Volume corrections were made with the following formulae: ATC-cRBF or cRBF = [ATC-RBF or RBF (mL/min/100 g)/100] × kidney volume (cm 3 ). Renal scintigraphy For ATC-RBF validation, renal dynamic scintigraphy was performed 1.5 h prior to the MRI scan using a clinical dualhead gamma camera (E.CAM, Siemens Healthcare, Erlangen, Germany) with a low-energy, high-resolution (LEHR) collimator. After the intravenous injection of 300 MBq of 99m Tc-MAG3 (FUJIFILM RI Pharma, Tokyo, Japan) in a supine position, serial images of 1.0 s per frame were obtained for the first 64 s, followed by 50 frames at 30 s per frame with a 128 × 128 matrix. Then, ROIs were placed over the left and right kidney each, and ERPF was calculated by a count-based gamma camera method 22 using a commercially available nuclear medicine analysis system (Siemens ICON, Siemens Healthcare, Erlangen, Germany). Statistical analysis All statistical analyses were performed using Graphpad Instat 3 (GraphPad Software Inc., La Jolla, CA, USA). Differences in ATT and ATC-RBF between groups were assessed by the Mann-Whitney test. The correlation between ERPF and ATC-cRBF or cRBF was tested by linear regression; P < 0.05 was considered significant. 
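The statistical comparisons described in this section map onto standard routines; a minimal sketch is given below, with synthetic placeholder arrays standing in for the per-subject and per-kidney values (they are not the study data).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Placeholder ATT values (s) per subject in each group.
att_younger = rng.normal(0.96, 0.26, size=8)
att_older   = rng.normal(1.23, 0.23, size=6)

# Group comparison (Mann-Whitney test, two-sided), as stated in the text.
u_stat, p_group = stats.mannwhitneyu(att_younger, att_older, alternative='two-sided')

# Validation: linear regression of ERPF against volume-corrected ATC-cRBF per kidney.
atc_crbf = rng.normal(300.0, 60.0, size=26)                 # mL/min, placeholder
erpf     = 0.5 * atc_crbf + rng.normal(0.0, 30.0, size=26)  # mL/min, placeholder
slope, intercept, r_value, p_corr, stderr = stats.linregress(atc_crbf, erpf)
```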
Results All image acquisitions and post-processing were successful, except in one young, physically lean subject, due to extreme kidney image distortion. The technical success rate was 92%. Discussion This study demonstrated the feasibility of ATT and ATT-corrected RBF measurements using pcASL with multiple PLD acquisition in healthy subjects. ATT correction of ASL-MRI for human kidney imaging has not been well evaluated so far, although it has been well debated in neuroimaging. To the best of our knowledge, few studies performing ATT correction for human kidney imaging by the FAIR-ASL technique have been reported, 14,15 and ATT itself has not yet been measured for the human kidney. Regarding ATT in the brain, age-related ATT prolongation has been reported, with the younger group showing significantly shorter ATTs in cerebral gray matter than the elderly group. 16,17 Moreover, ATTs obtained with multiple PLD ASL-MRI change dramatically in patients with chronic occlusive cerebrovascular disease. 23 Such ATT differences are well known to affect CBF quantification, because the ASL signal model assumes that all tagged signals have reached the acquisition plane. 13 When the timing of ASL signal acquisition (i.e., PLD) is earlier or later than the ATT, the regional ASL signal diminishes, resulting in CBF underestimation with ASL-MRI. To compensate for this problem, ATT correction with multiple PLD acquisitions is regarded as an essential process for precise CBF quantification in the brain ASL-MRI model. 12 In this study, age-related prolongation of ATT also occurred in renal ASL-MRI. Moreover, the ATC-cRBF showed a significant correlation to ERPF, but the cRBF with suboptimal single PLD settings resulted in a poorer correlation to ERPF. In the renal cortex, afferent arterioles branching from interlobular arteries form the glomerulus and then drain directly to efferent arterioles; they then finally flow into the peritubular capillary network located mainly in the renal medulla. 24 Moreover, renal arteries branch directly from the aorta with high pressure, and they are not tortuous like the internal carotid and vertebral arteries. Therefore, inflows and outflows of labeled water in the renal cortex may be faster than in brain tissue, where labeled water distributes into interstitial tissues. In such circumstances, acquisition timing with a suboptimally long PLD, such as 1.5 s, may be too late in some cases, and it could cause underestimation of RBF. On the other hand, single PLD acquisitions with PLD = 0.5 and 1.0 s showed correlations to ERPF comparable to those of multiple PLD acquisitions, because these PLDs may be appropriate acquisition timings that represent the % signal change curves fitted by the multiple PLD data. Therefore, in the healthy populations tested in this study, ATT correction by multiple PLD acquisitions may not always be necessary for RBF quantification. However, the situation may be more complicated for clinical application, where the actual ATT and optimal PLD may vary for each patient, resulting in suboptimal quantification of RBF. In such situations, ATT correction by multiple PLD acquisitions could facilitate precise RBF quantification by ASL-MRI. Moreover, the present method enabled ATT measurement, which is difficult to accomplish by other imaging methods. The utility of ATT itself, as well as of CBF, for understanding the pathophysiologic status of cerebrovascular disease has been reported. 23,25 Likewise, renal ATT has the potential to provide additional information for the assessment of renovascular disease, where conventional MRI can at present only contribute to assessing morphological changes.
The correlation between ATC-cRBF and ERPF was significant, but more modest than expected. Such modest correlations of RBF measured by ASL-MRI against gold standards have been reported. For instance, Ritt et al. (2010) reported a similar modest correlation between para-aminohippuric acid (PAH) plasma clearance and FAIR-ASL in 24 metabolic syndrome patients, 26 while Wu et al. (2011) also reported a moderate correlation between pcASL-MRI and dynamic contrast-enhanced MRI in 19 healthy subjects. 27 One limitation of this study was that only healthy subjects were recruited, so that the range of renal function was relatively narrow. This may partly explain the modest correlation observed in this study. Another concern was that use of a tubular secretion tracer, such as MAG3 or PAH, may not always provide renal plasma flow, because clearance of such tracers is determined not only by renal plasma flow, but also by tubular secretory function. 4 Thus, ERPF does not represent true renal plasma flow in a subject with renal tubular dysfunction. Furthermore, the pharmacokinetics of MAG3 differ from those of PAH or radioiodine-labeled hippurate (OIH, an analogue of PAH), whose plasma clearances are regarded as a good standard for renal plasma flow. MAG3 shows higher protein binding, slower blood clearance, higher extraction efficiency by tubular cells, and larger excretion into the bile than OIH and PAH. 28 However, all subjects enrolled in this study were healthy, without any history of renal disease. In such populations, plasma clearance of MAG3 shows an excellent correlation with that of OIH; thus, MAG3 is now widely used to evaluate renal function in clinics as an alternative to OIH. 28,29 In addition, since only linear correlations between ERPF and ATC-cRBF, not absolute values of renal plasma flow, were evaluated in this study, this may not have been a critical problem. It is more likely that technical limitations of ASL-MRI explain such a modest correlation between ATC-cRBF and ERPF, including the susceptibility effect around the kidneys, effects of pulsation and susceptibility at the labeling plane, and misregistration due to respiratory motion. As shown in the results, the left kidney showed a poorer correlation between ATC-cRBF and ERPF than the right kidney. The reason for this difference is still unknown, but one possible explanation is that acquisition of the ASL signal from the left kidney may be more hampered by susceptibility effects compared to the right kidney, because the left kidney is generally located closer to air in the stomach and lungs, which causes an inhomogeneous magnetic field. In this study, in particular, the 2D-EPI readout sequence was used in ASL-MRI, which makes the most efficient use of the MR signal available per unit time, but is more sensitive to susceptibility effects. Other readout sequences, such as fast spin echo (FSE) or balanced steady-state free precession (SSFP) sequences, may be more suitable under such circumstances, 30 because they are less sensitive to susceptibility effects. However, their lower SNR in ASL-MRI will be a trade-off compared to the EPI readout sequence. The second limitation of renal ASL was labeling efficiency at the labeling plane. For renal pcASL-MRI, spin labeling takes place at the aorta around the diaphragm, where air in the lungs causes strong susceptibility effects; thus, labeling efficiency may vary between individuals.
Moreover, unstable ASL tagging due to cardiac pulsation and the effects of flow dispersion have been reported in neuroimaging, 31,32 which may be more problematic in stronger pulsatile blood flows within the aorta. In fact, the peak flow velocity in the aorta may outrange the supposed flow range for CNS ASL imaging from the result of the simulated efficiency of pcASL. 19 Such effects may have affected RBF quantification in this study. Another big issue for renal ASL-MRI was how to deal with respiratory movements during acquisition. In this study, voluntary synchronized breathing, with multiple sessions of ~17 s breath-holds, was used, and the data were summed. Although no problem was seen, it may potentially cause blurring and misregistration, which have some effect on RBF quantification. For other approaches, Robson et al. (2009) reported that retrospective image sorting improved image quality with free breath acquisition, 21 while Tan et al. (2014) reported the feasibility of respiratory navigator-gated acquisition. 33 Such "subject-independent" techniques may be needed for clinical applications in the future, since many patients have difficulties with appropriate respiratory control. In addition, this may be another advantage of renal ASL. In this study, only three PLD time points were obtained to remain within an acceptable scanning time and decrease subjects' physical burden. However, more PLD time points would be desirable for more precise ATT and RBF measurements. Recently, brain ATT-corrected pcASL-MRI with low-resolution multiple PLD acquisitions has been reported. 34 This method enabled more PLD-time points to be obtained without elongating scanning time, leading to more precise ATT and ATT-corrected cerebral blood perfusion measurements. Combined with the development of free-breathing acquisition, such a technique could also be applied to renal ASL-MRI in the future. Conclusion Calculations of ATT and ATC-RBF by pcASL with multiple PLD were feasible in healthy volunteers. Even in healthy subjects, differences in ATT, as well as ATC-RBF, were seen between younger and older groups. Although ATT correction by multiple PLD acquisitions may not always be necessary for RBF quantification in the healthy subjects, the effect of ATT should be taken into account in renal ASL-MRI as debated in brain imaging. However, the significant but modest correlation between ATC-cRBF and ERPF observed in this study suggested that further technical development may be needed for more precise RBF quantification by pcASL-MRI.
2018-04-03T00:38:39.023Z
2016-05-09T00:00:00.000
{ "year": 2016, "sha1": "b7578369d088a1e591a111818919a2ac712e1ad6", "oa_license": "CCBY", "oa_url": "https://www.jstage.jst.go.jp/article/mrms/16/1/16_mp.2015-0117/_pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "b7578369d088a1e591a111818919a2ac712e1ad6", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
220713377
pes2o/s2orc
v3-fos-license
ORB-SLAM3: An Accurate Open-Source Library for Visual, Visual-Inertial and Multi-Map SLAM This paper presents ORB-SLAM3, the first system able to perform visual, visual-inertial and multi-map SLAM with monocular, stereo and RGB-D cameras, using pin-hole and fisheye lens models. The first main novelty is a feature-based tightly-integrated visual-inertial SLAM system that fully relies on Maximum-a-Posteriori (MAP) estimation, even during the IMU initialization phase. The result is a system that operates robustly in real-time, in small and large, indoor and outdoor environments, and is 2 to 5 times more accurate than previous approaches. The second main novelty is a multiple map system that relies on a new place recognition method with improved recall. Thanks to it, ORB-SLAM3 is able to survive to long periods of poor visual information: when it gets lost, it starts a new map that will be seamlessly merged with previous maps when revisiting mapped areas. Compared with visual odometry systems that only use information from the last few seconds, ORB-SLAM3 is the first system able to reuse in all the algorithm stages all previous information. This allows to include in bundle adjustment co-visible keyframes, that provide high parallax observations boosting accuracy, even if they are widely separated in time or if they come from a previous mapping session. Our experiments show that, in all sensor configurations, ORB-SLAM3 is as robust as the best systems available in the literature, and significantly more accurate. Notably, our stereo-inertial SLAM achieves an average accuracy of 3.6 cm on the EuRoC drone and 9 mm under quick hand-held motions in the room of TUM-VI dataset, a setting representative of AR/VR scenarios. For the benefit of the community we make public the source code. I. INTRODUCTION Intense research on Visual Simultaneous Localization and Mapping systems (SLAM) and Visual Odometry (VO), using cameras alone or in combination with inertial sensors, has produced during the last two decades excellent systems, with increasing accuracy and robustness. Modern systems rely on Maximum a Posteriori (MAP) estimation, which in the case of visual sensors corresponds to Bundle Adjustment (BA), either geometric BA that minimizes feature reprojection error, in feature-based methods, or photometric BA that minimizes the photometric error of a set of selected pixels, in direct methods. With the recent emergence of VO systems that integrate loop-closing techniques, the frontier between VO and SLAM is more diffuse. The goal of Visual SLAM is to use the sensors on-board a mobile agent to build a map of the environment and compute in real-time the pose of the agent in that map. In contrast, VO systems put their focus on computing the agent's ego-motion, not on building a map. The big advantage of a SLAM map is that it allows matching and using in BA previous observations performing three types of data association (extending the terminology used in [1]): • Short-term data association, matching map elements obtained during the last few seconds. This is the only data association type used by most VO systems, that forget environment elements once they get out of view, resulting in continuous estimation drift even when the system moves in the same area. • Mid-term data association, matching map elements that are close to the camera whose accumulated drift is still small. 
These can be matched and used in BA in the same way than short-term observations and allow to reach zero drift when the systems moves in mapped areas. They are the key of the better accuracy obtained by our system compared against VO systems with loop detection. • Long-term data association, matching observations with elements in previously visited areas using a place recognition technique, regardless of the accumulated drift (loop detection) or even if the tracking was lost (relocation). Long term matchings allow to reset the drift and to correct the loop using pose-graph (PG) optimization, or more accurately, using BA. This is the key of SLAM accuracy in medium and large loopy environments. In this work we build on ORB-SLAM [2], [3] and ORB-SLAM Visual-Inertial [4], the first visual and visual-inertial systems able to take full profit of short-term, mid-term and long-term data association, reaching zero drift in mapped areas. Here we go one step further providing multi-map data association, which allows us to match and use in BA map elements coming from previous mapping sessions, achieving the true goal of a SLAM system: building a map than can be used later to provide accurate localization. This is essentially a system paper, whose most important contribution is the ORB-SLAM3 library itself [5], the most complete and accurate visual, visual-inertial and multi-map SLAM system to date (see table I). The main novelties of ORB-SLAM3 are: • A monocular and stereo visual-inertial SLAM system that fully relies on Maximum-a-Posteriori (MAP) estimation, even during the IMU (Inertial Measurement Unit) initialization phase. The initialization method proposed was previously presented in [6]. Here we add its integration with ORB-SLAM visual-inertial [4], the extension to stereo-inertial SLAM, and a thorough evaluation in public datasets. Our results show that the monocular and stereo visual-inertial systems are extremely robust and significantly more accurate than other visual-inertial approaches, even in sequences without loops. • High-recall place recognition. Many recent visual SLAM and VO systems [2], [7], [8] solve place recognition using the DBoW2 bag of words library [9]. DBoW2 requires temporal consistency, matching three consecutive keyframes to the same area, before checking geometric consistency, boosting precision at the expense of recall. As a result, the system is too slow at closing loops and reusing previous maps. We propose a novel place recognition algorithm, in which candidate keyframes are first checked for geometrical consistency, and then for local consistency with three covisible keyframes, that in most occasions are already in the map. This strategy increases recall and densifies data association improving map accuracy, at the expense of a slightly higher computational cost. • ORB-SLAM Atlas, the first complete multi-map SLAM system able to handle visual and visual-inertial systems, in monocular and stereo configurations. The Atlas can represent a set of disconnected maps, and apply to them all the mapping operations smoothly: place recognition, camera relocalization, loop closure and accurate seamless map merging. This allows to automatically use and combine maps built at different times, performing incremental multi-session SLAM. A preliminary version of ORB-SLAM Atlas for visual sensors was presented in [10]. Here we add the new place recognition system, the visualinertial multi-map system and its evaluation on public datasets. 
• An abstract camera representation making the SLAM code agnostic of the camera model used, and allowing to add new models by providing their projection, unprojection and Jacobian functions. We provide the implementations of pinhole [11] and fisheye [12] models. All these novelties, together with a few code improvements make ORB-SLAM3 the new reference visual and visualinertial open-source SLAM library, being as robust as the best systems available in the literature, and significantly more accurate, as shown by our experimental results in section VII. We also provide comparisons between monocular, stereo, monocular-inertial and stereo-inertial SLAM results that can be of interest for practitioners. Table I presents a summary of the most representative visual and visual-inertial systems, showing the main techniques used for estimation and data association. The qualitative accuracy and robustness ratings included in the table are based, for modern systems, on the comparisons reported in section VII, and for classical systems, on previous comparisons in the literature [2], [52]. A. Visual SLAM Monocular SLAM was first solved in MonoSLAM [13], [14], [53] using an Extended Kalman Filter (EKF) and Shi-Tomasi points that were tracked in subsequent images doing a guided search by correlation. Mid-term data association was significantly improved using techniques that guarantee that the feature matches used are consistent, achieving hand-held visual SLAM [54], [55]. In contrast, keyframe-based approaches estimate the map using only a few selected frames, discarding the information coming from intermediate frames. This allows to perform the more costly, but more accurate, BA optimization at keyframe rate. The most representative system was PTAM [16], that split camera tracking and mapping in two parallel threads. Keyframe-based techniques are more accurate than filtering for the same computational cost [56], becoming the gold standard in visual SLAM and VO. Large scale monocular SLAM was achieved in [57] using sliding-window BA, and in [58] using a double-window optimization and a covisibility graph. Building on these ideas, ORB-SLAM [2], [3] uses ORB features, whose descriptor provides short-term and mid-term data association, builds a covisibility graph to limit the complexity of tracking and mapping, and performs loop-closing and relocalization using the bag-of-words library DBoW2 [9], achieving long-term data association. To date is the only visual SLAM system integrating the three types of data association, which we believe is the key of its excellent accuracy. In this work we improve its robustness in pure visual SLAM with the new Atlas system that starts a new map when tracking is lost, and its accuracy in loopy scenarios with the new high-recall place recognition method. Direct methods do not extract features, but use directly the pixel intensities in the images, and estimate motion and structure by minimizing a photometric error. LSD-SLAM [20] was able to build large scale semi-dense maps using high gradient pixels. However, map estimation was reduced to pose graph, achieving lower accuracy than PTAM and ORB-SLAM [2]. The hybrid system SVO [23], [24] extracts FAST features, uses a direct method to track features, and any pixel with nonzero intensity gradient, from frame to frame, and optimizes camera trajectory and 3D structure using reprojection error. SVO is extremely efficient, but being a pure VO method, it only performs short-term data association, which limits its accuracy. 
Direct Sparse Odometry DSO [27] is able to compute accurate camera poses in situations where point detectors perform poorly, enhancing robustness in low textured areas or against blurred images. It introduces local photometric BA that simultaneously optimizes a window of 7 recent keyframes and the inverse depth of the points. Extensions of this work include stereo [29], loop-closing using features and DBoW2 [59] [60], and visual-inertial odometry [46]. Direct Sparse Mapping DSM [31] introduces the idea of map reusing in direct methods, showing the importance of mid-term data association. In all cases, the lack of integration of short, mid, and long-term data association results in lower accuracy than our proposal (see section VII). B. Visual-Inertial SLAM The combination of visual and inertial sensors provide robustness to poor texture, motion blur and occlusions, and in the case of monocular systems, make scale observable. Research in tightly coupled approaches can be traced back to MSCKF [33] where the EKF quadratic cost in the number of features is avoided by feature marginalization. The initial system was perfected in [34] and extended to stereo in [35], [36]. The first tightly coupled visual odometry system based on keyframes and bundle adjustment was OKVIS [38], [39] that is also able to use monocular and stereo vision. While these systems rely on features, ROVIO [41], [42] feeds an EFK with photometric error using direct data association. ORB-SLAM-VI [4] presented for the first time a visualinertial SLAM system able to reuse a map with short-term, mid-term and long-term data associations, using them in an accurate local visual-inertial BA based on IMU preintegration [61], [62]. However, its IMU initialization technique was too slow, taking 15 seconds, which harmed robustness and accuracy. Faster initialization techniques were proposed in [63], [64], based on a closed-form solution to jointly retrieve scale, gravity, accelerometer bias and initial velocity, as well as visual features depth. Crucially, they ignore IMU noise properties, and minimize the 3D error of points in space, and not their reprojection errors, that is the gold standard in feature-based computer vision. Our previous work [65] shows that this results in large unpredictable errors. VINS-Mono [7] is a very accurate and robust monocularinertial odometry system, with loop-closing using DBoW2 and 4 DoF pose-graph optimization, and map-merging. Feature tracking is performed with Lucas-Kanade tracker, being slightly more robust than descriptor matching. In VINS-Fusion [44] it has been extended to stereo and stereo-inertial. Kimera [8] is an novel outstanding metric-semantic mapping system, but its metric part consists in stereo-inertial odometry plus loop closing with DBoW2 and pose-graph optimization, achieving similar accuracy to VINS-Fusion. The recent BASALT [47] is a stereo visual-inertial odometry system that extracts nonlinear factors from visual-inertial odometry to use them in BA, and closes loops matching ORB features, achieving very good to excellent accuracy. Recently, VI-DSO [66] extends DSO to visual-inertial odometry. They propose a bundle adjustment which combines inertial observations with the photometric error in selected high gradient pixels, what renders very good accuracy. As the information in high gradient pixels is successfully exploited, the robustness in scene regions with poor texture is also boosted. 
Their initialization method relies on visual-inertial BA and takes 20-30 seconds to converge within 1% scale error. In this work we build on ORB-SLAM-VI and extend it to stereo-inertial SLAM. We propose a novel fast initialization method based on Maximum-a-Posteriori (MAP) estimation that properly takes into account visual and inertial sensor uncertainties, and estimates the true scale with 5% error in 2 seconds, converging to 1% scale error in 15 seconds. All other systems discussed above are visual-inertial odometry methods, some of them extended with loop closing, and lack the capability of using mid-term data associations. We believe this, together with our fast and precise initialization, is the key of the better accuracy consistently obtained by our system, even in sequences without loops. C. Multi-Map SLAM The idea of adding robustness to track losses during exploration by means of map creation and fusion was first proposed in [67] within a filtering approach. One of the first keyframebased multi-map systems was [68], but the map initialization was manual, and the system was not able to merge or relate the different sub-maps. Multi-map capability has been researched as a component of collaborative mapping systems, with several mapping agents and a central server that only receives information [69] or with bidirectional information flow as in C2TAM [70]. MOARSLAM [71] proposes a robust stateless clientserver architecture for collaborative multiple-device SLAM, but the main focus is the software architecture, not reporting accuracy results. More recently, CCM-SLAM [72], [73] proposes a distributed multi-map system for multiple drones with bidirectional information flow, built on top of ORB-SLAM. Their focus is on overcoming the challenges of limited bandwidth and distributed processing, while ours is on accuracy and robustness, achieving significantly better results on the Eu-RoC dataset. ORB-SLAMM [74] also proposes a multi-map extension of ORB-SLAM2, but keeps sub-maps as separated entities, while we perform seamless map merging, building a more accurate global map. VINS-Mono [7] is a visual odometry system with loop closing and multi-map capabilities that rely on the place recognition library DBoW2 [9]. Our experiments show that ORB-SLAM3 is 2.6 times more accurate than VINS-Mono in monocular-inertial single-session operation on the EuRoc dataset, thanks to our ability to use mid-term data association. Our Atlas system also builds on DBoW2, but proposes a novel higher-recall place recognition technique, and performs more detailed and accurate map merging using local BA, increasing the advantage to 3.2 times better accuracy than VINS-Mono in multi-session operation on EuRoC. III. SYSTEM OVERVIEW ORB-SLAM3 is built on ORB-SLAM2 [3] and ORB-SLAM-VI [4]. It is a full multi-map and multi-session system able to work in pure visual or visual-inertial modes with monocular, stereo or RGB-D sensors, using pin-hole and fisheye camera models. Figure 1 shows the main system components, that are parallel to those of ORB-SLAM2 with some significant novelties, that are summarized next: • Atlas is a multi-map representation composed of a set of disconnected maps. There is an active map where the tracking thread localizes the incoming frames, and is continuously optimized and grown with new keyframes by the local mapping thread. We refer to the other maps in the Atlas as the non-active maps. 
The system builds a unique DBoW2 database of keyframes that is used for relocalization, loop closing and map merging. • Tracking thread processes sensor information and computes the pose of the current frame with respect to the active map in real-time, minimizing the reprojection error of the matched map features. It also decides whether the current frame becomes a keyframe. In visual-inertial mode, the body velocity and IMU biases are estimated by including the inertial residuals in the optimization. When tracking is lost, the tracking thread tries to relocate the current frame in all the Atlas' maps. If relocated, tracking is resumed, switching the active map if needed. Otherwise, after a certain time, the active map is stored as non-active, and a new active map is initialized from scratch. • Local mapping thread adds keyframes and points to the active map, removes the redundant ones, and refines the map using visual or visual-inertial bundle adjustment, operating in a local window of keyframes close to the current frame. Additionally, in the inertial case, the IMU parameters are initialized and refined by the mapping thread using our novel MAP-estimation technique. • Loop and map merging thread detects common regions between the active map and the whole Atlas at keyframe rate. If the common area belongs to the active map, it performs loop correction; if it belongs to a different map, both maps are seamlessly merged into a single one, that becomes the active map. After a loop correction, a full BA is launched in an independent thread to further refine the map without affecting real-time performance. IV. CAMERA MODEL ORB-SLAM assumed in all system components a pin-hole camera model. Our goal is to abstract the camera model from the whole SLAM pipeline by extracting all properties and functions related to the camera model (projection and unprojection functions, Jacobian, etc) to separate modules. This allows our system to use any camera model by providing the corresponding camera module. In ORB-SLAM3 library, apart from the pinhole model, we provide the Kannala-Brandt [12] fisheye model. However, camera model abstraction raises some difficulties that need to be addressed, and are discussed next. A. Relocalization A robust SLAM system needs the capability of relocating the camera when tracking fails. ORB-SLAM solves the relocalization problem by setting a Perspective-n-Points solver based on the ePnP algorithm [75], which assumes a calibrated pinhole camera along all its formulation. To follow up with our approach, we need a PnP algorithm that works independently of the camera model used. For that reason, we have adopted Maximum Likelihood Perspective-n-Point algorithm (MLPnP) [76] that is completely decoupled from the camera model as it uses projective rays as input. The camera model just needs to provide an unprojection function passing from pixels to projection rays, to be able to use relocalization. B. Non-rectified Stereo SLAM Most stereo SLAM systems assume that stereo frames are rectified, i.e. both images are transformed to pin-hole projections using the same focal length, with image planes co-planar, and are aligned with horizontal epipolar lines, such that a feature in one image can be easily matched by looking at the same row in the other image. However the assumption of rectified stereo images is very restrictive and, in many applications, is not suitable nor feasible. 
For example, rectifying a divergent stereo pair, or a stereo fisheye camera, would require severe image cropping, losing the advantages of a large field of view: faster mapping of the environment and better robustness to occlusions. For that reason, our system does not rely on image rectification, considering the stereo rig as two monocular cameras having: 1) a constant relative SE(3) transformation between them, and 2) optionally, a common image region that observes the same portion of the scene. These constraints allow us to effectively estimate the scale of the map by introducing that information when triangulating new landmarks and in the Bundle Adjustment optimization. Following up with this idea, our SLAM pipeline estimates a 6 DoF rigid body pose, whose reference system can be located in one of the cameras or in the IMU sensor, and represents the cameras with respect to the rigid body pose. If both cameras have an overlapping area in which we have stereo observations, we can triangulate true-scale landmarks the first time they are seen. The rest of both images still contains a lot of relevant information for the SLAM pipeline and is used as monocular information. Features first seen in these areas are triangulated from multiple views, as in the monocular case. V. VISUAL-INERTIAL SLAM ORB-SLAM-VI [4] was the first true visual-inertial SLAM system capable of map reuse. However, it was limited to pinhole monocular cameras, and its initialization was too slow, failing in some challenging scenarios. In this work, we build on ORB-SLAM-VI, providing a fast and accurate IMU initialization technique, and an open-source SLAM library capable of monocular-inertial and stereo-inertial SLAM, with pinhole and fisheye cameras. A. Fundamentals While in pure visual SLAM the estimated state only includes the current camera pose, in visual-inertial SLAM additional variables need to be computed. These are the body pose T_i = [R_i, p_i] ∈ SE(3) and velocity v_i in the world frame, and the gyroscope and accelerometer biases, b_i^g and b_i^a, which are assumed to evolve according to a Brownian motion. This leads to the state vector S_i = {T_i, v_i, b_i^g, b_i^a}. For visual-inertial SLAM, we preintegrate IMU measurements between consecutive visual frames, i and i+1, following the theory developed in [61], and formulated on manifolds in [62]. We obtain preintegrated rotation, velocity and position measurements, denoted as ΔR_{i,i+1}, Δv_{i,i+1} and Δp_{i,i+1}, as well as an information matrix Σ_{I_{i,i+1}} for the whole measurement vector. Given these preintegrated terms and states S_i and S_{i+1}, we adopt the definition of the inertial residual r_{I_{i,i+1}} from [62]. Together with the inertial residuals, we also use reprojection errors r_{ij} between frame i and 3D point j at position x_j. Here Π : R^3 → R^n is the projection function for the corresponding camera model, u_{ij} is the observation of point j at image i, having an information matrix Σ_{ij}, T_CB ∈ SE(3) stands for the rigid transformation from body-IMU to camera (left or right), known from calibration, and ⊕ is the transformation operation of the SE(3) group over R^3 elements. Combining inertial and visual residual terms, visual-inertial SLAM can be posed as a keyframe-based minimization problem [39]. Given a set of k + 1 keyframes and its state S̄_k = {S_0, ..., S_k}, and a set of l 3D points and its state X = {x_0, ..., x_{l−1}}, the visual-inertial optimization problem minimizes the sum of inertial and reprojection residual terms over all keyframes and points, where K^j denotes the set of keyframes observing 3D point j; this cost is written out below. This optimization may be outlined as the factor-graph shown in figure 2a.
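The equations referenced in this passage did not survive extraction; the following is a hedged reconstruction of the state, residuals and cost, following the on-manifold preintegration formulation cited in [61], [62] (notation may differ slightly from the paper's own equations):

```latex
% Hedged reconstruction of the elided state, residual and MAP-cost equations.
\begin{align*}
\mathcal{S}_i &\doteq \{T_i,\; v_i,\; b^g_i,\; b^a_i\}, \qquad T_i = [R_i, p_i] \in SE(3) \\[4pt]
r_{\mathcal{I}_{i,i+1}} &=
\begin{bmatrix}
\operatorname{Log}\!\big(\Delta R_{i,i+1}^{\top}\, R_i^{\top} R_{i+1}\big) \\
R_i^{\top}\big(v_{i+1} - v_i - g\,\Delta t_{i,i+1}\big) - \Delta v_{i,i+1} \\
R_i^{\top}\big(p_{i+1} - p_i - v_i\,\Delta t_{i,i+1} - \tfrac12\, g\,\Delta t_{i,i+1}^2\big) - \Delta p_{i,i+1}
\end{bmatrix} \\[4pt]
r_{ij} &= u_{ij} - \Pi\!\big(T_{CB}\, T_i^{-1} \oplus x_j\big) \\[4pt]
\min_{\bar{\mathcal{S}}_k,\,\mathcal{X}} \;&
\sum_{i=1}^{k} \big\| r_{\mathcal{I}_{i-1,i}} \big\|^2_{\Sigma_{\mathcal{I}_{i-1,i}}^{-1}}
+ \sum_{j=0}^{l-1} \sum_{i \in \mathcal{K}^j}
\rho_{\mathrm{Hub}}\!\Big( \big\| r_{ij} \big\|_{\Sigma_{ij}^{-1}} \Big)
\end{align*}
```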
Note that for reprojection error we use a robust Huber kernel ρ Hub to reduce the influence of spurious matchings, while for inertial residuals it is not needed, since miss-associations do not exist. This optimization needs to be adapted for efficiency during tracking and mapping, but more importantly, it requires good initial seeds to converge to accurate solutions. B. IMU Initialization The goal of this step is to obtain good initial values for the inertial variables: body velocities, gravity direction, and IMU biases. Some systems like VI-DSO [46] try to solve from scratch visual-inertial BA, sidestepping a specific initialization process, obtaining slow convergence for inertial parameters (up to 30 seconds). In this work we propose a fast and accurate initialization method based on three key insights: • Pure monocular SLAM can provide very accurate initial maps [2], whose main problem is that scale is unknown. Solving first the vision-only problem will enhance IMU initialization. • As shown in [77], scale converges much faster when it is explicitly represented as an optimization variable, instead of using the implicit representation of BA. • Ignoring sensor uncertainties during IMU initialization produces large unpredictable errors [65]. So, taking properly into account sensor uncertainties, we state the IMU initialization as a MAP estimation problem, split in three steps: 1) Vision-only MAP Estimation: We initialize pure monocular SLAM [2] and run it during 2 seconds, inserting keyframes at 4Hz. After this period, we have an up-to-scale map composed of k = 10 camera poses and hundreds of points, that is optimized using Visual-Only BA (figure 2b). These poses are transformed to body reference, obtaining the trajectoryT 0:k = [R,p] 0:k where the bar denotes up-to-scale variables. 2) Inertial-only MAP Estimation: In this step we aim to obtain the optimal estimation of the inertial variables, in the sense of MAP estimation, using onlyT 0:k and inertial measurements between these keyframes. These inertial variables may be stacked in the inertial-only state vector: where s ∈ R + is the scale factor of the visiononly solution; R wg ∈ SO (3) is the gravity direction, represented with two angles, such that gravity vector in world reference is g = R wg g I , with g I = (0, 0, G) T being G the magnitude of gravity; b = (b a , b g ) ∈ R 6 are the accelerometer and gyroscope biases assumed to be constant during initialization; andv 0:k ∈ R 3 is the up-to-scale body velocities from first to last keyframe, initially estimated fromT 0:k . At this point, we are only considering the set of inertial measurements I 0:k . = {I 0,1 . . . I k−1,k }. Thus, we can state a MAP estimation problem, where the posterior distribution to be maximized is: where p(I 0:k |Y k ) stands for likelihood and p(Y k ) for prior. Considering independence of measurements, the inertial-only MAP estimation problem can be written as: Taking negative logarithm and assuming Gaussian error for IMU preintegration and prior distribution, this finally results in the optimization problem: This optimization, represented in figure 2c, differs from equation 4 in not including visual residuals, but a prior residual r p that we use to impose that IMU biases should be close to zero, with a covariance given by the IMU characteristics. As we are optimizing in a manifold we need to define a retraction [62] to update the gravity direction estimation during the optimization: being Exp(.) the exponential map from so(3) to SO (3). 
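The inertial-only initialization problem sketched in the text (whose equations were likewise dropped in extraction) can be reconstructed, with the caveat that the notation here is ours, roughly as:

```latex
% Hedged reconstruction of the inertial-only MAP initialization and the
% manifold updates discussed in the text.
\begin{align*}
\mathcal{Y}_k &= \{\, s,\; R_{wg},\; b,\; \bar{v}_{0:k} \,\} \\[4pt]
p(\mathcal{Y}_k \mid \mathcal{I}_{0:k}) &\propto p(\mathcal{I}_{0:k} \mid \mathcal{Y}_k)\, p(\mathcal{Y}_k)
= p(\mathcal{Y}_k) \prod_{i=1}^{k} p\big(\mathcal{I}_{i-1,i} \mid s, R_{wg}, b, \bar{v}_{i-1}, \bar{v}_i\big) \\[4pt]
\mathcal{Y}_k^{*} &= \arg\min_{\mathcal{Y}_k}
\Big( \big\| r_p \big\|^2_{\Sigma_p^{-1}}
+ \sum_{i=1}^{k} \big\| r_{\mathcal{I}_{i-1,i}} \big\|^2_{\Sigma_{\mathcal{I}_{i-1,i}}^{-1}} \Big) \\[4pt]
R_{wg}^{\mathrm{new}} &= R_{wg}\, \mathrm{Exp}(\delta \alpha_g),
\qquad \delta \alpha_g = (\delta \alpha_x,\; \delta \alpha_y,\; 0)^{\top} \\[2pt]
s^{\mathrm{new}} &= s\, \exp(\delta s)
\end{align*}
```

The last line is the multiplicative scale update referred to immediately below, which keeps the scale factor strictly positive during optimization.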
To guarantee that scale factor remains positive during optimization we define its update as: Once the inertial-only optimization is finished, the frame poses and velocities and the 3D map points are scaled with the estimated scale and rotated to align the z axis with the estimated gravity direction. Biases are updated and IMU preintegration is repeated, aiming to reduce future linearization errors. 3) Visual-Inertial MAP Estimation: Once we have a good estimation for inertial and visual parameters, we can perform a joint visual-inertial optimization for further refining the solution. This optimization may be represented as figure 2a but having common biases for all keyframes and including the same prior information than in the inertial-only step. Our exhaustive initialization experiments on the EuRoC dataset [6] show that this initialization is very efficient, achieving 5% scale error with 2 seconds trajectories. To improve the initial estimation, visual-inertial BA is performed 5 and 15 seconds after initialization, converging to 1% scale error as shown in section VII. After these BAs, we say that the map is mature, meaning that scale, IMU parameters and gravity directions are already accurately estimated. Our initialization is much more accurate than joint initialization methods that solve a set o algebraic equations [63]- [65], and much faster than the initialization used in ORB-SLAM-VI [4] that needed 15 seconds to get the first scale estimation, or that used in VI-DSO [66], that starts with a huge scale error and requires 20-30 seconds to converge to 1% error. We have easily extended our monocular-inertial initialization to stereo-inertial by fixing the scale factor to one and taking it out from the inertial-only optimization variables, enhancing its convergence. C. Tracking and Mapping For tracking and mapping we adopt the schemes proposed in [4]. Tracking solves a simplified visual-inertial optimization where only the states of the last two frames are optimized, while map points remain fixed. For mapping, trying to solve the whole optimization from equation 4 would be intractable for large maps. We use as optimizable variables a sliding window of keyframes and their points, including covisible keyframes but keeping them fixed. In some specific cases, when slow motion does not provide good observability of the inertial parameters, initialization may fail to converge to accurate solutions in just 15 seconds. To get robustness against this situation, we propose a novel scale refinement technique, based on a modified inertial-only optimization, where all inserted keyframes are included but scale and gravity direction are the only parameters to be estimated (figure 2d). Notice that in that case, the assumption of constant biases would not be correct. Instead, we use the values estimated for each frame, and we fix them. This optimization, which is very computationally efficient, is performed in the Local Mapping thread every ten seconds, until the map has more than 100 keyframes or more than 75 seconds have passed since initialization. D. Robustness to tracking loss In pure visual SLAM or VO systems, temporal camera occlusion and fast motions result in losing track of visual elements, getting the system lost. ORB-SLAM pioneered the use of fast relocation techniques based on bag-of-words place recognition, but they proved insufficient to solve difficult sequences in the EuRoC dataset [3]. 
Our visual-inertial system enters into visually lost state when less than 15 point maps are tracked, and achieves robustness in two stages: • Short-term lost: the current body state is estimated from IMU readings, and map points are projected in the estimated camera pose and searched for matches within a large image window. The resulting matches are included in visual-inertial optimization. In most cases this allows to recover visual tracking. Otherwise, after 5 seconds, we pass to the next stage. • Long-term lost: A new visual-inertial map is initialized as explained above, and it becomes the active map. VI. MAP MERGING AND LOOP CLOSING Short-term and mid-term data-associations between a frame and the active map are routinely found by the tracking and mapping threads by projecting map points into the estimated camera pose and searching for matches in an image window of just a few pixels. To achieve long-term data association for relocation and loop detection, ORB-SLAM uses the DBoW2 bag-of-words place recognition system [9], [78]. This method has been also adopted by most recent VO and SLAM systems that implement loop closures (Table I). Unlike tracking, place recognition does not start from an initial guess for camera pose. Instead, DBoW2 builds a database of keyframes with their bag-of-words vectors, and given a query image is able to efficiently provide the most similar keyframes according to their bag-of-words. Using only the first candidate, the raw DBoW2 queries achieve precision and recall in the order of 50-80% [9]. To avoid false positives that would corrupt the map, DBoW2 implements temporal and geometric consistency checks moving the working point to 100% precision and 30-40% recall [9], [78]. Crucially, the temporal consistency check delays place recognition at least during 3 keyframes. When trying to use it in our Atlas system, we found that this delay and the low recall results too often in duplicated areas in the same or in different maps. In this work we propose a new place recognition algorithm with improved recall for long-term and multi-map data association. Whenever the mapping thread creates a new keyframe, place recognition is launched trying to detect matches with any of the keyframes already in the Atlas. If the matching keyframe found belongs to the active map, a loop closure is performed. Otherwise, it is a multi-map data association, then, the active and the matching maps are merged. As a second novelty in our approach, once the relative pose between the new keyframe and the matching map is estimated, we define a local window with the matching keyframe and its neighbours in the covisibility graph. In this window we intensively search for mid-term data associations, improving the accuracy of loop closing and map merging. These two novelties explain the better accuracy obtained by ORB-SLAM3 compared with ORB-SLAM2 in the EuRoC experiments. The details of the different operations are explained next. A. Place Recognition To achieve high recall, for every new active keyframe we query the DBoW2 database for several similar keyframes in the Atlas. To achieve 100 % precision, each of these candidates goes through several steps of geometric verification. The elementary operation of all the geometrical verification steps consists in checking whether there is an ORB keypoint inside an image window whose descriptor matches the ORB descriptor of a map point, using a threshold for the Hamming distance between them. 
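As a concrete illustration of this elementary matching operation, together with the second-closest ratio check described just below, a minimal sketch follows; the thresholds are illustrative and are not the values used in the actual ORB-SLAM3 C++ code.

```python
# Hedged sketch of the elementary matching operation: find, inside a search
# window, the ORB keypoint whose 256-bit binary descriptor is closest in
# Hamming distance to a map point's descriptor. Thresholds are illustrative.
import numpy as np

HAMMING_THRESHOLD = 50     # maximum accepted descriptor distance (out of 256)
RATIO_THRESHOLD = 0.8      # best/second-best ratio to discard ambiguous matches

def hamming(d1, d2):
    """Hamming distance between two binary descriptors stored as uint8 arrays."""
    return int(np.unpackbits(np.bitwise_xor(d1, d2)).sum())

def match_in_window(mp_descriptor, keypoints, descriptors, center, radius):
    """Index of the best keypoint match inside the window, or None."""
    best, second, best_idx = 256, 256, None
    for idx, (kp, d) in enumerate(zip(keypoints, descriptors)):
        if abs(kp[0] - center[0]) > radius or abs(kp[1] - center[1]) > radius:
            continue                                  # outside the search window
        dist = hamming(mp_descriptor, d)
        if dist < best:
            best, second, best_idx = dist, best, idx  # previous best becomes second
        elif dist < second:
            second = dist
    if best_idx is None or best > HAMMING_THRESHOLD:
        return None
    if second < 256 and best > RATIO_THRESHOLD * second:
        return None                                   # ambiguous match, reject
    return best_idx
```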
If there are several candidates in the search window, to discard ambiguous matches, we check the distance ratio to the second-closest match [79]. The steps of our place recognition algorithm are: 1) DBoW2 candidate keyframes. We query the Atlas DBoW2 database with the active keyframe K a to retrieve the three most similar keyframes, excluding keyframes covisible with K a . We refer to each matching candidate for place recognition as K m . 2) Local window. Per each K m we define a local window that includes K m , its best covisible keyframes, and the map points observed by all of them. The DBoW2 direct index provides a set of putative matches between keypoints in K a and in the local window keyframes. Per each of these 2D-2D match we have also available the 3D-3D match between their corresponding map points. 3) 3D aligning transformation. We compute using RANSAC the transformation T am that better aligns the map points in K m local window with those of K a . In pure monocular, or in monocular-inertial when the map is still not mature, we compute T am ∈ Sim(3), otherwise T am ∈ SE(3). In both cases we use Horn algorithm [80] using a minimal set of three 3D-3D matches to find each hypothesis for T am . The putative matches that, after transforming the map point in K a by T am , achieve a reprojection error in K a below a threshold, give a positive vote to the hypothesis. The hypothesis with more votes is selected, provided the number is over a threshold. 4) Guided matching refinement. All the map points in the local window are transformed with T am to find more matches with the keypoints in K a . The search is also reversed, finding matches for K a map points in all the keyframes of the local window. Using all the matchings found, T am is refined by non-linear optimization, where the goal function is the bidirectional reprojection error, using Huber influence function to provide robustness to spurious matches. If the number of inliers after the optimization is over a threshold, a second iteration of guided matching and non-linear refinement is launched, using a smaller image search window. 5) Verification in three covisible keyframes. To avoid false positives, DBoW2 waited for place recognition to fire in three consecutive keyframes, delaying or missing place recognition. Our crucial insight is that, most of the time, the information required for verification is already in the map. To verify place recognition, we search in the active part of the map two keyframes covisible with K a where the number of matches with points in the local window is over a threshold. If they are not found, the validation is further tried with the new incoming keyframes, without requiring the bag-of-words to fire again. The validation continues until three keyframes verify T am , or two consecutive new keyframes fail to verify it. 6) VI Gravity direction verification. In the visual-inertial case, if the active map is mature, we have estimated T am ∈ SE(3). We further check whether the pitch and roll angles are below a threshold to definitively accept the place recognition hypothesis. B. Visual Map Merging When a successful place recognition produces multi-map data association between keyframe K a in the active map M a , and a matching keyframe K m from a different map stored in the Atlas M m , with an aligning transformation T am , we launch a map merging operation. In the process, special care must be taken to ensure that the information in M m can be promptly reused by the tracking thread to avoid map duplication. 
For this, we propose to bring the M a map into M m reference. As M a may contain many elements and merging them might take a long time, merging is split in two steps. First, the merge is performed in a welding window defined by the neighbours of K a and K m , and in a second stage, the correction is propagated to the rest of the merged map by a pose-graph optimization. The detailed steps of the merging algorithm are: 1) Welding window assembly. The welding window includes K a and its covisible keyframes, K m and its covisible keyframes, and all the map point observed by them. Before their inclusion in the welding window, the keyframes and map points belonging to (Fig. 3a). To fix gauge freedoms, the keyframes that are covisible with those in M m are kept fixed. Once the optimization finishes, all the keyframes included in the welding area can be used for camera tracking, achieving fast and accurate reuse of map M m . 4) Pose-graph optimization. A pose-graph optimization is performed using the essential graph of the whole merged map, keeping fixed the keyframes in the welding area. This optimization propagates corrections from the welding window to the rest of the map. C. Visual-Inertial Map Merging The visual-inertial merging algorithm follows similar steps than the pure visual case. Steps 1) and 3) are modified to better exploit the inertial information: 1) VI welding window assembly: If the active map is mature, we apply the available T ma ∈ SE(3) to map M a before its inclusion in the welding window. If the active map is not mature, we align M a using the available T ma ∈ Sim(3). 2) VI welding bundle adjustment: Poses, velocities and biases of the active keyframe K a and its 5 last temporal keyframes are included as optimizable. These variables are related by IMU preintegration terms. For map M m , we proceed similarly, including poses, velocities and biases of K m and its 5 temporal neighbours, as shown in Figure 3b. For M m , the keyframe immediately before the local window is included but fixed, while for M a the similar keyframe is included but its pose remains optimizable. All points seen by all these keyframes, and keyframes poses observing these points are also optimized. All keyframes and points are related by means of reprojection error. D. Loop Closing Loop closing correction algorithm is analogous to map merging, but in a situation where both keyframes matched by place recognition belong to the active map. A welding window is assembled from the matched keyframes, and point duplicates are detected and fused creating new links in the covisibility and essential graphs. The next step is a pose-graph optimization to propagate the loop correction to the rest of the map. The final step is a global BA to find the MAP estimate after considering the loop closure mid-term and long-term matches. In the visual inertial case, the global BA is only performed if the number of keyframes is below a threshold to avoid a huge computational cost. VII. EXPERIMENTAL RESULTS The evaluation of the whole system is split in: the four sensor configurations: Monocular, Monocular-Inertial, Stereo and Stereo-Inertial. • Performance of monocular and stereo visual-inertial SLAM with fisheye cameras, in the challenging TUM VI Benchmark [82]. • Multi-session experiments in both datasets. 
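Two of the geometric building blocks mentioned above can be sketched compactly: the closed-form Horn alignment used, inside RANSAC, to estimate the aligning transformation in step 3 of the place recognition algorithm, and the RMS ATE metric computed after Sim(3) or SE(3) alignment that is used throughout the experiments below. This is a simplified, hedged Python sketch: the real system is C++, scores RANSAC hypotheses by reprojection error rather than 3D alignment error, and uses its own thresholds.

```python
# Hedged sketch: Horn/Umeyama closed-form 3D-3D alignment, a minimal RANSAC
# wrapper (a simplified stand-in for step 3 of place recognition), and RMS
# ATE after trajectory alignment. Thresholds and iteration counts are
# illustrative assumptions.
import numpy as np

def horn_alignment(P, Q, with_scale=True):
    """Closed-form s, R, t aligning points P (Nx3) onto Q (Nx3): Q ~ s*R*P + t."""
    mu_p, mu_q = P.mean(axis=0), Q.mean(axis=0)
    X, Y = P - mu_p, Q - mu_q
    U, D, Vt = np.linalg.svd(Y.T @ X)
    S = np.eye(3)
    if np.linalg.det(U @ Vt) < 0:
        S[2, 2] = -1.0                      # avoid reflections
    R = U @ S @ Vt
    s = float(np.trace(np.diag(D) @ S) / (X ** 2).sum()) if with_scale else 1.0
    t = mu_q - s * R @ mu_p
    return s, R, t

def ransac_horn(P, Q, with_scale, iters=200, thresh=0.05, min_inliers=12):
    """Draw minimal 3-point sets, keep the hypothesis with the most inliers,
    then refit on all inliers. Returns None if the support is too small."""
    rng = np.random.default_rng(0)
    best_count, best_inliers = 0, None
    for _ in range(iters):
        idx = rng.choice(len(P), size=3, replace=False)
        s, R, t = horn_alignment(P[idx], Q[idx], with_scale)
        err = np.linalg.norm((s * (R @ P.T).T + t) - Q, axis=1)
        inliers = err < thresh
        if inliers.sum() > best_count:
            best_count, best_inliers = int(inliers.sum()), inliers
    if best_count < min_inliers:
        return None
    return horn_alignment(P[best_inliers], Q[best_inliers], with_scale)

def rms_ate(est_positions, gt_positions, monocular):
    """RMS ATE after Sim(3) (monocular) or SE(3) (other configurations) alignment."""
    s, R, t = horn_alignment(est_positions, gt_positions, with_scale=monocular)
    err = np.linalg.norm((s * (R @ est_positions.T).T + t) - gt_positions, axis=1)
    return float(np.sqrt(np.mean(err ** 2)))
```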
As usual in the field, we measure accuracy with RMS ATE [83], aligning the estimated trajectory with ground truth using a Sim(3) transformation in the pure monocular case, and a SE(3) transformation in the rest of sensor configurations. All experiments have been run in an Intel Core i7-7700 CPU, at 3.6GHz, with 32 GB memory. A. Single-session SLAM on EuRoC Table II compares the performance of ORB-SLAM3 using its four sensor configurations with the most relevant systems in the state-of-the-art. Our reported values are the median after 10 executions. As shown in the table, ORB-SLAM3 achieves in all sensor configurations more accurate result than the best systems available in the literature, in most cases by a wide margin. In monocular and stereo configurations our system is more precise than ORB-SLAM2 due to the better place recognition algorithm that closes loops earlier and provides more midterm matches. Interestingly, the next best results are obtained by DSM that also uses mid-term matches, even though it does not close loops. In monocular-inertial configuration, ORB-SLAM3 more than doubles the accuracy of VI-DSO and VINS-Mono, showing again the advantages of mid-term and long-term data association. Compared with ORB-SLAM VI, our novel fast IMU initialization allows ORB-SLAM3 to calibrate the inertial sensor in a few seconds and use it from the very beginning, being able to complete all EuRoC sequences, and obtaining better accuracy. In stereo-inertial configuration, ORB-SLAM3 is vastly superior to OKVIS, VINS-Fusion and Kimera. It's accuracy is only approached by the recent Basalt that, being a native stereo-inertial system, was not able to complete sequence V203, where some frames from one of the cameras are missing. To summarize performance, we have presented the median of 10 executions for each sensor configuration. For a robust system, the median represents accurately the behavior of the system. But a non-robust system will show high variance in its results. This can be analyzed using figure 4 that shows with colors the error obtained in each of the 10 executions. Comparison with the figures for DSO, ROVIO and VI-DSO published in [66] confirms the superiority of our method. In pure visual configurations, the multi-map system adds some robustness to fast motions by creating a new map when tracking is lost, that is merged later with the global map. This can be seen in sequences V103 monocular and V203 stereo that could not be solved by ORB-SLAM2 and are successfully solved by our system in most executions. As expected, stereo is more robust than monocular thanks to its faster feature initialization, with the additional advantage that the real scale is estimated. However, the big leap in robustness is obtained by our novel visual-inertial SLAM system, both in monocular and stereo configurations. The stereo-inertial system comes with a very slight advantage, particularly in the most challenging V203 sequence. We can conclude that inertial integration not only boosts accuracy, reducing the median ATE error compared to pure visual solutions, but it also endows the system with excellent robustness, having a much more stable performance. B. Visual-Inertial SLAM on TUM-VI Benchmark The TUM-VI dataset [82] consists of 28 sequences in 6 different environments, recorded using a hand-held fisheye stereo-inertial rig. Ground-truth for the trajectory is only available at the beginning and at the end of the sequences. Many sequences in the dataset do not contain loops. 
Even if the starting and ending point are in the same room, point of view directions are opposite and place recognition cannot detect any common region. The idea is to measure the accumulated drift along the whole trajectory. In fact, the RMS ATE error computed after GT alignment is about half the accumulated drift. We extract 1500 ORB points per image in monocularinertial setup, and 1000 points per image in stereo-inertial, after applying CLAHE equalization to address under and over exposure found in the dataset. For outdoors sequences, our system struggles with very far points coming from the cloudy sky, that is very visible in fisheye cameras. These points may have slow motion that can corrupt the system. For preventing this, we discard points further than 20 meters from the current camera pose, only for outdoors sequences. A more sophisticated solution would be to use an image segmentation algorithm to detect and discard the sky. The results obtained are compared with the most relevant systems in the literature in table III, that clearly shows the superiority of ORB-SLAM3 both in monocular-inertial and stereo-inertial. The closest systems are VINS-Mono and BASALT, that are essentially visual-inertial odometry systems with loop closures, and miss mid-term data associations. Analyzing more in detail the performance of our system, it gets lowest error in small and medium indoor environments, room and corridor sequences, with errors below 10 cm for most of them. In these trajectories, the system is continuously revisiting and reusing previously mapped regions, which is one of the main strengths of ORB-SLAM3. Also, tracked points are typically closer than 5 m, what makes easier to estimate inertial parameters, preventing them from diverging. In magistrale indoors sequences, that are up to 900 m long, most tracked points are relatively close, and ORB-SLAM3 obtains errors around 1 m except in one sequence that goes close to 5 m. In contrast, in some long outdoors sequences, the scarcity of close visual features may cause divergence of the inertial parameters, notably scale and accelerometer bias, which leads to errors in the order of 10 to 70 meters. Even though, ORB-SLAM3 is the best performing system in the outdoor sequences. This dataset also contains three really challenging slides sequences, where the user descends though a dark tubular slide with almost total lack of visual features. In these situation, a pure visual system would be lost, but our visual-inertial system is able to process the whole sequence with competitive error, even if no loop-closures can be detected. Interestingly, VINS-Mono and BASALT, that track features using Lukas-Kanade, obtain in some of these sequences better accuracy than ORB-SLAM3, that matches ORB descriptors. Finally, the room sequences can be representative of typical AR/VR applications, where the user moves with a hand-held or head-mounted device in a small environment. Table III shows that ORB-SLAM3 is significantly more accurate that competing approaches. The results obtained using our four sensor configurations are compared in table IV. The better accuracy of pure monocular compared with stereo is only apparent: the monocular solution is up-to-scale and is aligned with ground-truth with 7DoF, while stereo provides the true scale, and is aligned with 6DoF. Using monocular-inertial, we further reduce the average RMSE ATE error below 2 cm, also obtaining the true scale. 
Finally, our stereo-inertial SLAM brings the error below 1 cm, making it an excellent choice for AR/VR applications.
C. Multi-session SLAM
The EuRoC dataset contains several sessions for each of its three environments: 5 in Machine Hall, 3 in Vicon1 and 3 in Vicon2. To test the multi-session performance of ORB-SLAM3, we sequentially process all the sessions corresponding to each environment. Each trajectory in the same environment has ground truth with the same world reference, which allows a single global alignment to be performed to compute ATE. The first sequence in each room provides an initial map. Processing the following sequences starts with the creation of a new active map, which is quickly merged with the map of the previous sessions, and from that point on ORB-SLAM3 profits from reusing the previous map. Table V reports the global multi-session RMS ATE for the four sensor configurations in the three rooms. It is interesting to compare these multi-session performances with the single-session results reported in Table II. For example, in the pure monocular case, multi-session Vicon 1 achieves a global error that is smaller than the mean error of the single-session maps, and significantly smaller than the error of the single-session V103. Multi-session Vicon 2 can process the V203 sequence, which failed in single-session, thanks to the exploitation of the previous map. In the difficult Machine Hall sequences, MH04 and MH05, the multi-session error is smaller than the single-session errors. The table also includes the only two published multi-session results on the EuRoC dataset: CCM-SLAM [73], reporting pure monocular results in MH01-MH03, and VINS-Mono [7] in the five Machine Hall sequences, using monocular-inertial. In both cases ORB-SLAM3 more than doubles the accuracy of the competing methods. In the case of VINS-Mono, ORB-SLAM3 obtains 2.6 times better accuracy in single-session, and the advantage goes up to 3.2 times in multi-session, showing the superiority of our map merging operations. We have also performed some multi-session experiments on the TUM-VI dataset. Figure 5 shows the result after processing several sequences inside the TUM building. In this case, the small room sequence provides loop closures that were missing in the longer sequences, bringing all errors to centimeter level. Although ground truth is not available outside the room, comparing the figure with the figures published in [84] clearly shows our point: our multi-session SLAM system obtains far better accuracy than existing visual-inertial odometry systems. This is further exemplified in Figure 6: although ORB-SLAM3 ranks higher in stereo-inertial single-session processing of outdoors1, there is still a significant drift (≈ 60 m). In contrast, if outdoors1 is processed after magistrale2 in a multi-session manner, this drift is significantly reduced, and the final map is much more accurate.
Figure 6: Multi-session stereo-inertial. In red, the trajectory estimated after single-session processing of outdoors1. In blue, multi-session processing of magistrale2 first, and then outdoors1.
VIII. CONCLUSIONS
Building on [2]-[4], we have presented ORB-SLAM3, the most complete open-source library for visual, visual-inertial and multi-session SLAM, with monocular, stereo, RGB-D, pin-hole and fisheye cameras.
Our main contributions, apart from the integrated library itself, are the fast and accurate IMU initialization technique and the multi-session map-merging functions, which rely on a new place recognition technique with improved recall and make ORB-SLAM3 highly suitable for long-term and large-scale SLAM in real applications. Our experimental results show that ORB-SLAM3 is the first visual and visual-inertial system capable of effectively exploiting short-term, mid-term, long-term and multi-map data associations, reaching an accuracy level that is beyond the reach of existing systems. Our results also suggest that, regarding accuracy, the capability of using all these types of data association overpowers other choices such as using direct methods instead of features, or performing keyframe marginalization for local BA instead of assuming an outer set of static keyframes as we do. Regarding robustness, direct methods can be more robust in low-texture environments, but are limited to short-term [27] and mid-term [31] data association. On the other hand, matching feature descriptors successfully solves long-term and multi-map data association, but seems to be less robust for tracking than Lucas-Kanade, which uses photometric information. An interesting line of research could be developing photometric techniques adequate for the four data association problems. We are currently exploring this idea for map building from endoscope images inside the human body. Regarding the four different sensor configurations, there is no question that stereo-inertial SLAM provides the most robust and accurate solution. Furthermore, the inertial sensor allows the pose to be estimated at IMU rate, which is orders of magnitude higher than the frame rate, a key feature for some use cases. For applications where a stereo camera is undesirable because of its higher bulk, cost or processing requirements, monocular-inertial can be used without losing much in terms of robustness and accuracy. Just keep in mind that pure rotations during exploration do not allow depth to be estimated. In applications with slow motions, or without roll and pitch rotations, such as a car in a flat area, IMU sensors can be difficult to initialize. In those cases, if possible, use stereo SLAM. Otherwise, recent advances in depth estimation from a single image with CNNs offer good promise for reliable and true-scale monocular SLAM [85], at least in the same type of environments where the CNN has been trained.
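As a concrete reference for the RMS ATE metric used throughout the evaluation above, the following Python sketch illustrates the two alignment variants mentioned there: a Sim(3) (7 DoF, with scale) alignment for pure monocular trajectories and an SE(3) (6 DoF, without scale) alignment for the other sensor configurations, both via the closed-form Umeyama fit, followed by the root-mean-square translational error. This is a minimal illustration under stated assumptions, not the evaluation code behind the reported results; the variable names and file paths are hypothetical.

```python
import numpy as np

def align_umeyama(est, gt, with_scale=True):
    """Closed-form Sim(3)/SE(3) alignment (Umeyama, 1991).

    est, gt: (N, 3) arrays of corresponding positions.
    with_scale=True  -> Sim(3) alignment (monocular, unknown scale, 7 DoF)
    with_scale=False -> SE(3) alignment (stereo / inertial, metric scale, 6 DoF)
    Returns (s, R, t) such that  s * R @ est_i + t  approximates gt_i.
    """
    mu_est, mu_gt = est.mean(axis=0), gt.mean(axis=0)
    x, y = est - mu_est, gt - mu_gt
    cov = y.T @ x / est.shape[0]
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1.0                      # enforce a proper rotation
    R = U @ S @ Vt
    var_est = (x ** 2).sum() / est.shape[0]
    s = np.trace(np.diag(D) @ S) / var_est if with_scale else 1.0
    t = mu_gt - s * R @ mu_est
    return s, R, t

def rms_ate(est, gt, with_scale=True):
    """RMS Absolute Trajectory Error after alignment."""
    s, R, t = align_umeyama(est, gt, with_scale)
    err = gt - (s * (R @ est.T).T + t)
    return np.sqrt((err ** 2).sum(axis=1).mean())

# Hypothetical usage with (N, 3) position arrays loaded from files:
# est_xyz, gt_xyz = np.loadtxt("est.txt"), np.loadtxt("gt.txt")
# print(rms_ate(est_xyz, gt_xyz, with_scale=True))   # monocular, Sim(3)
# print(rms_ate(est_xyz, gt_xyz, with_scale=False))  # stereo-inertial, SE(3)
```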
2020-07-24T01:00:25.310Z
2020-07-23T00:00:00.000
{ "year": 2020, "sha1": "77d7e439b3368875199a1327515a3ba212f0a359", "oa_license": null, "oa_url": "http://arxiv.org/pdf/2007.11898", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "77d7e439b3368875199a1327515a3ba212f0a359", "s2fieldsofstudy": [ "Computer Science", "Engineering" ], "extfieldsofstudy": [ "Computer Science" ] }
250658244
pes2o/s2orc
v3-fos-license
Efficient Activation of Peroxymonosulfate by Cobalt Supported Used Resin Based Carbon Ball Catalyst for the Degradation of Ibuprofen The extensive use of ibuprofen (IBU) and other pharmaceuticals and personal care products (PPCPs) causes them widely to exist in nature and be frequently detected in water bodies. Advanced catalytic oxidation processes (AOPs) are often used as an efficient way to degrade them, and the research on heterogeneous catalysts has become a hot spot in the field of AOPs. Among transitional metal-based catalysts, metal cobalt has been proved to be an effective element in activating peroxymonosulfate (PMS) to produce strong oxidizing components. In this study, the used D001 resin served as the matrix material and through simple impregnation and calcination, cobalt was successfully fixed on the carbon ball in the form of cobalt sulfide. When the catalyst was used to activate persulfate to degrade IBU, it was found that under certain reaction conditions, the degradation rate in one hour could exceed 70%, which was far higher than that of PMS and resin carbon balls alone. Here, we discussed the effects of catalyst loading, PMS concentration, pH value and temperature on IBU degradation. Through quenching experiments, it was found that SO4− and ·OH played a major role in the degradation process. The material has the advantages of simple preparation, low cost and convenient recovery, as well as realizing the purpose of reuse and degrading organic pollutants efficiently. Introduction A large number of so-called pharmaceutical and personal care product pollutants (PPCPs) have been a class of persistent organic pollutants around the world due to people's extensive use and its difficult degradation in the water environment, which has become a concern of the global community [1]. Ibuprofen (IBU) as a typical PPCP, used to treat pain, fever as an anti-inflammatory with the advantages of low toxicity, fine curative effect and few side effects, has become a new generation of non-steroidal anti-inflammatory drugs and is widely used [2,3]. Massive abuse and continuous input of PPCPs make them a false persistence phenomenon in the environment. The long-term ingestion of trace levels of these substances by organisms caused biological malformations and microbial drug resistance, indirectly harming the aquatic ecological environment and human health and receiving extensive attention [4]. However, traditional water treatment processes, such as sedimentation, filtration and disinfection have poor effects on the separation of IBU [5][6][7]. At present, the commonly used treatment methods include the adsorption method [8], membrane separation method [9], biological method [10], photocatalysis (Visible or ultraviolet catalysis) [11][12][13], Fenton Catalytic oxidation and ozone catalytic oxidation [14,15], etc., advanced oxidation methods (AOPs). Advanced oxidation catalysts, especially heterogeneous catalyst materials used to catalyze the formation of free radicals, have been widely studied in wastewater treatment. Compared with Fenton, Fenton-like and ozone catalytic oxidation technologies, the sulfate radical (SO 4 − ·) produced by activating peroxymonosulfate (PMS) has higher redox potential [16,17], and PMS is stable and convenient for transportation and storage, which shows great application potential in the treatment of refractory organic pollutants [18]. 
Research shows that the SO4−· produced from activated PMS plays the major role; at the same time, the contributions of the hydroxyl radical (·OH), superoxide radical (O2−) and singlet oxygen (1O2) cannot be ignored [4,7]. Therefore, increasing the rate of PMS decomposition to produce radical or non-radical species has become a research hotspot. Currently available activation methods for PMS include thermal activation, electrical activation [19], photoactivation and activation by heterogeneous catalyst materials [4,20]. Although heterogeneous catalysts have been widely studied [21,22], if the materials are in a powder state, especially without magnetism [17,23], solid-liquid separation and recovery remain difficult. Therefore, preparing a shaped catalyst with practical application value is significant, especially with active components supported on tangible matrix materials such as resin [24], alumina [25] and sponges [26]. Among the various metal-based catalytic materials used to activate PMS [22], the standard electrode potential of Co2+/Co3+ (1.82 V) is higher than that of other metals [27] and close to that of PMS (1.82 V) [17]; cobalt therefore has an excellent ability to activate PMS and has attracted increasing attention. Its oxides [23], hydroxides [6] and sulfides [28,29] are widely used for the activation and decomposition of persulfate, and cobalt is currently the most widely studied metal element doped into heterogeneous materials. Moreover, studies have shown that sulfur-containing catalysts have many positive effects in the catalytic reaction process [30,31] by accelerating electron transfer, which results from an abundance of electrochemically active sites available for the adsorption or desorption of O2−/H+ as well as their favorable electronic configuration [1,32]. This may also be why transition metal sulfides have proven to be relatively novel catalytic materials that effectively activate PMS to degrade organic pollutants in water [28,33]. D001 resin is a cation exchange resin bearing sulfonic acid groups (−SO3H) on a porous styrene-divinylbenzene copolymer. It is commonly used in water softening to adsorb calcium and magnesium ions and other impurities in water. After repeated use, the performance of D001 resin no longer meets the requirements of use, and it is often discarded. However, this used resin still has some adsorption capacity. Experiments show that used resins can adsorb cobalt ions by exchanging metal ions such as calcium and magnesium. According to previous elemental analysis, carbon, sulfur and oxygen account for the majority of D001 resin, and the corresponding metal oxide or sulfide can be formed by calcination with a transition metal. This means that a metal adsorbed on the resin through cation exchange and then sintered at high temperature may remain on the carbon material in the form of a sulfide or oxide, indirectly synthesizing a metal-based carbon material. Based on the above, we took used D001 water-softening resin, which had already adsorbed metal ions such as calcium and magnesium, as a metal-anchoring carrier. Cobalt ions were adsorbed onto the resin by simple impregnation and then sintered and fixed on it in a tubular furnace under nitrogen protection.
A series of characterizations were carried out to recognize the carbon material, and its external morphology, material composition, element valence, and specific surface area were analyzed. Finally, the catalyst activity was verified by the experiment of activating PMS to degrade IBU. Here, we explored the effects of catalyst loading, PMS concentration, pH value, temperature and four kinds of coexisting anions on the degradation effect of IBU separately. The possible mechanism and path in the PMS activating system under the action of a carbon ball catalyst were analyzed by free radical and nonfree quenching and related characterization. Material Preparation (1) Put the used D001 resin into a conical flask and add an appropriate amount of ethanol, then put it into a constant temperature oscillator to oscillate for several hours. Pour out the solution, add an appropriate amount of water and oscillate; repeat this washing two to three times until the washing water is clear, take out the resin and dry it. (2) Impregnate a certain molar amount (0.25-1.25 mmol) of cobalt nitrate hexahydrate solution per gram of used resin, with a solid-liquid ratio of 1 g:20 mL. After shaking for two hours, take out the resin and dry it to obtain the used resin adsorbed with cobalt ions. (3) The catalyst was prepared by calcining cobalt doped resin in a tubular furnace at 550 • C under the protection of nitrogen for 6 h, with a heating rate of 5 • C/min. The black sphere formed by sintering is the catalyst material we need. Analysis Method The surface morphology and microstructure of the material were studied by scanning electron microscope (SEM) and a transmission electron microscope (TEM) was applied for morphology and crystal structure identification. It was equipped with an energy dispersive X-ray detector (EDS) to obtain the different elements mapping graph. Then the possible material composition and its crystal surface or crystal form were characterized and analyzed by means of X-ray diffraction (XRD) with a Cu-Kα radiation source working at 36 kV and 20 mA and Fourier transform infrared spectroscopy (FT-IR) based on sample preparation with dry potassium bromide to detect the possible functional groups and chemical bonds on carbon ball materials. The valence state of elements and possible chemical bond compositions were calculated by fitting after the valence state of the elements is characterized by X-ray photoelectron spectroscopy (XPS) equipped with a dual X-ray source of Al-Kα (hv = 1486.6 eV). Finally, the specific surface area and pore diameter were analyzed by Brunauer Emmett Teller (BET) based on nitrogen adsorption-desorption isotherm measurements at −196 • C. The catalytic activity of the sintered carbon ball was verified by degrading 50 mL IBU solution with a concentration of 10 mg/L in an oscillating conical flask. We aspirated a 2 mL water sample with syringes and filtered it through a 0.22 µm filter head (a few drops were filtered out to remove the residual liquid from the filter head of the previous sample and then injected into a 1.5 mL liquid phase sample bottle containing 10 µL methanol to stop the subsequent reaction) every 10 min and detected by HPLC. 
The specific detection conditions were as follows: the mobile phase is acetonitrile (70%) and Wahaha water (30%, adjusted pH = 2 with phosphoric acid), the chromatographic column used was a C18 reverse column (5 µm, 4.6 mm × 250 mm, Shimadzu); the detected temperature was 35 • C, we chose 1 mL/min as the liquid flow rate, and the injection volume of every sample was 20 µL. Then we used 221 nm as the analysis wavelength to obtain the corresponding peak area. Through the previous liquid phase determination of the standard solution, it can be found that the peak area tested by this method had a high degree of linear correlation with the concentration, so the degradation rate of IBU could be calculated according to the measured peak area. Based on the above methods, we studied the catalytic degradation of IBU in different systems and different cobalt doping. At the same time, the effects of several environmental conditions, such as catalyst loading, PMS concentration, temperature, pH value, and coexisting anions, on the catalytic activity and IBU degradation rate were similarly studied. Finally, the corresponding quenchers were used to capture possible free radicals and non-free radicals (MeOH was used to capture both SO 4 − · and OH, TBA, p-BQ and FFA were applied to quench OH, O 2 − and 1 O 2 , respectively. We did not choose L-histidine as the quencher of singlet oxygen because it may react with PMS) [23,31,34]. Combined with relevant characterization and previous studies, the causes and degradation mechanism were reasonably analyzed and some strange experimental phenomena can also be explained. Material Characterization First, we characterized the surface morphology of the material. The material morphology photograph under high magnification of SEM is shown in Figure 1. Meanwhile, we scanned the shooting site for relevant elements, and the EDS results are shown in Figure S1. We find that the element of sulfur and carbon occupied most of the carbon ball, and the distribution of cobalt can also be clearly seen in the carbon ball doped with abundant cobalt, both inside and outside. According to SEM results, Figure 1a shows that the material presents a sphere after high-temperature carbonization, but there also exists a broken globule after the calcination process. Figure 1b shows that the surface of the noncarbonized used D001 resin has a porous morphology, which is conducive to its efficient adsorption of metal ions as a cation exchange resin. However, we can see from Figure 1c that after carbonization of the used resin, a large number of blocks and strips are attached to the surface of the resin, which means that calcium and magnesium form chemical compounds on the surface. In contrast, Figure 1d shows an obvious change in the surface of the carbon ball, which is covered by a large number of small particles and even blocks many pores on the resin surface. Compared with the former, there are no long strips and blocks, thus a large number of calcium and magnesium ions are replaced by cobalt ions, and the surface is covered by metal composites sintered on it. Figure 1e,f show the internal morphology of spheres after cobalt doping calcination. It can be seen from Figure 1f, that although many surface pores of the resin are covered after high-temperature calcination, the interior still shows a porous morphology. Compared with Figure 1d,f, it is found that metal cobalt is successfully attached to the surface of the resin and the interior of the resin pores. 
Combined with the EDS analysis, we found that the carbon ball mainly contains sulfur and oxygen, so the sintered metal substances are the corresponding metal sulfides or oxides. As shown in Figure 2, further investigation of the carbonized catalyst material was carried out with TEM images at higher magnification. The morphology in Figure 2a shows multiple blocks on the carbon material, which may be compounds of cobalt sintered on the carbon material. The elemental maps of sulfur and cobalt for the corresponding region in Figure 2b,c clearly show that the distributions of these two elements coincide, which proves that after the used resin adsorbs cobalt ions, the cobalt can be sintered with the −SO3H groups of the resin to form a metal sulfide. Similarly, the same conclusion can be drawn by combining the morphology of Figure 2d with the elemental maps of Figure 2e,f. Therefore, it can be inferred that cobalt sulfide is attached to the carbon ball. The main XRD diffraction peaks can be assigned to the standard card JCPDS: 48-0826 [28,36]. According to the trend of the peak shapes with cobalt concentration and the positions of the diffraction peaks, we can infer that the metal cobalt is sintered on the surface, or inside, of the carbon ball in the form of cobalt sulfide. To estimate the element valence states and contents of the prepared carbon sphere catalyst, X-ray photoelectron spectroscopy (XPS) was performed. Beforehand, we carried out FT-IR characterization to determine the possible chemical bonds and functional groups in the material; the results are shown in Figure S2. After cobalt doping, the main peak positions of the material did not change; that is, all samples show characteristic absorption at similar positions, with stretching vibration peaks at 3433, 1580, 1400 and 1130 cm−1 that can be associated with O-H, carbonyl (C-C/C=C), C-S and C-O, respectively [31,37]. On this basis, Figure 4a-d show the fitted XPS binding energy peaks of the main elements after data processing, namely sulfur (S 2p), carbon (C 1s), oxygen (O 1s) and cobalt (Co 2p), respectively. In Figure 4a, characteristic peaks of C-S or C-SOx appear at binding energies of 164 eV, 165 eV and 167.8 eV [31], while the spin-orbit peaks corresponding to S 2p3/2 and S 2p1/2 appear at 161.9 eV and 163.2 eV, respectively [36,38,39]. The position and area of each C 1s binding energy peak show that carbon mainly exists as C-C/C=C [37,40,41], and the peak at 285.3 eV may represent the C-S binding energy [42]. The other binding energy positions may correspond to small amounts of C-O or C=O [37,41,43]. Figure 4c shows the deconvolution of the oxygen spectrum; the peak area is mainly located at a binding energy of 532.1 eV, which indicates that the oxygen mainly belongs to carbon-oxygen bonds [43] or surface-adsorbed oxygen species (Oaos) [41]. Figure 4d shows the fitted peaks at several different binding energy positions for cobalt.
The binding energy positions at 785.1 eV and 802 eV may be two shake-up satellite peaks generated by X-ray emission during sample measurement [44][45][46]; the binding energy positions at 778.5 eV and 780.8 eV, are Co 2p 3/2 spin orbit trivalent cobalt and divalent cobalt, respectively; 793.6 eV and 797.0 eV correspond to trivalent cobalt and divalent cobalt in Co 2p 1/2 spin-orbit peaks [44,47]. Compared with the peak area, we can find that the peak area of divalent cobalt is much larger than that of trivalent cobalt, it means that the main valence states of cobalt are divalent. From what has been discussed about the XPS peak splitting results of sulfur, we can conclude that the cobalt ions adsorbed on the resin are mainly sintered on the carbon ball in the form of cobalt sulfide. Finally, we analyzed and tested the parameters related to the specific surface area of the material, the nitrogen adsorption−desorption curve is shown in Figure S2. As shown in Table 1, we listed the specific surface area, pore volume, pore diameter and other data of D001 resin, used D001 resin and after cobalt doped carbonized material. The measured mass of the samples is 0.1 g and the degassing time of pretreatment was 6 h, and then the sample carried out nitrogen adsorption and desorption in a liquid nitrogen tank under −196 • C. Finally, the relevant characterization data were calculated according to the nitrogen adsorption>-desorption curve. It can be found that the specific surface area of the three samples shows a downward trend with the doping of metal, and the values are 97.62 m 2 /g, 67.44 m 2 /g and 31.54 m 2 /g, respectively. In particular, we have tested the specific surface area of the maximum cobalt doping amount (impregnated with 1.25 mmol cobalt ion per gram of resin), and the value is less than 20 m 2 /g; Figure S3 also shows that the adsorption capacity of sintered carbon ball decreases with the increase in cobalt loading. According to the scanning electron microscope characterization and relevant experience, this phenomenon can be speculated and explained as that there is no metal doped in the separate resin. After calcination, the resin sphere is heated, and the skeleton shrinks, but the internal channels are not blocked. In contrast, the adsorbed water and impurities will separate from the channels due to calcination, so it has a large specific surface area. However, a large number of metal ions are adsorbed inside the resin, then metal complexes may be formed inside the resin during calcination. Especially after impregnating cobalt ions, internal cobalt ions are attached to the internal pores of the resin in the form of sulfide, and the surface pores are seriously blocked due to the existence of fine cobalt sulfide, which greatly reduces the specific surface area of the carbon ball. Moreover, the active sites of the internal metal cobalt are more difficult to expose with more cobalt content, resulting in the decline of its catalytic effect. Material Catalytic Activity Firstly, we explore the degradation effect of IBU under different systems. Figure 5a shows the catalytic performance of carbon ball catalysts prepared with different cobalt loading. It can be seen that the degradation effect of separate resin carbon ball and PMS system on IBU is poor, which also proves that the degradation or adsorption performance of separate PMS and resin system for IBU are similarly low. 
In previous studies, we found that under the same conditions the materials prepared by doping other transition metals into the resin have a very poor effect on activating PMS to degrade IBU, whereas the cobalt-doped material has an efficient catalytic effect, with a degradation rate that can exceed 70% in one hour. Kinetic analysis shows that the degradation process follows a quasi-first-order kinetic model, with correlation coefficients for ln(C0/C) versus time generally above 95% and a maximum kinetic constant (k value) of 0.216 min−1 (a minimal fitting sketch is given below). Considering both the catalytic effect of catalysts prepared with different cobalt doping amounts and the residual rate of cobalt ions after adsorption saturation of the resin during preparation, as shown in Figure 5b, the optimal doping amount is 0.75 mmol of cobalt ions per gram of dry resin; the residual rate at this impregnation amount, analyzed by ICP-OES, is less than 5%. In terms of activity, increasing the doping amount of cobalt may reduce the activity by blocking more internal pores, which is not conducive to the exposure of the internal cobalt. According to the impregnation results, the cobalt ions are essentially fully taken up and the mass of cobalt remains unchanged after calcination, while about half of the resin mass is lost on conversion to carbon spheres; the mass fraction of cobalt can therefore be calculated from the amount of impregnated cobalt and the mass of the carbon spheres after calcination. The mass fraction of metal cobalt is estimated to be 9% at the optimal doping amount, and the subsequent activity and mechanism studies were carried out at this doping amount. In terms of reaction conditions, we discussed the effects of catalyst loading, PMS concentration, pH value and temperature on the degradation, and the results are shown in Figure 6. It can be seen that, with the other variables held constant, increasing the catalyst loading, PMS concentration and temperature improves the degradation of IBU. Comprehensively considering the catalytic degradation effect and the reaction conditions, we took a catalyst loading of 0.3 g/L and a PMS concentration of 0.7 mmol/L as the optimum catalyst and oxidant conditions, with a temperature of 25 °C to match ambient temperature, to study the effects of the various influencing factors on the activation of PMS by the catalyst. Among all the factors studied, the degradation process is greatly affected by pH. At pH 5 and 9 the degradation rate decreases, while at pH 3 and 11 the degradation of IBU is significantly inhibited, with kinetic constants as low as 0.085 min−1 and 0.099 min−1. According to relevant studies, the reason for this phenomenon may be that at low pH values anions (Cl−) have a certain inhibitory effect [48]; more importantly, acidic conditions are not conducive to the ionization of PMS [6,28], and a relatively high concentration of H+ may quench a portion of the radicals [23]. At high pH, relevant research shows that the reaction of cobalt sulfide with hydrogen persulfate ions to form SO4−· is inhibited, owing to the OH− formed in the reaction process; moreover, the SO4−· generated from PMS can react with OH− to create ·OH with lower activity, as shown in reaction Formulas (4) and (11) [30,48].
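Because the quasi-first-order constants quoted above come from ln(C0/C)-versus-time fits of the HPLC data, a short sketch may help make the procedure concrete. The Python snippet below is illustrative only: the peak areas, calibration slope and variable names are hypothetical placeholders rather than the measured values. It converts peak area to concentration through a linear calibration of the kind described in the analysis method and then fits k by least squares.

```python
import numpy as np

# --- hypothetical example data (placeholders, not the measured values) ---
t_min     = np.array([0, 10, 20, 30, 40, 50, 60])                     # sampling times, min
peak_area = np.array([1.00e6, 8.1e5, 6.6e5, 5.3e5, 4.3e5, 3.5e5, 2.9e5])
slope, intercept = 1.0e5, 0.0        # assumed calibration: peak area = slope * C(mg/L) + intercept

# Peak area -> concentration via the linear calibration curve
conc = (peak_area - intercept) / slope

# Quasi-first-order model: ln(C0/C) = k * t, fitted by linear least squares
y = np.log(conc[0] / conc)
k, b = np.polyfit(t_min, y, 1)       # k in min^-1

# Coefficient of determination of the linear fit
y_hat = k * t_min + b
r2 = 1 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)

removal = 1 - conc[-1] / conc[0]     # fractional IBU removal after 60 min
print(f"k = {k:.3f} min^-1, R^2 = {r2:.3f}, 60 min removal = {removal:.1%}")
```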
Next, we also studied the influence of different concentrations of coexisting anions, adding certain concentrations of HCO3−, H2PO4−, SO42− and Cl− to the reaction system, as shown in Figure 7. It can be seen that a low concentration of bicarbonate has no obvious inhibitory effect on the reaction, while a high concentration does. From the pH measurements and a comparison with the previous pH experiments, it can also be found that the inhibitory effect of bicarbonate is not large and acts mainly by increasing the pH of the water. Similarly, with the addition of H2PO4− the pH value decreases, but the removal rate of IBU does not decrease much, which further shows that H2PO4− has no obvious inhibitory effect. SO42− has a certain inhibitory effect on the reaction system. Combined with relevant studies, this can be explained by the presence of SO42− inhibiting the formation of SO4−· from HSO5−, which indirectly reduces the formation rate of ·OH or O2−, as shown in reaction Formulas (5) and (7) [3]. The influence of Cl− is very obvious: the inhibition is as high as 60%, and the final removal is not much better than that of the resin-plus-PMS system; moreover, the kinetic constants are all around 0.04 min−1, and the linear correlation coefficient of the first-order kinetic fit is only 85%. This phenomenon is similar to that observed in the study of PMS activation by a cobalt-doped carbon matrix catalyst prepared by Ren Z.F. [48]. This may be because the PMS concentration in the reaction system is only 0.7 mmol/L, while the concentration of Cl− exceeds 1 mmol/L, and IBU is an organic pollutant that is difficult to degrade. Under these conditions, SO4−· or HSO5− reacts with Cl− to generate chlorine-containing radicals with low oxidation ability, whose capacity to degrade pollutants is poor [31]. However, as the concentration of Cl− increases, the generation of chlorine-related radicals accelerates, but their oxidation ability is low and they may not contribute to degradation. The reactions may be as shown in Formulas (1)-(3) [48].
Mechanism Exploration
To explore the mechanism, we studied the activation of PMS by the catalyst to degrade IBU through radical and non-radical quenching experiments, as shown in Figure 8. The quenching experiments show that adding a certain amount of methanol to the reaction system reduces the degradation of IBU to a level similar to that of PMS alone, and the kinetic constant decreased from 0.216 min−1 to 0.043 min−1 and 0.128 min−1 at the maximum quenching degree by MeOH and TBA, respectively. Comparing the quenching effects for O2− and 1O2, it can be concluded that the degradation of IBU is driven mainly by SO4−· and ·OH [49], while the other two species have no obvious effect on the reaction, which proves that the cobalt sulfide on the carbon ball reacts with the HSO5− ionized from PMS to form abundant SO4−· and ·OH; the relevant possible reactions are Formulas (4) and (5) [17]. We also find that the contribution of SO4−· is much greater than that of ·OH; this may be because the SO4−· generated by the reaction accounts for the majority of the reactive species and, in addition, its redox potential exceeds that of ·OH [16]. At the same time, this indicates that the amounts of O2− and 1O2 generated in this degradation system are small; in particular, 1O2 is generated from O2−.
Based on this and combined with existing research, relevant possible reactions as Formulas (4)-(12) are shown [3,50,51]: The formation and degradation mechanism of free radicals is roughly shown in Figure 9 [17]. In general, after absorbing a certain amount of cobalt ions by impregnation, we carbonize the used resin into a black carbon ball. Through XRD and XPS, it can be concluded that cobalt is successfully attached to the carbon ball in the form of cobalt sulfide in the characterization of XRD and XPS. Through BET characterization, it can be found that the increase in cobalt may seriously block the carbon ball, which is not conducive to the exposure of internal cobalt sulfide, so the reaction speed is reduced. In the reaction system, the carbon ball activates PMS to produce a large amount of SO 4 − and OH degrade IBU into small molecular substances, CO 2 and H 2 O, but other active substances generated from this reaction are extremely few. Conclusions Utilizing the used D001 resin as the base material, the metal cobalt is fixed on the resin by impregnation, and then the cobalt is sintered on the carbon ball in the form of cobalt sulfide by high-temperature calcination. The preparation method is simple, low cost and has low material loss with a high resource degree. In terms of material activity, under optimal cobalt doping and certain reaction conditions, the degradation rate of IBU can exceed 70%. To a certain extent, the increase in catalyst loading, PMS concentration and temperature can improve the reaction rate of the degradation system. However, the material is greatly affected by the environmental conditions in the reaction process, especially the pH value and the existence of several common anions. According to the free and non-free radical quenching experiment, it is not difficult to see that the degradation process of IBU is mainly the function of SO 4 − and OH produced by catalyst-activated PMS. Supplementary Materials: The following supporting information can be downloaded at: https:// www.mdpi.com/article/10.3390/ma15145003/s1, Figure S1. EDS element layered image of material, (a) the carbonization of used resin, (b,c) the surface and inside of carbonized cobalt doped used resin; Figure S2. FTIR spectrums of carbonized used resin and cobalt doped used resin; Figure S3. Nitrogen adsorption-desorption curves of carbonized used resin and cobalt doped used resin. Conflicts of Interest: The authors declare no conflict of interest.
2022-07-20T15:13:22.928Z
2022-07-01T00:00:00.000
{ "year": 2022, "sha1": "8cbec7fd9c5e2ff00b7e5bf140ee409df4f37d3f", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1996-1944/15/14/5003/pdf?version=1658150323", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "88363070cd88e95c2b749b237321afe154509b56", "s2fieldsofstudy": [ "Chemistry", "Environmental Science" ], "extfieldsofstudy": [ "Medicine" ] }
1217693
pes2o/s2orc
v3-fos-license
Attenuation of Helicobacter pylori-induced gastric inflammation by prior cag− strain (AM1) infection in C57BL/6 mice Background: Helicobacter pylori colonizes the stomach of ~50% of the world population. The cag pathogenicity island of H. pylori is one of the important virulence factors contributing to gastric inflammation. Coinfection with H. pylori strains of different genetic makeup alters the degree of pathogenicity and the susceptibility towards antibiotics. The present study investigates the host immunomodulatory effects of H. pylori infection by both a cag+ strain (SS1) and a cag− strain (AM1). C57BL/6 mice were infected with the AM1 or SS1 strain as well as AM1 followed by SS1 (AM1/SS1) and vice versa. Results: Mice infected with AM1/SS1 exhibited less gastric inflammation and reduced proMMP9 and proMMP3 activities in gastric tissues compared to the SS1/SS1 and SS1/AM1 infected groups. The expression of both MMP9 and MMP3 followed a similar trend to their activities in infected tissues. Both Th1 and Th17 responses were induced more strongly by the SS1 strain than by the AM1 strain, which induced solely a Th1 response in spleen and gastric tissues. Moreover, IFN-γ, TNF-α, IL-1β and IL-12 were significantly downregulated in mouse spleen and gastric tissues infected by AM1/SS1 compared to SS1/SS1, but not compared to SS1/AM1 coinfection. Surprisingly, the IL-17 level was dampened significantly in the AM1/SS1 compared to the SS1/AM1 coinfected group. Furthermore, the number of Foxp3+ regulatory T (Treg) cells and the immunosuppressive cytokines IL-10 and TGF-β were reduced in AM1/SS1 compared to SS1/SS1 and SS1/AM1 coinfected mouse gastric tissues. Conclusions: These data suggest that prior H. pylori cag− strain infection attenuates the severity of gastric pathology induced by a subsequent cag+ strain in C57BL/6 mice. Prior AM1 infection induced the Th1 cytokine IFN-γ, which reduced the Th17 response induced by subsequent SS1 infection. The reduced gastritis in AM1/SS1-infected mice might also be due to enrichment of AM1-primed Treg cells in the gastric compartment, which inhibit Th1 and Th17 responses to subsequent SS1 infection. In summary, prior infection by the non-virulent H. pylori strain (AM1) reduces the pathology of subsequent virulent strain (SS1) infection through regulation of inflammatory cytokines and MMP expression. Background Helicobacter pylori, a class I carcinogen, inhabits the stomach of approximately 50% of the human population, while only 10-15% of the infected population develop chronic gastritis, gastric adenocarcinoma or gastric mucosa-associated lymphoid tissue lymphoma [1][2][3][4]. The underlying mechanisms governing the clinical outcome of H. pylori infection are poorly understood. However, accumulated evidence suggests that differences in host immune responses, environmental factors and the virulence properties of H. pylori strains may play important roles in determining the disease outcome. The most prominent H. pylori virulence-associated determinant is the cag pathogenicity island (PAI). It is a 40-kb genome segment that encodes the immunodominant protein
cagA and a type IV secretion system, which serve to transfer the bacterial cagA protein and other soluble factors, such as peptidoglycans, to the cytoplasm of the host cells and are known to play a key role in disease manifestation [5][6][7]. Strains harboring the cag PAI have been associated with more severe inflammatory responses than those induced by cag− strains [6,8,9]. The H. pylori-specific host T cell response is predominantly a CD4+ T cell response polarized towards a T helper 1 (Th1) phenotype [10,11]. H. pylori-induced inflammation is associated with the production of pro-inflammatory cytokines and appears to be triggered partly by genes located within the cag PAI [6,8,12]. The gastric mucosal levels of the proinflammatory cytokines IL-1β, IL-6, IL-8 and TNF-α are increased in H. pylori-infected subjects [13]. Earlier studies revealed that H. pylori infection is also associated with a markedly increased secretion of the cytokine IL-17 from Th17 cells [14]. Involvement of IL-17 has also been reported in various other chronic inflammatory conditions such as rheumatoid arthritis and multiple sclerosis [15,16]. Recently, Shi et al. suggested that H. pylori infection induces a mixed Th1/Th17 response [17]. In addition, cagA and the type IV secretion system are required for the induction of IL-17 responses in H. pylori infection [18]. Secretion of IL-17 leads to the induction of other inflammatory molecules required for the establishment of chronic inflammation [19]. MMPs are a family of zinc-dependent endopeptidases that play a crucial role in various pathological conditions including gastric ulcers [20][21][22]. The activities of MMPs are regulated by their inhibitors (TIMPs), while their expression is modulated by cytokines, growth factors, tumor promoters and transcription factors [20,23,24]. Gelatinase B (MMP9) and stromelysin-1 (MMP3) are two major inflammatory contributors to gastric pathology that collectively cleave a large array of matrix proteins [25]. Accumulated evidence suggests that H. pylori induces gastric inflammation with upregulation of MMP9 and MMP3 in vivo [26]. MMPs are either directly or indirectly produced by gastric epithelial cells via cytokine-mediated cell signaling pathways [24]. The human gastrointestinal tract is colonized by various microorganisms, which can be either commensal or pathogenic [27]. The interplay among those organisms can lead to either attenuation or promotion of infection-induced pathology. For instance, coinfection of C57BL/6 mice with the natural murine nematode parasite Heligmosomoides polygyrus attenuated gastric pathology induced by H. felis [28]. The attenuation of gastric pathology was associated with reduced expression of proinflammatory Th1 cytokines as well as increased Th2 cytokine levels. Interactions between different bacterial species also determine disease severity. Recently it has been documented that H. pylori infection attenuated Salmonella enterica serovar Typhimurium-induced colitis in C57BL/6 mice; this protective effect was associated with downregulation of the cecal Th17 response to S. typhimurium [29]. Even the severity of H. pylori-induced gastric pathology is modulated by coinfection with other Helicobacter species. Coinfection with an enterohepatic Helicobacter species (EHS), Helicobacter muridarum, along with H. pylori attenuated the H. pylori-induced gastric pathology in C57BL/6 mice [30].
Moreover, coinfection of another EHS, Helicobacter hepaticus, with H. pylori led to more severe gastritis as well as increased production of the cytokine IL-17 [30]. Interestingly, it has also been reported that interactions between different strains of H. pylori modulate gastric inflammation status [31]. Surprisingly, some African countries with lower economic status show a high rate of H. pylori infection but a low incidence of gastric carcinoma, widely known as the African enigma [32,33]. The reasons could be associated with diet, infection with other endemic parasites and the degree of pathogenicity of different H. pylori strains. We hypothesized that coinfection with a non-pathogenic strain may provide protection against further infection by a virulent strain. The effect on gastric inflammation of coinfection with different strains of H. pylori, with or without the cag pathogenicity island, has not yet been systematically studied. To investigate this, we established coinfection in C57BL/6 mice using both cag− and cag+ strains of H. pylori and measured gastric inflammatory pathways. We address whether prior cag− strain infection alleviates gastric damage induced by subsequent cag+ H. pylori strain infection and the underlying host immunomodulatory mechanisms. Here we document for the first time that prior infection with a cag− H. pylori strain dampens the disease severity of further cag+ coinfection.
Prior cag− H. pylori infection dampens gastric inflammation due to cag+ H. pylori infection
We previously reported that both the cag+ strain (SS1) and the cag− strain (AM1) have the capacity to induce gastric inflammation, although the severity of damage was more pronounced in SS1 infection in C57BL/6 mice [26]. In the present study, we investigated the effect of coinfection with different strains of H. pylori, in different combinations, on the disease burden of H. pylori-induced gastric pathology in C57BL/6 mice. Different groups of mice were inoculated with vehicle (control), with SS1 boosted with SS1, with AM1 boosted with AM1, with SS1 boosted with AM1, or with AM1 boosted with SS1, and were sacrificed at day 10 post infection. Histological examination of mouse gastric tissues revealed that H. pylori infection with any combination of strains caused inflammation in gastric pit cells along with disruption of the submucosa and muscularis mucosa compared with control (Fig. 1). Glandular atrophy and infiltration of inflammatory cells, mostly lymphocytes, were also detected in the gastric tissues of all infected groups of mice. However, mice inoculated with AM1 followed by SS1 exhibited significantly lower levels of gastric inflammation, glandular atrophy and surface epithelial eruption, and decreased infiltration of inflammatory cells, compared to SS1/SS1-infected and SS1/AM1-coinfected mice. No significant difference in the gastric lesions of AM1/AM1- and AM1/SS1-infected mice was detected (Fig. 1b, e). Therefore, these results suggest that earlier cag− H. pylori strain infection significantly attenuated the severity of gastric inflammation induced by subsequent H. pylori cag+ strain infection.
The activity and expression of MMP9 and 3 were upregulated with increased severity of gastric lesions
Aberrant ECM remodeling is a prerequisite event in gastric ulcer development. MMP2 and MMP9 are two of the most potent enzymes involved in ECM remodeling.
Hence, we measured and compared the activity and expression of MMP2 and MMP9 by gelatin zymography and Western blotting, respectively, in H. pylori-infected mouse gastric tissue extracts. We found a significant upregulation of MMP9 activity and expression in H. pylori-infected gastric tissues compared with control (Fig. 2a-c). The highest levels of MMP9 activity and expression were obtained in SS1/SS1-infected gastric tissues. Interestingly, however, AM1/SS1-coinfected mice showed decreased MMP9 expression and activity compared to SS1/SS1-infected and SS1/AM1-coinfected gastric tissues. Between the coinfected groups, MMP9 expression and activity were higher in SS1/AM1 than in the AM1/SS1-coinfected group.
Fig. 1 Histology of mouse gastric tissues infected with H. pylori. Different groups of mice were intragastrically inoculated with SS1 (cag+) and AM1 (cag−) strains alone or coinfected with both strains of H. pylori (SS1 followed by AM1, AM1 followed by SS1). Control mice were fed with PBS and kept separately under the same conditions. Mice were sacrificed and gastric tissue sections were processed for histological analysis. Histological appearances of control (A1), AM1/AM1-infected (B1), SS1/SS1-infected (C1), SS1/AM1-coinfected (D1) and AM1/SS1-coinfected (E1) gastric tissues stained with hematoxylin and eosin, observed at 10× magnification. A2, B2, C2, D2 and E2 represent 20× magnification and A3, B3, C3, D3 and E3 represent 40× magnification of control, AM1/AM1, SS1/SS1, SS1/AM1 and AM1/SS1-infected tissues. Gastric mucosal epithelium and inflammatory cells are shown by black arrows and green arrows, respectively.
We also measured the activity and expression of another potent ECM-degrading enzyme, MMP3, in H. pylori-infected and control mouse gastric tissues. Similar trends in MMP3 activity and expression to those of MMP9 were detected (Fig. 2d-f). The highest MMP3 activity and expression were detected in the SS1-infected group. Notably, lower MMP3 expression and activity were also detected in the AM1/SS1-coinfected group. To examine the effect of H. pylori infection on systemic levels of MMP9 and MMP2, we measured the activity of MMP9 in infected mouse serum. Figure 3 shows that H. pylori infection also increases the activity of proMMP9 in mouse serum. In addition, AM1/SS1-coinfected mice show decreased serum MMP9 activity compared to the SS1-infected and SS1/AM1-coinfected groups.
Cytokine expression patterns differed among different combinations of H. pylori infection
Th1 and Th17 cell responses play an important role in mediating inflammation during H. pylori-induced gastric pathogenesis. Hence, we measured the levels of the proinflammatory cytokines IFN-γ and IL-17A in H. pylori-infected mouse gastric tissues using ELISA. All mice infected or coinfected with the different H. pylori strains showed significantly upregulated gastric IFN-γ compared to control (Fig. 4a). The SS1/SS1-infected group secreted the highest level of IFN-γ compared to any other group. In addition, cag+ H. pylori infection, irrespective of the sequence of infection, significantly induced gastric IFN-γ secretion compared to cag− H. pylori infection. Even though AM1/SS1-infected mice developed less severe gastric pathology than SS1/AM1, no significant differences in the level of IFN-γ secretion were detected.
Fig. 2 Effect of H. pylori infection and coinfection on activity and expression of MMPs. Different strains of H. pylori (AM1 or SS1) were orally fed separately or in combination (AM1/SS1 or SS1/AM1) to four groups of mice, which were sacrificed on day 10 post infection. Control mice were fed with PBS and kept separately under the same conditions. The activities of MMP2 and 9 in mouse gastric tissue extracts were measured by gelatin zymography (a). Histographic representations of gelatinolytic activities as measured by lab image densitometry (b); values were from the above zymograms and three other zymograms from independent experiments. Expression of MMP9 was measured by Western blotting analysis (c); equal amounts of tissue extract (120 μg) from control and infected mice were used and probed with polyclonal anti-MMP9 and monoclonal anti-β-actin antibodies. The activity and expression of MMP3 in mouse gastric tissue extracts were measured by casein zymography and Western blot. Representative blots showing the activity and expression of MMP3 (d, f); β-actin served as loading control. Histographic representations of MMP3 activity and expression in control and H. pylori-infected gastric tissues (e), from the above blots and two other representative blots from independent experiments in each case. Error bars ±SEM; ***P < 0.001; **P < 0.01; *P < 0.05; NS, P = not significant versus control.
This observation led us to ask whether the reduced gastric pathology in AM1/SS1-coinfected mice is due to reduced activation of the Th17 response. We found that all mice first infected with cag+ H. pylori expressed significantly higher levels of the gastric IL-17A cytokine compared to control (Fig. 4a). However, no significant levels of IL-17A were detected in the AM1/AM1-infected or AM1/SS1-coinfected groups. Hence, the severe gastric damage in cag+ infection might be mediated by activation of the IL-17 response. To correlate the Th1 and Th17 cytokine responses in the gastric mucosa and spleen, splenocytes from the infected mice were stimulated with H. pylori WCP in vitro for 48 h. IFN-γ and IL-17A levels in the supernatants of the cultured splenocytes were then measured. The cytokine expression pattern in splenocytes paralleled that in gastric tissues of H. pylori-infected mice (Fig. 4b). To confirm the involvement of the Th1 response in H. pylori infection, we further measured the cytokine IL-12, which regulates the Th1 cell response. Our results showed that prior SS1 infection significantly upregulated IL-12 secretion compared to cag− H. pylori infection in both gastric mucosa and spleen (Fig. 4a, b). The highest level of IL-12 was detected in the SS1/SS1-infected group, while AM1/AM1 exhibited the lowest level. No significant difference in IL-12 expression was observed between the SS1/AM1- and AM1/SS1-infected groups. Immunosuppression is mediated by the cytokines IL-10 and TGF-β. Hence, we measured the levels of IL-10 and TGF-β in H. pylori-infected mouse gastric tissues. A significant decrease in the levels of IL-10 and TGF-β was observed in AM1/SS1-coinfected mice, which correlates with the reduced gastric inflammation (Fig. 4c).
Infiltration of Foxp3+ Treg cells in cag+ strain-infected gastric tissue was reduced by earlier cag− H. pylori infection
The Foxp3 transcription factor is essential for the differentiation of anti-inflammatory Treg cells. Increased numbers of CD4+CD25+Foxp3+ regulatory T (Treg) cells were detected in H. pylori-infected gastric tissues. To understand the immunosuppressive action of Treg cells during H.
pylori infection- or coinfection-mediated gastric pathogenesis, we measured the number of Treg cells present in all H. pylori-infected and -coinfected gastric tissues. All groups infected with H. pylori showed elevated numbers of gastric Foxp3+ cells compared to control (Fig. 5a, b). Moreover, gastric tissues first infected with the cag+ H. pylori strain exhibited higher numbers of Foxp3+ cells than those infected with cag− H. pylori. However, we did not find any significant differences in gastric Foxp3+ Treg cells between SS1/SS1-infected and SS1/AM1-coinfected mice, or between AM1/AM1-infected and AM1/SS1-coinfected mice. Between the coinfected groups, AM1/SS1-infected mice exhibited significantly fewer gastric Foxp3+ cells than the SS1/AM1-infected group. Interestingly, the decreased number of Foxp3+ Treg cells correlates with the low degree of gastric inflammation in AM1/SS1 infection. Hence, there is a positive association between the number of Foxp3+ Treg cells and the severity of gastric inflammation in H. pylori infection.
Interplay between inflammatory and immunosuppressive cytokines in H. pylori-mediated gastric inflammation
The balance between inflammatory and immunosuppressive cytokines plays an important role in various chronic infections, including H. pylori-induced gastric injury. The interplay among Th1, Th17 and Treg cells and their signature cytokines is crucial for H. pylori-induced pathogenesis. To understand the role of inflammatory (Th17 and Th1) and immunosuppressive (Treg) cytokines in H. pylori-induced gastric pathology, we compared the levels of various cytokines in mouse gastric tissues infected or coinfected with different strains of H. pylori.
Fig. 3 Serum MMP9 level reduced in AM1/SS1-coinfected mice. Different groups of mice were orally fed with either SS1 or AM1 strains alone or coinfected with both strains of H. pylori (SS1 followed by AM1, AM1 followed by SS1). Control mice were fed with PBS and kept separately under the same conditions. Activity of MMP9 in mouse serum was assayed by gelatin zymography (a); equal volumes of serum were loaded in each lane. Histographic representation of gelatinolytic activity as measured by lab image densitometry (b). Data are represented as mean ± SD from three independent experiments. Error bars ±SEM; ***P < 0.001; *P < 0.05; NS, P = not significant versus control.
We found
We found a positive correlation between IL-17A expression and the severity of gastric inflammation. The expression pattern of the immunosuppressive cytokine TGF-β was also measured by Western blotting.
Fig. 4 Levels of cytokines in mouse gastric tissues and in supernatants from cultured splenocytes. Homogenates of gastric tissue from the different groups of mice infected with different combinations of H. pylori were analyzed for IFN-γ, IL-12 and IL-17 production by ELISA (a); results are expressed as pg/mg total protein. Splenocytes isolated from control and all H. pylori infected groups were re-stimulated with or without H. pylori WCP (2.5 mg/ml); supernatants were collected 48 h after stimulation, and secreted cytokines were measured by ELISA (b); results are expressed as pg/ml. Gastric tissue extracts from control and H. pylori infected mice were analyzed for production of the immunosuppressive cytokines IL-10 and TGF-β by ELISA (c); IL-10 and TGF-β concentrations are expressed as pg/mg total protein. Error bars ±SEM; ***P < 0.001; **P < 0.01; *P < 0.05; NS, P = not significant versus control; n = 4 per group
The highest level of TGF-β expression was detected in the SS1/SS1 infected group. However, AM1/SS1 coinfected mice showed decreased TGF-β expression compared with the SS1/SS1 infected and SS1/AM1 coinfected groups. Furthermore, we did not find any significant change in TGF-β expression between AM1/SS1 and AM1/AM1 infected mouse gastric tissues. The expression of TGF-β was higher in SS1/AM1 infected mice than in the AM1/SS1 infected group.
Discussion
Helicobacter pylori colonization and the associated pathology are determined by a combination of pathogen virulence factors and the host immune response [5,6,38]. H. pylori infection induces a robust proinflammatory Th1 and Th17 response that is associated with gastric inflammation, atrophy, epithelial hyperplasia and dysplasia [10,17,18,39]. Moreover, mixed infection or coinfection with different Helicobacter species/strains can determine the severity of disease. In this context, Secka et al. reported that mixed infection with cag+ and cag− strains of H. pylori lowers disease burden in the Gambian population [31]. Furthermore, coinfection with enterohepatic Helicobacter species can reduce H. pylori induced gastric pathology in C57BL/6 mice through modulation of gastric Th1 and Th17 responses [30]. In the present study we investigated whether coinfection with cag+ and cag− H. pylori induces a gastric mucosal inflammatory response that differs from single strain infection. The study also examined whether coinfection has any modulatory effect on gastric ulcer severity compared with single strain infection. We previously reported that both the SS1 and AM1 strains were capable of causing gastric inflammation, although the severity of damage was more pronounced in SS1 infection [26].
(Figure legend fragment) Bar diagrams show the average number of FoxP3+ cells present in the different groups (b). Error bars ±SEM; ***P < 0.001; **P < 0.01; *P < 0.05; NS, P = not significant versus control
Although the functionality of the SS1 cag gene within mouse gastric tissue is controversial, its association with severe gastric inflammation is well established. Our current results suggested that cag+ strain (SS1) induced gastric pathology was significantly attenuated in mice that had earlier been infected with the cag− strain (AM1), and that this attenuation was associated with modulation of Th17 and Treg cell responses. It has been reported that H. pylori infection is associated with elevated Th1 cytokines [10,40].
Hence, we examined whether the reduced gastric inflammation in AM1/SS1 coinfected mice had any correlation with Th1 cytokine levels. We found that, despite the reduced gastric inflammation in AM1/SS1 infected mice, the expression of the inflammatory Th1 cytokines IFN-γ, TNF-α and IL-1β was comparable between the AM1/SS1 and SS1/AM1 groups (Fig. 5). Interestingly, a significantly lower level of IL-17 was detected in the AM1/SS1 coinfected group than in the SS1/AM1 coinfected group. Previous studies established the role of the proinflammatory Th17 pathway in the development of H. pylori induced gastric inflammation in mouse models and in humans [39,41]. Yun Shi et al. suggested that mucosal inflammation mediated by both Th1 and Th17 cells is important in H. pylori infection and that the Th17/IL-17 pathway modulates Th1 cell responses [17]. Th17 cell responses are induced earlier than Th1 cell responses [17], implying that Th17 and Th1 cells promote inflammation differentially. It is known that an active type IV secretion system is required for IL-17 secretion [18]. We found that cag+ strain infection induced IL-17A secretion in mouse gastric tissues as well as in the spleen, while cag− infection did not. It therefore appears that severe gastric inflammation in SS1/SS1 infected mice was mediated by both Th1 and Th17 responses, whereas AM1/AM1 infection was mediated by Th1 responses only. We found that the levels of the Th1 cytokines IFN-γ, TNF-α and IL-1β in AM1/SS1 infected mice were comparable to those in SS1/AM1 infected mice. In contrast, a higher level of IL-17A was detected in SS1/AM1 mice than in AM1/SS1 infected mice. Thus, our results indicate that the attenuated gastric pathology in the AM1/SS1 infected group is not due to reduced Th1 responses but rather to a reduced Th17 response to AM1/SS1 infection. We conclude that the Th1 cytokines induced by prior AM1 infection, particularly IFN-γ, could also contribute in part to the downregulation of the Th17 response induced by subsequent cag+ (SS1) infection, because IFN-γ plays an inhibitory role in Th17 cell activation [42,43]. Thus, AM1 infection released a high level of IFN-γ in the gastric lumen, which prevented activation of the Th17 response, resulting in protection against further cag+ infection. The expression and secretion of different MMPs in H. pylori infection have been postulated to be critically involved in the development of gastric ulcer. However, recent evidence suggests that, apart from their well studied inflammatory and pathogenic functions, MMPs play a more complex and diverse role in ECM homeostasis, regulation of inflammation and arrest of disease progression [22]. The role of cytokines and growth factors in the regulation of MMP expression has been reported earlier under various pathological conditions [22,44]. IL-17 stimulates gastric epithelial cells to produce MMP9 and MMP3, which might be important in mediating gastric inflammation. However, significantly lower levels of MMP9 and MMP3 expression were detected in AM1/SS1 coinfected mice compared with the SS1 alone or SS1/AM1 coinfected groups (Fig. 3).
Fig. 6 Inter-relation between Th1, Th17 and Treg cytokines in H. pylori infected mouse gastric tissues. Different strains of H. pylori were orally fed separately or in combination to C57BL/6 mice, and the mice were sacrificed on day 10 postinfection. Control mice were fed PBS and kept separately under the same conditions. Expression of TNF-α, IL-1β, IL-17 and TGF-β in infected mouse gastric tissue homogenates was assessed by Western blotting. Representative Western blots show the expression of TNF-α, IL-1β, IL-17 and TGF-β in all groups; β-actin served as loading control (a).
Histographic representation of fold changes in expression level, as measured by Lab Image densitometry (b); values were derived from the above blots and two other representative blots from independent experiments in each case.
In line with our observation, it has been reported that MMP9 expression in the stomach following H. pylori infection is significantly reduced when IL-17 is deficient or blocked [17]. Moreover, recombinant IL-17A treatment increased MMP9 expression in vitro [17]. Our results show that the level of IL-17 is significantly increased only in mouse gastric tissues infected with the SS1 strain of H. pylori, suggesting that the cag PAI is required for the induction of IL-17, and also indicating that the cells producing MMPs responded to the increased IL-17 secretion. Our results further suggest that the reduced gastritis in AM1/SS1 infected mice may be due to reduced activation of the Th17/IL-17 pathway and the subsequent downregulation of MMP9 and MMP3 expression in this group. It is well established that natural regulatory T (Foxp3+ Treg) cells suppress host inflammatory responses during infection and thereby maintain the physiological homeostasis of host immunity [45][46][47]. Elevated numbers of Treg cells have been reported in H. pylori positive patients and in H. pylori infected mouse gastric tissues [48][49][50]. Moreover, inhibition of Treg cell function by treatment with a monoclonal antibody resulted in increased expression of gastric proinflammatory cytokines, leading to severe gastritis in H. pylori infected mice [48]. CD4+CD25+ Treg cells from H. pylori positive patients are more potent in the suppression of memory T cell responses [51]. Treg mediated immune suppression predominantly utilizes IL-10 and TGF-β, which currently attract much attention [45,46]. It has previously been reported that H. pylori induced gastritis was suppressed by adoptive transfer of Treg cells harvested from IL-10-competent C57BL/6 donor mice, demonstrating that IL-10-dependent Treg cells play a crucial role in suppressing H. pylori-induced gastric disease [52]. Our results also showed that the number of gastric Foxp3+ cells, as well as gastric IL-10 and TGF-β levels, was significantly higher in H. pylori infected mouse gastric tissues (Figs. 5, 6), while AM1/SS1 infected mice with attenuated gastritis had fewer Foxp3+ cells and lower levels of gastric IL-10 and TGF-β. Hence, we found a positive correlation between the severity of gastritis and the number of Foxp3+ cells as well as IL-10 and TGF-β expression. Previous reports suggested that IL-10 and TGF-β can suppress inflammatory Th17 as well as Th1 responses [53,54]. It is therefore reasonable to postulate that prior AM1 infection creates an anti-inflammatory bias toward further H. pylori infection at the outset of coinfection, with a relatively lower demand for Treg cells at more chronic time points, because the Th1 and Th17 responses to subsequent H. pylori infection were suppressed by prior AM1-primed Treg cells. We hypothesized that dendritic cells exposed to H. pylori may promote the preferential differentiation of naïve T cells into Treg cells. These exposed dendritic cells then assist the differentiation of Treg cells and lose the capability to further induce Th1 and Th17 responses upon subsequent H. pylori infection.
Thus, the group infected first with AM1 showed reduced gastritis because of its failure to induce a Th17 response and, probably, because of the stimulation of an anti-inflammatory bias through the accumulation of AM1-sensitized dendritic and Treg cells within the gastric mucosa. In both SS1 and AM1 infection, primed Treg cells are generated in the gastric mucosa, and these Treg cells provide protection against further H. pylori infection either directly or through cross reactivity. In contrast, prior SS1 infection causes an increase in Th1 and Th17 responses that is sufficient to produce gastric damage. Irrespective of the inhibitory role of SS1-primed Treg cells, a subsequent SS1 infection enjoys the benefit of the existing inflammatory bias. Earlier infection with AM1, however, helps to elicit AM1-primed Treg cells as well as a weaker inflammatory bias through reduced secretion of Th1 cytokines. Subsequent SS1 infection is then limited by the enrichment of AM1-primed Treg cells in the gastric mucosa, which might provide protection by creating an anti-inflammatory bias and by offering a less hostile environment owing to the reduced inflammatory bias.
Conclusions
In summary, we suggest that pre-existing cag− H. pylori infection attenuates the severe gastric pathology induced by a cag+ H. pylori strain. The reduced gastric pathology is due to an anti-inflammatory bias created by cag− H. pylori. Further study is required to elucidate the cascade of interactions between H. pylori and mucosal cells, which will provide additional insights into the pathogenesis of H. pylori. A better understanding of the nature, regulation and function of the T-cell responses during H. pylori coinfection may help in designing novel and cost-effective strategies through which H. pylori induced gastric pathology might be controlled.
Culture of H. pylori strains
Two unrelated mouse-adapted H. pylori strains with different genetic makeup were used: SS1 [34,35] and AM1 (an Indian strain) [26]. SS1 (the Sydney strain) is widely used as the standard mouse-adapted strain for experimental infection. The AM1 strain was isolated, as part of a mixed infection, from an endoscopic sample of an ulcer patient in Kolkata, India [26]. Both strains of H. pylori were grown on brain-heart infusion agar (BHI; Difco Laboratories, Detroit, MI) supplemented with 7% sheep blood, 0.4% IsoVitaleX and the antibiotics amphotericin B (8 µg/ml), trimethoprim (5 µg/ml) and vancomycin (8 µg/ml) (referred to here as BHI agar). The plates were incubated at 37 °C under 5% O2, 10% CO2 and 85% N2. In all experiments, overnight-grown cultures on BHI agar plates were used.
Infection of C57BL/6 mice with H. pylori
Male C57BL/6 mice with free access to food and water were obtained from the institutional animal house. Experiments were designed to minimize animal suffering and to use the minimum number of animals needed for valid statistical evaluation. Animal experiments were carried out according to the guidelines of the animal ethics committee of the institute. Animals of both the control and experimental groups were fasted for 6 h with free access to water. H. pylori infection of mice was performed using a modification of the method of Kundu et al. [34]. Briefly, overnight-grown bacterial cultures were harvested in 10 mM phosphate-buffered saline (PBS) and used for inoculation (10^8 CFU/mouse/inoculation). Mice were divided into five groups (n = 6 in each); the first group served as control and was given PBS only.
Of the remaining four groups, two groups were orogastrically inoculated twice over a period of three days with either the AM1 (cag−) or the SS1 (cag+) strain (AM1/AM1 and SS1/SS1), and the other two groups were given a criss-cross infection. Criss-cross means infection with the cag− strain followed by the cag+ strain, and vice versa (AM1/SS1 and SS1/AM1). Mice were sacrificed at day 10 after the final inoculation (13 days post-primary inoculation).
Histological analysis
Gastric tissues of control and 10-day infected mice were sectioned for histological studies. The tissue samples were fixed in 10% formalin and embedded in paraffin. Sections (5 µm) were cut using a microtome, stained with hematoxylin and eosin [21], and assessed under an Olympus microscope. Images were captured using Camedia software (E-20P 5.0 Megapixel) at original magnifications of 10 × 10, 20 × 10 and 40 × 10 and processed in Adobe Photoshop version 7.0.
Tissue extraction
The pyloric part of the gastric mucosa of each mouse was suspended in PBS containing protease inhibitors, minced and incubated for 10 min at 4 °C. After incubation, the suspension was centrifuged at 12,000×g for 15 min and the supernatant was collected as the PBS extract. The pellet was extracted in lysis buffer (10 mM Tris-HCl pH 8.0, 150 mM NaCl, 1% Triton X-100 and protease inhibitors) and centrifuged at 12,000×g for 15 min to obtain the TX extract. Tissue extracts were stored at −80 °C for further studies.
Serum isolation
Blood samples were collected from each mouse by cardiac puncture and incubated for 30 min at room temperature. Serum was isolated from the clotted blood by low-speed centrifugation, mixed with a protease inhibitor mixture and stored at −80 °C. Equal volumes of serum were used for gelatin zymography.
Gelatin and casein zymography
For assay of MMP2, MMP9 and MMP3 activities, tissue extracts were electrophoresed in 8% SDS-polyacrylamide gels containing 1 mg/ml gelatin or casein (Sigma), respectively, under non-reducing conditions. Seventy micrograms of protein was loaded in each lane. The gels were washed twice in 2.5% Triton X-100 (Sigma) and then incubated either in calcium assay buffer or in stromelysin assay buffer at 37 °C. Gels were stained with 0.1% Coomassie blue followed by destaining [21]. Quantification of zymographic bands was done using Lab Image software (Kapelan GmbH, Germany). For assay of MMP9 activity in serum, mouse serum samples were mixed with 1× nonreducing Laemmli sample loading buffer and electrophoresed in SDS-8% polyacrylamide gels containing 1 mg/ml gelatin under non-reducing conditions. Equal volumes of serum were loaded in each lane. The gels were washed twice in 2.5% Triton X-100 and incubated in calcium assay buffer at 37 °C. Gels were then stained with 0.1% Coomassie Brilliant Blue followed by destaining. Zones of gelatinolytic activity appeared as negative staining. Quantification of zymographic bands was performed by densitometric analysis using Lab Image software (Kapelan GmbH, Germany).
Measurement of cytokines by ELISA
Helicobacter pylori infected and uninfected mouse gastric tissues were homogenized in 1 ml sterile PBS and centrifuged. The supernatants were analyzed for IFN-γ, IL-12, IL-17, IL-10 and TGF-β using sandwich ELISA kits (eBioscience, San Diego, CA) according to the manufacturer's instructions. Total protein was measured by the Lowry method. Cytokine concentrations in gastric tissue extracts were expressed as picograms per milligram of total protein [17].
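As a concrete arithmetic sketch of the normalization just described (an illustration only; the function name and the numbers below are hypothetical and are not taken from this study), an ELISA readout in pg/ml can be converted to pg per mg of total protein as follows:

    def cytokine_per_mg_protein(cytokine_pg_per_ml, total_protein_mg_per_ml):
        # Both values are measured in the same tissue extract, so the shared
        # extract volume cancels and the ratio gives pg of cytokine per mg of protein.
        return cytokine_pg_per_ml / total_protein_mg_per_ml

    # Hypothetical example: 85 pg/ml IL-17 in an extract containing 2.1 mg/ml total protein
    print(cytokine_per_mg_protein(85.0, 2.1))  # about 40.5 pg/mg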
Splenocyte culture and cytokine measurement
Spleens from Helicobacter pylori infected and uninfected mice were passed through a meshed steel sieve to obtain a single-cell suspension of splenocytes. Splenocytes (1.6 × 10^6 cells/ml) were cultured in RPMI 1640 medium with or without H. pylori whole cell protein (WCP) (2.5 μg/ml). The production of IFN-γ, IL-12 and IL-17 in the supernatants was measured by sandwich ELISA (eBioscience, San Diego, CA) after 48 h of splenocyte culture [17].
Immunofluorescence
For immunofluorescence studies, the tissue samples were fixed in 4% paraformaldehyde solution for 48 h and dehydrated in an ascending alcohol series [36]. The samples were embedded in paraffin wax and sectioned at 5 µm thickness using a microtome. The sections were deparaffinized with xylene followed by rehydration with a descending alcohol series. Antigen retrieval was performed with trypsin (0.05% trypsin, 0.1% CaCl2), and blocking was performed using 5% BSA in TBS (20 mM Tris-HCl, pH 7.4, containing 150 mM NaCl) for 2 h at room temperature, followed by overnight incubation at 4 °C in primary antibody solution (1:200 dilution in TBS with 1% BSA) in a humid chamber. The tissue sections were washed four times with TBST (20 mM Tris-HCl, pH 7.4, containing 150 mM NaCl and 0.025% Triton X-100) followed by incubation with fluorescein isothiocyanate- and Texas Red-conjugated secondary antibody (Santa Cruz Biotechnology) solution. The tissue sections were then washed four times with TBST followed by nuclear staining with DAPI. The images were observed by confocal microscopy. Images at ×40 magnification were captured using Andor iQ 2.7 software (Andor spinning disc confocal microscope, Belfast, Ireland) and processed in Adobe Photoshop version 7.0 (Adobe Systems, San Jose, CA).
Statistical analysis
Densitometry data were fitted using SigmaPlot. Data are presented as the mean ± SE. Statistical analysis was performed using Student's t test. P values less than 0.05 were considered significant.
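To make the comparison procedure above concrete, the following minimal sketch applies an unpaired, two-tailed Student's t test to densitometry fold-change values; the numbers are invented placeholders rather than data from this study:

    from scipy import stats

    # Hypothetical densitometry fold-change values (arbitrary units)
    control = [1.00, 0.92, 1.08]
    infected = [2.10, 1.85, 2.40]

    # Unpaired (independent two-sample) Student's t test, two-tailed by default
    t_stat, p_value = stats.ttest_ind(control, infected)
    print(f"t = {t_stat:.2f}, P = {p_value:.4f}")  # P < 0.05 would be reported as significant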
Elevated Wall Tension Leads to Reduced miR‐133a in the Thoracic Aorta by Exosome Release Background Reduced miR‐133a was previously found to be associated with thoracic aortic (TA) dilation, as seen in aneurysm disease. Because wall tension increases with vessel diameter (Law of Laplace), this study tested the hypothesis that elevated tension led to the reduction of miR‐133a in the TA. Methods and Results Elevated tension (1.5 g; 150 mm Hg) applied to murine TA ex vivo reduced miR‐133a tissue abundance compared with TA held at normotension (0.7 g; 70 mm Hg). Cellular miR‐133a levels were reduced with biaxial stretch of isolated murine TA fibroblasts, whereas smooth muscle cells were not affected. Mechanisms contributing to the loss of miR‐133a abundance were further investigated in TA fibroblasts. Biaxial stretch did not reduce primary miR‐133a transcription and had no effect on the expression/abundance of 3 microRNA‐specific exoribonucleases. Remarkably, biaxial stretch increased exosome secretion, and exosomes isolated from TA fibroblasts contained more miR‐133a. Inhibition of exosome secretion prevented the biaxial stretch‐induced reduction of miR‐133a. Subsequently, 2 in vivo models of hypertension were used to determine the effect of elevated wall tension on miR‐133a abundance in the TA: wild‐type mice with osmotic pump–mediated angiotensin II infusion and angiotensin II–independent spontaneously hypertensive mice. Interestingly, the abundance of miR‐133a was decreased in TA tissue and increased in the plasma in both models of hypertension compared with a normotensive control group. Furthermore, miR‐133a was elevated in the plasma of hypertensive human subjects, compared with normotensive patients. Conclusions Taken together, these results identified exosome secretion as a tension‐sensitive mechanism by which miR‐133a abundance was reduced in TA fibroblasts. A lterations in microRNA abundance have been associated with multiple cardiovascular diseases, and although much effort has been directed toward understanding their role in modulating key cellular targets, less is known about how microRNA abundance is regulated within the cell. Previously, this laboratory identified that several microRNAs, including miR-1, miR-21, miR-29a, miR-486, miR-720, and miR-133a, were reduced in ascending aortic tissue from patients with thoracic aortic aneurysm (TAA). 1 Many of these microRNAs, including miR-133a, displayed an inverse linear correlation with aortic diameter, such that, as diameter increased, the abundance of miR-133a was reduced; however, the underlying mechanisms responsible for this observation remained to be determined. On the basis of the fundamentals of the Law of Laplace, we know that vessel wall tension is dependent on the relationship between pressure and diameter (wall tension equals pressure multiplied by diameter). Accordingly, during aneurysm formation, wall tension increases as the aorta dilates, and this may play a role in determining microRNA levels in the thoracic aorta (TA). Therefore, this study sought to determine a mechanism responsible for the reduction of miR-133a and tested the hypothesis that elevated wall tension induces the loss of miR-133a from the TA. The importance of miR-133a in the regulation of extracellular matrix (ECM) remodeling in cardiovascular tissue is becoming increasingly recognized. 
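As a side illustration of the mechanical rationale invoked above, the Law of Laplace proportionality (wall tension scaling with the product of pressure and diameter) can be sketched numerically; the pressures and diameters below are illustrative values chosen only to show the scaling, not measurements from this study:

    # Relative circumferential wall tension under the Laplace proportionality, T ~ P x d
    def relative_wall_tension(pressure_mm_hg, diameter_cm):
        return pressure_mm_hg * diameter_cm

    baseline = relative_wall_tension(70.0, 2.5)       # normotensive pressure, non-dilated vessel
    hypertensive = relative_wall_tension(150.0, 2.5)  # elevated pressure, same diameter
    dilated = relative_wall_tension(70.0, 5.0)        # normotensive pressure, dilated (aneurysmal) vessel

    print(hypertensive / baseline)  # about 2.1-fold higher tension from pressure alone
    print(dilated / baseline)       # 2.0-fold higher tension from dilation alone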
Care et al demonstrated, in a murine model of cardiac hypertrophy, that miR-133a was reduced in the left ventricular myocardium after transverse aortic arch constriction and that in vivo knockdown of miR-133a (using anti-miR-133a) alone was sufficient to induce cardiac hypertrophy. 2 Similarly, Torella et al demonstrated miR-133a was reduced in the carotid artery after balloon distension injury in a rat model. 3 More important, they demonstrated that systemic overexpression of miR-133a by adenovirus attenuated postinjury remodeling, in contrast to miR-133a knockdown (using an anti-miR-133a oligonucleotide), which enhanced postinjury remodeling. Data from these studies emphasize the key role that miR-133a plays in cardiovascular remodeling and suggest that loss of miR-133a-mediated translational control likely contributes to the pathologic changes that were observed in our studies during aneurysm formation. Furthermore, multiple validated targets of miR-133a have been demonstrated to play key roles in vascular pathologic characteristics. First, miR-133a targets transforming growth factor (TGF)-b, 4 which is elevated in TAA tissue, induces connective tissue growth factor expression, alters fibroblast phenotype, and causes apoptosis in smooth muscle cells (SMCs). Second, miR-133a targets TGF-b receptor II, 4 which is elevated in TAA tissue, and with the concomitant decline in TGF-b receptor-I, shifts TGF-b ligand signaling toward the Activin Receptor-Like Kinase 1 (ALK-1), pathway activating the sma-related + Mothers Against Decapentaplegic Homolog 1, 5, or 8 (SMAD 1/5/8) pathway, which induces a profile of gene expression resulting in matrix degradation. 5 Third, miR-133a targets connective tissue growth factor, 6 which is elevated in TAA tissue and also contributes to changes in fibroblast phenotype. Fourth, miR-133a targets collagen 1a1, a major component of the aortic ECM. 7 Finally, miR-133a targets membrane type-1 matrix metalloproteinase (MMP), 8 which is elevated in TAA tissue, degrades components of the ECM, activates other MMPs, such as MMP-2, and directly releases ECM-bound cytokines, including TGF-b. In addition to the above listed direct targets of miR-133a, it is apparent that miR-133a may have the capacity to regulate multiple pathways involved in complex pathologic features, increasing the significance of the loss of miR-133a observed in the development of TAA. Accordingly, understanding the mechanisms that regulate miR-133a cellular abundance is essential and may provide insight into potential therapeutic targets. The current report identifies increased wall tension (hypertension) as a stimulus driving the loss of miR-133a in TA tissue. The effects of tension were examined on the transcription of miR-133a, the levels of cellular exoribonucleases, and the cellular export of miR-133a via exosomes. The unique findings of this study propose a novel mechanism by which increased mechanical tension reduces miR-133a abundance via exosome secretion from aortic fibroblasts, a cell type believed to play a major role in managing pathologic remodeling. 9,10 Methods The data, analytic methods, and study materials will be made available to other researchers on request for purposes of reproducing the results or replicating the procedures. 
Ex Vivo TA Tension Application Animal care and surgical procedures were approved by the Medical University of South Carolina Institutional Animal Care and Use Committee (AR3380) and performed in accordance with the National Institutes of Health Guide for the Care and Use of Laboratory Animals. Wild-type C57BL/6 mice (10-16 weeks of age; Envigo) underwent thoracotomy, and the descending TA was harvested (n=7; 4 males/3 females). Endothelial-intact aortic tissue segments were cut transversely into rings of %3 mm in length, which were suspended on parallel wires, and bathed in an oxygenated physiologic salt solution (Krebs-Henseleit solution) in a tissue myograph. Aortic segments were maintained at an experimentally derived optimal tension (normotension; 0.7 g, %70 mm Hg equivalent) or elevated tension (1.5 g, %150 mm Hg equivalent) for 3 hours using methods previously described. 11,12 Aortic segments were also held at 0.7 g and then treated with or without 100 nm angiotensin II (AngII; A9525; Sigma) and Clinical Perspective What Is New? • Tension applied to thoracic aortic rings resulted in the acute reduction in miR-133a. • Mechanical tension in the form of biaxial cyclic stretch applied to isolated primary aortic fibroblasts and smooth muscle cells revealed that fibroblasts preferentially responded to mechanical tension, resulting in the acute reduction of miR-133a. • Tension-dependent loss of miR-133a in fibroblasts was mediated through exosome secretion. • Elevated blood pressure (increased wall tension) was sufficient to induce the loss of miR-133a from the descending thoracic aorta (mouse models) and was associated with increased plasma levels of miR-133a (mouse models and human plasma). What Are the Clinical Implications? • Increased vascular wall tension (hypertension) is a stimulus driving the loss of miR-133a in the thoracic aorta. • These results identified a specific tension-sensitive mechanism by which miR-133a was reduced in a cell type that plays a key role in adverse vascular remodeling (thoracic aortic fibroblasts). allowed to develop tension by contracting against immobilized parallel wires for 3 hours; the peak tension generated was recorded (n=8; 4 males/4 females). Cell Culture The descending TA from wild-type C57BL/6 mice (10-16 weeks of age; Envigo) was extracted, and primary fibroblast or SMC cultures were established as described previously (n=8; 4 males/4 females). 13,14 The isolated fibroblasts were maintained in a fibroblast-specific growth medium (fibroblast growth media 2 with added supplemental pack containing 2% fetal calf serum; C-23020; PromoCell, Heidelberg, Germany) with an additional 10% fetal bovine serum (FBS; Gibco; catalog No. 1600) added. The isolated SMCs were maintained in SMC-specific growth medium (SMC Growth Medium 2 with added supplemental pack containing 5% fetal calf serum; C-22062; PromoCell). All primary cultures were maintained at 37°C in a humidified 5% CO 2 /95% air atmosphere. Primary fibroblasts and SMCs were used between passages 2 and 10. Primary Cell Biaxial Cyclic Tension Application and AngII Stimulation Fibroblasts or SMCs were seeded at a density of 5000 cells per cm 2 onto an amino-coated Bioflex-6 well plate (BF-3001A; Flexcell International Corporation, Burlington, NC), and allowed to adhere overnight. Culture medium was then replaced with fresh complete medium containing exosome-depleted FBS (10%; System Biosciences) with or without AngII (100 nmol/L; A9525; Sigma). 
To inhibit exosome secretion, fibroblasts were pretreated with an inhibitor of neutral sphingomyelinase, GW4869 (20 nmol/L, 24 hours; D1692; Sigma-Aldrich, St Louis, MO). After the 24-hour treatment, cells were rinsed with PBS and the appropriate culture medium was replaced. Culture plates were then held static (control) or subjected to 12% biaxial cyclic stretch, at a rate of 1 Hz, mimicking a myocardialderived amplitude and waveform in a Flexcell culture system (FX5000; Flexcell International Corporation). Determination of mRNA Expression and microRNA Abundance by Quantitative Polymerase Chain Reaction Total RNA was extracted using TRIzol Reagent (Thermo Fisher Scientific; catalog No. 15596026) and quantified by NanoDrop 2000 (Thermo Fisher Scientific). For cellular (fibroblasts or SMCs) and tissue (TA) microRNA quantitation, single-stranded cDNA was synthesized from 100 ng total RNA. Mature miR-133a levels were standardized to total RNA by methods previously described. 15 Exosomal microRNA was quantified from Exosome Precipitation and Quantitation Fibroblasts were maintained in growth media containing exosome-depleted FBS (10%) and exposed to 12% biaxial cyclic tension, as before. After 18 hours, the culture medium was collected and centrifuged at 3000g for 15 minutes to remove cells and cell debris. Supernatant (5 mL) was transferred to a separate tube and exosomes were precipitated, as described above. Relative exosome abundance was compared using acetylcholinesterase activity as a surrogate for exosome number, as previously described. 16 The incubation was performed at 37°C for 30 minutes, and the change in absorbance was measured at 412 nm on a Spectramax M3 (Molecular Devices). Human Subjects Informed consent was obtained for all patients before blood/ plasma collection, and analysis of patient plasma was approved by each respective Institutional Review Board of the collection centers involved. The inclusion/exclusion criteria for all subjects were previously described. 17 Control patients fulfilled the inclusion criteria, and they did not have a medical history of hypertension. Hypertensive patients fulfilled the inclusion criteria, they had a documented history of hypertension and left ventricular hypertrophy in their electronic medical record, and all were receiving antihypertensive medications at the time of blood/plasma collection. No patient included in this study had evidence of heart failure, as specified by the criteria defined by the European Society of Cardiology and the Heart Failure Society of America. 18,19 Statistical Analysis Statistical tests were performed using STATA (Intercooled STATA v8.2, College Station, TX) and SAS Statistics. The sample sizes for all experiments performed in this project were calculated by power analysis using SigmaPlot, version 14. For ex vivo studies, sample sizes were based on initial analysis comparing the primary readouts between experimental groups (aortic tissue miR-133a levels), and power calculations using a t test model were completed assuming a 69.1% difference in means between groups with a pooled SD (across both groups) of 28.3%. To provide hypothesis testing at a desired power of 0.95 with an a level of 0.05, the sample sizes for tissue analyses were determined to be a minimum of 6 samples per group. 
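A minimal sketch of the type of two-sample power calculation described above for the ex vivo readouts is shown below; it assumes a standard two-sided independent-samples t test power model (here via statsmodels), which may differ in detail from the software routine actually used, and the analogous in vitro and in vivo calculations described next would follow the same pattern:

    import math
    from statsmodels.stats.power import TTestIndPower

    # Inputs reported for the ex vivo aortic tissue miR-133a comparison
    difference_in_means = 69.1  # percent
    pooled_sd = 28.3            # percent
    effect_size = difference_in_means / pooled_sd  # Cohen's d, roughly 2.4

    # Solve for the sample size per group at power 0.95 and a two-sided alpha of 0.05
    n_per_group = TTestIndPower().solve_power(effect_size=effect_size,
                                              alpha=0.05,
                                              power=0.95,
                                              alternative='two-sided')
    print(math.ceil(n_per_group))  # rounds up to about 6 samples per group, as reported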
For in vitro studies, sample sizes were based on initial analysis comparing the primary readouts between experimental groups (fibroblast miR-133a levels), and power calculations using a t test model were completed assuming a 78.9% difference in means between groups with a pooled SD of 28.2%. To provide hypothesis testing at a desired power of 0.95 with an α level of 0.05, the sample sizes for cell culture analyses were determined to be a minimum of 5 samples per group. For in vivo studies, sample sizes were based on initial analysis comparing the primary readouts between experimental groups (tissue miR-133a levels), and power calculations using an ANOVA model were completed assuming a 25.5% minimum detectable difference in means with a pooled SD (across all 3 groups) of 7.4%. To provide hypothesis testing at a desired power of 0.95 with an α level of 0.05, the sample sizes for tissue analyses were determined to be a minimum of 5 samples per group. All data were assessed for normality using the Shapiro-Wilk test. Data sets in Figures 1 through 5 were subjected to 2-sample t tests (unpaired, 2 tailed). Data sets in Figure 6 were subjected to a 1-way ANOVA, followed by pairwise comparison of means by the Ryan/Einot-Gabriel/Welsch method. Data for Figure 6B and 6C were plotted and analyzed as log of miR-133a expression values using methods described by Schmittgen and Livak. 20 Data sets in Figure 7 were subjected to 2-sample t tests (unpaired, 2 tailed). Data in Table 1 are presented as mean and SD. In Table 2, a Fisher's exact test was performed on sex and ethnicities. Data were expressed as fold change from control values, unless otherwise stated in the figure legend. Data are represented as mean ± SEM in the text. In the figures, data are represented in dot plots with the mean and SEM shown next to each group. P<0.05 was considered to be statistically significant.
Increased Mechanical Tension Reduces miR-133a in TA Tissue
To determine the effects of elevated applied tension on miR-133a levels, intact murine TA segments (rings) harvested from wild-type, normotensive mice were hung on parallel wires in an ex vivo tissue myograph. Control TA segments were held at a previously determined level of tension (0.7 g) that approximates in vivo normotension (mean arterial pressure [MAP], ≈70 mm Hg). 11,12 To simulate elevated wall tension (hypertension), an increased amount of tension (1.5 g, MAP ≈150 mm Hg) was applied directly to the aortic segments for 3 hours. The abundance of miR-133a was reduced in response to increased applied tension compared with control segments (0.31±0.08- versus 1.0±0.25-fold expression; P<0.05 versus control; n=7) (Figure 1A).
Figure 1. Effects of ex vivo mechanical tension on miR-133a levels in thoracic aortic rings. Descending aortas from wild-type C57BL/6 mice were transversely cut into 3-mm rings and suspended in a tissue myograph on parallel wires in oxygenated Krebs-Henseleit solution. A, miR-133a abundance in aortic rings held at 0.7 g (normotension; Control, n=7) or with 1.5 g applied tension (Tension, n=7) for 3 hours. B, Peak developed tension (grams) in aortic rings held at normotension (0.7 g) in the absence (Control, n=8) or presence of 100 nmol/L angiotensin II (AngII, n=8) for 3 hours. C, Effect of developed tension on miR-133a abundance in aortic rings held at normotension (0.7 g) in the absence (Control, n=8) or presence of 100 nmol/L AngII (AngII, n=8).
A through C, Data are represented in dot plots with the mean and SEM shown next to each group. Comparisons were made using a 2-sample t test (unpaired, 2 tailed). *P<0.05 vs control. aortic fibroblasts and smooth muscle cells. Aortic fibroblasts and smooth muscle cells were isolated from the descending aortas from C57BL/6 mice. Each data point represents an independently isolated cell line. A through D, Data are represented in dot plots with the mean and SEM shown next to each group. Comparisons were made using a 2-sample t test (unpaired, 2 tailed). *P<0.05 vs control. Similarly, to examine the role of developed tension on miR-133a abundance, TA rings from wild-type, normotensive mice were exposed to 100 nmol/L AngII for 3 hours in an RNA was isolated from each individual cell line, and primary miR-133a levels were determined by real-time polymerase chain reaction. A, Primary miR-133a-1 expression levels in aortic fibroblasts exposed to 12% biaxial cyclic stretch (Tension, n=8) or held static (Control, n=8) for 3 hours. B, Primary miR-133a-2 expression levels in aortic fibroblasts exposed to 12% biaxial cyclic stretch (Tension, n=8) or held static (Control, n=8) for 3 hours. A and B, Data are represented in dot plots with the mean and SEM shown next to each group. Comparisons were made using a 2-sample t test (unpaired, 2 tailed). *P<0.05 vs control. isolated from each independent cell line and examined by real-time polymerase chain reaction or Western blotting, respectively. A, XRN1 mRNA expression levels in aortic fibroblasts exposed to 12% biaxial cyclic stretch (Tension, n=9) or held static (Control, n=9) for 3 hours. B, XRN1 protein abundance and representative immunoblot in aortic fibroblasts exposed to 12% biaxial cyclic stretch (Tension, n=5) or held static (Control, n=5) for 3 hours. C, XRN2 mRNA expression levels in aortic fibroblasts exposed to 12% biaxial cyclic stretch (Tension, n=9) or held static (Control, n=9) for 3 hours. D, XRN2 protein abundance and representative immunoblot in aortic fibroblasts exposed to 12% biaxial cyclic stretch (Tension, n=5) or held static (Control, n=5) for 3 hours. E, ExoSC4 mRNA expression levels in aortic fibroblasts exposed to 12% biaxial cyclic stretch (Tension, n=9) or held static (Control, n=9) for 3 hours. F, ExoSC4 protein abundance and representative immunoblot in aortic fibroblasts exposed to 12% biaxial cyclic stretch (Tension, n=5) or held static (Control, n=5) for 3 hours. A through F, Data are represented in dot plots with the mean and SEM shown next to each group. Comparisons were made using a 2-sample t test (unpaired, 2 tailed). No significant differences were observed vs control. Increased Mechanical Tension Results in Reduced miR-133a Abundance in Isolated TA Fibroblasts When isolated aortic fibroblasts and SMCs were exposed to mechanical tension in the form of biaxial cyclic stretch, miR-133a were precipitated from conditioned culture media from each fibroblast and smooth muscle cell (SMC) line. A, Acetylcholinesterase activity was quantitated as a measure of exosome number from each fibroblast line exposed to 12% biaxial cyclic stretch (Tension, n=8) or held static (Control, n=8) for 18 hours. B, miR-133a abundance in exosomes precipitated from the conditioned media of each fibroblast line exposed to 12% biaxial cyclic stretch (Tension, n=7) or held static (Control, n=7) for 18 hours. 
C, miR-133a abundance in each aortic fibroblast line after exposure to 3 hours of 12% biaxial cyclic stretch in the presence of 20 μmol/L GW4869 (Tension+GW4869, n=8) or held static (Control, n=8). D, Acetylcholinesterase activity was quantitated as a measure of exosome number from each SMC line exposed to 12% biaxial cyclic stretch (Tension, n=6) or held static (Control, n=6) for 18 hours. E, miR-133a abundance in exosomes precipitated from the conditioned media of each SMC line exposed to 12% biaxial cyclic stretch (Tension, n=6) or held static (Control, n=6) for 18 hours. A through E, Data are represented in dot plots with the mean and SEM shown next to each group. Comparisons were made using a 2-sample t test (unpaired, 2 tailed). *P<0.05 vs control.
Figure 6. Mean arterial blood pressure and miR-133a levels in thoracic aortic tissue and plasma from 2 murine models of hypertension.
To confirm that the loss of miR-133a was independent of AngII receptor signaling, isolated primary aortic fibroblasts and SMCs were exposed to 100 nmol/L AngII for 3 hours. No change in miR-133a was observed in fibroblasts (1.20±0.52- versus 1.0±0.27-fold expression; P=0.62 versus control; n=8) (Figure 2C) or SMCs (1.14±0.26- versus 1.0±0.11-fold expression; P=0.69 versus control; n=7) (Figure 2D). These findings confirmed that the alterations in miR-133a abundance observed in the aortic ring segments were likely attributable to the direct effects of mechanical tension on the fibroblasts within the aortic wall rather than to AngII-mediated receptor signaling.
Mechanical Tension Does Not Reduce Transcription of miR-133a in TA Fibroblasts
To determine the mechanism of tension-induced loss of miR-133a, primary miR-133a transcript levels (primary miR-133a-1 and primary miR-133a-2) were measured in isolated aortic fibroblasts after biaxial cyclic stretch. The expression of primary miR-133a-1 did not change in response to mechanical stretch, whereas primary miR-133a-2 increased in response to stretch (2.0±0.44- versus 1.0±0.21-fold expression; P<0.05 versus control; n=8) (Figure 3B). The results suggested that the tension-induced loss of miR-133a occurred posttranscriptionally.
Increased Mechanical Tension Does Not Alter the Expression or Abundance of MicroRNA-Specific Exoribonucleases in TA Fibroblasts
Although mature microRNAs are not likely substrates for most endoribonucleases, 21,22 they do possess unprotected 5′ and 3′ ends, making them susceptible to degradation by specific exoribonucleases. 23,24 Three exoribonucleases that are capable of degrading microRNAs have been identified: XRN-1, 25 XRN-2, 25,26 and ExoSC4. 23 To examine whether mechanical tension induced these exoribonucleases, mRNA expression and protein levels were measured in isolated aortic fibroblasts after application of biaxial cyclic stretch. Neither the mRNA expression nor the protein abundance of any of these exoribonucleases was changed compared with static fibroblasts (Figure 4A through 4F). These findings suggested that the tension-induced loss of miR-133a in fibroblasts was not mediated by the rapid degradation of mature miR-133a catalyzed by exoribonuclease activity.
Mechanical Tension Induces Exosome Secretion of miR-133a
It has been well described that many microRNAs are exported/secreted from cells in exosomes. 27 Exosomes are 30- to 100-nm-diameter endosomal vesicles that are packaged into multivesicular bodies and secreted on fusion with the plasma membrane.
To determine the effects of stretch on exosome secretion, TA fibroblasts were grown on flexible membranes and either held static or exposed to biaxial cyclic stretch in medium containing exosome-depleted FBS. At the end of 18 hours, the medium was collected and the newly secreted exosomes were precipitated. Acetylcholinesterase is enriched in the lipid bilayer membrane of exosomes, allowing the measurement of acetylcholinesterase activity to be used as a surrogate for the number of exosomes present. 28,29 Acetylcholinesterase activity was higher in exosomes precipitated from the medium of fibroblasts exposed to stretch compared with the medium from the static control fibroblasts (1.27AE0.04-versus 1.0AE0.08-fold acetylcholinesterase activity; P<0.05; n=8) ( Figure 5A). Furthermore, the abundance of miR-133a was increased in the exosomes collected from the fibroblasts exposed to stretch compared with static controls (1.65AE0.28-versus 1.0AE0.12-fold expression; P<0.05; n=7) ( Figure 5B). These results demonstrated that biaxial cyclic stretch enhanced TA fibroblast exosome secretion and the loss of cellular miR-133a. Previous studies have demonstrated that exosome formation and membrane curvature is dependent on neutral sphingomyelinase 2-mediated hydrolysis of the choline head group, in the conversion of sphingomyelin to ceramide. [30][31][32] Inhibition of neutral sphingomyelinase 2 with the use of a well-described noncompetitive inhibitor (GW4869) has been demonstrated to prevent exosome formation and secretion. 30,[33][34][35] Accordingly, to demonstrate that the stretch-induced loss of miR-133a was mediated by exosome formation and secretion, fibroblasts were exposed to biaxial cyclic stretch in the presence of GW4869. The results demonstrated that inhibition of exosome formation prevented the stretch-induced secretion of miR-133a; no change in cellular miR-133a levels were observed with GW4869 compared with static control (1.02AE0.35-versus 1.0AE0.29-fold expression; P=0.52, n=8) ( Figure 5C). Taken together, these studies suggested that one mechanism by which miR-133a is lost in response to elevated mechanical tension is through increased packaging and secretion of miR-133a in exosomes. To determine the effects of stretch on SMC exosome secretion, TA SMCs were grown on flexible membranes and either held static or exposed to biaxial cyclic stretch in medium containing exosome-depleted FBS. At the end of 18 hours, the medium was collected and the newly secreted exosomes were precipitated. Acetylcholinesterase activity was similar in exosomes precipitated from the medium of SMCs exposed to stretch compared with the medium from the static control SMCs (0.90AE0.12-versus 1.0AE0.11-fold acetylcholinesterase activity; P=0.21; n=6) ( Figure 5D). Furthermore, the abundance of miR-133a was similar in the exosomes collected from the SMCs exposed to stretch compared with static controls (1.07AE0.26-versus 1.0AE0.14-fold expression; P=0.86; n=6) ( Figure 5E). These results confirmed that TA SMC exosome secretion of miR-133a was not affected by biaxial cyclic stretch. Increased Vessel Wall Tension in Vivo Results in a Reduction of TA Tissue miR-133a Abundance To further examine the relationship between wall tension and the in vivo loss of miR-133a, 2 distinct murine models of hypertension were used. Murine blood pressure measurements were obtained, and relative changes in MAP were measured by tail-cuff using the CODA system. 
MAP in wild-type normotensive mice was determined to be 121.16AE2.59 mm Hg (normotensive, n=12). Circulating miR-133a Abundance Was Increased in Hypertensive Patients To establish whether there is a relationship between elevated wall tension and circulating levels of miR-133a in patients, the abundance of miR-133a was determined in plasma samples collected from patients previously diagnosed with hypertension and compared with normotensive controls (patient demographic information is listed in Table 2). Blood pressure was measured at the time of blood collection. MAP was significantly elevated in the hypertensive patients (n=11) compared with the normotensive patients (n=12) (97AE6 versus 89AE8 mm Hg; P<0.05) ( Figure 7A) (blood pressures are listed in Table 2). Moreover, circulating plasma levels of miR-133a were increased in the hypertensive group (n=11) compared with the normotensive group (n=12) (1.55AE0.26versus 1.0AE0.18-fold expression; P<0.05) ( Figure 7B). These results supported the described animal studies and demonstrated that increased MAP was associated with increased plasma levels of miR-133a. Discussion Previous work from this laboratory demonstrated the abundance of miR-133a was decreased in aortic tissue from patients with TAA and was inversely proportional to aortic diameter. 1 We hypothesized, on the basis of the Law of Laplace, that elevated wall tension, as experienced with increased aortic diameter, may play a role in regulating miR-133a cellular abundance. Accordingly, in this study, we examined mechanisms capable of regulating miR-133a abundance under conditions of elevated wall tension. The unique findings of this study were 4-fold. First, it was determined that ex vivo tension applied to intact TA rings resulted in a shortterm reduction in tissue miR-133a abundance. Second, mechanical tension in the form of biaxial cyclic stretch applied to isolated primary aortic fibroblasts and SMCs revealed that fibroblasts preferentially responded to mechanical tension, resulting in the short-term reduction of miR-133a abundance. Third, 3 potential mechanisms capable of regulating the cellular abundance of microRNAs were examined, and results demonstrated that tension-dependent loss of miR-133a was primarily mediated through exosome secretion. Finally, using 2 in vivo models of hypertension and human plasma samples from normotensive and hypertensive patients, it was determined that elevated blood pressure (increased wall tension) was sufficient to induce the loss of miR-133a from the descending TA (mouse models) and was associated with increased plasma levels of miR-133a (mouse models and human plasma). Taken together, these results identified a specific tension-sensitive mechanism by which miR-133a was reduced in a cell type that plays a key role in adverse vascular remodeling. The association between increased wall tension and vascular remodeling, in aortic aneurysm, has been described. 37 In previous results from this laboratory, mechanical tension applied to intact murine TA rings altered gene expression of several key MMPs active in ECM remodeling, specifically MMP-2 and the membrane type-1 MMP. 11,12 Combined with previous findings suggesting miR-133a regulates the abundance of membrane type-1 MMP, 8,38 these prior studies provided foundational evidence that justified examination of the role of elevated wall tension on the levels of this microRNA. 
In this report, it was demonstrated that in as little as 3 hours, applied tension (roughly equivalent to a MAP of %150 mm Hg) reduced miR-133a by %70% in TA tissue. This is in agreement with a study by Mohamed and colleagues, who performed a genome-wide analysis on the dysregulation of microRNA abundance in the thoracic diaphragm of mice after ex vivo application of mechanical tension. 39 Using microarray analysis, the authors demonstrated that multiple microRNAs were affected by the application of mechanical tension relative to control; applied tension increased the abundance of some microRNAs and decreased the abundance of others, including miR-133a. 39 The present study is in agreement with these findings by demonstrating that the TA is also mechanically sensitive and elevated wall tension results in the short-term reduction of miR-133a. Although the application of ex vivo mechanical tension allowed isolation of the effects of tension alone on miR-133a levels, it is well known that multiple factors influence wall tension in vivo. AngII is one factor that causes increased peripheral vascular resistance through interacting with its receptors and driving vasoconstriction, leading to elevated vascular wall tension. Therefore, using endothelial cell-intact aortic rings, we demonstrated that developed tension in response to AngII reduces miR-133a abundance. Interestingly, developed tension had a similar effect to the direct application of mechanical tension. Furthermore, this suggested that physiological changes in wall tension, such as the development of hypertension, may be sufficient to alter miR-133a abundance in the vasculature. When isolated aortic fibroblasts and SMCs were examined using a biaxial cyclic stretch paradigm mimicking the cardiac cycle, the short-term reduction of miR-133a abundance was observed in fibroblasts, but not in SMCs. Previous studies have suggested that fibroblasts are sensitive to changes in mechanical tension and respond by altering phenotypic characteristics, taking on a "synthetic" or mobile role. 40,41 In pathologic vascular remodeling, these synthetic fibroblasts become activated in the adventitia and migrate inward to the media, remodeling the ECM and enhancing vessel stiffness. 42 Previous results from this laboratory defined changes in cellular makeup within the aortic wall during TAA formation, revealing that aortic dilation occurs simultaneously with vessel remodeling and the emergence of a population of active fibroblasts. 9 When combined with studies demonstrating that both thoracic and abdominal aortic dilation is accompanied by the apoptotic loss of SMCs, 9,43,44 it is believed that fibroblasts may be the key cellular mediator of aortic remodeling in TAA development. Because a single microRNA can regulate the translation of multiple targets, the present results may provide support for the idea that the short-term loss of miR-133a from aortic fibroblasts contributes to altered fibroblast phenotype and may contribute to the aberrant vascular remodeling that occurs during aneurysm development. This hypothesis will be addressed in future studies. Interestingly, when the isolated fibroblasts and SMCs were exposed to AngII in culture, no change in miR-133a levels was observed. This suggested that the short-term loss of miR-133a abundance was not mediated by direct activation of the classic AngII pathway. In this study, 3 mechanisms capable of regulating the abundance of miR-133a were investigated. 
First, to determine if the reduction of miR-133a in aortic fibroblasts was attributable to reduced miR-133a transcription, primary miR-133a transcript levels were quantitated after the application of mechanical tension. Because miR-133a is transcribed from 2 separate locations within the genome, both primary miR-133a-1 and primary miR-133a-2 levels were examined. Although several studies have demonstrated that mechanical tension may alter transcription, 11,12,45 tension had no effect on primary miR-133a-1 levels. Surprisingly, an increase in primary miR-133a-2 was induced; however, this was not sufficient to overcome the loss of mature miR-133a observed. Therefore, it was concluded that tension-induced reduction of mature miR-133a was not a result of decreased transcription. Second, ribonuclease-mediated degradation was examined. Three known exoribonucleases capable of degrading micro-RNAs are XRN-1, XRN-2, and ExoSC4. 23,24 Tension did not alter exoribonuclease mRNA expression or protein abundance, suggesting rapid degradation of miR-133a was not likely mediated by exoribonucleases. Third, exosome secretion has been identified as an efficient mechanism for rapid reduction of cytoplasmic nucleic acids. 46 In the present study, it was determined that acetylcholinesterase activity increased by %30% in exosomes precipitated from conditioned culture medium collected from stretched fibroblasts versus their static controls. This suggested that there was an increase in either size or number of exosomes secreted in response to stretch. Nonetheless, when normalizing for the amount of acetylcholinesterase activity, more miR-133a was detected in the exosomes collected from the culture medium of fibroblasts exposed to stretch, suggesting increased packaging and export. Most important, when exosome secretion was arrested through inhibition of neutral sphingomyelinase 2, the tensioninduced reduction of miR-133a in fibroblasts was abolished. Therefore, it was concluded that exosome secretion was the major mechanism responsible for the rapid loss of mature miR-133a from aortic fibroblasts in the presence of elevated tension. Interestingly, cellular neutral sphingomyelinase 2 has been identified to be activated immediately (in <1 minute) in response to elevated vascular pressure. 47,48 This pathway, which is largely initiated within caveolae located on the cell membrane, has been demonstrated to upregulate the secretion of exosomes in response to elevated mechanical stretch. 49 Caveolae are primary sites for rapid mechanoinduced tyrosine phosphorylation of proteins and are considered mechanosensing organelles containing signaling molecules, including sphingomyelin and nonreceptor tyrosine kinases. 50 Future studies classifying activation of these pathways in TAA may identify novel targets for therapeutic intervention. The physiological relevance of these findings was determined in vivo through an examination of the effects of increased vessel wall tension on the aortic tissue levels of miR-133a. For this approach, 2 unique murine models of hypertension were used. In the first model, hypertension was induced in mice by delivering AngII by osmotic pump infusion. The second uses a commercially available, spontaneously hypertensive mouse line (BPH2). In this model, it has been reported that hypertension is angiotensin independent through the observation of low circulating angiotensin I levels. 
51 In both models, elevated mean arterial blood pressures were confirmed and, as anticipated, miR-133a was found to be reduced in the tissue of the descending TA. In a similar study performed by Castoldi and colleagues, hypertension was induced in Sprague-Dawley rats by treatment with AngII via osmotic pump. 7 After 4 weeks of hypertension, the levels of miR-133a were found to be reduced in myocardial tissue. 7 Although the loss of miR-133a was inhibited with an AngII receptor blocker, irbesartan, this study was unable to determine whether this effect was mediated by AngII signaling or elevated tension. 7 Interestingly, in the present study, both murine models of hypertension displayed increased plasma levels of miR-133a. Although statistical significance was reached only in the BPH2 model, this may have been attributable to the time difference in exposure to elevated blood pressures. Although the AngII model of hypertension experienced elevated blood pressures for a total of 4 weeks, the hypertensive BPH2 mice (12-14 weeks of age when examined) had experienced hypertension since ≈5 weeks of age (a total of 7-9 weeks). 52 These findings led us to the question of whether this effect could be similarly observed clinically in hypertensive patients. Accordingly, miR-133a abundance was measured in the plasma of individuals with and without documented hypertension. Plasma levels of miR-133a were increased in the hypertensive group. This is in agreement with a previous study from this laboratory when investigating circulating microRNA levels in patients with elevated TA wall tension from TAA. 53 In this past study, it was demonstrated that plasma miR-133a levels were elevated in patients with TAA compared with healthy controls. Combined with these past findings, the present results confirmed that in vivo elevated tension was sufficient to induce the loss of miR-133a in the aortic tissue (mouse) and concomitantly demonstrated an association with increased circulating levels of miR-133a (mouse and human). The present study is not without limitations. First, the effects of tension were determined on miR-133a alone. Although increasing evidence highlights the role miR-133a plays in maintaining tissue homeostasis, it is anticipated that the abundance of other microRNAs may also be altered with elevated tension. Accordingly, we are unable to conclude whether tension-induced secretion of miR-133a in exosomes is limited to miR-133a or is a characteristic of a specific group of microRNAs. Second, after 3 hours of stretch, the amount of exosomes secreted into the cell culture medium was not sufficient to be detected above that of the exosome-depleted medium alone, suggesting 3 hours was insufficient for the detection of de novo synthesis of exosomes. Therefore, the length of time was increased to 18 hours, which was sufficient in demonstrating an increase in exosomes with stretch. Third, murine blood pressures were assessed using a noninvasive tail-cuff system. This system routinely reports systolic and diastolic pressures that contribute to a normotensive MAP of ≈120 mm Hg in untreated wild-type mice. 54 Although a MAP of this value would typically be considered hypertensive, these results are consistent with other investigators using the CODA system for measuring murine blood pressure. 54 Therefore, we assessed the relative difference between the groups of mice and were able to determine a significant increase in the blood pressure measurements in both murine models of hypertension. 
Fourth, although circulating levels of miR-133a were identified to be significantly elevated in the clinical plasma samples taken from hypertensive patients, the sample size of measurements was limited, preventing advanced modeling of the correlation between pressure and miR-133a abundance. Last, all hypertensive patients in this study were actively being treated with antihypertensive agents at the time of blood collection. Despite this, blood pressures and plasma levels of miR-133a were significantly elevated compared with the nonhypertensive control group. Taken together, the results of this study have identified a specific tension-sensitive mechanism by which miR-133a was reduced in aortic fibroblasts through the packaging and secretion of exosomes. These current findings hold significance with regard to the potential advancement in understanding the role of miR-133a in the regulation of TA remodeling and even suggest the possibility of using circulating levels of this microRNA to detect or monitor the progression of pathologic conditions associated with adverse vascular wall tension. Author Contributions Experimental design was conceived by Akerman, Stroud, and Jones. All experimental procedures were performed by Akerman, Blanding, Stroud, and Nadeau. Akerman wrote the manuscript. Clinical samples provided by Zile. Editorial revisions and data analysis and interpretation were performed by Akerman, Blanding, Stroud, Ruddy, Mukherjee, Zile, Ikonomidis, and Jones. All authors reviewed the results and approved the final version of the manuscript. Sources of Funding Research reported in this publication was supported by the National Heart, Lung, and Blood Institute (NHLBI) of the National Institutes of Health and the Department of Veterans Affairs under Award Numbers (NHLBI) R01HL102121 (Ikonomidis), 1R01HL123478 (Zile), and (VA-ORD Merit BLR&D Award) I01BX000904 (Jones). The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health or the Department of Veterans Affairs.
Hugoniot-based equations of state for two filled EPDM rubbers Particle-filled elastomers are commonly used as engineering components due to their ability to provide structural support via their elastic mechanical response. Even small amounts of particle fillers are known to increase the mechanical strength of elastomers due to polymer-filler interactions. In this work, the shock response of two filled (SiO2/silica and Kevlar™ fillers) ethylene-propylene-diene (EPDM) rubbers was studied using single- and two-stage gas gun-driven plate impact experiments. Hugoniot states were determined using standard plate impact methods. Both filled-EPDM elastomers exhibit high compressibility under shock loading and have a response similar to adiprene rubber. Introduction Particle-reinforced ethylene-propylene-diene monomer (EPDM, figure 1) elastomers are used as liner materials in some solid rocket motors, providing structural support and protection of propellant fills. As such, they may be subjected to high strain rate deformation or unintentional impact conditions, requiring that their compressive properties be investigated over a range of conditions (strain rate, temperature, etc.). In the present work, the low-to-intermediate stress shock behaviors of silica-filled and Kevlar-filled EPDM rubbers were investigated using gas gun-driven plate impact to impart sustained shock waves into the polymer samples. These are the first measurements of Hugoniot states for filled EPDM elastomers used in rocket motor liners. A total of 8 experiments were performed, 4 on each material, forming the basis of Hugoniot-based equations of state for the two materials. Both rubbers had similar compressibility and were similar in behavior to adiprene rubber [1]. Figure 1. Chemical structure of the monomer repeat unit of the EPDM polymer. Materials Filled EPDM rubbers were obtained from Kirkhill-TA, Brea, CA, USA. Table 1 summarizes the properties of the Kevlar- and silica-filled elastomers. Table 1. Summary of initial and thermal properties of silica- and Kevlar-filled EPDM. The glass and melt transition temperatures, T_g and T_m, and the heat of fusion, ΔH_f, were obtained by differential scanning calorimetry at 2 °C/min heating rates. The Kevlar-filled EPDM composite used KL70-L6211 EPDM, with a proprietary fill percent. The silica-filled EPDM was designation EPDM KL70-887, and contained approximately 30% silica by weight. Based on the difference in density between silica and Kevlar, the fill percent (by weight) in the Kevlar-filled EPDM is expected to be greater. The samples were molded and cured into flat sheet stock with thicknesses of 0.031, 0.125, and 0.5 inches. The glass and melt transitions of the polymer composites were determined using differential scanning calorimetry (DSC, TA Instruments Q2000) at Los Alamos National Laboratory. Both materials have low-temperature glass transitions (T_g = −47 °C) and melt transitions at T_m = −19 and −16 °C for the silica- and Kevlar-filled materials, respectively. From integration of the melt endotherm the heat of fusion, ΔH_f, is obtained. 
ΔH_f can be related to the percent crystallinity. From table 1, the Kevlar-filled material contains ∼3 times the crystallinity of the silica-filled material. The greater crystallinity manifests itself in other ways not reported here. The difference in initial densities also indicates that the Kevlar-filled material has a greater percentage of filler. Plate impact experiments Gas gun-driven plate impact experiments were performed using a 72 mm diameter single-stage light gas gun, and a two-stage, 50 mm bore (launch tube) light gas gun at Los Alamos National Laboratory, described previously [2]. Two types of experiments were performed. In the first, two (0.94 cm in length) electromagnetic gauge elements contained in a single electromagnetic gauge membrane were sandwiched between layers of the elastomer samples, providing direct measurement of particle velocity wave profiles at the impact face and 2 additional Lagrangian positions in the material. The principles of electromagnetic gauge operation have been described previously [3,4]. Figure 2 shows a photograph of a gauge membrane glued to a Kevlar-filled EPDM sample. The gauges are the vertical elements in the center of the sample, and consist of 5 µm Al sandwiched between two 10 µm-thick FEP-Teflon membranes forming a package ≈25 µm thick. The gauges were glued to the EPDM samples using Epon 815 epoxy with glue bonds generally < 10 µm thick. The EPDM samples were ∼0.125 inch thick. The embedded gauge targets were impacted by either z-cut sapphire or Kel-F 81 (polychlorotrifluoroethylene, ρ_0 = 2.14 g/cm³) impactors launched by the light gas guns. Example particle velocity wave profiles from all 6 gauges (2 each at 3 Lagrangian positions) from shot 2s-433 on Kevlar-filled EPDM are shown in figure 3. In the second type of experiment, polymer samples were affixed in the front of polycarbonate (Lexan™) projectiles and impacted into oriented [100] lithium fluoride (LiF) windows with an Al reflector coated on the impact face. Dual velocity-per-fringe (vpf) VISARs [5] were used to measure the interface particle velocity. From the measured projectile velocity and interface particle velocity, the Hugoniot state was determined by impedance matching to the LiF Hugoniot (ρ_0 = 2.638 g/cm³, C_0 = 5.15 km/s, S = 1.35 [1]). Figure 4 shows a schematic of the front surface impact experiments. Figure 5 shows example interface velocity wave profiles from shot 2s-432, in which the Kevlar-filled EPDM was impacted into LiF at 2.878 km/s. The ripples in the wave profiles are due to heterogeneities in the sample from the Kevlar filler. Results and discussion A series of gas gun-driven plate impact experiments were performed on silica- and Kevlar-filled EPDM rubbers, imparting several-microsecond-duration supported shocks into the materials with shock input stresses ranging from < 1 to nearly 15 GPa. The Hugoniot states determined in the experiments are summarized in table 2, and are the first experimental shock data on particle-reinforced EPDM elastomers that we are aware of. The measured Hugoniot states for silica- and Kevlar-filled EPDM are shown in the U_S−u_p and P−V planes in figures 6 and 7. In the U_S−u_p plane, the data have been fit to a linear Rankine-Hugoniot relationship. Many polymers, liquids, and "porous" or free-volume-containing materials have Hugoniots with downward curvature in the U_S−u_p plane, and C_0 from a linear fit often exceeds the ambient-condition bulk sound velocity by 300-700 m/s [6]. 
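The impedance-matching construction used for the front-surface impact experiments above reduces to a few lines of arithmetic: stress and particle velocity are continuous across the sample/LiF interface, so the measured interface velocity fixes the pressure through the LiF Hugoniot, and the particle-velocity jump in the sample is the difference between the projectile and interface velocities. A minimal sketch in Python, using the LiF Hugoniot parameters quoted above; the interface velocity and the sample initial density used below are illustrative placeholders, not values reported in this work.

```python
# Impedance matching for a front-surface impact: a sample moving at u_proj
# strikes a stationary LiF window; a VISAR measures the particle velocity
# u_int at the sample/LiF interface.

def lif_hugoniot_pressure(u_int, rho0=2.638, c0=5.15, s=1.35):
    """Pressure (GPa) in LiF at particle velocity u_int (km/s).
    P = rho0 * U_S * u_p with U_S = C0 + S*u_p; rho0 in g/cm^3 gives GPa."""
    return rho0 * (c0 + s * u_int) * u_int

def sample_hugoniot_state(u_proj, u_int, rho0_sample):
    """Return (u_p, U_S, P) of the sample from impedance matching.
    Pressure and particle velocity are continuous at the interface, so the
    sample carries the same pressure at a particle-velocity jump u_proj - u_int."""
    p = lif_hugoniot_pressure(u_int)      # GPa, common to sample and window
    u_p = u_proj - u_int                  # km/s, jump sustained by the sample
    u_s = p / (rho0_sample * u_p)         # km/s, from P = rho0 * U_S * u_p
    return u_p, u_s, p

if __name__ == "__main__":
    u_proj = 2.878   # km/s, projectile velocity of shot 2s-432 (from the text)
    u_int = 1.05     # km/s, hypothetical interface velocity (placeholder)
    rho0_sample = 1.04   # g/cm^3, placeholder initial density (not from Table 1/2)
    up, us, p = sample_hugoniot_state(u_proj, u_int, rho0_sample)
    print(f"u_p = {up:.3f} km/s, U_S = {us:.3f} km/s, P = {p:.2f} GPa")
```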
Linear U_S−u_p fits shown in figure 6 appear to extrapolate well to u_p = 0. The linear Rankine-Hugoniot fit coefficients, U_S = C_0 + S·u_p, are, for the silica-filled EPDM, C_0 = 1.823 ± 0.031 km/s, S = 1.855 ± 0.023, and for the Kevlar-filled EPDM, C_0 = 1.657 ± 0.048 km/s, S = 2.027 ± 0.038. Figure 7 shows that both elastomers are quite compressible under dynamic compression, with volumetric compressions of ∼10-15% below 1 GPa and nearly 35% at 12-14 GPa. Table 2. Summary of Hugoniot states measured in silica- and Kevlar-filled EPDM rubbers. Error in u_proj is < 0.1%. Errors in U_S and u_p are estimated to be 1-2%. The experiment type is designated as "gauge" for the electromagnetic gauging experiments, or "FS" for the front surface impact experiments. Impactor material Al2O3 refers to single-crystal z-cut (c-cut) sapphire. Because adiprene rubber, ρ_0 = 1.094 g/cm³, has similar U_S−u_p characteristics (figure 6), its compressibility should also be similar. The measured particle velocity wave profiles in the Kevlar-filled EPDM also showed structure (ripples). We presume that these are due to the shocks propagating over the Kevlar filler particles. Also, at low shock input stresses, transmitted shock waves in both materials had rounding on the front of the wave. This low-pressure wave-front rounding behavior is consistent with a viscoelastic response observed in other polymers [7]. The rounding in particle velocity at the top of the shock front is due to shocking first to an "instantaneous" state on a "stiffer" Hugoniot, followed by viscoelastic relaxation to an "equilibrium" condition. Conclusions New Hugoniot data for two different filled EPDM rubbers are presented up to nearly 15 GPa. Similar to adiprene rubber, the two materials were found to be quite compressible under modest shock pressures. For example, compression ratios, V/V_0, of 0.85-0.9 (or compressions, 1 − V/V_0, of 10-15%) are observed at shock pressures below 1 GPa. At 12-14 GPa, the rubbers are compressed ∼35%. The compressions of the two rubbers are not appreciably different in the P−V/V_0 plane.
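The quoted compressions follow from the linear fits through the Rankine–Hugoniot jump conditions, P = ρ_0·U_S·u_p and V/V_0 = 1 − u_p/U_S. A short Python sketch using the fit coefficients reported above; the initial densities are round placeholder values (Table 1 is not reproduced here), so the numbers are only indicative.

```python
import numpy as np

def hugoniot_pv(up, rho0, c0, s):
    """Return (P in GPa, V/V0) along a linear U_S = C0 + S*u_p Hugoniot."""
    us = c0 + s * up
    p = rho0 * us * up          # g/cm^3 * (km/s)^2 -> GPa
    v_ratio = 1.0 - up / us     # mass conservation across the shock front
    return p, v_ratio

# Fit coefficients from the text; rho0 values are placeholders (assumed).
fits = {
    "silica-filled EPDM": dict(c0=1.823, s=1.855, rho0=1.0),
    "Kevlar-filled EPDM": dict(c0=1.657, s=2.027, rho0=1.0),
}

up = np.linspace(0.05, 3.0, 200)   # km/s
for name, f in fits.items():
    p, vr = hugoniot_pv(up, f["rho0"], f["c0"], f["s"])
    i1 = np.argmin(np.abs(p - 1.0))     # state nearest 1 GPa
    i2 = np.argmin(np.abs(p - 13.0))    # state nearest 13 GPa
    print(f"{name}: 1 - V/V0 ~ {1 - vr[i1]:.2f} near 1 GPa, "
          f"{1 - vr[i2]:.2f} near 13 GPa")
```

With unit initial densities this gives roughly 15–16% compression near 1 GPa and about 36% near 13 GPa; the exact values shift slightly with the actual initial densities, but fall in the ranges quoted above.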
One‐Pot Synthesis of Diverse γ‐Lactam Scaffolds Facilitated by a Nebulizer‐Based Continuous Flow Photoreactor Abstract The use of a modified prototype continuous flow reactor (CFR) as a pivotal part of a number of versatile singlet oxygen‐mediated reaction sequences is presented herein. These sequences target rapid access to structural complexity and diversity. The prototype reactor achieves high conversions and productivities by attaining large specific surface areas for these biphasic reactions. In the reactor, the reaction solution is nebulized (using either oxygen or air) and the resulting aerosol is irradiated by an LED jacket that surrounds the Pyrex reaction chamber. The one pot procedures developed herein are, according to many different criteria, both highly efficient and green. The key common intermediates and the source of both the complexity and variety of the final products are N‐acyl imminium ions (NAI; protonated N‐acyl enamines). The initial substrates are simple and readily accessible furans and the diverse array of products is composed of different complex γ‐lactams. Many of the products are of particular interest due to their close relationships to known biologically active molecules. S3 SAFETY PRECAUTIONS: Measures were taken to eliminate all possible ignition sources from the fumehood area (sparks or flames, e.g. the electricity transformer for the LEDs was kept outside of the fumehood) in which the NebPhotOX system was operated. This included operating the reactor at room temperature and pressure without any significant heat input from the low power LEDs used. In addition, the fumehood was always adequately ventilated with a high air flow. System operating conditions prevented oxygen stagnation in the system. Additional precautions included that the operator wears safety glasses with side shields and flame resistant safety clothing. When the procedure was performed on a larger scale, the two cooled collection flasks placed in series were prefilled with an excess of Me 2 S in MeOH (3 equiv in the first flask and 1 equiv in the second flask) for the fast reduction of the initially formed hydroperoxides of types I and II (Scheme 1). Even higher excesses of the reducing agent can be used. General experimental procedure for the preparation of compounds 3a, 3b and 4c 2-Substituted furans 1 (2.5 mmol, 380 mg in case of 1a, or 221 μL in case of 1b, or 315 mg in case of 1c) and rose Bengal (0.5 mol%, 12.7 mg) were dissolved in MeOH (total volume 5 mL, 0.5 M). The resulting solutions were transferred to the nebulizer via a liquid pump (flow rate set at 0.5 mL min -1 ) and timing was initiated for calculation of the exact flow rate. The solutions were dispersed by the nebulizer into the reaction cylinder which was placed in a horizontal or a vertical position using oxygen or air as the nebulizing gas (50 psi back pressure). The cylinder was irradiated by LEDs (natural white light 3800-4200 K, 10 W m -1 , 1050 Lm m -1 ). When all the solution had been dispersed, the timing was stopped for the calculation of the exact flow rate and the three-way valve on the uptake line was switched to pure MeOH (2 mL) to flush out the system. The crude solutions were collected in the two cooled spherical flasks placed in series. A small sample of the crude solution was concentrated in vacuo for the measurement of the conversions by 1 H NMR. Then, the solutions from the two flasks were placed into one flask and Me 2 S (730 μL, 10 mmol) was added. The solution was stirred for 1 h at rt. 
When the reductions were completed, as indicated by tlc analysis, the appropriate amine (2.5 mmol, 278 mg of histamine, or 400 mg of tryptamine, or 273 L of BnNH 2 ) was added and the solution was stirred for 1 h at rt. After the formation of the corresponding 2-pyrrolidinones of type 2, MeOH was replaced either by HCOOH (2 mL, towards 3a) or CH 2 Cl 2 (6 mL, towards 3b and 4c). For the formation of 3b, TFA (1.25 mmol, 96 L) was added, while for the formation of 4c, p-TsOH (1.25 mmol, 238 mg) was added. After completion of the reactions (15 h for 3a, or 3 h for 3b, or 1 h for 4c) the solutions were concentrated in vacuo and the products were purified by flash column chromatography (silica gel, petroleum ether : EtOAc or acetone : EtOAc). S5 Glochidine (3a) 1 The reaction was accomplished according to the general experimental procedure described above, utilizing furan 1a. Nebulization of the 5 mL reaction solution took 8.62 min (actual flow rate = 0.58 mL min -1 ) when the reaction cylinder was in the horizontal position and oxygen was used as the nebulizing gas (conversion 99%). When the reaction cylinder was in the vertical position and air was used as the nebulizing gas, the reaction solution was nebulized within 7.68 min (actual flow rate = 0.65 mL min -1 ) and the conversion was 95%. Starting the reaction sequence with the cylinder placed in the horizontal position, the product 3a was isolated in 58% yield (379 mg) after purification by flash column chromatography (silica gel, EtOAc → acetone : EtOAc, 1:1 124.9, 124.7, 78.4, 41.4, 33.4, 31.4 (2C), 29.8, 28.8, 23.6, 22.4, 20.1, 13.9 ppm. 6-Benzyl-1-oxa-6-azaspiro[4.4]nonan-7-one (4c) 2 The reaction was accomplished according to the general experimental procedure described above, utilizing furan 1c. Nebulization of the 5 mL reaction solution took 9.09 min (actual flow rate = 0.55 mL min -1 ) when the reaction cylinder was in the horizontal position and oxygen was used as the nebulizing gas (conversion 85%). When the reaction cylinder was in the vertical position and oxygen was used as the nebulizing gas, the reaction solution was nebulized within 7.34 min (actual flow rate = 0.68 mL min -1 ) and the conversion was 92%. When the reaction cylinder was in the vertical position and air was used as the nebulizing gas, the reaction solution was nebulized within 7.68 min (actual flow rate = 0.65 mL min -1 ) and the conversion was 90%. The product 4c was purified by flash column chromatography (silica gel, petroleum ether : EtOAc = 1:1). When the reaction cylinder was in the horizontal position and oxygen was used as the nebulizing gas the yield was 56% (324 mg). When the reaction cylinder was in the vertical position and oxygen was used as the nebulizing gas the yield was 63% (364 mg). General experimental procedure for the preparation of pyrrolidinones 5 2-Hexylfuran 1 (2.5 mmol, 380 mg) and rose Bengal (0.5 mol%, 12.7 mg) were dissolved in MeOH (total volume 5 mL, 0.5 M). The resulting solution was transferred to the nebulizer via a liquid pump (flow rate set at 0.5 mL min -1 ) and timing was initiated for calculation of the exact flow rate. The solution was dispersed by the nebulizer into the reaction cylinder which was placed in a horizontal or a vertical position using oxygen or air as the nebulizing gas (50 psi back pressure). The cylinder was irradiated by the LEDs (natural white light 3800-4200 K, 10 W m -1 , 1050 Lm m -1 ). 
When all the solution had been dispersed, the exact flow rate was calculated and the three-way valve on the uptake line was switched to pure MeOH (2 mL) to flush out the system. The crude solution was collected in the two cooled spherical flasks placed in series. A small sample of the crude solution was concentrated in vacuo for the measurement of the conversions by 1 H NMR. Then, the solutions from the two flasks were placed in one flask and Me 2 S (730 μL, 10 mmol) was added. The solution was stirred for 1 h at rt. When the reduction was completed, as indicated by tlc analysis, the appropriate amine (2.5 mmol, 273 L of BnNH 2 , or 1.25 mL, 2.0 M solution of NH 3 in MeOH, or 216 L of 40% w/w aqueous solution of MeNH 2 ) was added and the mixture was stirred for 1 h at the same temperature. After the formation of the intermediate 2-pyrrolidinone 2a, MeOH was replaced with CH 2 Cl 2 (6 mL) and the appropriate nucleophile (2.5 mmol, 293 mg of indole, or 173 L of pyrrole, or 5.0 mmol, 443 L of 2-methylfuran) was added followed by p-TsOH (238 mg, 1.25 mmol). The reaction was stirred for 1 h at rt. After completion of the reaction, as indicated by tlc analysis, a saturated aqueous solution of NaHCO 3 (8 mL) was added and the mixture was extracted with CH 2 Cl 2 (2× 8 mL). The combined organic phases were dried over Na 2 SO 4 , filtered and concentrated in vacuo. The products were purified by flash column chromatography (silica gel, petroleum ether : EtOAc). In case of product 5d, the process was repeated twice on a larger scale (using either oxygen or air as the nebulizing gas and with the cyclinder in the vertical position) starting with 10 mmol of furan 1a (1,52 g dissolved in 20 mL of MeOH, 0.5 M) and the results were very similar. In this case, the two cooled collection flasks placed in series were prefilled with excess of Me 2 S in MeOH (3 equiv in the first flask and 1 equiv in the second flask) in order to avoid the accumulation of large amounts of hydroperoxide that formed during the initial photooxygenation step. 1-Benzyl-5-hexyl-5-(1H-indol-3-yl)pyrrolidin-2-one (5d) The reaction was accomplished according to the general experimental procedure described above. Nebulization of the 5 mL reaction solution took 8.77 min (actual flow rate = 0.57 mL min -1 ) when the reaction cylinder was in the horizontal position and oxygen was used as the nebulizing gas (conversion 99%). When the reaction cylinder was in the vertical position and oxygen was used as the nebulizing gas, the reaction solution was nebulized within 6.67 min (actual flow rate = 0.75 mL min -1 ) and the conversion was 99%. When the reaction cylinder was in the vertical position and air was used as the nebulizing gas, the reaction solution was nebulized within 8.48 min (actual flow rate = 0.59 mL min -1 ) and the conversion was 92%. The product 5d was purified by flash column chromatography (silica gel, petroleum ether : EtOAc = 2:1). When the reaction cylinder was in the horizontal position and oxygen was used as the nebulizing gas the yield was 56% (524 mg). When the reaction cylinder was in the vertical position and oxygen was used as the nebulizing gas the yield was 66% (617 mg). When the reaction cylinder was in the vertical position and air was used as the nebulizing gas the yield was 51% (477 mg 5-Hexyl-1-methyl-5-(5-methylfuran-2-yl)pyrrolidin-2-one (5g) The reaction was accomplished according to the general experimental procedure described above. 
Nebulization of the 5 mL reaction solution took 8.61 min (actual flow rate = 0.58 mL min -1 ) when the reaction cylinder was in the horizontal position and oxygen was used as the nebulizing gas (conversion 99%). When the reaction cylinder was in the vertical position and oxygen was used as the nebulizing gas, the reaction solution was nebulized within 6.03 min (actual flow rate = 0.83 mL min -1 ) and the conversion was 99%. The product 5g was purified by flash column chromatography (silica gel, petroleum ether : EtOAc = 4:1). When the reaction cylinder was in the horizontal position and oxygen was used as the nebulizing gas the yield was 53% (348 mg). When the reaction cylinder was in the vertical position and oxygen was used as the nebulizing gas the yield was 62% (408 mg). 154.1, 152.0, 107.2, 105.7, 64.5, 35.3, 31.7, 30.3, 29.5, 28.7, 25.4, 23.0, 22.6, 14.0, 13.6 General experimental procedure for the preparation of -lactams 6h, 7i, 8j and 9k 2-Substituted furan 1 (2.5 mmol, 380 mg in case of 1a or 350 mg in case of 1d) and rose Bengal (0.5 mol%, 12.7 mg) were dissolved in MeOH (total volume 5 mL, 0.5 M). The resulting solutions were transferred to the nebulizer via a liquid pump (flow rate set at 0.5 mL min -1 ) and timing was initiated for calculation of the exact flow rate. The solutions were dispersed by the nebulizer into the reaction cylinder which was placed in a horizontal or a vertical position using oxygen or air as the nebulizing gas (50 psi back pressure). The cylinder was irradiated by the LEDs (natural white light 3800-4200 K, 10 W m -1 , 1050 Lm m -1 ). When all the solution had been dispersed, the exact flow rate was calculated and the three-way valve on the uptake S10 line was switched to pure MeOH (2 mL) to flush out the system. The crude solutions were collected in the two cooled spherical flasks placed in series. A small sample of the crude solution was concentrated in vacuo for the measurement of the conversions by 1 H NMR. Then, the solutions from the two flasks were placed into one flask and Me 2 S (730 μL, 10 mmol) was added. The solutions were stirred for 1 h at rt. When the reduction was completed, as indicated by tlc analysis, the appropriate amine (2.5 mmol, 273 L of BnNH 2 , or 422 L of 3,4-dimethoxyphenethylamine, or 216 L of 40% w/w aqueous solution of MeNH 2 ) was added and the solutions were stirred for 1 h at the same temperature. After the formation of the intermediate 2-pyrrolidinones of type 2, methylene blue (3 mol%, 24 mg) was added and the solutions were stirred for 3 h at rt. In case of entry h (Table 3) the reaction afforded compound 6h. After treatment with MB, in case of entry j (Table 3) MeOH was replaced by HCOOH (2 mL), while for entries i and k (Table 3), MeOH was replaced with CH 2 Cl 2 (6 mL) and p-TsOH (2.5 mmol, 476 mg towards 7i, or 1.25 mmol, 238 mg towards 9k) was added. After the addition of acid, the reactions were stirred for 1 h at rt. After completion of the reactions, as indicated by tlc analysis, the solutions were concentrated in vacuo and the products 6h, 7i, 8j and 9k were purified by flash column chromatography (silica gel, petroleum ether : EtOAc). 1-Methyl-6-oxa-1-azaspiro[4.5]dec-3-en-2-one (9k) The reaction was accomplished according to the general experimental procedure described above, utilizing furan 1d. 
Nebulization of the 5 mL reaction solution took 9.24 min (actual flow rate = 0.54 mL min -1 ) when the reaction cylinder was in the horizontal position and oxygen was used as the nebulizing gas (conversion 85%). When the reaction cylinder was in the vertical position and oxygen was used as the nebulizing gas, the reaction solution was nebulized within 6.74 min (actual flow rate = 0.74 mL min -1 ) and the conversion was 95%. When the reaction cylinder was in the vertical position and air was used as the nebulizing gas, the reaction solution was nebulized within 7.92 min (actual flow rate = 0.63 mL min -1 ) and the conversion was 82%. The product 9k was purified by flash column chromatography (silica gel, petroleum ether : EtOAc = 1:1). When the reaction cylinder was in the horizontal position and oxygen was used as the nebulizing gas the yield was 58% (242 mg). When the reaction cylinder was in the vertical position and oxygen was used as the nebulizing gas the yield was 53% (221 mg). When the reaction cylinder was in the vertical position and air was used as the nebulizing gas the yield was 49% (205 mg). 8, 144.7, 127.8, 91.4, 65.8, 29.9, 24.7, 23.5, 21
Similarities in trabecular hypertrophy with site-specific differences in cortical morphology between men and women with type 2 diabetes mellitus The goal of our study was to investigate interactions between sex and type 2 diabetes mellitus (T2DM) with regard to morphology of the peripheral skeleton. We recruited 85 subjects (mean age, 57±11.4 years): women with and without T2DM (n = 17; n = 16); and men with and without T2DM (n = 26; n = 26). All patients underwent high-resolution, peripheral, quantitative, computed tomography (HR-pQCT) imaging of the ultradistal radius (UR) and tibia (UT). Local bone geometry, bone mineral density (BMD), and bone microarchitecture were obtained by quantitative analysis of HR-pQCT images. To reduce the amount of data and avoid multi-collinearity, we performed a factor-analysis of HR-pQCT parameters. Based on factor weight, trabecular BMD, trabecular number, cortical thickness, cortical BMD, and total area were chosen for post-hoc analyses. At the radius and tibia, diabetic men and women exhibited trabecular hypertrophy, with a significant positive main effect of T2DM on trabecular number. At the radius, cortical thickness was higher in diabetic subjects (+20.1%, p = 0.003). Interestingly, there was a statistical trend that suggested attenuation of tibial cortical hypertrophy in diabetic men (cortical thickness, pinteraction = 0.052). Moreover, we found an expected sexual dichotomy, with higher trabecular BMD, Tb.N, cortical BMD, Ct.Th, and total area in men than in women (p≤ 0.003) at both measurement sites. Our results suggest that skeletal hypertrophy associated with T2DM is present in men and women, but appears attenuated at the tibial cortex in men. Introduction Fragility fractures are increasingly recognized as a skeletal secondary complication of type 2 diabetes mellitus (T2DM) [1][2][3][4]. Although subjects with T2DM carry a high risk of falls due to impaired eyesight, polyneuropathy, and fatty atrophy of the musculature, these factors have been shown to be insufficient to explain the disproportionately high rate of fractures [5]. Currently, the pathogenesis of diabetic bone disease and associated fragility fractures is not sufficiently understood. Bone mineral density (BMD)-as measured by dual-energy, x-ray absorptiometry (DXA) or quantitative computed tomography (QCT)-is typically high to normal or only mildly reduced in patients with T2DM [6]. Potential explanations for the paradoxical positive association of high BMD and fragility fractures include microarchitectural and matrix-based causes, such as cortical porosity [7,8], and deposition of advanced glycation end products (AGEs) [9]. On a cellular level, diabetic bone disease is characterized by low bone turnover [10,11], and there are also numerous suggestions of a significant imbalance of the WNT/SOST/PTH pathway, possibly through osteocyte dysfunction [12,13]. In the past decade, HR-pQCT has been validated with bone biopsies (i.e., the gold standard method for the quantitative assessment of bone microarchitecture), DXA, and QCT of the axial and peripheral skeleton [16][17][18][19]. HR-pQCT has provided key insights into the morphology and pathophysiology of diabetic bone disease in elderly subjects. Poor cortical bone quality, particularly high cortical porosity, has been reported by several researchers ( [7,8,14]. At the same time, trabecular BMD and trabecular microarchitecture, as determined by HR-pQCT, appear to be stable or even relatively high in subjects with T2DM [7,15]. 
The above-mentioned microstructural findings have been well documented in postmenopausal diabetic women, but only few studies have investigated bone microarchitecture in men with T2DM [15]. Recently, Paccou et al. reported unfavorable associations between bone quality and T2DM in men. Specifically, they found cortical bone quality to be pathologically altered in both elderly men and women, with more pronounced findings in men. In the general, non-diabetic population, sex-specific differences in bone geometry, bone mineral density (BMD), and bone microarchitecture are well recognized and viewed as the causes for the clinical differences in fracture prevalence between men and women [20,21]. Using HR-pQCT, it has been confirmed that young men have larger bones with higher trabecular bone volume and higher trabecular thickness than young women [22]. With aging, trabecular bone volume decreases proportionately in both men and women, but trabecular microarchitecture remains better preserved in men. Cortical thickness appears to be comparable in younger and middle-aged men and women, but over time, especially at older ages, thickness decreases are larger in women [22]. Interestingly, the cross-sectional area of long bones increases with normal aging by periosteal apposition in both sexes [20,23,24]. Of importance, metabolic bone diseases can alter this physiologic pattern (e.g., as seen in male idiopathic osteoporosis) [25]. Considering the importance of age-and sex-specific skeletal differences for the modulation of fracture risk in the general population, and accumulating evidence for impaired bone quality in women and men with T2DM, we designed a study to investigate the interactions between sex and T2DM in the peripheral adult skeleton. Patients and methods Subjects Thirty-three women and 52 men were recruited into one of four groups: women with type 2 diabetes mellitus (WT2DM; n = 17); women without type 2 diabetes mellitus (WCo; n = 16); men with type 2 diabetes mellitus (MT2DM; n = 26); and men without type 2 diabetes mellitus (MCo; n = 26). Diabetic subjects were recruited from the Endocrine Outpatient Unit of the Department of Internal Medicine III of the Medical University of Vienna, Austria. Healthy women (WCo) were recruited by the VINFORCE study group/Department of Internal Medicine II, St. Vincent Hospital Vienna, Austria. Healthy (i.e., non-diabetic) men (MCo) were recruited as part of the STRAMBO study, an epidemiologic cohort study conducted by the Université de Lyon, France [26]. The study was approved by the ethics committees of the Medical University of Vienna, the St. Vincent Hospital Vienna, and the Université de Lyon. All participants gave written, informed consent. Inclusion criteria for all subjects were age 40-75 years and written, informed consent. Diabetic men and women had to be treated with standard antidiabetics and have HbA 1 C values ranging from 6-10%. Women had to be postmenopausal. Current or previous use of rosiglitazone, steroids, antiepileptic drugs, vitamin K antagonists, bisphosphonates, fluorides, PTH, strontium ranelate, raloxifen, denosumab, and calcitonin were defined as exclusion criteria. Severe hepatic and/or renal failure, active malignancy, or a history of malignancy, and pregnancy excluded subjects from study participation. HR-pQCT imaging All subjects underwent HR-pQCT imaging of the non-dominant ultradistal radius and the left tibia (XtremeCT; Scanco Medical AG, Brüttisellen, Switzerland). 
Diabetic subjects and female non-diabetic controls were scanned at the Medical University of Vienna, Austria. Male nondiabetic controls underwent imaging in Lyon, France. To highlight cross-calibration validity and exclude a multi-center bias, an additional seven non-diabetic men were scanned and analyzed in Vienna. The two HR-pQCT scanners (first-generation devices) used in this study were cross-calibrated, as published by Burghardt et al. [27]. The identical standard in vivo protocol [22,28] was used in both Vienna and Lyon, and was defined by the following settings: 60kVP; 900 μA; and 100 ms integration time. In case of local fracture history, the contralateral extremity was scanned. After the acquisition of a local scout view, a reference line was placed on the joint surface of the radius and tibia. From the reference line, fixed offsets were used to define the scan region (radius offset: 9.5mm; tibia offset: 22.5mm). The final scan volume covered a length of 9.02mm, corresponding to 110 slices. The nominal resolution of HR-pQCT images was isotropic (82 x 82 x 82 μm). The effective dose was < 4 μSv, the scan time was < 3 minutes per scan. Image analysis For quality control, visual semiquantitative motion grading was performed prior to quantitative image analysis. According to the criteria established by Pialat et al., only scans reaching image-quality grades 1-3 were used for quantitative analyses [29]. HR-pQCT images were segmented semi-automatically and analyzed with the standard protocol provided by the manufacturer of the device. Semiautomatic contours were reviewed for accuracy, and manual adjustment was limited to clear contour deviations from the anatomical periosteal boundaries. Volumetric BMD and morphometric parameters were obtained for trabecular and cortical bone [30]. Trabecular bone volume fraction (BV/TV) was derived from trabecular BMD using an assumed density of 100% for compact mineralized bone (1200 mg HA/cm 3 ) and background marrow (0 mg HA/cm 3 ). Trabecular number and the standard deviation of inter-trabecular distances were calculated by distance transformation [31]. Trabecular thickness and trabecular separation were derived from trabecular BMD and trabecular number [32]. Cortical thickness was obtained by annular approximation [30,33]. In addition, HR-pQCT images were reviewed for the presence of vascular calcifications, which were defined as linear or tubular hyperdensity zones of circular, semi-circular, or crescent-like shape, which corresponded to the anatomical territory of the anterior tibial artery, the posterior tibial artery, the radial artery, the ulnar artery, the interosseous branches, or smaller intramuscular or subcutaneous arterioles [34]. Skin calcifications or other non-vascular soft tissue calcifications were not included. Statistical analysis All statistical analyses were performed using IBM SPSS Statistics version 22. Metric data were described using means +/-standard deviation (SD) if normally distributed or, in case of highly skewed data, medians [min; max]. Categorical data were presented using absolute numbers and percentages. As there was a large number of highly correlated measures obtained for the tibia and radius, principal axis factor analyses (FA) was used to reduce the number of necessary statistical tests and to minimize an error of the first type. Only parameters with the highest loading within a factor were used for subsequent analyses. 
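The data-reduction step can be illustrated with standard tools: fit a factor model to the standardized HR-pQCT parameters, inspect the loading matrix, and keep the highest-loading parameter per factor. A minimal Python sketch; scikit-learn's FactorAnalysis uses maximum-likelihood estimation rather than the principal-axis extraction used here, the column names are only placeholders for the HR-pQCT outputs, and the random data mean the resulting grouping is not meaningful — the point is the mechanics of selecting one representative parameter per factor.

```python
import numpy as np
import pandas as pd
from sklearn.decomposition import FactorAnalysis
from sklearn.preprocessing import StandardScaler

def reduce_by_factor_loading(df, n_factors=4):
    """Fit a factor model and return the loading matrix plus one
    representative column per factor (largest absolute loading)."""
    X = StandardScaler().fit_transform(df.values)
    fa = FactorAnalysis(n_components=n_factors, random_state=0).fit(X)
    loadings = pd.DataFrame(fa.components_.T,            # (n_params, n_factors)
                            index=df.columns,
                            columns=[f"F{i+1}" for i in range(n_factors)])
    representatives = [loadings[col].abs().idxmax() for col in loadings]
    return loadings.round(2), representatives

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Placeholder data standing in for the radius HR-pQCT parameters:
    cols = ["Tb.BMD", "BV/TV", "Tb.Th", "Tb.N", "Tb.Sp", "Tb.1/N.SD",
            "Ct.Th", "Ct.Ar", "Tot.BMD", "Ct.BMD", "Tot.Ar", "Tb.Ar"]
    df = pd.DataFrame(rng.normal(size=(85, len(cols))), columns=cols)
    loadings, reps = reduce_by_factor_loading(df)
    print(loadings)
    print("Representative parameters:", reps)
```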
In order to test the moderation effect of sex on the effect of diabetes, two-way analyses of variance were used. Unpaired student t-tests were used to determine differences in age and laboratory data. To compare the percentage of male and female patients with and without calcifications, a Fisher's exact test was applied. Due to the limited sample size, we refrained from using multiplicity corrections to avoid decreasing power. In order to rule out a multi-center bias, a small subset of non-diabetic male participants from Lyon was compared with non-diabetic men from Vienna who were not part of the original study. A p-value equal to or below 5% was considered to indicate significant results. Subject characteristics Demographics and clinical characteristics are given in Table 1. There were no significant differences in age. Laboratory data were available only from subjects with T2DM and not from non-diabetic controls. Comparing diabetic men and women, there were no significant differences in fasting blood glucose (p = 0.804), HbA1c-levels (p = 0.411), serum insulin (p = 0.730), PTH (p = 0.126), and 25-OH-vitamin D (p = 0.074). Serum creatinine was higher in diabetic men, but remained within normal limits (1.0 mg/dl; p = 0.013). As determined from visual assessment of HR-pQCT scans by a board-certified radiologist (JMP), there were no significant differences between lower leg vascular calcification frequencies in diabetic men (50% with calcifications) and women (42.4% with calcifications, p = 0.607). Likewise, at the upper extremity, there were no significant differences between vascular calcification frequencies in diabetic men (19.2% with calcifications) and women (9.1% with calcifications, p = 0.284). HR-pQCT For all four subject groups (i.e., WT2DM, WCo; MT2DM, MCo), means and standard deviations of HR-pQCT parameters are given in Table 2. Moreover, Table 2 provides relative differences in HR-pQCT parameters between subjects with and without T2DM (with separate analyses for men and women). Data reduction by factor-analysis yielded four factor groups ('first level factors') identical for the radius and tibia parameters. Detailed results for factor-analysis are given in Table 3. Based on the highest factor weight within factor groups (Table 3), four representative HR-pQCT parameters were chosen from each group for selective post-hoc testing. Trabecular BMD (chosen from the factor-group that contained trabecular density, trabecular bone volume fraction, and trabecular thickness), trabecular number (chosen from the factor-group that contained trabecular number, trabecular separation, and trabecular heterogeneity), cortical thickness (chosen from the factor-group that contained cortical area, total density, cortical density, and cortical thickness), and total area (chosen from the factor-group that contained total area, trabecular area, and cortical perimeter). Due to independent information provided by cortical BMD, additional post-hoc testing was performed for cortical BMD as a fifth parameter. The post-hoc choice of HR-pQCT parameters was identical for the radius and tibia (Table 3). At the ultradistal radius, trabecular BMD (+25.8%, p<0.001), trabecular number (+7.4%, p = 0.003), cortical thickness (+21.9%, p = 0.002), and total area (+36.2%, p<0.001) were significantly higher in men than in women. There were no significant sex-specific differences in cortical BMD. 
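The moderation test described in the statistical analysis corresponds to a two-way ANOVA with a sex × T2DM interaction term, run once per representative HR-pQCT parameter. A minimal sketch in Python with statsmodels on a synthetic data frame; variable names and effect sizes are placeholders, not study data.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 85
df = pd.DataFrame({
    "sex": rng.choice(["M", "F"], size=n),
    "t2dm": rng.choice(["yes", "no"], size=n),
})
# Synthetic outcome with main effects of sex and T2DM (placeholder values):
df["ct_th"] = (1.0
               + 0.2 * (df["sex"] == "M")
               + 0.1 * (df["t2dm"] == "yes")
               + rng.normal(scale=0.15, size=n))

# Two-way ANOVA with interaction: C(sex) * C(t2dm) expands to both main
# effects plus the sex:t2dm interaction term.
model = smf.ols("ct_th ~ C(sex) * C(t2dm)", data=df).fit()
anova_table = sm.stats.anova_lm(model, typ=2)
print(anova_table)   # rows correspond to p_sex, p_T2DM, and p_interaction
```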
Trabecular number (+14.7%, p<0.001) and cortical thickness (+20.1%, p = 0.003) were significantly higher in subjects with T2DM than in non-diabetic subjects. We found a trend toward higher cortical BMD in subjects with T2DM (+3.6%, p = 0.076). Regarding total area, there was a trend toward a significant interaction between sex and T2DM (p = 0.074). For visualization of data (including interactions) and a complete list of p-values, please see Fig 2. At the ultradistal tibia, trabecular BMD (+13.1%, p = 0.003), trabecular number (+10.4%, p = 0.002), cortical thickness (+34.1%, p<0.001), cortical BMD (+6.8%; p<0.001), and total area (+25.0%, p<0.001) were significantly higher in men than in women. T2DM had a significant association with trabecular number (+24.1%, p<0.001), but no significant associations with trabecular BMD, cortical thickness, cortical BMD, or total area. We found a trend toward an interaction between sex and T2DM for cortical thickness (p = 0.052). There were no significant interactions between sex and T2DM with regard to trabecular BMD, trabecular number, cortical BMD, or total area at the tibia. For visualization of data (including interactions) and a complete list of p-values, please see Fig 3. Table 3. Factor-analysis of HR-pQCT parameters of the ultradistal radius and the ultradistal tibia. Numbers in columns are factor loadings per parameter (in rows). In these figures, the dashed line represents mean differences between diabetic and non-diabetic men, and the continuous line represents mean differences between diabetic and non-diabetic women; p-values are given for differences between men and women (p sex), differences between subjects with and without T2DM (p T2DM), and interactions between sex and T2DM (p interaction). There were no significant differences in HR-pQCT parameters between an age-matched subset of healthy male participants scanned in Lyon, France, and healthy men from Vienna, Austria (Table 4), reflecting the validity of our multi-center approach. Discussion The majority of bone research in subjects with type 2 diabetes mellitus (T2DM) has been conducted in postmenopausal women. A recent publication reported pronounced cortical disease in elderly, male type 2 diabetics, but, overall, there are only limited data about bone microarchitecture in men with T2DM. It thus remains to be determined whether and how bone morphology differs between diabetic men and women. Specifically addressing the issue of potential interactions between sex and T2DM, we recruited men and women with and without T2DM and performed HR-pQCT imaging of the ultradistal extremities. High BMD without associated fracture risk reduction is recognized as a clinical key feature of diabetic bone disease [35,36]. In the present study, we found a high trabecular number in men and women with T2DM, thereby supporting the results of other HR-pQCT studies in diabetic subjects [7,15]. Trabecular hypertrophy has also been reported in pre-diabetic subjects with insulin resistance [37]. Trabecular rarefaction and increases in trabecular heterogeneity appear to be a feature of the later stages of diabetes-related bone disease [8,38]. With regard to cortical morphology, we found significantly thicker radial cortices in diabetic men and women. Cortical hypertrophy in subjects with T2DM is in keeping with recent QCT data in diabetic subjects without fragility fractures [12]. 
HR-pQCT studies in elderly, diabetic women without fractures have reported cortical thickening, but this did not reach statistical significance [8,11]. Supporting this phenotypic concept, cortical hypertrophy has also been found in prediabetic women [37]. Cortical deficits, on the contrary, appear to depend on the clinical characteristics of participants (e.g., with/without prevalent fragility fractures [8], race [14], presence of microvascular disease [39]), and thus, vary in extent from study to study. Interestingly, we found cortical hypertrophy to be partially attenuated in men (Fig 1). Despite recent reports of an unfavorable cortical microarchitecture in elderly, diabetic men [15], this finding was unexpected in our relatively young cohort. The unfavorable association between cortical morphology and male sex is surprising because, in the general (i.e., non-diabetic) population, men are at lower risk for osteoporosis and osteoporotic fractures than same-aged women [20]. From a clinical perspective, it thus remains to be determined whether cortical deficits translate into biomechanical deficits and high fracture risk in diabetic men and women. At the radius, sex and T2DM tended to interact in terms of cross-sectional bone size. Specifically, we found small bone size in diabetic men, but not in diabetic women. Small crosssectional bone size has been previously described in subjects with T2DM [12] and insulin resistance [37]. It thus appears to be another morphologic feature of diabetic bone disease. The relevance of small cross-sectional bone size lies in reduced biomechanical stability and higher susceptibility to fractures [40]. Supporting the validity of our dataset, the presence of a strong statistical main effect of sex on the skeleton was in line with the literature. Several HR-pQCT studies have found larger geometry and higher BMD in healthy men than in healthy women, confirming previous studies using different imaging tools, including central quantitative computed tomography (QCT), peripheral QCT, and DXA [41,42]. In terms of microarchitecture, the male skeleton is known to exhibit greater trabecular number, greater trabecular thickness, and lower intertrabecular separation than the female skeleton [22,43]. Bearing in mind the large number of HR-pQCT parameters and their multi-collinearity, we approached our dataset by factor-analysis and attempted to reduce the amount of data. From factor-analysis, we obtained four groups of HR-pQCT parameters with strong statistical intragroup connections. From a technical and pathophysiologic perspective, statistical group compositions were plausible. The first factor represented a group of parameters derived from trabecular BMD. The second factor contained microstructural indices of the trabecular compartment that are mathematically dependent on trabecular number. The third factor covered parameters driven by cortical features, including cortical thickness, cortical area, cortical bone mineral density, and total bone mineral density. The fourth factor contained geometric indices. Confirming the validity of this approach, the composition of factor-groups and the subsequent choice of representative post-hoc parameters (based on factor-weight) was identical for independent measurement sites (i.e., the radius and tibia). With our diabetic participants being relatively young and free of fragility fractures, care must be taken when comparing them to participants from other diabetic cohorts investigated by HR-pQCT. 
In terms of the 'hypertrophic' bone pattern, the results of our female diabetics ranged between those found in pre-diabetic, hyperinsulinemic women [37] and postmenopausal women with manifest T2DM without fractures [7,11]. Compared to elderly, male diabetic subjects studied by Paccou et al., our diabetic men exhibited a comparable trabecular phenotype and less pronounced-but manifest-cortical bone deficits [15]. While we consider the relatively young age of our participants and the statistical reduction of HR-pQCT parameters to be strengths of our study, there were several limitations that need to be specifically addressed. The sample size was small. Data were collected with two separate HR-pQCT devices, but they were cross-calibrated by a dedicated multicenter study (previously published) [27]. Identical acquisition protocols and evaluation protocols were used. In addition, we were able to demonstrate that there were no differences in HR-pQCT parameters for healthy (i.e.,non-diabetic) men scanned at either European site (Table 4). Nevertheless, due to the current convention for scan-site definition (i.e., the use of fixed off-sets from the radiocarpal and tibiotarsal joint), the scan sites were minimally different in men and women. As given by the use of a fixed off-set, scans were acquired more distally in men than in women, with, e.g., cortical thickness slightly underrated. Conversely, bone size was slightly overrated in men when compared to women imaged with identical protocols. We did not include subjects with fragility fractures; thus, it is still unclear whether fragility fractures are linked to similar micro-structural pathologies in men and women. In conclusion, our results suggest that skeletal hypertrophy is present in men and women with T2DM, but appears attenuated at the cortical sites of the lower extremities in diabetic men. Future investigations are needed to provide explanations for this sex-specific pattern of diabetic bone disease.
Brans-Dicke theory in the local potential approximation We study the Brans-Dicke theory with arbitrary potential within a functional renormalization group framework. Motivated by the asymptotic safety scenario of quantum gravity and by the well-known relation between f(R) gravity and Brans-Dicke theory at the classical level, we concentrate our analysis on the fixed-point equation for the potential in four dimensions and with Brans-Dicke parameter omega equal to zero. For two different choices of gauge, we study the resulting equations by examining both local and global properties of the solutions, by means of analytical and numerical methods. As a result of our analysis we do not find any nontrivial fixed point in one gauge, but we find a continuum of fixed points in the other one. We interpret such inconsistency as a result of the restriction to omega equal to zero, and thus we suggest that it indicates a failure of the equivalence between f(R) gravity and Brans-Dicke theory at the quantum level. Introduction Many models of modified gravity have been proposed and studied over time in an attempt to address the problems encountered in quantum gravity and cosmology [1]. New models are most commonly postulated as the starting point of a new research direction, however, one instance in which the logic is partially reversed, and a new model of gravity is hoped to emerge as the final result, is the asymptotic safety scenario [2,3,4,5,6]. The general idea behind such scenario is that the theory of quantum gravity should be sought within a large class of theories (e.g. all possible theories described by an action functional of a single metric field), out of which one single theory (or few isolated ones) should emerge with the peculiar characteristic of being a fixed-point of the renormalization group (RG) flow. IR-unstable trajectories emanating from such fixed point(s) would then define nonperturbatively renormalized theories of gravity. The use of functional renormalization group equations (FRGEs) has provided considerable evidence in support of the existence of a nontrivial fixed point theory for gravity, in a large number of formulations and approximations (see list of references in [5,6]). Even within a given class of models, specified by a choice of variables and symmetries, it is obviously impossible to explore the entire space of theories and one has to resort to approximations. A common approximation in the asymptotic safety literature consists in truncating the theory space to a finite-dimensional subspace by making a guess of what might be the most important terms to keep track of in the effective action. The guess is then supposed to be repeatedly refined until little improvement of the results is gained by new refinements. In practice, even this is quite hard, and only recently such program has been implemented to a high order of refinement for truncations that only retain polynomials of the Ricci scalar [7,8,9,10,11]. At the same time, the functional nature of the renormalization group methods being used has just started being exploited further, opening up the possibility of exploring infinite-dimensional subspaces of the theory space. The main idea is to study the RG flow of gravity in a spirit similar to the derivative expansion of scalar field theory [12,13,14,15]. 
There, the leading order approximation is known as local potential approximation (LPA), and it consists in projecting the flow equation on a constant scalar field, thus allowing to study the flow of a generic effective potential V k (φ) = Γ k [φ = const.]/ d d x. The next to leading order includes a term with two derivatives and any field dependence, and so on at higher orders. At each order of the derivative expansion one is lead to study partial differential equations for functions of the field φ and of the running scale k. In gravity there is much more structure, and there are probably many options in organizing an expansion of this sort. A very natural option is to organize the action as if it was an expansion around a maximally symmetric background. For the latter the only non-zero component of the Riemann tensor is the Ricci scalar R, which is constant: we have ∇ µ R = S µν = C µνρσ = 0, where S µν is the traceless Ricci tensor, and C µνρσ is the Weyl tensor. The analogue of the derivative expansion can then be an expansion in S µν , C µνρσ and their derivatives (by the Bianchi identity ∇ µ R = 2d d−2 ∇ ν S µν ), with arbitrary dependence on R at each order. In the leading order of such an expansion we are left with an f (R) theory, whose study in such spirit was begun in [16,17,18,19,20]. As compared to the LPA for scalar field theory, in the f (R) approximation for gravity we face a number of additional technical complications, in particular a larger number of contributions to the FRGE, with a more complicated dependence on the unknown function, and the challenge of evaluating functional traces on a curved background. The latter in particular introduces some subtleties related to the presence of zero modes in compact backgrounds and to the staircase nature of the results obtained for the traces when using cutoffs with step functions [16]. Also for these reasons progress has been slow in this direction, and it is desirable to find alternative ways to study the same problem. One possible simplification, which we will explore in this paper, is suggested by a well known classical property of the f (R) theory [21,22]. The classical action for f (R) gravity, 1 can be traded for an equivalent action, describing a scalar-tensor theory, The relation between the two Lagrangians is given by a Legendre transform, where R(φ) is found by solving the equation φ = −f ′ (R) for R, and as usual, regularity of the transform is guaranteed if f ′′ (R) = 0. From a RG perspective the advantage of such formulation is that we can study the running of the potential by projecting the FRGE on a flat background, thus sidestepping all the complications of curved backgrounds. In fact, we will construct flow equations for a more general version of (1.2), that is, a generic Brans-Dicke theory with a potential (see (2.1)). 2 Projection on a flat background will allow us to study such theory without truncating the potential to a polynomial form, thus performing an analysis similar to that of pure scalar theory [28,29,30,31,32,33]. Of course, at a quantum level the two theories might be inequivalent. They are both perturbatively nonrenormalizable, and standard perturbative reasonings could only apply at an effective field theory level. When looking for a UV completion in the form of a nontrivial fixed point, we study the RG equations in two different theory spaces, and in the full fixed point theory the scalar field might couple to other geometric invariants, or acquire its own dynamical term. 
Moreover, since the functions f (R) or V (φ) are not chosen a priori, but have to be found such that they correspond to an RG fixed point, it could happen that the regularity of the transform fails at one or several field values (or even in the full range of definition if for example f (R) is linear at the fixed point). As a consequence, if fixed points were to be found in both formulations, they might describe different physics. It might also happen that one formulation admits an asymptotic safety scenario and the other does not. 3 However, we also do not know a priori whether the (nonper-1 As in most asymptotic safety research (for a rare exception see [23]), we will be working in Euclidean signature. 2 Note that in the context of asymptotic safety, Brans-Dicke theory with ω = 0 was considered in [24] as a RG improvement of the Einstein-Hilbert truncation, in which the running gravitational and cosmological constants were promoted to fields as a result of an identification of scales with spacetime points. Clearly our work differs substantially from [24], as we study the RG equations directly for the Brans-Dicke theory. In a sense our work relates to [24] like the general f (R) studies [16,17,18,19,20] relate to the f (R) actions obtained by improvement of the Einstein-Hilbert truncation [25,26,27]. 3 In addition, we should also notice that often in the cosmology literature other "frames" are considered, in which a new metric field is defined via a conformal map, often together with a redefinition of the scalar field as well. Again, at the classical level these are all equivalent theories (although there has been some confusion on the issue in the past [34]), but at the nonperturbative quantum level this is not guaranteed (perturbatively there is on-shell equivalence, as shown for example in two dimensions [35], but at the nonperturbative level this has not been shown, although it might be possible following the developments of [36,37]). We will not study here other frames, having always in mind the original pure metric theory, whose metric we assume to define the coupling to ordinary matter. turbatively renormalized) quantum theories are equivalent or not, and only a direct comparison (which we can at least do at the level of truncations) could allow us to settle the question. In any case, given that in asymptotic safety we are in principle allowing for extra degrees of freedom, there seems to be no reason to consider only pure metric theories of gravity, and the study of scalar-tensor theories is of interest in its own. Brans-Dicke theory is one of the oldest modifications of general relativity [38], and together with its variations and generalizations it finds plenty of applications in cosmology [1], and in quantum gravity (e.g. [39,40,41]). Other versions of scalar-tensor theories have already been studied in the context of asymptotic safety [42,43,44], but to the best of our knowledge, no study of this sort has been done before for the Brans-Dicke theory in the formulation we consider here (sometimes referred to as Helmholtz-Jordan frame). We will introduce more precisely our ansatz and setup in Sec. 2, together with the two choices of gauge fixing that we are going to employ. In Sec. 3 we will derive the FRGEs for both gauges for general dimension and Brans-Dicke parameter, while in Sec. 4 we will proceed to analyze their local properties in d = 4 and ω = 0. Finally, we will present the results of numerical integrations in Sec. 5, concluding in Sec. 
6 with a discussion of our findings and of future prospects. In App. A we analytically solve the special case d = 2, which helps illustrating some of the points discussed in the conclusions. The general setup: ansatz, variations and gauge fixing The action (1.2) is a particular case (ω = 0) of the more general Brans-Dicke theory with a potential,Γ which in turn is a special case of dilaton gravity. The f (R) theory in the Palatini formalism is related to the same theory but with ω = −3/2 [45]. We have introduced a subscript k which stands for the running scale at which the effective average actionΓ k is defined [14]. To a large extent we will keep ω general, only to concentrate on the specific case ω = 0 for our numerical analysis (studying the running of ω would require using a non-constant background, or looking at the 2-point function, which we leave for future work). Note that (2.1) differs from other scalartensor theories studied in the asymptotic safety literature [42,43,44] in two important aspects: it is not invariant under φ → −φ (and of course φ is not restricted to be positive), and the kinetic term (when present, that is, when ω = 0) contains an inverse of the field. The point of view we wish to adopt in this paper is that the action (2.1) is a LPA approximation for the effective action, and that only to next order we would promote ω and the function coupled to R to general functions of φ. As we explained in the introduction, we will project the flow equation for (2.1) on a flat background and study only the running of the potential. However, for future reference, we will present in this section the results of variations and gauge fixing for a general maximally symmetric background metric and constant background scalar field. Introducing the background splitting we make the usual approximation for the effective average action [46] and neglect the running of the gauge-fixing and ghost actions, S gf and S gh . For the FRGE we will need the second variation of the effective average action, therefore we expandΓ and find (omitting from now on the field dependencies of the action functionals) (2.5) We can exploit the gauge-fixing freedom to simplify the Hessian operator, adding to the original action the gauge-fixing term for some choice of gauge-fixing constraint F µ and of non-degenerate operator G µν . Physical results should be independent of the gauge choice, however, it is well known that the off-shell effective action is not gauge independent, and furthermore, the approximations we employ in the FRGE lead to additional gauge dependences. It is then important to test our analysis against different choices of gauge. We present in the following the two types of gauge which we will use in the forthcoming sections. Feynman gauge First we consider a Feynman gauge (α = 1) with and G (F )µν = φ g µν . (2.8) The total quadratic action becomes (2.9) Decomposing h µν =ĥ µν + 1 d g µν h, with g µνĥ µν = 0, we finally obtain (2.10) We note that via the gauge-fixing procedure we have introduced a kinetic term for the auxiliary field ϕ even in the case ω = 0. The kinetic term disappears for ω = −1/2, which is a special value for the Brans-Dicke theory in this gauge. For the gauge sector we employ a standard Fadeev-Popov determinant which we rewrite in terms of a quadratic integral over complex Grassmann fields C µ andC µ . 
For constant background scalar field, the ghost action reads (2.11) Landau gauge As an alternative choice of gauge, we consider a Landau gauge (α = 0) with and The interesting aspect of such gauge is that it does not modify the kinetic term of ϕ, in particular it does not introduce one for ω = 0. In this case, in order to simplify the non-minimal operators that appear in the second variation, we use the transverse-traceless decomposition of the metric fluctuations, given by with the component fields satisfying In the α → 0 limit, the ξ µ and σ field components decouple completely from the rest of the Hessian, and their contribution to the FRGE cancels exactly with the ghost contribution, when properly implemented [47]. We thus write the second variation of the action directly omitting the contribution of the longitudinal components: (2.16) Because of the change of variables (2.14), in this case there is also a Jacobian to keep track of, which we do by introducing auxiliary fields. The Jacobian for the gravitational sector leads to the auxiliary action [47] where the χ T µ and χ are complex Grassmann fields, while ζ T µ and ζ are real bosonic fields. The Jacobian for the transverse decomposition of the ghost action is given by with ψ a real scalar field. The flow equation The flow equation can be evaluated by means of the Functional Renormalization Group Equation (FRGE), which takes the generic form [12,13,14,15] being Γ (2) k (x, y) = and where Φ is a superfield collecting all the fields involved in the quantum action, i.e. Φ ≡ {ϕ, h µν , · · · }. R k is a generic cutoff operator, t ≡ log(k) is the RG running scale and STr identifies a functional supertrace, carrying a factor 2 for complex fields and a factor −1 for Grassmann fields. We will construct the cutoff operator in such a way to implement the substitution rule being r(z) a dimensionless smearing function. That is, we choose a cutoff of the form R k = Γ k . A convenient choice of smearing function, leading to a considerable simplification of the functional traces, and which we will therefore use here, is the so-called "optimized" cutoff [48] which reads where Θ(x) is a Heaviside step function. Feynman gauge The Hessian of the effective action is mostly diagonal in field space, with the only exception of the {h, ϕ} sector, thus the supertrace in (3.1) can be easily decomposed into standard functional traces. In the Feynman gauge we obtain where H k is the modified inverse propagator, namely H k = Γ (2) k + R k . The evaluation of the first trace requires to invert the h-ϕ matrix, which is trivial since the matrix elements commute. The ghost term takes a factor of minus two with respect to the other terms, because of the complex Grassmannian nature of the ghost fields. The trace over a generic Riemannian manifold can be evaluated by means of a heat kernel expansion, but since we are interested in projecting the flow equation on a flat background we can evaluate the trace over modes as a simple integral over momenta. The derivative of the cutoff operator with respect to the RG time returns which reduces to the sole Heaviside step function using the property that the distributional product of the delta function with its argument is zero. Because of the step function, the trace reduces to a momentum integral between 0 and k, thus automatically rendering the functional traces UV finite, a well-known feature of the FRGE. 
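(Aside, not part of the original derivation: the way the step function collapses a functional trace to a finite momentum integral can be checked directly. The sketch below assumes the standard Litim-type normalization for a single scalar mode, R_k(q^2) = (k^2 - q^2) θ(k^2 - q^2), rather than the paper's Γ_k-dependent cutoff prefactor, and compares the numerical integral with the d = 4 closed form k^6 / [16 π^2 (k^2 + m^2)], where m^2 stands in for a field-dependent mass term such as Ṽ''.)

```python
# Minimal numerical check (an illustration, not the paper's normalization) that the
# optimized cutoff reduces a trace to a finite momentum integral with a closed form.
import numpy as np
from math import gamma
from scipy.integrate import quad

d, k, m2 = 4, 1.3, 0.7            # dimension, RG scale, stand-in mass term (e.g. V'')

def integrand(q):
    # measure d^d q / (2 pi)^d  times  dt R_k / (q^2 + R_k + m2), valid for q < k
    s_d = 2 * np.pi ** (d / 2) / gamma(d / 2)   # area of the unit (d-1)-sphere
    dt_Rk = 2 * k**2                            # t-derivative of R_k = (k^2 - q^2) theta(k^2 - q^2)
    return s_d * q ** (d - 1) / (2 * np.pi) ** d * dt_Rk / (k**2 + m2)

numeric, _ = quad(integrand, 0.0, k)            # the Heaviside step cuts the integral off at q = k
closed_form = k**6 / (16 * np.pi**2 * (k**2 + m2))
print(numeric, closed_form)                     # the two values agree
```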
Performing the trace we obtain The trace over the tensor structure gives the factor d(d + 1)/2 − 1 for the h T µν contribution and a factor d for the ghosts, counting the number of their independent components. Since we are working on a flat manifold and constant background field both the Ricci scalar and the kinetic operator vanish, so that equation (3.5) reduces to an RG flow equation for the dimensionful potential. We cast the equation in an autonomous form, i.e. with no explicit dependence on k, by introducing the dimensionless quantities is the classical part of the equation, which is linear in the potential, and 14) is the quantum part, which contains all the loop contributions, and which is responsible for the nonlinear character of the equation. Landau gauge Working in the Landau gauge the supertrace in (3.1) reads 15) where the contributions of ghosts and longitudinal modes have been omitted, since they exactly cancel each other as explained before. After performing the integral over momenta we obtain The RG flow equation for the dimensionless potential in such a gauge reads then where the classical part T tree is the same as in (3.12), and the quantum part reads Analytical study of the equation We want to search for fixed point solutions of equation (3.11) and (3.19), i.e. search for scale invariant solutions such that ∂ tṼk = 0, requiring them to be globally analytic [28,49,31]. The latter requirement has a well-understood physical and mathematical justification, being a necessary condition for the existence of the average effective action at all values of k, and hence of the full effective action in the limit k → 0 (which in d > 2 requires the existence of the solution forφ → ±∞, see (3.10)). In addition, the condition of global analyticity is expected to reduce the continuous set of solutions to a discrete subset of acceptable ones. For ∂ tṼk = 0, both (3.11) and (3.19) reduce to second order ordinary differential equations, thus we expect 2-parameter families of local solutions, parametrized by the initial value conditions. Extending such local solutions to global ones, we generally have to impose constraints coming from the analyticity requirement and from the symmetries of the problem. In our case we do not have any constraints originating from symmetries (e.g. we have noφ → −φ symmetry, hencẽ V ′ (0) = 0 in general), and we will have to study the equation on the full real line imposing asymptotic boundary conditions atφ ∼ ±∞. The latter, due to the non-linear nature of the equations, could contain less than two free parameters, implying that global solutions would also necessarily be parametrized by less than two degrees of freedom. Other explicit constraints can originate from fixed singularities of the equation, requiring analyticity conditions (e.g. [50]), and it is hoped that the equation does not have too many such fixed singularity, which would require an over constraining of the solutions [16,18]. We will apply the following strategy to select solutions: i) we look for singularities of the equations, either fixed or movable, and study the behavior of the solution in a neighborhood of the singularity, ii) we study the large field asymptotic solutions of the equation and count the degrees of freedom of each class of solutions, iii) we numerically look for global solutions satisfying all the constraints. 
The study of the large field asymptotic solutions is important also for other two reasons, namely, the derivation of the full effective action at the fixed point [16], and the relation to the f (R) theory, as we will explain in the concluding section. We will present most of the analysis for the case ω = 0, although occasionally we will refer to other values. As in the Landau gauge the ω = 0 value is a critical value, analogous to the ω = −1/2 value for the Feynman gauge, we will treat separately the two gauges, starting with the Feynman gauge. Most of our considerations apply to generic dimension d > 2, although we will most often specialize to d = 4. In Appendix A we will treat the special case d = 2. Fixed singularities In order to look for fixed singularities, we search for poles of the denominator of the scale invariant flow equation ∂ tṼk = 0 written in normal form, i.e. where N and D are polynomial functions obtained from (3.11). For d > 2 the only zero we find is atφ = 0, while for d = 2 the equation reduces to a first order equation with no fixed singularities. To test the consequences of such singularity in d > 2 we impose analyticity, and study the equation in a Laurent expansion. Locally, imposing analyticity means requiring the existence of a Taylor expansion of the solution, in other words we make the ansatzṼ (φ) = n≥0 v nφ n , and after plugging it into the equation we expand the latter in a Laurent series centered at the origin. At leading order, the equation in the Feynman gauge reduces to which vanishes either restricting to ω = −1/2 (the analogous case in Landau gauge will be ω = 0, see 4.2.1), or fixing the potential in the origin to As a consequence for d > 2 and ω = −1/2 we have one constraint, thus reducing the number of degrees of freedom at the origin to one. For technical reasons, when integrating the equation numerically, we need to start from an arbitrary small value of the field ǫ. The boundary condition at ǫ can then be parametrized in terms of the derivative of the field in zerõ being τ =Ṽ ′ (0) the free parameter, and evaluated by means of a MacLaurin series and higher order coefficients are likewise obtained solving the equation order by order in ǫ. Movable singularities The constrained differential equation admits now a one parameter family of local solutions parametrized by τ . Still, because of the non linearity of the equation, we expect most of the solutions to end at a movable singularity, i.e. at a singularity whose location depends on the initial condition. We want to study the behavior of solutions in the neighborhood of such singularities, in order to confirm analytically the existence of such singularities and be able to recognize them in the numerical integrations, as well as to discuss possible interpretations in the terms of the f (R) theory. We will present in the next section the results of our search for a set of values of τ for which the singularity goes to infinity. Letφ c be the value of the field at which the singularity occurs, and suppose that the singular behavior is such that there exists an n 0 ≥ 0 such thatṼ (n) (φ c ) ∼ ∞ for every n ≥ n 0 . In order to understand what values of n 0 can occur for our equation, it is convenient to recast the equation (3.11) in the following form where the P i are two polynomials containing the same monomials but with different coefficients. As the polynomials P i have the same structure we deduce that forφ →φ c their ratio will in general go to a constant for any value n 0 . 
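(Aside, not part of the original analysis: the order-by-order procedure described above for the fixed singularity at the origin, imposing a Taylor ansatz, reading off the analyticity constraint and then the MacLaurin coefficients, is easy to automate. The sketch below uses an invented toy normal-form equation Ṽ'' = N/φ, not eq. (3.11) or (4.1); the point is only the mechanics: the φ^0 order fixes Ṽ(0) in terms of τ = Ṽ'(0), and each higher order fixes one further coefficient.)

```python
# Toy illustration (the function N below is invented, NOT the N of eq. (4.1)):
# impose a Taylor ansatz at the fixed singularity phi = 0 and solve order by order.
import sympy as sp

phi, tau = sp.symbols("phi tau", real=True)
n_max = 5
v = list(sp.symbols(f"v0:{n_max + 1}", real=True))
v[1] = tau                                    # tau = V'(0) is kept as the free parameter

V = sum(v[n] * phi**n for n in range(n_max + 1))
N = -2 * V + phi * sp.diff(V, phi) + sp.diff(V, phi) ** 2 + 1   # stand-in numerator
residual = sp.expand(phi * sp.diff(V, phi, 2) - N)              # toy equation V'' = N / phi

solution = {}
for order in range(n_max):
    coeff = sp.expand(residual.subs(solution)).coeff(phi, order)
    target = v[0] if order == 0 else v[order + 1]   # order 0: analyticity constraint on V(0)
    solution[target] = sp.solve(sp.Eq(coeff, 0), target)[0]

for sym, val in solution.items():
    print(sym, "=", sp.simplify(val))
# order 0 gives v0 = (tau**2 + 1)/2, the analogue of the constraint on V(0);
# the remaining orders give v2, v3, ... as functions of tau alone.
```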
Special situations can arise when some cancellation occurs in P 2 which does not happen in P 1 , and such cases will have to be discussed separately. As a consequence, in the general case the linear part of the equation cannot diverge, otherwise it could not be balanced by the rational part, i.e. both the potential and its first derivative do not diverge at the singularity, restricting the possible value of n 0 to n 0 > 1. At this stage, we can assume that in the neighborhood ofφ c the potential can be written as and that γ > 1 (so that n 0 > 1), and we can try to determine the value of γ by means of the method of dominant balance. In order to do so we can start with the guess that the second derivative is divergent atφ c , that is 1 < γ < 2. In such case, by studying the balance of terms we arrive at the equation γ − 1 = −γ + 2, leading to γ = 3/2, in accordance with our guess. Plugging (4.7) with γ = 3/2 into (4.6), we can iteratively work out all the coefficients in the expansion as functions of the parameter u 0 and of the singular field valueφ c . For example, in d = 4 we find for the leading order terms. The subleading corrections can be computed iteratively, but their expression is very long, and not particularly enlightening. Other singular behaviors are possible if P 2 has a zero. Such situations are more easily uncovered by studying the equation written in normal form, (4.1). Assuming that the first derivative of the potential is divergent (or more divergent than the potential itself) atφ ∼φ c , we obtain the equationṼ leading to a simple pole solutionṼ (φ) ∼ (φ −φ c ) −1 , which is consistent with the assumption. Subleading corrections can be worked out, confirming the possibility that such type of singular behavior can appear in a solution of the fixed point equation. Behavior at large field values We apply here the method of dominant balance to study the large field regime of the differential equation (3.11). We have already seen in (4.6) that whatever is the leading term (forφ → ∞ in this case) the quantum part of the equation in general goes as a constant plus subleading corrections, hence we have two possibilities: either the potential diverges at infinity, and the classical part of the equation defines the leading order, or the potential goes to a constant, and there must be some balance between linear and nonlinear part. In the first case, in theφ → ∞ limit the solution goes asṼ (φ) ∼ Aφ d d−2 + subleading terms , (4.10) where A is a free parameter. Subleading terms can be calculated by solving iteratively the differential equation for an ansatz of the typẽ (4.11) For d = 4, for example, the first few coefficients a n (A) are The coefficients are all inversely proportional to the bare parameter A, so that this expansion cannot be continued to A = 0, and that case must be treated separately. The asymptotic solution so far constructed defines a one-parameter family of solutions parametrized by the variable A, but as the equation is second order, we can ask if the asymptotic solutions have more degrees of freedom. In order to answer such question, we follow [31,18] and perturb the flow equation in the neighborhood of the solution we just found, i.e. we introduce a perturbation to the potential, substitute it into (3.11), and expand to linear order in ǫ. 
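(Aside, with an invented stand-in equation rather than (3.11): the balance γ − 1 = −γ + 2 quoted above can be checked numerically. The toy relation Ṽ'' = 1/(c − Ṽ') has exactly this structure, a first-derivative term balancing against the inverse of the second derivative, and its solutions terminate at a movable singularity φ_c with Ṽ'' ∼ (φ_c − φ)^(−1/2), i.e. γ = 3/2.)

```python
# Numerical check of the dominant-balance exponent on a stand-in equation
# (V'' = 1/(c - V') is invented for illustration; it is not eq. (3.11)).
import numpy as np
from scipy.integrate import solve_ivp

c = 1.0

def rhs(phi, y):
    V, W = y                       # W = V'
    return [W, 1.0 / (c - W)]      # V'' blows up where V' reaches c: a movable singularity

stop = lambda phi, y: (c - y[1]) - 1e-4     # halt just before the singular point
stop.terminal, stop.direction = True, -1

sol = solve_ivp(rhs, (0.0, 10.0), [0.0, 0.0], events=stop,
                max_step=1e-3, rtol=1e-10, atol=1e-12)

phi_c = 0.5 * c**2                 # exact singular point of this toy: (c - V'(0))**2 / 2
mask = (sol.t > 0.9 * phi_c) & (sol.t < phi_c)
slope = np.polyfit(np.log(phi_c - sol.t[mask]),
                   np.log(1.0 / (c - sol.y[1][mask])), 1)[0]
print("fitted exponent of V'' near the singularity:", slope)   # ~ -0.5, i.e. gamma = 3/2
```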
ReplacingṼ (φ) with (4.11), and keeping only the leading terms in the coefficients of the linear operator acting on the perturbation, in d = 4 we obtain the linear equation which allows a solution which goes asymptotically like where B 1 and B 2 are two integration constants. Note that eq. (4.14) seems to reduce to a first order equation for ω = −1/2, but as we will see for the Landau gauge for ω = 0 (which is the analogue of the case ω = −1/2 in the Feynman gauge) for that critical value of ω we simply need to include the subleading correction of the coefficient of δṼ ′′ (φ). Whereas the power-law solution in (4.15) merely shifts A in (4.11), the exponential solution would seem to be a new degree of freedom. However, for positiveφ (and ω > −1/2, otherwise the role of positive and negativeφ are interchanged) it grows faster than the solution it is perturbing, contradicting our asymptotic analysis, hence it must be discarded. On the other hand, for negativẽ φ it is an exponentially small perturbation, hence it is acceptable. As the perturbation is smaller than any power at largeφ, while the leading solution (4.11) contains only powers, it is not difficult to see that the full equation decomposes in a hierarchy of equations, according to powers of the exponential correction, that is, the exponential acts like an ǫ parameter and we can iteratively solve the equation to obtainṼ whereṼ [0] (φ) is the leading solution (4.11), while for ω = 0 we find Z(φ) = 768 π 2 A 2φ3 + 4224 π 2 Aφ 2 + 64 24 A + 769 π 2 φ , (4.17) and so on, leaving A and B as free parameters. The presence of a new degree of freedom atφ ∼ −∞ creates an interesting situation, as we already know that we have an analyticity constraint atφ = 0, hence if we had just one-parameter families of solutions at both plus and minus infinity it would be unlikely to have a global solution. 4 There remains to consider the special case A = 0, which we now proceed to examine for d = 4 and ω = 0. From the previous discussion of dominant balance we would expect in such case a solution that asymptotes to constant. Nevertheless, we should be careful as in that analysis we have excluded special cases leading to cancellations in the denominator of the quantum part of the equation. By plugging into the equation an ansatz of the typẽ we find at leading order the equation A 1 = 0, in accordance with the previous analysis. However, a careful look at the higher orders of the expansion reveals the presence of poles at A 1 = 1 and A 1 = 3/2, meaning that for those values the general expansion is not valid, and a separate treatment is needed. In fact, we find that such special values of A 1 also lead to solutions that are solvable with an iterative algorithm. 5 In all three cases (A 1 = 0, 1 and 3/2) we find no free parameter in the expansion (4.19), but by studying the linear perturbations we discover the presence of exponentially small corrections at negativeφ for A 1 = 0, exponentially small corrections at both positive and negativeφ for A 1 = 1, and a non-integer power correction at negativeφ for A 1 = 3/2. It is quite easy to see that exponentially small corrections always carry one new degree of freedom, while the analysis in the case of the non-integer power is slightly more tedious and we have not pushed it further (also because in our numerical analysis we saw no evidence of the A 1 = 3/2 asymptotic behavior for the Feynman gauge). 
Just as an example of the type of results, for A 1 = 0 we find that the coefficients in and so on, leaving B as the only free parameter. In conclusion, we found four isolated sets of solutions atφ → ±∞. As we will explain in the concluding section, from the point of view of the f (R) theory the most interesting solutions are those in the first class, i.e. (4.10), for which we have found the presence of two degrees of freedom atφ → −∞ and one atφ → +∞ (or the opposite for ω < −1/2). Fixed singularities We repeat here the analysis of the analyticity of the differential equation for the Landau gauge, starting with the study of the fixed singularity inφ = 0. Following 4.1.1, we recast the differential equation in its normal form (4.1) and then we expand it in a Laurent series employing a Taylor expansion for the potential. In this gauge we find that at leading order the equation reduces to which vanishes constraining the potential at the origin as 25) or restricting to ω = 0, which is the case we are interested in. Comparing (4.24) with (4.2) we note once more that the case ω = 0 in the Landau gauge is analogous to the case ω = −1/2 in the Feynman, so that the analytic properties of the equation in the two gauges are the same for those two particular values. For ω = 0 we have now an equation free of singularities. As a consequence, since the equation is unconstrained, we have (for d > 2) two degrees of freedom at the origin,Ṽ (0) andṼ ′ (0), and at least one atφ ± ∞, so that it seems more likely to find global solutions. On the technical side, the absence of a singularity atφ = 0 also means that in this case it is possible to integrate numerically from the origin without employing a MacLaurin expansion. Movable singularities As in the Feynman gauge we expect the non linearity of the equation to involve the presence of movable singularities. Since the polynomials P i in equation (4.6) contain the same monomials in both gauges, the analysis carried out in 4.1.2 with the method of the dominant balance still holds and we find in general the singular behavior (4.7) with γ = 3/2. However, because of the gauge dependence of the off-shell effective action, we end up with different coefficients for both the analytic and divergent part. For example, for d = 4 and generic ω we obtain etc. Also similar to the Feynman gauge is the presence of simple pole singularities, with (4.9) replaced byṼ (4.28) Behavior at large field values Since the method of the dominant balance leads to similar conclusions for both gauge choices, we expect also for the Landau gauge to find generically an asymptotic solutions of the form (4.29) We can iteratively solve the differential equation for this ansatz, obtaining in d = 4 and so on. As for the other gauge, we see that the coefficients are inversely proportional to A, so that also in this gauge we have to treat separately that case. Before studying those other solutions we focus on the number of free parameters of (4.29), by introducing a perturbation δṼ . We then linearize the equation for the perturbation and study the leading terms, obtaining the equation For ω = 0 the analysis is similar to the one we presented for the Feynman gauge. 
For ω = 0 we need to include the next order term in the coefficient of δṼ ′′ , and thus consider the equation which admits solutions with the asymptotic behavior The novelty here is that the leading power in the exponent is fourth rather than third order (a consequence of the different power ofφ in the coefficient of δṼ ′′ in (4.32) with respect to (4.31)), so that the solution does not discriminate positive from negativeφ, but rather leads to constraints on A. For A < 0, the solution (4.33) contains an exponential degree of freedom which grows faster then the perturbed function in both positive and negative field regimes, so that we must discard it. Interestingly such sector is the unphysical one, since negative A defines the asymptotic behavior of an unbounded potential. On the other hand, for A > 0 the perturbation is exponentially small both at positive and negativeφ, hence it is always acceptable, and we can work out the subleading corrections as done before for the Feynman case. The higher power in the exponent means that we have to solve more iteration steps before getting to the power-law corrections, but as we do not gain any qualitative insight from such analysis, we do not report further on that, the main message being that now we have two degrees of freedom at both plus and minus infinity. Regarding the case A = 0, making the ansatz (4.19) we find again (d = 4 and ω = 0) the same three special values A 1 = 0, 1 and 3/2, as in the Feynman gauge. The main difference appears in the case A 1 = 3/2, for which the expansion (4.19) now contains one degree of freedom, i.e. b 1 is a free parameter in terms of which all the other b n are expressed: By perturbing around such solution we find that in order to discover new solutions we have to include at least the next-to-leading order coefficients for largeφ in the linear equation, yielding (4.35) whose asymptotic solutions are a superposition of a solution that simply perturbs (4.34), and a series of logarithmic corrections, that carries a second degree of freedom, namely the free parameter c 1 . Numerical results In order to find global solutions we integrate out fromφ = 0 and search for a set of initial conditions τ such that the movable singularity goes to infinity in both the positive and negative field region. We present here our analysis for both gauges for ω = 0 and d = 4, starting with the Feynman gauge. Feynman Gauge We start a numerical integration at the origin (actually atφ = ±ǫ as explained in Sec. 4.1.1), and similarly to what done in [31], we plot the location at which we hit a singularity, as a function of the free parameter τ =Ṽ ′ (0). When we see a spike in such a plot, we interpret it as a hint of a possible global solution. Since spikes can occur as artifacts due to the scale of the plot, ending instead at a finite value, the next step is to show that such spike can be made arbitrarily long by increasing the numerical precision and by refining the mesh. In addition, in our case we have to produce such type of plots at both positive and negativeφ, looking for spikes that occur at the same value of τ in both ranges. At negativeφ the plot of the singularities looks like in Fig. 1. We apparently find a spike in the negative region for an initial condition τ ∼ 1.638, which however, when zooming in, reveals a richer fine structure, actually three peaks being present (only two of which are shown in the right panel of Fig. 1). 
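(Aside, not part of the original numerics: the "spike plot" scan described above can be sketched on a stand-in equation. The code below uses the d = 3 scalar-field LPA fixed-point equation with a Litim-type cutoff, the setting of ref. [31], with V'(0) = 0 by the Z2 symmetry of that toy and V(0) as the scanned parameter; in the Brans-Dicke case the scanned parameter is instead τ = Ṽ'(0), and the actual right-hand sides (3.11)/(3.19) would replace rhs. The value of c3 and the scan range are indicative only.)

```python
# Schematic scan over initial conditions: shoot from the origin and record where
# the movable singularity is hit.  The equation used here is a stand-in.
import numpy as np
from scipy.integrate import solve_ivp

c3 = 1.0 / (6.0 * np.pi**2)

def rhs(phi, y):
    V, W = y                                     # W = V'
    denom = 3.0 * V - 0.5 * phi * W              # vanishing denominator = movable singularity
    return [W, c3 / denom - 1.0]

def endpoint(sigma, phi_max=20.0):
    """Largest phi reached when shooting from the origin with V(0)=sigma, V'(0)=0."""
    sing = lambda phi, y: 3.0 * y[0] - 0.5 * phi * y[1]   # denominator crosses zero
    blow = lambda phi, y: 1e8 - abs(y[1])                 # or V' blows up
    sing.terminal = blow.terminal = True
    sol = solve_ivp(rhs, (1e-6, phi_max), [sigma, 0.0], events=(sing, blow),
                    rtol=1e-10, atol=1e-12, max_step=0.01)
    return sol.t[-1]

sigmas = np.linspace(0.002, 0.02, 200)
ends = np.array([endpoint(s) for s in sigmas])
# Plotting `ends` against `sigmas` gives the analogue of Fig. 1 / Fig. 3: a sharp
# spike marks an initial condition whose singularity runs off to large phi,
# i.e. a candidate global (fixed-point) solution.
```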
Such triple peak can be understood in terms of transition between different types of singular behavior. The most clear explanation is obtained in terms of the numerator and denominator of the normal equation, N and D in (4.1), which we plot in Fig. 2 for four representative cases. We find that for τ 1.638534 and τ 1.638597 both N and D diverge, together with their ratio, at someφ c thus signaling the pole type of singularity found in (4.9). In the range between those two value we find that D vanishes at someφ c , reaching zero with an infinite slope; at the same N reaches a finite value, and we deduce that we are hitting a singularity of the type (4.7) with γ = 3/2. The transitions between γ = −1 and γ = 3/2 coincide with two of the peaks observed in the fine structure of Fig. 1. We interpret the remaining spike at τ ∼ 1.638591 as signaling a transition (as τ increases) from a regime in which N is always positive, to one in which it changes sign twice before hitting hittingφ c . As seen in the zoomed plot in Fig. 1, spikes can be pushed farther away from the origin, however, high precision is needed and we have not tried to reach much beyondφ c ∼ −0.1. In fact, it turns out that a more detailed investigation of the spikes is not worth, as the remaining part of the plot, for positiveφ, turns out to be quite disappointing. Integrating in the positive field region for any initial condition, including the neighborhood of τ ∼ 1.638, we encounter a singularity, as can be seen in Fig. 3, so that we would have not in any case a global solution. Only one type of singular behavior is found in the positive domain, a typical example of which is shown in Fig. 4, and from which we recognize a behavior consistent with (4.7) and γ = 3/2. We did not find other spikes in both negative and positive region for other values of τ (outside the plot range in Fig. 3), so that in the end we conclude that there are no global solutions in d = 4 and ω = 0 in the Feynman gauge. Landau Gauge The search of global solutions is more complicated in the Landau gauge since we have two degrees of freedom at the origin. In order to search for fixed points we adopted the following strategy: i) we integrate numerically from the origin (since there is no fixed singularity we can directly impose initial conditions atφ = 0) for a fixed value ofṼ (0) varying the initial condition τ =Ṽ ′ (0), ii) we repeat the integration for a discrete set of positive and negative values ofṼ (0). As for the Feynman gauge we restrict our research to ω = 0 and d = 4. We start withṼ (0) > 0, for which we illustrate a representative outcome at negativeφ in Fig. 5. In this case we find a spike at τ = 1.5 and a continuum set of analytic solutions occurring for τ < τ c , where τ c is a critical value which depends on the initial conditionṼ (0), i.e. τ c ≡ τ c (Ṽ (0)). The peak at τ = 1.5 actually corresponds to an exact solution of the differential equation in normal form, which for generic d > 2 is given by the simple linear functioñ v(φ) (implying also that (5.1) does not admit linear perturbations). We are thus led to deem (5.1) unacceptable. Regarding the continuum set at negativeφ, we find it for an initial conditions τ smaller then a critical value τ c which, as we already mentioned, depends on the value of the initial conditioñ V (0). VaryingṼ (0) we observed the value of τ c to oscillate between a minimum value τ min ∼ 0.96 and a maximum τ max ∼ 1.12. 
By increasing the numerical precision we were able to prolong at will the entire group of solutions and we found all of them to behave asymptotically as Aφ 2 , being A a function of the initial conditions. A typical solution is illustrated in Fig. 6. The seemingly sharp edge in the second derivative is actually an optical artifact: working at high precision, and zooming around the edge one finds that the curve is smooth, as depicted in Fig. 7. We can understand the presence of such a short-scale transition as the rapid vanishing at largeφ of the exponential part of the solutions we discussed As it can be seen in Fig. 5 all the numerical integrations performed using initial conditions with τ > τ c lead (with the exception of τ = 3/2) to a singularity, which we found to be characterized by the exponent γ = 3/2. An acurate analysis reveals a transition in the way the solutions behave before reaching the movable singularity (i.e. the large field regime of the solution), from V (φ) ∼ A φ 2 at τ ∼ τ c , toṼ (φ) ∼ 3 2φ at τ ∼ 3/2. Such transition, together with the spurious solution (5.1), makes the equation particularly stiff around τ = 3/2, as it can be seen from the noise in Fig. 5. However, because of the presence of a singularity we did not put much effort on a more precise numerical integration of this group of solutions. Integrating towards positiveφ we discover an interesting situation: forṼ (0) > 0 no solutions meet any singularity. We were able to push the integration to arbitrarily largeφ > 0 without encountering singularities for all values τ , and we found solutions with τ < 3/2 to behave asymptotically likeṼ (φ) ∼ 3 2φ , and solution with τ > 3/2 to go asṼ (φ) ∼ Aφ 2 . Combining our findings for positive and negativeφ we conclude that the solutions withṼ (0) > 0 and τ < τ c form a continuous set of global solutions. Atφ = 0 andṼ (0) = 0 the equation is singular. Imposing an analyticity condition at the origin we find that τ = (1 ± √ 19)/4. We did not study these special solutions in detail. ForṼ (0) < 0 the typical situation is depicted in Fig. 8. All the singular solutions we found, for both positive and negative field values, diverge with exponent γ = 3/2. We found in the positive field region a continuum of solutions which do not end on a movable singularity for τ > 3/2, while at negativeφ we met no singularity for τ < 3/2, in both cases with an asymptotic behavior V (φ) ∼ 3 2φ . The two sets have no overlap, hence there are no global solutions in this case. In conclusion, in the Landau gauge in d = 4 and ω = 0, we found a two parameter family of global solutions forṼ (0) > 0 and τ < τ c (Ṽ (0)). Such result could have been expected to some extent, as in the Landau gauge we have no fixed singularity at the origin, and we have at least two classes of asymptotic behavior with two degrees of freedom each at both positive and negativẽ φ. The global solutions we found behave asymptotically asṼ (φ) ∼ Aφ 2 forφ → −∞, and as V (φ) ∼ 3 2φ forφ → +∞. The latter is an indication of an unusual character of such solutions, as that type of asymptotic behavior is the result of a balance between the classical and quantum parts of the RG equation, to be contrasted to the usual situation, where for k → 0 (i.e. the large field regime) only the classical part survives. Conclusions In this article we have presented a study of the Brans-Dicke theory (2.1) for an arbitrary potential V (φ) in the framework of the functional renormalization group. 
We have derived a differential equation in the local potential approximation for a generic parameter ω and dimension d, subsequently focusing our analysis on the case ω = 0 and d = 4, because of its classical equivalence with the f (R) theory. The main motivation for this work came from the asymptotic safety scenario of quantum gravity, which in the literature has been investigated mostly in the pure metric formulation, by means of truncations of the exact renormalization group equations. An important test for such approximation methods would be to show that at least some subclasses of truncations correspond to a series expansion of a functional approximation that explores an infinite dimensional subspace of the full theory space, a classical successful example of that being the local potential approximation in scalar field theory. However, for the case of gravity, such direction is progressing slowly because of the notorious difficulties related to working on curved backgrounds. The idea we have explored in this paper was to exploit the classical equivalence between Brans-Dicke theory and f (R) gravity in order to be able to study a functional approximation of gravity on a flat background. Besides such motivation, a study of alternative formulations of gravity or modified theories is interesting in its own, and studies like the one we presented here can help address the question of quantum equivalence of different formulations. In order to achieve our goals we have evaluated the renormalization group equation for a generic potentialṼ (φ) in two different gauge choices, namely a Feynman and a Landau gauge, allowing us to discern possible gauge artifacts. As a result of our study, we found a number of important differences between the two gauges. In particular, we found no global solutions of the fixed point equation in the Feynman gauge, whereas we found a two-parameter family of global solutions in the Landau gauge. While some gauge dependence was expected (due to the approximations employed and to the fact of working off-shell, see for example [51,47]), we would have expected that at least qualitative features like the number of fixed points, and of associated relevant directions would be gauge independent (in principle together with any observable quantity, but in practice this property is expected to hold only approximately due to the approximations used). Being the results in our two gauges so different even at a qualitative level, we are led to infer some inconsistency of the model under consideration in the present approximation. Motivated by the relation to f (R) gravity, we did not analyze the case ω = 0 in detail, but we can identify the freezing of the Brans-Dicke parameter to ω = 0 as the culprit of the inconsistent scenario we uncovered. We expect the strong gauge dependence to be lifted once the Brans-Dicke parameter is promoted to a running coupling ω k , in the sense that in any gauge there will be some critical value ω c where something special happens (e.g. a discrete or continuous set of fixed points appears), the value of ω c being gauge dependent, but not so the overall picture. 6 For example, we already know that in the Feynman gauge the value ω = −1/2 gives very similar results to the Landau gauge at ω = 0, and it would be interesting to test whether such critical values correspond to fixed points of ω k for the two gauges, reached either in the UV or in the IR. 
In view of our results and of the possible solution we just outlined, we can draw an important conclusion: due to its renormalization group flow, the Brans-Dicke theory at the quantum level needs a running coupling ω k = 0 in order to be consistent. Since for ω = 0 the equivalence with the f (R) theory is broken, we are led to suggest that Brans-Dicke theory and f (R) gravity are inequivalent at the quantum level. Needless to say, this should not be intended as a proof of inequivalence, but rather as a logical interpretation of the results we found. We should point out another aspect which also hints to a non-equivalence of Brans-Dicke theory and f (R) gravity at the quantum level. As we explained, the condition for a solution of the FRGE to be a valid fixed point is that it should be a global solution. While it is quite clear from our analysis that, at least within the present approximation, no nontrivial fixed point can be found for the Brans-Dicke theory at ω = 0 in the Feynman gauge, we should be careful in translating such statement back into f (R) gravity. Due to the nonlinearity of the Legendre transform it could happen that a problematic singularity in one theory would turn into a harmless one in the other, or vice versa. We should indeed remember that the following relations hold (here in dimensionless variables): As a consequence, if a singular point |φ c | < ∞ is such that the first derivative of the potential is divergent, then in the f (R) theory it simply means thatφ c is mapped toR c = ±∞, depending on the sign ofṼ ′ (φ c ). Although that would correspond to a strange situation in whichf ′ (R) does not diverge at infinity (usually the asymptotic behavior is a power law dictated by the tree level part of the equation [16,18,19], implying that at infinityf ′ (R) diverges for any d > 2), that would not be something we can discard as unacceptable. This is precisely what happens in reverse for the Landau gauge: we found global solutions forṼ (φ), but their first derivative is such that asymptoticallyṼ ′ (φ) ∼ 3/2 forφ → +∞, and thus their transform would lead to an f (R) theory valid only up toR c = 3/2. On the other hand, if the potential is such that only its derivatives of order greater or equal to two are divergent, then the singular point is mapped to |R c | < ∞, and thus also the transform of the potential is not a global function. The latter is precisely the case for the Feynman gauge, for which we saw that the singularities at positiveφ are characterized by an exponent γ = 3/2, that is, they have a finite first derivative at the singular point. Regardless of its connection to the f (R) approximation, the study of Brans-Dicke theory is interesting in its own, and, being a nonrenormalizable theory, it is natural to wonder whether an asymptotic safety scenario applies to it. From such point of view, we should emphasize that what we have presented here is the result of the leading order in an approximation which should be systematically improved. The local potential approximation we employed can be considered, in fact, as a "double LPA" since we neglected both the renormalization of the coupling Z of the operator φ R (having set from the start Z = 1) and of the parameter ω. Both could be promoted to functions Z(φ) and ω(φ), thus leading to a next-to-leading order approximation which could uncover an anomalous scaling of φ and the existence of nontrivial fixed points. 
A The two-dimensional case In two dimensions the fixed point equations in both gauges reduce to ω-independent first order equations. The analysis is thus quite different in this case, it is actually much easier, and we can proceed mostly by analytical means. Explicitly, the equations in d = 2 reduce to 7 for the Feynman gauge, and toṼ for the Landau gauge. Both equations can be easily integrated, leading to algebraic equations implicitly defining the solutionṼ (φ). As equation (A.1) is slightly more complicated to study than equation (A.2), but at the end it leads to very similar results, we will present the explicit analysis only for the Landau gauge. The fact that the two gauges lead to similar results in this ωindependent case supports our hypothesis that in higher dimensions the strong gauge dependence we found is an artifact of the restriction to ω = 0. The constant of integration C = 4 π (v 0 − φ 0 ) + log y 0 parametrizes a one-parameter family of global solutions, which hence are all acceptable fixed points. The asymptotic behavior of the Lambert function is such thatṼ (φ) ∼ φ for φ → +∞, andṼ (φ) ∼ e 4πφ+C for φ → −∞. We can study the linear perturbations around such fixed points, by writing as usual 7 The field is dimensionless in two dimensions, hence we omit the tilde. A being an arbitrary normalization constant, which we can fix to one. Given the exponential fall-off at φ ∼ −∞ of the fixed point solutionṼ (φ), we see that we must impose the constraint λ ≤ 2 in order to avoid exponentially growing perturbations. Indeed the asymptotic behavior of the eigenperturbations is v(φ) ∼ 1 − 2−λ 2 (4πφ) −1 for φ → +∞, and v(φ) ∼ (4π) for φ → −∞. Apart from the upper bound on λ, we do not have other restrictions, hence the perturbations form a continuous spectrum. However, for λ < 2 all the perturbations are redundant, corresponding to a field redefinition φ → φ + (Ṽ ′ (φ)) −λ/2 . We are left with only one essential perturbation, the constant one, v(φ) = A. One special solution of the fixed point equation isṼ (φ) = 0, whose linear perturbations satisfy with solutions v(φ) = A e (2−λ)2πφ . In order to avoid exponentially growing solutions in this case we have to restrict to λ = 2, that is, the only allowed perturbation is again a constant potential, which is a relevant perturbation, and which actually is an exact solution of the full flow equation. We conclude noting that all the solutions in d = 2 do not admit an f (R) interpretation, as (6.1) together with the asymptotic behavior of the fixed point solutions imply thatR ∈ (0, 1). The departure from f (R) is of course most evident in theṼ (φ) = 0 case, where the equation of motion obtained by varying φ is simply R = 0.
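(Aside: the implicit d = 2 solution is not reproduced in the extracted text above. As a purely illustrative assumption, the sketch below evaluates one closed form built from the principal branch of the Lambert function, V(φ) = W(e^{4πφ+C})/(4π), chosen only because it reproduces the two quoted asymptotics; the paper's exact normalization may differ.)

```python
# Hedged sketch: a Lambert-W expression with the quoted d = 2 asymptotics,
# V ~ phi for phi -> +infinity and V ~ exp(4 pi phi + C) for phi -> -infinity
# (up to constants).  This is an assumed form, not necessarily the paper's solution.
import numpy as np
from scipy.special import lambertw

C = 0.3                                        # integration constant labelling the family

def V(phi):
    return np.real(lambertw(np.exp(4 * np.pi * phi + C))) / (4 * np.pi)

for p in (-2.0, -1.0, 0.0, 1.0, 2.0):
    print(f"phi = {p:+.1f}   V = {V(p):.6e}")

# asymptotic checks: the ratio to phi tends (slowly, with log corrections) to 1,
# and the negative tail matches the exponential up to the 1/(4 pi) normalization
print(V(4.0) / 4.0)
print(V(-2.0) * 4 * np.pi / np.exp(4 * np.pi * (-2.0) + C))
```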
Water quality in a geomorphologically complex lagoon of the Gulf of Mexico, based on recent data compared with information from 30 years ago

River water reaches coastal systems with a composition that depends on the edaphology of the watershed, generating a gradient between fresh and marine water through tides whose amplitude varies locally. To this must be added the climate and its variability, which produces different concentrations of dissolved chemical compounds in the water both spatially and temporally, within intervals that can be considered normal as long as that variation is known and is not judged against official regulations that demand specific concentrations and do not recognize environmental change. A factor foreign to the nature of coastal environments is the influence of anthropogenic activities, which push the physicochemical conditions of the water beyond their normal range and tend toward conditions deleterious not only for aquatic organisms but for people as well. A set of standard chemical determinations defines the physicochemical condition, or quality, of the water: temperature, salinity, pH, dissolved oxygen and its percent saturation, chemical oxygen demand (COD, a measure of the organic load) and the so-called nutrients (nitrogen salts such as ammonium, nitrites, nitrates and total nitrogen, which includes the organic fraction) together with orthophosphates and total phosphorus (which also includes the organic fraction).

1 Introduction
River water reaches coastal systems with a composition that depends on the edaphology of the watershed, generating a gradient between fresh and marine water through tides whose amplitude varies locally. To this must be added the climate and its variability, which produces different concentrations of dissolved chemical compounds in the water both spatially and temporally, within intervals that can be considered normal as long as that variation is known and is not judged against official regulations that demand specific concentrations and do not recognize environmental change. A factor foreign to the nature of coastal environments is the influence of anthropogenic activities, which push the physicochemical conditions of the water beyond their normal range and tend toward conditions deleterious not only for aquatic organisms but for people as well. A set of standard chemical determinations defines the physicochemical condition, or quality, of the water: temperature, salinity, pH, dissolved oxygen and its percent saturation, chemical oxygen demand (COD, a measure of the organic load) and the so-called nutrients (nitrogen salts such as ammonium, nitrites, nitrates and total nitrogen, which includes the organic fraction) together with orthophosphates and total phosphorus (which also includes the organic fraction).1 To determine the physicochemical characteristics of the water of a lagoon and any tendency for them to change, previous information is needed against which its present condition, modification or impact can be compared. If no such background exists for a given coastal lagoon, information from nearby systems or from other lagoons in the country should at least be consulted. It is noteworthy that the Mexican regulation, previously NOM ECOL 001 and now NOM-001-SEMARNAT-2003,2 was unfortunately developed for wastewater discharges, including treated ones, and can induce misinterpretations and serious mistakes.
Beyond the full natural range of variation of these chemical parameters, the influence of climate change must now also be taken into consideration: it affects the coast through rising sea level and through declining contributions from the rivers discharging into the coastal zone, a consequence of reduced flow and of damming for agriculture and urban services, together with increasing inputs of substances harmful to ecosystems. It is therefore necessary to sample each water body adequately, with appropriate spatial and temporal planning, in order to determine the extent of natural variation as well as the anthropogenic factors affecting water quality and the system's assimilation capacity. Mexico has just over 125 coastal systems,3 which are influenced both by local anthropogenic activities and by the discharges carried by waterways from the head of the basin to the coast; these bring a wide variety of contaminants collected along the way, organic and inorganic materials such as nitrogen and phosphorus, fertilizers, biocides, metals and general trash, including plastics, among others, which differ in degradation time and in their residence in the pelagic or benthic environment (flora and fauna) and in the sediments of lagoons, estuaries and bays, and even in the ocean.4 On this basis Mandinga Lagoon, Veracruz, was chosen in order to determine its physicochemical conditions, or water quality, and their changes in one sampling during the rainy season (August 2016) and another during the dry season (March 2018), considering not only the two climatic seasons but also the increase in population; water quality is thus assessed both for the natural effects of the season and for those caused by the human settlements on its periphery and their discharges into the lagoon system. The results are likewise compared with previous studies from the early 1970s onward, such as Vazquez,5 Contreras,6 Aguilar,7 de la Cruz,8 CONABIO,9 Contreras,10 Contreras Espinosa,11 Barreiro-Güemez,12 and Herrera Silveira,13 among others.

Study area
Mandinga Lagoon is located between 19°00' and 19°06' north latitude and between meridians 96°02' and 96°06' west (Figure 1), with an area of 3,250 ha and climate type AW2(w)(i')w'.14 Rainfall is 1,500 mm per year, falling mainly from June to October, with the dry season from November to May. The lagoon has a north-south orientation while the nearby coast runs northwest-southeast, forming the atoll of Antón Lizardo. To the northwest the lagoon system is separated from the sea by a barrier of sand dunes. It is associated with the Jamapa River, which originates in snowmelt from Pico de Orizaba, runs about 150 km, receives several small tributaries, and flows into the Gulf of Mexico.

Methodology
In August 2016 eight sampling sites were chosen, and the same sites were sampled again in March 2018 (Figure 1). Sampling consisted of in situ determination of temperature, salinity, pH, dissolved oxygen and its saturation, and turbidity, recorded in the first 5 to 10 cm of the surface and near the bottom, depending on depth, using a HIDROLAB YSI 556 MPS multiparameter instrument. In addition, water samples were collected for the determination of chlorophyll "a" and nutrients and kept refrigerated at 4°C until analysis in the laboratory, following the techniques recommended by Strickland and Parsons15 and the APHA.16 The results were compared with previous information from 1980 to 2013 in order to estimate possible changes in the physical chemistry attributable to peripheral human settlements over the interval between the 1980s and 2013.
In addition, the trophic status of the lagoon was calculated using the trophic index of Carlson.17

Results and discussion
Temperatures in August were, at the surface, between 29.78 and 31.11°C, with little difference at the bottom, between 29.13 and 30.94°C. In April the surface interval was between 25.13 and 27.11°C and the bottom interval between 24.91 and 25.89°C, marking a seasonal difference between the warmer rainy period and the cooler dry period. Similar intervals have been recorded by other authors since the early 1980s and are considered typical of a Mexican coastal lagoon at this tropical latitude, depending on the time of year. Salinity in August (rainy season) ranged from 8.61 to 18.11 ups as a result of the location of the sampling sites, with the highest values at the marine intercommunication, where the influence of the Jamapa River was recorded together with the input of seawater along the bottom; the lower-salinity sampling sites were in the middle of the lagoon. In April (dry season) the variation in salinity more than doubled, with the maximum at the marine mouth (31.64-31.78 ups) and lower, more homogeneous values at the other stations at both surface and bottom levels (27.65-29.36 ups). Twenty-eight years earlier, in the months of September-October and May, Bornn18 determined a saline range from 4.58 to 29.1 ups in the rainy and dry seasons, respectively. Amador and Cabrera19 determined a salinity range from 3.1 to 33 ups, and Morales20 noted that the haline variation ranged from 13.9 to 34.6 ups in the dry season, as did other authors such as Gomez21 and Castan22 with similar intervals. This conservative parameter depends on the rains, the dry season and the influence of river discharge, and therefore on the geomorphology and the location of the sampling site. The dissolved oxygen content in both samplings was heterogeneous, since it is a non-conservative factor governed by the effects of photosynthesis and respiration rather than by the season, with the lowest values found at the bottom. As for the percent saturation, in the rainy season it was estimated at 43.7% to 111.3%, with one specific site where the lowest level was considered risky for the biota; in the dry season the saturation ranged between 107.8 and 112.1% with greater uniformity, which denoted higher photosynthetic activity. A single low concentration was recorded at station 2 (Estero el Conchal), with 43.7% saturation at the bottom, a critical level for the survival of benthic organisms. Cabrera19 determined a range between 1.7 and 9 mg/L, as did other authors such as Gomez21 and Castan22 with similar intervals. The pH range was between 7.4 and 8.8, levels considered normal for such coastal environments. For his part, Cabrera19 reported a pH range between 5.9 (acidic condition) and 8.3 (alkaline condition), a result of the importance of geomorphology and the time of sampling, which involves the influence of the diurnal variation in phytoplankton activity and plankton respiration. Nitrates and nitrites in both samplings ranged from undetectable (0 µM) up to 1.23 µM and 0.14 µM, respectively, concentrations considered low; they are similar to those determined by Bornn18 and Contreras,10 more than 30 and 20 years ago respectively, which indicates the maintenance of good environmental status with regard to these nutrients. The predominant nitrogen form was ammonium, specifically in the April sampling, where concentrations >12 µM were recorded. There were few differences in the concentrations of total nitrogen between the two sampled months, although in both August and April the contents were approximately twice as high at sites 7 and 8 (lagoon center), at surface and bottom, compared with the other stations.
The determinations of Bornn,18 Contreras10 and Barreiro12 showed that the nutrients were more abundant in the rainy season, when nitrates ranged between 0.7 and 3.4 µM and ammonium between 2 and 8 µM; similar amounts fall within the range found in 2012-2013. Accordingly, these nitrogen nutrients define the lagoon as having no water-quality problems. The range of variation of orthophosphates and total phosphorus (Tables 1 and 2) showed lower levels in April, which were considered within the normal variation for coastal systems; however, in August, sites 7 and 8 (middle of the lagoon) showed high values that exceeded what is recorded as normal in coastal lagoons. Comparatively, in 1982 Bornn18 quantified lower contents of both orthophosphates (1.7-2.8 µM) and total phosphorus (2-12 µM), and Contreras10 estimated between 0.01 and 5.0 µM of orthophosphates and 5 to 10 µM of average total phosphorus; in 2002 Barreiro12 determined a phosphate content between 0.2 and 7 µM. According to these authors, those levels indicated a satisfactory state of ecological health, but the determinations from 2012-2013 in the center of Mandinga Lagoon (sites 7 and 8) exceeded them by more than double on average, probably due to different chemical factors between the water column and the sediment,23 although the contribution of wastewater from the human settlements in peripheral villages cannot be discarded. The contents of chlorophyll "a" were heterogeneous in space and time; in August they reached 56.4 mg/m3 at site 5 (Laguna La Redonda) and 34.1 mg/m3 at station 3, concentrations that can be considered slightly elevated compared with the rest of the sampled sites. Amador19 quantified levels of this pigment between 54.19 mg/m3 and 40.27 mg/m3 in May and August, respectively. Barreiro12 determined that the highest concentration of chlorophyll "a" (22 ± 2.3 mg m-3) was recorded in the dry season, which corresponds to the spring, while the lowest content (3.4 ± 1.3 mg m-3) also occurred during the dry season (April). Based on the Carlson index,17 the trophic level of Mandinga Lagoon was heterogeneous in space and time, ranging from eutrophic for phosphorus to eutrophic-hypertrophic for the chlorophyll "a" content, as presented in Tables 1, 2 and 3. However, those categories can be uncertain because of the wide variation of the parameters used in this study, which is considered normal; if these indices are used, the category would fall within natural eutrophication, and at certain sampled sites within cultural eutrophication.

Conclusions
Some physicochemical characteristics, or the water quality, of Mandinga Lagoon have remained for several decades oscillating within normal ranges, even with the higher concentrations associated with so-called natural eutrophication and with the spatiotemporal variations typical of a coastal area. However, the contents of orthophosphate, total phosphorus and chlorophyll "a" have increased at specific peripheral sites to eutrophic or hypereutrophic levels according to the Carlson index;17 in this case it is possible to speak of cultural eutrophication. This condition may be due to the 1.87% increase in population (SEFIPLAN 2016).24
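The trophic categories cited above rely on Carlson's trophic state index. As a point of reference, the sketch below shows how such an index is typically computed; it is a minimal illustration assuming the standard Carlson (1977) formulations and commonly used class boundaries, and the input concentrations are hypothetical rather than the values in Tables 1-3.

```python
import math

def tsi_chlorophyll(chl_ug_l: float) -> float:
    """Carlson (1977) trophic state index from chlorophyll-a in ug/L (= mg/m^3)."""
    return 9.81 * math.log(chl_ug_l) + 30.6

def tsi_total_phosphorus(tp_ug_l: float) -> float:
    """Carlson (1977) trophic state index from total phosphorus in ug/L."""
    return 14.42 * math.log(tp_ug_l) + 4.15

def classify(tsi: float) -> str:
    """Commonly used class boundaries; exact cut-offs vary between authors."""
    if tsi < 40:
        return "oligotrophic"
    if tsi < 50:
        return "mesotrophic"
    if tsi < 70:
        return "eutrophic"
    return "hypereutrophic"

# Hypothetical station values, not the measured data from Tables 1-3.
chl = 56.4                   # chlorophyll "a" in mg/m^3
tp_um = 8.0                  # total phosphorus in umol/L
tp_ug_l = tp_um * 30.97      # convert umol/L of P to ug/L (P molar mass ~30.97 g/mol)

for name, tsi in [("TSI(Chl)", tsi_chlorophyll(chl)),
                  ("TSI(TP)", tsi_total_phosphorus(tp_ug_l))]:
    print(f"{name} = {tsi:.1f} -> {classify(tsi)}")
```

With these assumed formulas, a chlorophyll "a" reading of 56.4 mg/m3 maps to a score just above 70, i.e., the hypereutrophic class, which is consistent with the qualitative categories discussed above.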
2020-03-07T14:28:21.107Z
2018-01-01T00:00:00.000
{ "year": 2018, "sha1": "68454f43a4f6dda80db5f3ff175eddf4d1667c87", "oa_license": null, "oa_url": "https://doi.org/10.15406/jamb.2018.07.00216", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "68454f43a4f6dda80db5f3ff175eddf4d1667c87", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [] }
236517470
pes2o/s2orc
v3-fos-license
APPLICATION OF ELECTROCHEMICAL ACTIVATED SOLUTION ON BROCCOLI SEEDS

Electrochemical activated solution (EAS) possesses a wide variety of antimicrobial activities. EAS is known as a powerful disinfectant solution with a high ability to kill most bacteria and fungi while being safe for humans. It has therefore been studied and used in many different areas of life, such as medicine and the food processing industry. However, there are few reports on the effect of EAS in agriculture. This study was conducted to determine the effects of treating broccoli seeds with EAS on the seed germination rate and the growth of the sprouts. The EAS was generated from a KCl solution, which was then diluted with distilled water to 1.0, 0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2 and 0.1 strengths of the source EAS. The results showed that treating seeds with EAS could reduce the quantity of coliform on the surface of broccoli seeds without affecting the seed germination rate or sprout growth. The 0.3 strength EAS (pH 6.7; oxidant concentration of 8.6 ppm; ORP 560 mV) was the most suitable solution for killing coliform on broccoli seeds.

INTRODUCTION
In recent times, people have tended to switch to healthier lifestyles and foods, which has increased the consumption of raw sprouts. Raw sprouts are very popular in salads and many other dishes in many parts of the world, especially in Viet Nam. Sprouts are low in calories and fat, and provide substantial amounts of key nutrients, such as vitamins, minerals, proteins, enzymes, folate, and fiber [1]. Despite being a popular healthy food, raw sprouts have been linked to multiple outbreaks of foodborne illness. Most sources of these outbreaks have been traced to seeds contaminated with Salmonella and Escherichia coli O157:H7, followed by Listeria monocytogenes, Staphylococcus aureus, Bacillus cereus, and Aeromonas hydrophila [2-9]. The largest outbreak linked to bean sprouts contaminated with Salmonella occurred in Ontario, Canada [10], and resulted in more than 600 cases of illness. In 2007, a region-wide assessment of the microbiological quality of retailed mung bean sprouts was conducted in the Philippines; ninety-four percent of the tested samples were positive for the presence of Salmonella spp., some samples had coliform counts as high as 5.90 log10 CFU/g, and Escherichia coli counts reached 5.50 log10 CFU/g [11]. Tournas [12] conducted a survey of fresh and minimally processed vegetables and sprouts in the Washington DC area and found that yeasts were the most prevalent organisms in these samples; yeast levels ranged from less than 100 to 4.0 × 10^8 CFU/g, while mold counts generally ranged from less than 100 to 4.0 × 10^4 CFU/g [12]. One of the major causes of microbial contamination of sprouts is the seed used. As with many other crops, the seeds used for sprouting are obtained from plants grown in open fields without special measures, and subsequent commercial sprouting conditions favor microbial growth, including that of pathogens [13,14]. Therefore, in sprout production, assuring the absence of pathogens on seeds is regarded as a critical control point, as defined by the Codex Alimentarius Commission [15]. Many seed decontamination methods have been investigated over the years [16,17], including chemical treatments (a single chemical compound and/or a combination of several chemicals) [18-21]. These methods gave good sterilizing efficiency, but they release a considerable quantity of chemicals into the environment.
One study reported that treating seeds with chemicals (HP/carvacrol) reduced the germination percentage to unacceptable levels [21]. Dry-heat treatment in combination with irradiation has been studied by Bari et al. [2], who found that dry heat combined with radiation doses of up to 1.0 kGy did not negatively impact the germination rate or sprout length of alfalfa, broccoli, and radish seeds, but did decrease the length of mung bean sprouts. To date, a preeminent method for sprout production, one that reduces the microorganisms on the surface of the seeds without affecting seed germination, is still being sought. EAS is usually generated by electrolysis of a saline solution in an electrochemical chamber with a diaphragm [3]. Many studies have shown that EAS has strong disinfection activity and is safe for humans and the environment, and it has been widely used in the food processing industry [22,23]. However, few reports have addressed the effect of EAS in agriculture, especially in seed treatment. Thus, in this study, EAS was applied to the treatment of broccoli seeds in order to reduce the coliform load on the seed surface, and its effect on the growth of the sprouts was evaluated.

Materials
Fresh broccoli seeds used in the experiment were obtained from VinEco Agricultural Investment, Development and Productions LLC. Seeds that were uniform in shape and size were selected for the experiment and kept at 4 °C until used. The seeds were contaminated with coliform at around 6 log CFU/g before the experiment.

Experimental solutions preparation
An EAS was generated by dissolving 20 g of KCl, as the electrolyte, in 20 L of distilled water and electrolyzing it in a diaphragm electrolyzer under the following electrochemical conditions: 8 V; 0.7 A; anode flow rate 8 L/h; cathode flow rate 2 L/h. The electrochemical solutions from the anode and the cathode were mixed together at an anode:cathode ratio of 8:0.5. The mixed solution was diluted with distilled water to 1.0, 0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2 and 0.1 strengths. The pH value and oxidation-reduction potential (ORP) of the EAS were measured with a HACH SenSion-156 device. The available chlorine concentration (ACC) in the EAS was determined by an iodometric method (SMEWW 4500-Cl.B) and by a photometric method with HACH DPD reagent (USA) on a DR 2800 instrument (HACH, USA).

Seed treatments
In each experiment, 25 g of broccoli seeds in a 250 mL beaker were soaked in the treating solution (the different EAS) for 5 min and then washed with 250 mL of distilled water.

Enumeration of total bacterial counts
The total bacterial counts on the broccoli seeds treated with the different EAS and on a control sample (not treated) were determined according to ISO 9308-1:2000. All plates were incubated at 37 °C for 24 h. After incubation, colonies of total natural bacteria were enumerated and expressed as the logarithm of colony-forming units per gram (log CFU/g).

Determination of germination percentage
The germination percentage was determined as described by Hu et al. [24]. Samples of 5 g of the control and treated seeds were placed on sterile hydroponic sponges for 3 days at an ambient temperature of 25 °C (± 2 °C), with sterile water added periodically to maintain a high-moisture environment. The total number of seeds and the number of germinated seeds in the containers were then counted, and the germination percentage was defined as the ratio of germinated seeds to the total number of seeds.
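Two simple calculations recur in the methods above: converting plate counts into log CFU/g and expressing germination as a percentage. The sketch below is a minimal, hypothetical illustration of that bookkeeping (the colony counts, dilution factors and seed numbers are invented for the example); it is not the authors' data-processing procedure.

```python
import math

def log_cfu_per_gram(colonies: int, dilution_factor: float,
                     plated_volume_ml: float, sample_mass_g: float,
                     diluent_volume_ml: float) -> float:
    """Convert a plate count to log10 CFU per gram of seed.

    colonies          - colonies counted on the plate
    dilution_factor   - e.g. 1e-3 for a 10^-3 serial dilution
    plated_volume_ml  - volume spread on the plate
    sample_mass_g     - mass of seeds homogenised
    diluent_volume_ml - volume of diluent used for the homogenate
    """
    cfu_per_ml = colonies / (dilution_factor * plated_volume_ml)
    cfu_per_g = cfu_per_ml * diluent_volume_ml / sample_mass_g
    return math.log10(cfu_per_g)

def germination_percentage(germinated: int, total: int) -> float:
    """Germinated seeds as a percentage of all seeds in the container."""
    return 100.0 * germinated / total

# Hypothetical example: 148 colonies on a 10^-3 plate, 0.1 mL plated,
# 25 g of seeds homogenised in 225 mL of diluent; 97 of 100 seeds germinated.
print(f"{log_cfu_per_gram(148, 1e-3, 0.1, 25.0, 225.0):.2f} log CFU/g")
print(f"{germination_percentage(97, 100):.1f} % germination")
```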
Determination of broccoli sprout growth
To determine the growth of the broccoli sprouts, 100 sprouts from the germinated seeds of each experimental sample and of the control were left to continue growing for 4 days under the same cultivation conditions. The length of the broccoli sprouts was measured with a ruler.

Statistical analysis
All trials were replicated three times under the same experimental conditions and using broccoli seeds from the same source. The reported plate count data represent the mean values obtained from three individual trials, with each of these values being obtained from duplicated samples. Data were subjected to analysis of variance using the Microsoft Excel 2019 program. Significant differences in plate count data were established by the least significant difference at the 5 % significance level.

Solutions properties
The pH, ACC and ORP values of the different experimental solutions are shown in Table 1. Distilled water had pH 6.9, ACC < 0.05 ppm and ORP 350 mV. It can be seen from Table 1 that the ACC of the 10 different strengths of EAS ranged from 2.9 to 28.6 ppm, the pH values were 6.5-6.7 and the ORP was 420-850 mV. Upon dilution of the EAS, the ACC and ORP of the diluted solution decreased, while the pH value slightly increased, probably because of the reduction of the ACC in the solution.

Effect of EAS on the total bacterial counts on the surface of broccoli seeds
The effect of EAS on the total bacterial counts on the surface of broccoli seeds is presented in Fig. 2. The coliform counts of the seeds in the control sample and of those treated with tap water reached as high as 6.17 and 4.92 log CFU/g, respectively. Meanwhile, the coliform counts on the seeds treated with EAS declined sharply, to a mere 1.05 to 2.57 log CFU/g. At available chlorine concentrations of 28.6-8.6 ppm (EAS strengths 1.0 to 0.3), the coliform counts on the seeds were strongly reduced (total coliform between 1.05 and 1.22 log CFU/g), with no considerable difference in total coliform among these samples. In contrast, the total bacterial counts were substantially higher on the seeds treated with EAS of 0.2 and 0.1 strength (ACC of 5.7 and 2.9 ppm, correspondingly), at 2.16 and 2.57 log CFU/g, respectively. This result shows that the sterilization capacity of EAS for broccoli seeds increased with the available chlorine concentration. However, at available chlorine concentrations of 8.6-28.6 ppm (EAS strengths 0.3 to 1.0), the effect of the EAS was not significantly different. In comparison with other effective methods, such as the combination of high-pressure treatment, temperature and antimicrobial compounds studied by Peñas et al. [21], in which a microbial reduction of 4.5-5 log CFU/g was achieved at a hypochlorite concentration of 18,000 ppm and a pressure of 200 MPa, the EAS treatment gave a coliform reduction of up to 5.12 log CFU/g at an ACC of a mere 28.6 ppm under normal conditions. This strong disinfecting effect is due to the fact that EAS contains not only chlorine-based oxidants but also many other strong oxidants such as atomic oxygen, singlet molecular oxygen (1O2), O3, free radicals, etc. These oxidants have been shown to have a strong disinfecting effect even at small concentrations. In addition, EAS exists in a pseudo-stable (metastable) state in which the composition of the oxidants is constantly changing, so microorganisms are unable to adapt and develop resistance [25].
Meanwhile, the small concentration of oxidants ensures safety for human health and the environment during long-term use [26]. This EAS treatment method not only has a sterilization effect as high as the combination of high-pressure treatment, temperature and antimicrobial compounds, but is also easier to apply and cheaper, because high-pressure and high-temperature conditions are not necessary. Another method is the dry-heat treatment combined with irradiation proposed by Bari et al. [2], in which dry heat for 17 h associated with irradiation at a dose of 0.25 kGy completely eliminated E. coli from broccoli seeds (the E. coli count before treatment was 5.2 log CFU/g). That method gave a higher sterilization effect than the EAS method. However, the dry-heat treatment combined with irradiation requires a long treatment time (dry heating for 17 h) and consumes a lot of electricity. Therefore, Bari's method is difficult to apply widely because of the equipment and operating costs. The EAS method, on the other hand, is much faster and easier to apply, while still providing the necessary sterilization efficiency. Furthermore, since the production cost of 1 liter of EAS was only about 2,000 VND (about 0.1 USD), the cost of applying this method is much lower than that of other seed treatment methods. In comparison with other chlorine compounds, using EAS is also more efficient and safer because the formation of organic halogen compounds is significantly limited [27].

Effect of EAS on the germination percentage of broccoli seeds
The effect of EAS on the germination of broccoli seeds is shown in Fig. 2. After treatment with the different EAS strengths and with tap water, the germination rate of the broccoli seeds was around 96-99 %. There was no considerable difference in the number of germinated seeds among these experiments. This result shows that seed treatment with EAS did not have a significant negative impact on the germination percentage of broccoli seeds.

Effect of EAS on the growth of broccoli sprouts
The effect of EAS on the growth of broccoli sprouts was evaluated by the total length obtained after sowing for 7 days, and the results are shown in Fig. 3. Sprouts from seeds treated with the different strengths of EAS had an average length of 97 to 108 mm. However, the growth differences of the broccoli sprouts were not obvious among the treatments. This shows that seed treatment with EAS (at pH 6.5-6.7; ACC 2.9-28.6 ppm) had no impact on the growth of broccoli sprouts.

CONCLUSIONS
Electrochemical activated solutions with different strengths (1.0; 0.9; 0.8; 0.7; 0.6; 0.5; 0.4; 0.3; 0.2; 0.1), pH values of 6.5-6.7 and available chlorine concentrations of 2.9-28.6 ppm were successfully generated and applied to broccoli seed treatment. The results showed that the EAS had a good effect in reducing the coliform on the broccoli seed surface, without affecting the germination percentage or the average sprout length. The most suitable condition for the treatment of broccoli seeds was at pH 6.7 and an ACC of 8.6 ppm. This EAS holds great promise for use in seed treatment because of its high coliform-killing effect, low cost, time saving and ease of application.
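For completeness, the log-reduction figures compared in the results above (for example, the reduction of up to 5.12 log CFU/g at 28.6 ppm ACC) follow directly from subtracting the treated counts from the control count. The snippet below illustrates that arithmetic with the rounded values reported in this paper; pairing the 1.05 log CFU/g value with the full-strength EAS is an assumption made only for the example, since the paper reports 1.05-1.22 log CFU/g collectively for strengths 1.0-0.3.

```python
# Coliform counts (log10 CFU/g) as reported in this section.
control = 6.17
treatments = {
    "tap water": 4.92,
    "EAS 1.0-0.3 (28.6-8.6 ppm ACC, best case)": 1.05,
    "EAS 0.2 (5.7 ppm ACC)": 2.16,
    "EAS 0.1 (2.9 ppm ACC)": 2.57,
}

for name, count in treatments.items():
    # Log reduction = log10(N_control) - log10(N_treated); a value of 5
    # corresponds to a 100,000-fold decrease in viable coliforms.
    reduction = control - count
    print(f"{name:45s} {reduction:.2f} log reduction")
```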
2021-07-30T01:25:57.528Z
2021-05-17T00:00:00.000
{ "year": 2021, "sha1": "6a0602efb82adc6e3bf8bf214ac9b510c1be4292", "oa_license": "CCBYSA", "oa_url": "https://vjs.ac.vn/index.php/jst/article/download/15618/103810384580", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "6a0602efb82adc6e3bf8bf214ac9b510c1be4292", "s2fieldsofstudy": [ "Agricultural and Food Sciences" ], "extfieldsofstudy": [ "Chemistry" ] }
245973152
pes2o/s2orc
v3-fos-license
Assessing Groundwater Withdrawal Sustainability in the Mexican Portion of the Transboundary Santa Cruz River Aquifer

The impact of climate uncertainties is already evident in the border communities of the United States and Mexico. This semi-arid to arid border region has faced increased vulnerability to water scarcity, propelled by droughts, warming atmosphere, population growth, ecosystem sensitivity, and institutional asymmetries between the two countries. In this study, we assessed the annual water withdrawal, which is essential for maintaining long-term sustainable conditions in the Santa Cruz River Aquifer in Mexico, which is part of the U.S.–Mexico Transboundary Santa Cruz Aquifer. For this assessment, we developed a water balance model that accounts for the water fluxes into and out of the aquifer’s basin. A central component of this model is a hydrologic model that uses precipitation and evapotranspiration demand as input to simulate the streamflow into and out of the basin, natural recharge, soil moisture, and actual evapotranspiration. Based on the precipitation record for the period 1954–2020, we found that the amount of groundwater withdrawal that maintains sustainable conditions is 23.3 MCM/year. However, the record is clearly divided into two periods: a wet period, 1965–1993, in which the cumulative surplus in the basin reached ~380 MCM by 1993, and a dry period, 1994–2020, in which the cumulative surplus had been completely depleted. Looking at a balanced annual groundwater withdrawal for a moving average of 20-year intervals, we found the sustainable groundwater withdrawal to decline from a maximum of 36.4 MCM/year in 1993 to less than 8 MCM/year in 2020. This study underscores the urgency for adjusted water resources management that considers the large inter-annual climate variability in the region.

Introduction
According to the International Groundwater Resources Assessment Centre (IGRAC), a total of 468 transboundary aquifers have been identified worldwide [1], a figure that has steadily increased over the last decade due to advances in transboundary aquifer assessment. Groundwater from transboundary aquifers constitutes a significant source of fresh water for the environment and numerous communities in almost every nation [2,3], representing a valuable, invisible, and finite resource that needs to be managed sustainably. Historically, the United States and Mexico have engaged in insightful binational cooperation and dialogue regarding water resources. A vivid example of such cooperation, the 1944 Treaty for the Utilization of Waters of the Colorado and Tijuana Rivers and of the Rio Grande, along with its interpretations (Minutes), addresses specific border, environmental, and water-related issues. Yet, U.S.-Mexico relations surrounding water resources have not been exempted from conflict, such as the diplomatic dispute regarding the United States' unilateral decision to build the All-American Canal in California, which affected groundwater recharge in Mexican territory. In addition, the institutional asymmetries between the two countries, which are detailed in [2,4,5], could also jeopardize possible cooperation on water resources management, as described by [6].
Fortunately, among other outcomes, cooperation between the United States and Mexico has resulted in transboundary-aquifer assessment efforts to improve the understanding of their shared water resources. A solid scientific foundation on groundwater resources is a needed first step in developing groundwater management strategies in transboundary settings [2]. It is also essential in places that rely on groundwater resources for their basic activities or are currently affected by climate uncertainties, such as the Transboundary Santa Cruz Aquifer (TSCA) shared between the United States and Mexico [3] (Figure 1). Water supply in the TSCA, the binational aquifer recharged by the Santa Cruz River, is highly sensitive to climate variability and largely depends on compliance of local and international water and wastewater transfer agreements (e.g., [3,[7][8][9]). The TSCA recharge results from riverbed infiltration and mountain front recharge in Mexico and the United States. Thus, the TSCA is a binational aquifer in which the water-resources management and natural processes on one side of the border directly impact the neighboring country. Because of the region's scarce water resources, population increase, and growing groundwater demands on both sides of the border, the TSCA was selected for the U.S.- Mexico Transboundary Aquifer Assessment Program (TAAP). The TAAP was signed in 2009 by the principal engineers of the International Boundary and Water Commission (IBWC) and aimed to improve the knowledge of U.S.-Mexico transboundary aquifers [10]. The principles of the TAAP Cooperative Framework include elements that promote trust between the United States and Mexico (e.g., data sharing, development of binational aquifer assessment activities, the establishment of technical advisory committees, and the establishment of technical groups). These elements are crucial to maintaining the binational cooperation necessary when researching shared aquifers. Transboundary aquifer assessments worldwide have effectively employed these elements, including the Guarani, Nubian Sandstone, Saharan Aquifer, and Genevese Aquifer [2]. This study is part of the TAAP's effort to better understand the TSCA, particularly in the Mexican portion of the aquifer. The TSCA comprises four political-administrative domains: the Santa Cruz Active Management Area (SCAMA) in Arizona, with an areal extent of 1,854.43 square kilometers (km 2 ); the San Rafael Valley, with an areal extent of approximately 465 km 2 ; the Nogales Aquifer in Mexico, with an areal extent of 120 km 2 ; and the Santa Cruz River Aquifer in Mexico (SCRA-MX), with an areal extent of 952 km 2 ( Figure 1). The region's water supply relies on a relatively limited-storage, alluvial aquifer system underneath the Santa Cruz River Valley. The dominant source of recharge for the aquifer is the episodic streamflow events in the intermittent Santa Cruz River and its ephemeral desert tributaries. These episodic streamflow events are triggered by highly variable, seasonal (winter and summer) precipitation events (e.g., [7]). Thus, due to this region's limited groundwater storage and its reliance on episodic streamflow events, even small changes in groundwater recharge patterns coupled with increased water demand from border communities can adversely affect the water-supply reliability. 
Additionally, precipitation projections for the Upper Santa Cruz River Basin point to significant uncertainty and increased interannual variability, which will likely challenge water providers in meeting the water demands of the border communities [3,7,9,11]. Though previous studies have analyzed water resources in different portions of the TSCA, only a few have addressed the Santa Cruz River Aquifer in Mexico (SCRA-MX). For instance, studies have assessed the impact of urban growth on water resources, focusing on the "Ambos Nogales" region, which is located within the Nogales Aquifer and the SCAMA regions in Mexico and the United States [12,13]. Other studies developed ecosystem-services tools to assess the impacts of climate change and urban growth in the U.S. portion of the Santa Cruz Watershed [14] and to evaluate flood risk in the Ambos Nogales region, considering various scenarios of land-use changes [15]. In addition, climate change and water-resources assessments through hydrologic frameworks have also been developed for the SCAMA, attempting to bridge the gap between scientific findings and stakeholders [3,7,8,16,17]. Studies focusing on the SCRA-MX include hydrogeological characterizations of the aquifer [18], regional studies that assessed the impacts of climate change on local water resources [11,19], and the water availability reports published by the National Water Commission in Mexico (CONAGUA) [20-23]. These studies have improved the knowledge of the TSCA and have helped to develop tools that assist with water-resources-management decisions. However, a deeper understanding of the TSCA system, particularly the SCRA-MX, is needed to develop management strategies focused on groundwater sustainability. Sustainable groundwater withdrawal can be generally defined as the amount of water that can be withdrawn from an aquifer without causing undesirable environmental, economic, or social consequences [24,25]. Undesirable outcomes of unsustainable groundwater withdrawal may include a decrease in water availability for populations and the environment, a deterioration of the groundwater quality, riparian vegetation die-off, an intrusion of contaminated water or seawater, and land subsidence. This study aims to identify, through a water-balance model, the annual groundwater-withdrawal rate from the SCRA-MX that maintains sustainable conditions. Although sustainable groundwater withdrawal can have various definitions and nuances, we define groundwater-withdrawal sustainability as the withdrawal rate that maintains a multi-year balance between the water fluxes into and out of the basin. Study Area From its headwaters in the San Rafael Valley in Arizona, the Santa Cruz River flows southward to cross the U.S.-Mexico border into Sonora, Mexico. The river then curves northward and returns to the United States, just east of Nogales, Arizona; from there, it flows north to merge with the Gila River, a tributary of the Colorado River ( Figure 1). In the Mexican territory, water from the TSCA is primarily used by the city of Nogales and the town of Santa Cruz. According to Mexico's 2020 census, the number of registered residents was 264,782 and 1,835 in Nogales and the town of Santa Cruz, respectively. These numbers mark a 20.2% population increase for Nogales and an 8.16% decrease for the town of Santa Cruz compared with the 2010 census. On the other side of the border, in the 2020 census for Nogales, Arizona, the population declined from 20,837 (2010) to 19,770 (2020). 
During the same period, the total population in Santa Cruz County, Arizona, was almost unchanged (47,420 in 2010 and 47,669 in 2020). In Mexico, the national Law of the Nation's Waters (in Spanish, Ley de Aguas Nacionales, or LAN), signed in 1992, defines the role of the National Water Commission (CONAGUA) as the federal agency responsible for water management. Grounded in the Constitution, the LAN ordains in Article 20 that "the exploitation, use, or non-consumptive use [e.g., energy production] of the nation's water resources should be carried out through a concession or asignación (in Spanish) granted by the Federal Executive Branch or Basin Councils" [26,27]. Asignación is the legal term that the legislation uses to describe water appropriation for urban or domestic purposes; this appropriation cannot be transferred to other users. A concession defines the amount of water that can be extracted from a specific well/aquifer. The duration of concessions ranges from five to thirty years, and users can apply for an extension [28]. The concessions and asignaciones are registered in the Public Registry of Water Rights (in Spanish, Registro Público de Derechos de Agua, or REPDA). CONAGUA is also responsible for publishing groundwater availability reports for each aquifer in the Official Federal Gazette (in Spanish, Diario Oficial de la Federación, or DOF). These reports, which are published every three years, guide the appropriations of water concession and allocation volumes. In CONAGUA reports, water balance models are used to assess groundwater availability. The premise of these water balance models is that the Mean Annual Groundwater Availability for a given aquifer equals the Mean Annual Recharge minus the Mean Annual Groundwater Extractions and the Natural Discharge for environmental needs. For example, in 2020, CONAGUA published groundwater availability reports for 653 aquifers and reported the available volume for appropriation in the SCRA-MX to be 33.85 MCM/year [29]. It should be noted that the actual volume of groundwater withdrawal is often not monitored by CONAGUA and may therefore deviate from REPDA's authorized volumes. In the SCRA-MX, groundwater concessions and asignaciones have increased from 19.2 MCM/year in 1995 to 33.85 MCM/year in 2020 (Figure 2) [30]. This increase is primarily attributed to a gradual increase in appropriated concessions for agriculture, from 0 in 1995 to approximately 9 MCM/year in 2020. An additional appropriation of approximately 2 MCM/year has been allocated since 2011 to the industrial sector for supporting copper mining operations. In the Nogales Aquifer in Mexico (Figure 1), groundwater allocations (concessions and asignaciones) have ranged from 0.003 MCM/year to 1.37 MCM/year since 1997. It is important to note that additional water has been transferred for decades from both the SCRA-MX and Los Alisos aquifers to supply the water needs of the city of Nogales [3,23,31]. According to CONAGUA, since 1997, most concessions authorized in the Nogales Aquifer have been industrial, consistent with the main economic activity reported by the Ministry of Economy.
Materials and Methods
Our study assessed the amount of annual groundwater withdrawal that maintains long-term sustainable conditions in the SCRA-MX. Sustainable groundwater withdrawal can be generally defined as the amount of water that can be withdrawn from the aquifer without causing undesirable environmental, economic, or social consequences [24,25]. Undesirable implications of unsustainable groundwater withdrawal may include a decrease in water availability for populations and the environment, deterioration of the groundwater quality, riparian vegetation die-off, intrusion of contaminated water, intrusion of seawater, and land subsidence. Within the U.S. side of the border, the term safe yield is often used to describe a management goal that maintains sustainable conditions. Safe yield is defined by ADWR as a groundwater management goal that attempts to achieve and maintain a long-term balance between the annual amount of groundwater withdrawn and the annual amount of natural and artificial recharge (A.R.S. § 45-561 (12)). The terms "safe yield" and "sustainability", with respect to groundwater management, are often used interchangeably. Safe yield was historically defined as the attainment and maintenance of a long-term balance between the amount of groundwater withdrawn and the amount of recharge (e.g., [33]). Adhering to this definition, in order to reach safe-yield conditions, groundwater withdrawal should not exceed natural recharge. This practice, however, ignores other long-term water fluxes out of the basin, such as discharge, evapotranspiration, or springs that extract unaccounted-for groundwater, which eventually may deplete the aquifer. Regardless of the term selected, it should be clearly defined for each specific aquifer, considering its management goals and the potential hydrologic, economic, or ecologic harms inflicted by unsustainable management [34]. To estimate the amount of annual withdrawal that maintains sustainable conditions, we used a modeling framework that consisted of a water balance model (WBM) and a hydrologic model. The WBM was developed to account for all annual water fluxes into and out of the basin of the SCRA-MX and to calculate the long-term cumulative water deficits or surpluses. In an arid environment that relies on highly inter-annual climatic variability and therefore highly variable year-to-year natural recharge, the deficits and surpluses should be assessed over multiple years. For instance, the current ADWR recommendation for a quantitative assessment of safe yield is to consider a 20-year moving average interval for the natural components of the water budget (e.g., natural recharge) and a three-year running average for the artificial components (e.g., groundwater withdrawals and incidental recharge). In our study, we assessed the sustainable withdrawal first by considering the entire period of the historical record (1954-2020) and second by considering 20-year moving averages, as recommended by ADWR.

Water Balance Model
Adapted from CONAGUA (2020) [23], the annual mass balance in the SCRA-MX basin is calculated using the following equation:

∆S = (Qin − Qout) + (GWin − GWout) + Re + Ag − ET − Pu    (1)

where ∆S represents the annual positive or negative water storage changes in the aquifer and vadose zone, and Qin and Qout are the Santa Cruz River streamflow into and out of the basin.
GWin and GWout are the groundwater fluxes into and out of the basin; Re is the natural groundwater recharge component; Ag is the return flow from irrigated agriculture; ET is the actual evapotranspiration loss; and Pu is the groundwater withdrawal. The units for all the terms in Equation (1) are million cubic meters per year (MCM/year). In this study, we solved the WBM equation to determine the groundwater withdrawal (Pu) that keeps the long-term changes of ∆S within sustainable conditions. This simulation was implemented at an annual time step to assess the overall long-term balance. In the following sections, we describe the WBM components considered in this study.

Precipitation
Hourly precipitation time series are needed as input to the hydrologic model. Hourly precipitation records since 1949 are available from the Nogales 6N station (USC00025924; Figure 1). However, we found many disagreements when this record was compared with the 1954-2020 daily quality-controlled dataset from the same station. We decided, therefore, to use the daily time series and disaggregate it to hourly values (see Figures S1-S6). The disaggregation was carried out by using the hourly time series to identify the diurnal distribution of rainfall on days with reported daily precipitation. If hourly events were unavailable for the target date, we selected from the hourly time series a rainy day close in time to the target date. The hourly precipitation was then spatially interpolated over the study area using the 1958-2020, ~4 km² gridded monthly rainfall available from the TerraClimate dataset [35]. The interpolation was carried out by using the ratios between the station's grid cell and the other TerraClimate grid cells for the matching months. These ratios were used as multipliers for the interpolation to derive 4 km² hourly time series. Prior to 1958, a randomly selected month from the same wetness tercile as the station's record was used for the interpolation. The interpolated 4 km² grid was then averaged over the area of the modeling units to derive the hourly Mean Areal Precipitation (MAP) time series, which were used as input to the hydrologic model. This spatial interpolation method assumes that the Nogales gauge represents well the occurrence of hourly events over the study area, and that the hourly rainfall distribution throughout the month is uniformly distributed in space. These assumptions are particularly challenged during the North American Monsoon summer rainfall, which is characterized by small-scale local convective thunderstorms.

Streamflow (Qin and Qout)
Observations of surface inflow and outflow to and from the Mexican portion of the Santa Cruz River are available from the USGS hydrometric stations at Lochiel (USGS 09480000) and near Nogales (USGS 09480500). The Lochiel hydrometric station, approximately 2. It has a daily streamflow record from 1913 to the present, with some missing years during the 1920s. We note that although the 1954-2020 observed average streamflow out of the basin was 22.1 MCM/year (range 0-181 MCM/year), the streamflow out of the basin was likely generated from rainfall over the basin and was therefore not considered as a negative flux in Equation (1). For this study, a hydrologic model was used to simulate the inflow and outflow (i.e., Qin and Qout) as a function of precipitation. The hydrologic model we used is the Sacramento Soil Moisture Accounting (SAC-SMA) model [36], as configured for this basin by the Colorado Basin River Forecast Center (CBRFC), U.S. National Weather Service (see Figures S7 and S8).
The SAC-SMA model is a continuous hydrologic model that keeps track of the water content in the basin's top and subsoil layers. It uses precipitation and evapotranspiration (ET) demand as input to simulate runoff, recharge, actual evapotranspiration, and soil moisture. The CBRFC's primary purpose is to issue warnings for high-flow events; therefore, the CBRFC focused its SAC-SMA model calibration on capturing episodic flow events. In our study, the model was used to account for the overall streamflow influx into the area of interest, and it therefore required additional calibration to capture the full range of flow regimes. The calibration was carried out by comparing the simulated streamflow on the Santa Cruz River at Lochiel and near Nogales with the observed flow from the USGS gauges. The assessment was carried out for time scales ranging from daily to seasonal and annual flows. The CBRFC SAC-SMA model configuration for the SCRA-MX basin is based on three hydrologic units representing different parts of the SCRA-MX basin. The upper part of the basin (617 km²) drains areas higher than 1515 meters, while the lower part of the basin (537 km²) drains areas lower than 1515 meters. In our implementation, the surface runoff generated by the lower part of the basin was considered for the flow simulation at the outlet. The runoff from the upper part, below 50 m³/s, was considered as the groundwater recharge component. This assumption is warranted, since most of the flow at the Nogales gauge is attributed to local rainfall events; during large storms in the upper basin, the flow contribution to the basin's outlet is delayed and later appears as baseflow [7,16]. In Table 2, summary statistics for the 1954-2020 estimated annual recharge are provided. Notice that CONAGUA (2020) [23] estimated the vertical recharge at 23.8 MCM/year, comparable to our estimated annual average. However, as is apparent from the values presented in Table 2, the large inter-annual variability of the groundwater recharge may not be well represented by the sample's first-moment indicator.

Groundwater (GWin and GWout)
The border-crossing groundwater inflow and outflow mainly occur in the alluvial aquifer underneath the river's channel bed. These fluxes are not measured and are estimated from previous studies. Although these fluxes are likely dependent on the aquifer pressure gradients near the international border, we assume constant groundwater fluxes. In our analysis, we adopted the CONAGUA (2020) [23] estimate of +10.

Evapotranspiration (ET)
Evapotranspiration (ET) from the basin can be divided into ET from the soil, ET from irrigated agricultural fields, and ET from the shallow groundwater aquifer through riparian vegetation and exposed surface-water sections of the stream. In Equation (1), the ET variable refers to the latter component. The hydrologic model calculates the ET from the soil, and it is implicitly accounted for in the recharge and streamflow terms. The ET from the agricultural fields is considered in the calculation of the agricultural return term. In CONAGUA (2020) [23], the total ET losses from the aquifer were estimated as 8.8 MCM/year. This estimate assumes that ET from the groundwater is linearly reduced with depth-to-water up to an extinction depth of 10 m. In CONAGUA (2020) [23], the surface area estimate of the aquifer's water levels was provided as the basis for the ET estimate. This procedure assumes that the aquifer's water level and the potential evapotranspiration do not change from year to year.
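The CONAGUA groundwater-ET estimate described above rests on a simple rule: ET from the shallow aquifer decreases linearly with depth-to-water and vanishes at an extinction depth of 10 m. The sketch below illustrates only that rule; the ET demand and the water-table depths are hypothetical, and the full estimate also requires the areal extent of shallow water levels, which is not represented here.

```python
def groundwater_et(pet_mm: float, depth_to_water_m: float,
                   extinction_depth_m: float = 10.0) -> float:
    """Riparian/groundwater ET under a linear decay with depth to water.

    pet_mm             - ET demand at the land surface (mm), hypothetical here
    depth_to_water_m   - depth from land surface to the water table (m)
    extinction_depth_m - depth at which groundwater ET ceases (10 m, following
                         the CONAGUA assumption described above)
    Returns the groundwater ET in mm.
    """
    fraction = max(0.0, 1.0 - depth_to_water_m / extinction_depth_m)
    return pet_mm * fraction

# Hypothetical annual ET demand of 1800 mm and a few water-table depths.
for depth in (0.5, 2.0, 5.0, 9.0, 12.0):
    print(f"depth {depth:4.1f} m -> groundwater ET {groundwater_et(1800.0, depth):7.1f} mm")
```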
Using the hydrologic model simulations, we found that the average actual ET from the soil is 314 MCM/year, and the average actual ET is 88% of the annual precipitation. The actual ET is highly correlated with precipitation and ranges from 130 to 530 MCM/year, 62 to 103 percent of the annual precipitation, respectively. These actual ET estimates are comparable to findings by Minjarez et al. (2011) [19]. Agricultural Return Flow (Ag) To estimate the agricultural return flow, we used the CONAGUA (2020) [23] procedure. It was based on calculations of crop consumptive use, which is the amount of transpired water during the growth period of the crop. The agricultural return is then calculated as the irrigated water and precipitation that is in excess of the estimated consumptive use. In CONAGUA (2009 to 2020) [20-23], the irrigated agriculture area was estimated as 8.3 km 2 of alfalfa (60%), oat (30%), and sorghum (10%). Using the modified Blaney-Criddle equation [40], the integrated consumptive use of these crops was estimated as 901 mm/year (7.5 MCM/year), and the agricultural return was estimated as~4.1 MCM/year. In our implementation of the WBM, we used CONAGUA's estimate of consumptive use and the dynamic year-to-year change in precipitation to estimate the groundwater withdrawal that was needed for irrigation. The 1954-2020 average annual precipitation over the agricultural fields was 2.6 MCM/year (range 0.9-5.5 MCM/year), and the average groundwater withdrawal that satisfied the irrigation demand was 4.9 MCM/year, ranging from 1.9 to 6.6 MCM/year. This demand calculation assumes that precipitation occurred during the growing season, and the irrigation was optimized to satisfy the crops' consumptive use. It is important to note that the National Institute of Statistics and Geography (Instituto Nacional de Estadística y Geografía) estimated the irrigated agriculture area in the basin to be 15.7 km 2 [41]. Using a 30 m near-infra-red band of Landsat-8 images from May 2018 and May 2019, our team estimated an area of approximately 17 km 2 of agricultural fields. Thus, the water consumption, as well as the areal extent of irrigated agriculture in the basin, is uncertain and requires a comprehensive survey. Results Using 1954-2020 climate dependent recharge, Qin and Ag (as explained above), we solved Equation (1) for the amount of groundwater withdrawal (Pu) yielding a ∆S annual average of zero. The Pu that maintains a 1954-2020 average ∆S of zero is 23.3 MCM/year. This Pu is in addition to the Pu used for irrigation that satisfies the estimated consumptive use of the cultivated fields, as described in CONAGUA (2020) [23]. Using this estimated Pu, the average fraction of the inflow and outflow fluxes from the basin are presented in Figure 3, and the average quantities of these various fluxes are presented in Figure 4. The largest influx to the basin is the natural recharge, a highly variable flux (see Table 2) that is mainly controlled by the inter-annual variability of precipitation over the SCRA-MX basin. The cumulative changes of the ∆S using the estimated Pu of 23.33 MCM are shown in Figure 5a. It is seen that out of the 67 water years, approximately 33% have shown a surplus while most years ended with a deficit. The cumulative surplus consistently increased from 1965 to reach a surplus of approximately 385 MCM in 1992. These surplus years can be related to frequent El Niño-Southern Oscillation conditions and positive Pacific Decadal Oscillation [8]. 
However, since 1992, only two years have shown an annual surplus (positive ∆S), and by 2020 the entire surplus that had been gained until 1992 was depleted. These long periods of accrued surplus (1965-1992) and deficit (1995-2020) exemplify the dependence of the sustainable Pu on the period of analysis. The increasing and decreasing trends shown in Figure 5 seem to support the ADWR recommendation of examining 20-year intervals, a duration sufficiently long to capture the observed multi-decadal trends. The fluxes estimated for the WBM can be grouped into three general categories: climate-driven, annually variable fluxes (Qin, Qout, Re); fluxes satisfying water demand (Ag, Pu); and constant annual fluxes (GWin, GWout, ET). The first category is based on a hydrologic model that uses sub-daily precipitation and evapotranspiration demand time series as input to simulate the fluxes needed for the WBM. While the simulated Qin and Qout were compared to observed streamflow records, the Re estimate cannot be compared to observations. As discussed in the results section, Re is the largest flux into the basin (Figures 3 and 4) and has large inter-annual variability (Table 2). Considering moving averages of 20-year intervals, the estimated Pu is shown in Figure 5b (average Pu of 26.3 MCM/year, ranging from 8.1 to 36.8 MCM/year, with a standard deviation of 9.6 MCM/year). As expected, the estimated 20-year annual Pu has continuously declined since the mid-1990s, reaching approximately 9 MCM/year since 2012.

Discussion
Overall, there has been constant cooperation and dialogue over the water resources shared between the United States and Mexico. A remarkable example is the 1944 Water Treaty that has allowed sharing surface water between both countries. However, this agreement does not include groundwater management. This absence has not been addressed, although some steps have been taken, for instance the creation of the TAAP, which allows technical cooperation between both countries and the sharing of information on groundwater resources. The Santa Cruz River Aquifer in Mexico (SCRA-MX) is part of the Transboundary Santa Cruz Aquifer (TSCA), an aquifer shared by the United States and Mexico. The TSCA is located in a semi-arid region characterized by limited groundwater storage, dependency on climate variability, and physical water and wastewater transfers within Mexican territory and between the two countries [3,11,19].
Because of this region's limited groundwater storage and the border communities' reliance on groundwater as their sole resource, even small changes in groundwater recharge patterns coupled with increased water demands can detrimentally impact the water-supply reliability. Previous efforts on the TAAP have focused on understanding the aquifer characteristics of the TSCA, particularly the U.S. portion of the aquifer e.g., [3,12]. Our study improves the understanding of the SCRA-MX, contributes to the overall knowledge of the binational TSCA, and provides information that could serve as a reference for developing a fully binational water budget model. This analysis, along with previous studies for the TSCA (e.g., [3,11]), reported a substantial decline in regional precipitation since the early 21st century. For example, summer and winter precipitation has declined by 10% and 33%, respectively, according to comparisons of precipitation records from 1955-2000 to 2001-2020. These declines are substantially larger when comparing the same periods of the observed streamflow records out of the SCRA-MX basin (65% and 78% for summer and winter flow, respectively) [8]. Moreover, climate model projections for the mid-21st century for the SCRA-MX basin point to changes in precipitation regime, although these changes are highly uncertain e.g., [8]. These projections will pose additional challenges for water providers in meeting the demands for border communities [3,7,9,11]. To date, most water resources studies in the TSCA have focused on the Ambos Nogales region or the SCAMA (e.g., [3,[7][8][9]). Excluding the CONAGUA water availability reports, only a few studies have examined the impact of groundwater extractions in the SCRA-MX (i.e., [18,19,38]). In our study, we used a water balance model approach to estimate the amount of groundwater that could be withdrawn from SCRA-MX, while maintaining a long-term balance between water flowing into and out of the basin. Our study only assessed long-term water resources availability while not examining other potential ecological, economic, water quality degradation, or other harms that water resources management practices may cause. Although our analysis yields deterministic estimates for sustainable annual groundwater withdrawal, based on the best available data and information to derive the input for the WBM equation, it is important to note the analysis' main assumptions and the known sources of uncertainties that may have influenced the results. The main assumption that may require additional examination is that multi-year cumulative surpluses can be indefinitely stored in the basin and used to compensate for shortages in deficit years. It is likely that the aquifer's storage of surpluses is limited by the size of the aquifer and the dynamic of the groundwater interaction with the stream and atmosphere. Since natural groundwater recharge is highly variable in space and time, an accurate measure of this dominant flux is impractical. However, additional hydrological and hydrogeological measurements could advance understanding of the basin's hydrological process to potentially reduce the uncertainty in the natural recharge estimate. The uncertainty source in the second category stems from a lack of groundwater withdrawal monitoring. Following CONAGUA's procedure we assumed that the groundwater withdrawal was equal to the appropriated concessions and asignaciones, as reported by REPDA. 
Additional information is needed to understand how well the appropriated concessions represent the actual groundwater withdrawal in the basin. An additional source of uncertainty, as discussed before, is the areal appraisal of the cultivated and irrigated fields. The third category of fluxes, which were assigned as constants following CONAGUA's estimates, is also likely to vary in time. The main reason for assigning them as constants is the lack of information and data to understand their temporal variability. With the available information on the economic activities in the Nogales and Santa Cruz municipalities, it is possible to identify a positive relationship between increased industrial activities and water allocations from 2009 to 2020. While the groundwater surplus has reduced since 1995, allocations for agricultural and industrial activities have increased. Considering this trend, it would be desirable that the national authority assess the potential negative impacts of groundwater over-allocation and its availability to maintain a long-term balance between water flowing into and out of the basin. It is generally possible to monitor groundwater extraction for asignaciones because they are dedicated to public services, for which municipal and state government agencies are responsible for reporting to CONAGUA. However, for groundwater concessions, the monitoring is limited. Additionally, as mentioned above, concessions can be transferred to other users, and although these changes must be reported to CONAGUA, they are often not being promptly reported. Future TAAP efforts on transboundary aquifer assessment include the evaluation of the uncertainty associated with the water balance model that was developed for the TSCA and the identification of specific actions that can substantially reduce uncertainty in WBM simulations. In addition, development of recommendations for a model and data management framework for binational watersheds with similar setting to the TSCA. Conclusions In this study, we assessed the amount of groundwater withdrawal that maintains sustainable conditions in the SCRA-MX, which is part of the TSCA. In this part of the aquifer, the regulatory allocated groundwater concessions had steadily increased from approximately 18 MCM/year in 1995 to approximately 34 MCM/year in 2020. The increase in groundwater withdrawal concessions was primarily attributed to new allocations for agricultural and industrial usage. In this study, we used a water balance model (WBM) that accounts for all the annual water fluxes into and out of the basin to determine the amount of multi-year groundwater withdrawal that maintains sustainable conditions. In our study, "sustainable conditions" is defined as the amount of annual groundwater withdrawal that maintains a long-term difference of zero between the water fluxes into and out of the basin. We developed a hydrologic model to estimate the year-to-year WBM fluxes of natural recharge and streamflow into and out of the basin (i.e., Sacramento Soil Moisture Accounting). This contribution adds information to current CONAGUA publications. The SAC-SMA model, which was constructed for the region as three sub-basins, uses hourly precipitation and evapotranspiration demand as model input to continuously simulate streamflow, soil moisture, actual evaporation from the soil, and groundwater recharge. 
The hourly precipitation time series for the SAC-SMA model was developed for 1954-2020 using a gauge located near the border and interpolated using monthly gridded climatology. The average annual groundwater withdrawal that maintained sustainable conditions over 1954-2020 was 23.3 MCM/year. However, implementing this constant annual withdrawal produced a period of accrued surplus followed by a period of accrued deficit (1994-2020). We also estimated the annual groundwater withdrawals that maintain sustainable conditions over a moving average of 20-year intervals, as recommended by the ADWR for safe yield assessment in the SCAMA. In the 20-year moving-average analysis, the groundwater withdrawal that maintained sustainable conditions peaked in 1993 at 36.4 MCM/year and had declined to less than 8 MCM/year by 2020. CONAGUA, in its latest groundwater availability report [23], estimated that, with a groundwater withdrawal of 26.4 MCM/year, an additional 2.2 MCM/year of available water could still be allocated. This study demonstrates the sensitivity of water resources management in the Mexican part of the Santa Cruz River basin and its high dependence on natural recharge, which in turn is governed by precipitation variability. It points to the challenge of identifying a management scheme that yields sustainable conditions. These challenges are exacerbated by the recent dry period and the uncertain precipitation projected for the region [12]. These mounting challenges call for careful adaptive management and planning of the aquifer to maintain sustainable conditions and a reliable long-term water supply into the future.

Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/w14020233/s1. Figure S1: Average annual, summer (JJAS), and winter (NDJFM) rainfall over the USCRB (outlined in red); data are available from the TerraClimate monthly 4 km² 1958-2020 dataset. Figure S2: A scattergram of 1958-2020 monthly rainfall of the gauge record from Nogales 6N and the matching grid cell available from the TerraClimate dataset. Figure S3: Locations of the daily rain gauges. Figure S4: A comparison between the total summer rainfall cumulative distributions of the spatially disaggregated Nogales gauge (black) and seven rain gauge records (red); note that the seven gauges have different durations, as indicated in the subplot titles. Figure S5: A comparison between the total winter rainfall cumulative distributions of the spatially disaggregated Nogales gauge (black) and seven rain gauge records (red); note that the seven gauges have different durations, as indicated in the subplot titles. Figure S6: A scattergram of the summer (red) and winter (blue) total precipitation of the spatially disaggregated observed Nogales record compared with the matched observed records from the seven gauges. Figure S7: The 1949-2020 cumulative distributions of simulated (red) and observed (black) streamflow on the Santa Cruz River at Lochiel for summer, winter, and the full year. Figure S8: The 1949-2020 cumulative distributions of simulated (red) and observed (black) streamflow on the Santa Cruz River at the Nogales gauge for summer, winter, and the full year.
A Randomized Prospective Study of Lumpectomy Margin Assessment with Use of MarginProbe in Patients with Nonpalpable Breast Malignancies

Background: The presence of tumor cells at the margins of breast lumpectomy specimens is associated with an increased risk of ipsilateral tumor recurrence. Twenty to 30 % of patients undergoing breast-conserving surgery require second procedures to achieve negative margins. This study evaluated the adjunctive use of the MarginProbe device (Dune Medical Devices Ltd, Caesarea, Israel) in providing real-time intraoperative assessment of lumpectomy margins. Methods: This multicenter randomized trial enrolled patients with nonpalpable breast malignancies. The study evaluated MarginProbe use in addition to standard intraoperative methods for margin assessment. After specimen removal and inspection, patients were randomized to device or control arms. In the device arm, MarginProbe was used to examine the main lumpectomy specimens and direct additional excision of positive margins. Intraoperative imaging was used in both arms; no intraoperative pathology assessment was permitted. Results: In total, 596 patients were enrolled. False-negative rates were 24.8 and 66.1 % and false-positive rates were 53.6 and 16.6 % in the device and control arms, respectively. All positive margins on positive main specimens were resected in 62 % (101 of 163) of cases in the device arm, versus 22 % (33 of 147) in the control arm (p < 0.001). A total of 19.8 % (59 of 298) of patients in the device arm underwent a reexcision procedure compared with 25.8 % (77 of 298) in the control arm (6 % absolute, 23 % relative reduction). The difference in tissue volume removed was not significant. Conclusions: Adjunctive use of the MarginProbe device during breast-conserving surgery improved surgeons' ability to identify and resect positive lumpectomy margins in the absence of intraoperative pathology assessment, reducing the number of patients requiring reexcision. MarginProbe may aid performance of breast-conserving surgery by reducing the burden of reexcision procedures for patients and the health care system.

Breast-conserving surgery (BCS) has been an established approach to surgery for early-stage breast cancer for more than 30 years. 1
Contemporary series report that 60-75 % of American women with early-stage breast cancer are treated with BCS. 2 BCS for noninvasive and invasive cancer includes a lumpectomy procedure, with sentinel node biopsy in cases of invasive disease, and postoperative radiotherapy in most cases. A successful lumpectomy requires complete removal of the malignancy, including a margin of surrounding normal breast tissue. This can be challenging to accomplish because the microscopic extent of breast cancer can be difficult for the surgeon to discern. Multiple studies have demonstrated the association of involved or positive lumpectomy margins with an increased risk of ipsilateral breast tumor recurrence, even in the presence of radiotherapy. [3][4][5][6] Although there is no universally accepted definition of negative surgical margins, at least 20 % of patients undergo more than one procedure to achieve acceptable margins as part of breast-conserving strategies. 2,7,8 The MarginProbe (Dune Medical Devices Ltd, Caesarea, Israel) was developed to provide surgeons with real-time intraoperative assessment of lumpectomy margins. Designed to be used as an adjunct to current surgical methods, the device measures the local electrical properties (in the radiofrequency range) of breast tissue. These properties depend on membrane potential, nuclear morphology, and cellular connectivity and vascularity, which differ between normal and malignant tissue. 9 The device's sensing diameter is 7 mm, and it provides a positive/negative reading for each measurement taken. The threshold for a positive reading was set based on readings directly compared to pathology results. 10 The diagnostic performance was a sensitivity of 70-100 % and a specificity of 70-87 %, depending on the cancer feature size. The performance was similar for all histology types, including ductal carcinoma in situ. In a multicenter trial in which patients were randomized to usual surgical technique versus usual technique with adjunctive use of the MarginProbe, the rate of reexcision surgery was reduced by 56 % in the device arm of the trial. 11 There was no difference in cosmetic outcomes. The current study examined the contribution of adjunctive use of MarginProbe to identification of all involved lumpectomy margins, reduction in the number of patients with positive margins at the completion of primary lumpectomy surgery, and decrease in the necessity for repeat surgical procedures to achieve acceptable margins.

METHODS

This study was a prospective, randomized (1:1), double-arm, controlled trial involving 21 institutions and 53 surgeons. Participating centers represented a variety of practice settings, including academic, community-based, and private practice sites. Institutional review board approval was obtained at each site. Inclusion criteria included patients over 18 years with nonpalpable intraductal and invasive breast cancers. All patients had opted for BCS. Patients with multicentric or bilateral disease, those with prior radiotherapy or neoadjuvant chemotherapy, and those with a history of surgery in the ipsilateral breast were excluded, as were patients who were pregnant or lactating. Informed consent was obtained from all patients. Patients underwent preoperative localization of their lesions and removal of main lumpectomy specimens as per surgeons' usual practices. All main lumpectomy specimens were oriented to delineate the six surfaces of the tissue (superior, inferior, medial, lateral, anterior, and posterior).
After main lumpectomy specimen removal, surgeons used their usual methods of intraoperative assessment, including inspection and palpation. Intraoperative pathology assessment was precluded. If a margin was deemed to be positive or close, additional tissue was excised. Patients were then randomized to device or control arms (Fig. 1). In the control arm, surgeons completed the lumpectomies, including utilizing information from intraoperative imaging, per their routine. In the device arm, the MarginProbe was additionally used by the surgeon to examine all six surfaces of the main lumpectomy specimens, with 5-8 measurements per face. A single positive reading identified a margin as positive. Device output was recorded. Surgeons were required to excise additional tissue from the corresponding surface of the lumpectomy cavity for every device-identified positive margin. Additional tissue removed from the lumpectomy margins was not examined by the device, nor was the lumpectomy cavity. Because the device should be used within 20 min after specimen excision, device arm intraoperative imaging, with additional excisions if indicated, was performed after device use. In both study arms, main lumpectomy specimens were inked. All specimens were evaluated by pathologists who were blinded to study arm. Tissue dimensions, margin status, and margin distance for all surfaces were recorded. Specimen volume was calculated based on the ellipsoid formula: (π/6) × L × W × D. Subjects were followed (including additional surgical procedures) until the completion of surgical treatment. Data were collected until the earliest of the following events: 2 months after the patient's last operation; conversion of the subject to mastectomy; or initiation of chemotherapy. There were no restrictions placed on surgeons in terms of the performance of additional surgical procedures. For the purposes of this study, a positive margin was considered to be disease identified at ≤1 mm from the inked edge of tissue. Diagnostic measures, including false-negative and false-positive rates, were evaluated by comparison of device readings to the pathology gold standard on a margin-by-margin basis. All statistical analyses were performed using SAS software (SAS, Cary, NC, USA). Numerical variables were tabulated using means and standard deviations. Categorical variables were tabulated using numbers of observations and percentages. Statistics were performed at an α = 0.05 two-sided significance level. Rates between arms were compared by Fisher's exact test. Reexcisions were compared by Poisson regression. No missing data were imputed. Safety was evaluated by reports of serious adverse events and adverse events. Safety reports were tabulated by group, body system, and relation to treatment.

RESULTS

A total of 596 patients were randomized, with 298 in each arm of the trial. Patient demographics and baseline characteristics are listed in Table 1. Patients underwent extensive imaging before surgery. The mean extent of disease was similar in the two groups. The main specimen volume was similar in both groups, reflecting no difference in surgical procedure before randomization. The disposition of patients in both arms of the trial is shown in Fig. 2. In similar proportions of patients in both arms, the main lumpectomy specimen contained at least one positive margin (Fig. 2, phase I). In patients with positive margins on initial lumpectomy specimens, an average of two margins was involved, with no difference between the two arms.
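As a brief aside before the remaining results, the ellipsoid approximation used above for specimen volume can be evaluated directly. The sketch below does so in Python; the dimensions shown are hypothetical placeholders, not measurements from this trial.

```python
import math

def ellipsoid_volume(length_cm: float, width_cm: float, depth_cm: float) -> float:
    """Specimen volume in mL from the ellipsoid approximation (pi/6) * L * W * D."""
    return (math.pi / 6.0) * length_cm * width_cm * depth_cm

# Hypothetical lumpectomy specimen dimensions in cm (1 cm^3 equals 1 mL)
print(round(ellipsoid_volume(6.0, 4.5, 3.0), 1), "mL")
```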
Among the patients with positive main specimen margins, surgeons correctly identified all positive margins on the main specimen and removed additional tissue from those involved margins in 62 % (101 of 163) of cases in the device arm versus 22 % (33 of 147) of cases in the control arm (Fig. 2; p < 0.0001). Patients for whom positive margins on the main specimen were not identified remained with positive final margins after the lumpectomy (Fig. 2, phase III, branches C1 and D1). Although the main specimen was cleared, some final margins were persistently positive because of disease identified at the edge of the additional tissue resected (phase III, branches C2 and D2) in 8 and 22 cases in the control and device arms, respectively. Interestingly, additional tissue was removed from the lumpectomy cavity in both arms in cases where the main specimen was found to have clear margins, resulting in positive final margins (phase III, branches C3 and D3) in 2 and 8 patients in the control and device arms, respectively. Table 2 lists the patients' final margin status after the primary lumpectomy procedure. In the control arm, 41.6 % (Fig. 2, branches C1, C2, and C3) of patients had positive margins compared with 30.9 % (Fig. 2, branches D1, D2, and D3) of patients in the device arm (p = 0.008), representing a 26 % reduction in the positive margin rate. Even though these patients had positive margins, surgeons determined that certain patients were not candidates for reexcision because the involved margins were recorded to be at skin or fascia. Excluding these patients, the significant difference in candidates for reexcision was maintained, favoring the device arm (p = 0.013). More patients in the control arm were candidates for reexcision because of positive margins originating from the main specimen. In contrast, there were more candidates for reexcision in the device arm on the basis of additional cavity shavings removed. As shown in Table 2, 19.8 % of patients in the device arm underwent second procedures for reexcision of lumpectomy margins compared with 25.8 % of patients in the control arm, representing a 6 % absolute (23 % relative) reduction associated with MarginProbe use. The analysis of this difference also accounted for the small but statistically insignificant (prerandomization) difference between arms in the number of main lumpectomy specimens with positive margins (Fig. 2, phase I). With regard to reexcision procedures that were required because of positive margins originating from the main lumpectomy specimens (Fig. 2, branches C1 and D1), the control arm rate was 20.8 % compared with 10.0 % in the device arm, a 47 % reduction (p = 0.002). To further evaluate device performance, the volume of tissue resected was analyzed. Both true-positive and false-positive device readings resulted in excision of additional breast tissue. Therefore, total volumes of excision were calculated across all surgeries (Table 3). As expected, the volume of tissue in main lumpectomy specimens was identical in the two arms. In the device arm, more tissue was removed in the first surgical procedure, representing both true-positive and false-positive margin excisions. However, more tissue was removed in reexcision procedures in the control arm. This led to an overall difference of 8.5 ml in tissue volume removed between the two study arms. When normalized to baseline breast volume, the difference between the arms was 2.6 %. The performance of the MarginProbe in the provision of diagnostic information was also evaluated.
The margin-level sensitivity of the device was 75.2 % (95 % CI: 69.4-81.0), with that of the control arm being 33.9 % (95 % CI: 27.6-40.2). False-negative rates were 24.8 and 66.1 %, and false-positive rates were 53.6 and 16.6 %, in the device and control arms, respectively. Similar adverse event rates were observed in both groups: device, 6 events (2 %), and control, 5 events (2 %). Of these reports, only 1 event was possibly related to the study device (wound infection).

DISCUSSION

BCS is an established approach to the treatment of early-stage breast cancer, providing an outcome equivalent to mastectomy while allowing for preservation of the breast. An ongoing challenge is the requirement of negative lumpectomy margins to reduce the risk of in-breast recurrence. There is variability in defining acceptable margin width among surgeons and radiation oncologists. 12,13 Although reported reexcision rates vary, it is clear that a significant proportion of women who undergo BCS require multiple operations to achieve acceptable margins. Current techniques for intraoperative assessment have limited efficacy, particularly in cases of nonpalpable and intraductal disease. [14][15][16] The current trial evaluated a novel device for intraoperative assessment of lumpectomy margins in a challenging population with nonpalpable disease. Adjunctive use of the MarginProbe required little additional operating time (approximately 5 min) and resulted in a statistically significant improvement in complete identification of all positive margins on main lumpectomy specimens. This study did not test whether the device would allow for less surgery to be performed if the specimens were carefully examined intraoperatively by pathologists, with or without the selective use of frozen section. However, not all candidates for reexcision underwent these surgeries during the study period. Although 31 % of patients in the device arm had at least one positive margin at the end of the procedure, only 20 % had reexcisions. In the control arm, 42 % of patients had positive margins, and 26 % underwent reexcisions. Some patients had involved margins at skin or fascia, which are not amenable to reexcision. The design of this study did not constrain surgical decision making. The decision to perform a reexcision may be appropriately influenced by many factors, including the urgency to initiate systemic therapy, the results of genetic testing, and medical comorbidities. Although reexcision procedures were collected for 2 months after initial surgery, these factors may have had some effect on the recorded rates. The device was designed with an emphasis on sensitivity to provide maximal detection of all positive margins. It was expected that this increase in sensitivity (decrease in false-negative results) would come at the expense of a reduction in specificity (increase in false-positive results), as was observed. The cosmetic result after BCS has multiple components and may be affected by volume of tissue excised, tumor location within the breast, size of the primary tumor, size of the breast, and postoperative radiotherapy. There is also evidence that reexcision procedures negatively affect cosmetic outcomes. 17 Although cosmesis was not directly assessed in this study, the only factor potentially affected by MarginProbe use is volume of tissue excised.
Our results suggest that use of the MarginProbe should have little impact on the cosmetic result of BCS. Some studies have demonstrated a significant reduction in reexcision rates when additional tissue is routinely removed from all six surfaces of the lumpectomy cavity. 18,19 However, a recent report from Massachusetts General Hospital showed no difference in reexcision rates between patients undergoing lumpectomy surgery alone and those undergoing lumpectomy plus selected or full-cavity shavings. 20 The total tissue volume removed was smaller in the patients who underwent select or complete cavity shavings, suggesting that performance of the main lumpectomy was altered when removal of additional tissue was anticipated. This change in surgeons' approach to the main lumpectomy specimen has also been reported in other studies. 19 At this point, full-cavity shaving has not been widely adopted. Achieving acceptable margins at the time of primary lumpectomy surgery may be increasingly important as techniques for intraoperative radiotherapy evolve and ablative approaches to the lumpectomy cavity are explored. 21 When oncoplastic closure techniques are used, it is especially important to avoid positive margins. Reexcision procedures may be difficult in these cases because it can be virtually impossible to accurately identify the specific margin to be reexcised. 22 Use of MarginProbe, as depicted in this study, is not the complete solution to the complex problem of lumpectomy margins. This device provides an incremental improvement in reducing reexcision procedures, which is meaningful because these additional unanticipated procedures burden patients and the health care system. Although the device adds some cost, this is offset by the cost of reexcision procedures and other costs related to positive margins. More work is needed to understand the relationship between various margin distances and in-breast recurrence rates. Additional evaluation of the new margins of cavity shaving specimens would also provide important intraoperative information. The use of MarginProbe or other technology to interrogate the lumpectomy cavity might provide additional data regarding the adequacy of resection. Novel methods for preoperative breast imaging might also provide a more accurate roadmap for surgical planning. The number of patients opting for mastectomy procedures is on the rise. It is possible that the frequent need for multiple excisions to achieve adequate lumpectomy margins contributes to this trend.

CONCLUSION

The current study supports the use of the MarginProbe in lumpectomy surgery in the absence of routine intraoperative pathologic assessment. The device provides surgeons with intraoperative assessment of lumpectomy margins, allowing directed reexcision of positive margins and reducing the proportion of patients with positive margins at the conclusion of surgery. A decrease in reexcision procedures can reduce the burden of breast cancer surgery for the patient and the health care system.
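To illustrate the arithmetic behind the between-arm comparison reported above, the sketch below recomputes the reexcision rates from the stated counts (59 of 298 in the device arm versus 77 of 298 in the control arm), derives the absolute and relative reductions, and applies Fisher's exact test. This is a reconstruction with standard scipy tools, not the study's original SAS analysis.

```python
from scipy.stats import fisher_exact

device_reexcised, device_n = 59, 298
control_reexcised, control_n = 77, 298

# 2x2 contingency table: rows = arm, columns = reexcised / not reexcised
table = [
    [device_reexcised, device_n - device_reexcised],
    [control_reexcised, control_n - control_reexcised],
]
odds_ratio, p_value = fisher_exact(table)

rate_device = device_reexcised / device_n      # ~19.8 %
rate_control = control_reexcised / control_n   # ~25.8 %
absolute_reduction = rate_control - rate_device
relative_reduction = absolute_reduction / rate_control

print(f"device {rate_device:.1%} vs control {rate_control:.1%}")
print(f"absolute reduction {absolute_reduction:.1%}, relative reduction {relative_reduction:.0%}")
print(f"Fisher's exact p = {p_value:.3f}")
```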
Pilot Study of a Device to Induce the Hanger Reflex in Patients with Cervical Dystonia

The hanger reflex (HR) is an involuntary head rotation that occurs in response to a clothes hanger encircling the head and compressing the unilateral fronto-temporal area. Here, we developed an elliptical device to induce the HR and examined its utility for the treatment of cervical dystonia (CD). The study included 19 patients with rotational-type CD. The device was applied to each subject's head for at least 30 min/day for 3 months. Severity scores on part 1 of the Toronto Western Spasmodic Torticollis Rating Scale were evaluated at baseline and after the 3-month trial. Mean scores without and with the device were significantly different both at baseline (16.6 vs. 14.7, respectively; P < 0.05) and after the trial (14.9 vs. 13.6, respectively; P < 0.05). This preliminary trial suggests that our device can improve abnormal head rotation in patients with CD.

Introduction

Cervical dystonia (CD) is a type of dystonia in which patients demonstrate an involuntary abnormal head position. The exact cause of CD remains unclear, but it is thought to result from an acquired neurological abnormality. The reported incidence of CD is 1.07 per 100,000 person-years. 1) A variety of treatments exist for CD, including electrical stimulation, 2) biofeedback, 3) physical therapy, 4,5) botulinum toxin (BTX) injection, 6) spinal cord stimulation, 7) deep brain stimulation (DBS), [8][9][10] and selective denervation. 11) According to the guidelines of the European Federation of Neurological Societies, intramuscular BTX injection is the first-line treatment for CD; 12) however, BTX treatment requires repeated injections every 3 months and is not financially feasible for some patients. Deep brain stimulation targeting the globus pallidus or subthalamic nucleus is recommended for patients who do not respond to BTX treatment. [8][9][10] While DBS is effective in a proportion of patients, it is both invasive and costly. When the head is encircled with an ordinary wire clothes hanger (Fig. 1A) and the fronto-temporal region is compressed by the long side of the hanger, reflexive head rotation towards the compressed side occurs (Fig. 1B). 13) We named this phenomenon the hanger reflex (HR). [13][14][15] We induced the HR in patients with CD and found that it reduced abnormal head rotation, indicating its potential utility for the treatment of CD. 16,17,18) We then performed a preliminary clinical trial to assess the efficacy of the HR device for the treatment of CD.

Subjects

A total of 23 patients with rotational-type CD were recruited from 7 facilities in Japan. Written informed consent was obtained from all subjects prior to study participation.

Methods

During screening, a steel wire hanger was applied to each subject's head to confirm the presence of the HR (Figs. 1A and 1B). Our portable HR device was elliptical and made of fiber-reinforced plastic with an interior urethane rim (Fig. 2A). We prepared 7 sizes of the device; the smallest size was 547 mm in circumference, and additional sizes were each 2 cm larger. A size was selected for each subject based on his or her head circumference. Briefly, the device was applied to the subject's head and rotated in the same direction as the patient's torticollis, to compress the opposite-side fronto-temporal region (Fig. 2B). During the trial, patients self-applied the device for more than 30 min total each day for 3 months.
The patients were allowed to complete the 30 min application over multiple periods (e.g., 3 periods of 10 min each). The patients returned for outpatient follow-up visits each month after starting the trial to confirm proper application of the device and procedural compliance. Scores on the Toronto Western Spasmodic Torticollis Rating Scale (TWSTRS) part 1 (severity scores) were evaluated at baseline and after the 3-month trial period, before and during device application. 19) The score was first measured without the device and subsequently with the device in place. After the conclusion of the trial, a questionnaire was administered to assess any side effects of the device.

Statistical analysis

All data are expressed as the mean ± standard deviation. Wilcoxon rank-sum tests were used to compare scores on the TWSTRS part 1 using JMP 11 software (SAS Institute Inc., Cary, NC, USA). This study was approved by the Ethics Committee of the University of Toyama (ID 23-141) and each institutional ethics committee (trial registration: UMIN 000007772).

Results

All subjects displayed the HR prior to the trial; however, 4 patients were excluded due to logistical difficulties with attending regular hospital visits. Accordingly, 19 patients (12 men and 7 women) with a mean age of 52.8 years (range, 23-86 years) were included in this study. The median disease duration was 36 months (range, 4-348 months). Figures 3 (line chart) and 4 (boxplots) show TWSTRS part 1 scores before and after the trial, evaluated with and without the device. Mean scores at baseline without and with the device were 16.6 ± 4.2 (Fig. 4A) and 14.9 ± 1.7 (Fig. 4B), respectively, whereas scores after the 3-month trial period were 14.7 ± 4.3 (Fig. 4C) and 13.6 ± 4.6 (Fig. 4D), respectively. The scores were significantly decreased immediately after device application at baseline (Fig. 4, A vs. B [immediate change], P < 0.05) and at the end of the 3-month trial period (Fig. 4, C vs. D, P < 0.05). Without application of the device, the scores were significantly decreased between baseline and the end of the 3-month treatment period (Fig. 4, A vs. C [lasting change], P < 0.01). All patients experienced a tolerable level of local pain while wearing the device, but no patient dropped out due to pain. Additionally, there were no skin problems reported in association with device use, such as skin abrasions or other side effects related to device application. None of the participants returned the devices after the trial; rather, they all expressed the desire to continue using the device.

Illustrative case

A 45-year-old woman presented with a 4-month history of CD. Her head was rotated to the right (patient's right) and she had difficulty rotating to the left (Fig. 5A). The patient's condition was refractory to multiple medications as well as acupuncture. She was then enrolled in the present study. Application of the portable device to the patient's head allowed her to rotate to the left (Fig. 5B). After the 3-month trial, the patient was able to move her head to the left even without application of the device (Fig. 5C). The TWSTRS part 1 scores without and with the device were 23 and 16 at baseline, and 17 and 14 after the 3-month trial period, respectively. The patient did not report any side effects during the trial.

Discussion

In this study, we observed significant immediate and lasting changes in patients with CD after HR device application.
The TWSTRS scores decreased immediately after device application (Fig. 4, A vs. B, immediate change) and after the 3-month trial period, even without additional device application (Fig. 4, A vs. C, lasting change).

Fig. 4. Changes in the severity scores of the Toronto Western Spasmodic Torticollis Rating Scale part 1 (boxplots). Boxplots of baseline scores without (A) and with the device (B) and after the trial without (C) and with the device (D). (A and B) show the immediate effects of the device, whereas (C and D) demonstrate the absence of a habituation effect over the 3-month period. All changes were statistically significant (P < 0.05). Comparing A and C, the scores were significantly decreased (P < 0.01), which means the effect lasted even without using the device (lasting change).

These findings indicate that our HR device not only allowed rotation of the head during use but also improved abnormal head rotation in a long-lasting fashion. The persistent effects of repeated device application on abnormal head rotation (Fig. 4, C vs. D) also indicate that there was no habituation effect to repeated HR induction or to use of our HR device. The phenomenon we named the "hanger reflex" was first reported in 1991. 20) Christensen applied a square cardboard box to the heads of 2 patients with spasmodic torticollis at approximately 45°, such that the boxes pressed on the fronto-temporal forehead, and abnormal head rotation was reduced. Matsue et al. demonstrated that this phenomenon was related to non-noxious compression of the fronto-temporal region. 14) In 2009, a specialized machine was developed to produce involuntary head rotation by pressing on the fronto-temporal region. 15) We previously reported that more than 90% of healthy volunteers experienced a rotating sensation of the head after fronto-temporal compression, indicating that the HR is common in healthy subjects. 13) In the present study, a majority of subjects (85.4%) exhibited the HR in the direction of the compressed side; moreover, the HR restricted abnormal head rotation related to CD. In a previous study, the device was shown to generate sufficient pressure on the fronto-temporal region to restrict pathological head rotation even after its use for only 1 month. 16) Sensory tricks are well-known phenomena that can temporarily relieve the symptoms of dystonia. We do not believe that the HR is a sensory trick because the HR is present in both normal subjects and patients. Moreover, according to previous studies, the HR requires neither pain nor visual information. 14,15) In a previous study, electromyographic (EMG) activity of the sternocleidomastoid muscle (SCM) was monitored in healthy subjects during the HR; when subjects rotated to the left, the application of a hanger to the right fronto-temporal region suppressed EMG activity in the right SCM and caused the subject to assume a neutral position. 21) In a patient with CD, the HR suppressed abnormal EMG activity in the inferior obliquus capitis muscle and improved symptoms. 21) These findings outline a physiological basis for the HR and suggest that the HR relaxes abnormal muscle contraction in patients with CD. We believe the underlying mechanism of the HR is a sensory illusion, rather than a sensory trick. We previously hypothesized that the shearing force of a hanger or device applied to the skin of the head induces head rotation.
This hypothesis has been supported by work from our co-authors, who developed a spring-loaded lozenge device to generate shearing force on the skin of the head; 22) regardless of the region of fronto-temporal compression, the head rotated medially when the skin was sheared to the medial side and laterally when the skin was sheared to the lateral side. With our device, rotation first pulled the skin on the fronto-temporal region to the medial side and then pulled the skin laterally after device release (Fig. 2). We thus suggest that shearing force is required for induction of the HR; specifically, we hypothesize that discomfort related to the shearing force causes subjects to move in the direction of the shearing force to resolve the uncomfortable sensation. Accordingly, when the head is continuously sheared towards the compressed side, the head rotates. The temporal line demarcates the muscular origin of the temporal muscle and is located in the fronto-temporal region, where it forms an angle on the fronto-temporal forehead. In our previous research, the HR was absent in 7.9% of the 240 trials from 240 volunteers. 13) On this basis, non-responders to HR induction may have a more obtuse angle at the temporal line, making it difficult to induce skin shearing force. Our co-authors tentatively named this phenomenon (with unknown mechanism) the "hanger reflex" in 2008. 14) After publishing a study on the mechanism of the HR in 2014, 22) we believe that the phenomenon is induced by a sensory illusion, and that the HR does not involve a reflex arc of the type that underlies a deep tendon reflex. We discussed whether the term "reflex" should be used, but ultimately decided to continue using it because many papers had already used the term by that time. While the present study noted promising effects of HR device application in subjects with rotational-type CD, further studies are necessary to examine the efficacy of this treatment for other types of CD. We found that the HR occurred in the anterior-posterior and lateral directions when the device was applied anterior-posteriorly and laterally, respectively. Similar phenomena have been reported in other regions of the body, including the wrist and waist; 23,24) the HR is thus considered to be a universal bodily phenomenon. Knowledge of this phenomenon has the potential to improve the treatment of neurological disorders, including dystonia in various body parts. The HR device is especially beneficial for patients with CD who cannot afford costly treatments such as BTX or DBS. Our HR device should therefore be considered for use in developing countries with limited financial resources. The modified device has recently become available as a general medical device in Japan (approval number, 35519001). There were some limitations to this study. First, only a small number of patients were included in our analysis. Second, given the preliminary nature of this study, and to accommodate the unique demands of patients accustomed to BTX therapy, we used a 3-month observation period. Further improvements might be conferred in a longer trial. Third, the changes in TWSTRS part 1 scores in this study were relatively small; however, these changes were statistically significant over a short observation period. Fourth, this study did not use a control group for comparison, given the difficulty of designing a placebo condition for our experiment.
Further research with a large number of patients is required to fully explore the effectiveness and utility of the HR device for the treatment of CD.

Conflicts of Interest Disclosure

This study was supported by JSPS KAKENHI (23791587) and a Hokugin grant for young scientists. The JSPS KAKENHI and Hokugin grant for young scientists provided unrestricted support and had no role in the oversight or review of the research data or reporting. The authors (TA, KT, MF, and SK) have registered online self-reported COI Disclosure Statement Forms through the website for the Japan Neurosurgical Society members.
Evaluation of the ICT Tuberculosis test for the routine diagnosis of tuberculosis

Background: Rapid and accurate diagnosis of tuberculosis (TB) is crucial to facilitate early treatment of infectious cases and thus to reduce its spread. To improve the diagnosis of TB, more rapid diagnostic techniques such as antibody detection methods, including enzyme-linked immunosorbent assay (ELISA)-based serological tests and immunochromatographic methods, were developed. This study was designed to evaluate the validity of an immunochromatographic assay, the ICT Tuberculosis test, for the serologic diagnosis of TB in Antalya, Turkey. Methods: Sera from 72 patients with active pulmonary TB (53 smear-positive and 19 smear-negative cases), eight patients with extrapulmonary TB (6 smear-positive and 2 smear-negative cases), and 54 controls from different outpatient clinics with demographic characteristics similar to those of the patients were tested by the ICT Tuberculosis test. Results: The sensitivity, specificity, and negative predictive value of the ICT Tuberculosis test for pulmonary TB were 33.3%, 100%, and 52.9%, respectively. Smear-positive pulmonary TB patients showed a higher positivity rate for antibodies than smear-negative patients, but the difference was not statistically significant. Of the eight patients with extrapulmonary TB, antibody was detected in four patients. Conclusion: Our results suggest that the ICT Tuberculosis test can be used to aid TB diagnosis in smear-positive patients until the culture results are available.

Background

A curable and preventable disease, tuberculosis (TB) continues to be a leading cause of mortality and morbidity worldwide. Early treatment of infectious cases reduces the spread of TB. Therefore, rapid and accurate identification of infected individuals is mandatory [1,2]. Currently, the mycobacteriology laboratory algorithm to detect Mycobacterium tuberculosis (M. tuberculosis) consists of two steps: microscopic examination of a smear prepared from a concentrated specimen (sputum, bronchoalveolar lavage fluid, aspirates, etc.) and culture. Smear microscopy allows direct detection of acid-fast bacilli (AFB) in the specimen and identification of the most contagious patients. Although smear microscopy provides rapid results and an inexpensive way to diagnose TB, it has limitations. Probably the most important limitation is low sensitivity. Behr et al. reported that smear examination can detect less than 50% of all culture-positive patients [3]. Culture, on the other hand, is more effective than smear microscopy. While 10⁴ AFB/ml of specimen usually results in 60% smear positivity, only ten viable bacilli per milliliter are required for culture positivity. The sensitivity of culture is 80%-85%. Additionally, identification and susceptibility testing of the isolates are major advantages of cultural methods, although cultures require a relatively long growth time.
Today, TB diagnosis in resource-limited countries mostly depends on clinical and radiological findings, as well as sputum smear microscopy and culture [4,5]. In response to the need for rapid diagnosis of TB, a number of new approaches and methods have been developed for serological diagnosis. There are several serological tests that use various native or recombinant antigens such as the 38-kDa antigen, lipoarabinomannan, antigen 60 (A60), and tuberculous glycolipids (TBGLs). These include several enzyme-linked immunosorbent assay (ELISA)-based serological tests and immunochromatographic methods to detect antibodies to M. tuberculosis [6][7][8][9][10]. According to the WHO Report 2002 on Global Tuberculosis Control, Turkey was classified in Category 1, which includes countries not implementing the DOTS strategy and having an estimated incidence rate of 10 or more cases per 100,000 population. In this report, 18,038 diagnosed TB cases and an incidence rate of 27 per 100,000 population were reported for Turkey [11]. However, these data do not reflect the true incidence of TB in Turkey because of underreporting and undiagnosed cases [12]. In this study, we evaluated an immunochromatographic assay, the ICT Tuberculosis test, which detects serum antibodies against five antigens secreted by M. tuberculosis during active infection, by determining its sensitivity and specificity compared with standard diagnostic procedures in a university hospital setting in Turkey.

Patients

Between April 1999 and December 2000, 80 patients with active TB were evaluated. All were human immunodeficiency virus negative. There were 15 females (18.75%) and 65 males (81.25%) aged between 14 and 76 years (median, 39 years). Of the 80 patients with active TB, 72 (90%) had pulmonary disease and eight (10%) had extrapulmonary disease. Among patients with pulmonary disease, 53 (73.6%) were both smear- and culture-positive, and 19 (26.4%) were smear-negative and culture-positive. Extrapulmonary disease included pleural disease (three patients), lymphadenitis (four patients), and epididymitis (one patient). Of the eight patients with extrapulmonary disease, two were both smear- and culture-positive, and six were only culture-positive.

Control group

The control group consisted of 54 individuals selected randomly from individuals who applied to different outpatient chest clinics for employment TB screening. All members of the control group had no previous history of TB, no signs or symptoms suggestive of pulmonary TB, and no evidence of TB on chest X-rays. The 54 subjects selected for the control group had a median age of 41 years (age range, 18-75 years); 43 (79.6%) were males and 11 (20.4%) were females. All were HIV negative. Demographic characteristics of patients and control subjects were similar. Serum samples were obtained from almost all patients before initiation of antituberculosis treatment and stored at -80°C until tested. All patients and controls participating in the study were vaccinated with Mycobacterium bovis BCG. The Akdeniz University Medical Faculty Ethical Committee approved this study.

Microbiological analysis

Routine TB examination included demonstration of AFB in Ziehl-Neelsen stained smears from sputum, bronchoalveolar lavage (BAL) fluid, aspirates, or tissue biopsy samples, and culture according to standard procedures [5].

ICT Tuberculosis test

The ICT Tuberculosis diagnostic kit (ICT Diagnostics, Bangowlah, New South Wales, Australia) is designed for the detection of antibodies to M. tuberculosis.
Briefly, five highly purified antigens (including one of 38 kDa) secreted by M. tuberculosis during active infection are immobilized in four lines on the test strip. When serum or plasma is applied, it flows past the antigen lines. Bound antibody is detected by a goat anti-human IgG antibody conjugated to colloidal gold particles, which produces one or more pink lines when bound to human antibody. The whole procedure is completed within 20 minutes. The ICT Tuberculosis test does not require special equipment or technical skill.

Statistical analysis

The validity of the ICT Tuberculosis test was measured by sensitivity and specificity. Negative predictive value was also calculated. Antibody positivity rates between smear-positive and smear-negative patients were compared by the chi-square test.

Results

Antibody was detected in 21 of 53 (39.6%) smear- and culture-positive patients and in three of 19 (15.8%) smear-negative, culture-positive patients. Patients with smear positivity showed a higher positivity rate for antibodies than smear-negative patients, but the difference was not statistically significant (χ² = 3.57, p = 0.058). We found a sensitivity of 33.3% and a specificity of 100%, with a negative predictive value of 52.9%, for pulmonary TB. Of the eight patients with culture-proven extrapulmonary TB, antibody was detected in four patients. The sensitivity of the test for the extrapulmonary TB patients was not calculated because of the small number of patients. None of the control subjects tested positive by the ICT Tuberculosis test.

Discussion

Serologic tests are the oldest methods for the diagnosis of TB. Agglutination of patients' sera with M. tuberculosis was first investigated in 1898 [13]. Since then, a number of serologic tests have been developed for detection of the host response to M. tuberculosis, but none of them has found widespread clinical use. The specificity of the early serological tests, which were prepared from crude antigens, was low because of cross-reactions with environmental Mycobacterium species. Specificity could be improved by using purified antigens. A variety of antigens have been adapted for the serodiagnosis of TB. Certain antigens were found to have more diagnostic significance; one of them, the 38 kDa antigen, a phosphate-binding protein, was reported to be specific to the M. tuberculosis complex. This antigen has been identified as a potential reagent to be used for the screening of TB [5]. Cole et al. [14] evaluated a rapid membrane-based antibody assay that contained only the 38 kDa antigen for the diagnosis of active pulmonary TB in China and reported a sensitivity of 89% for smear-positive patients and a sensitivity of 74% for smear-negative patients, with a specificity of 93%. Zhou et al. [15] expanded the control and patient groups and added an extrapulmonary TB group, and they found similar results. In our study, we evaluated the ICT Tuberculosis test, which detects IgG antibodies to five antigens (one of them p38) secreted by M. tuberculosis, in patients with TB and found a sensitivity of 33.3%, a specificity of 100%, and a negative predictive value of 52.9% for pulmonary TB. Previous investigators reported that the sensitivity of the ICT Tuberculosis test for pulmonary TB is variable, ranging from 20% to 73%; specificity rates were reported as 80-100% [7,16-23].
We found an overall sensitivity of 33.3% for the ICT Tuberculosis test, with values of 39.6% and 15.8% in smear-positive and smear-negative patients, respectively. As in previous studies, we found that the sensitivity of the ICT Tuberculosis test is higher in smear-positive than in smear-negative patients, but the difference was not statistically significant [7,14,15,17,19,22]. The higher sensitivity reported in smear-positive patients is most likely due to the higher bacillary loads and thus a greater exposure to antigens and a more vigorous antibody response in these patients.

Conclusion

The ICT Tuberculosis test can be used to aid TB diagnosis in smear-positive patients until the culture results are available.
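As a check on the arithmetic behind the reported validity measures, the sketch below recomputes sensitivity, specificity, and negative predictive value from the pulmonary TB counts given above (24 of 72 patients antibody-positive; 0 of 54 controls positive), and reproduces the chi-square comparison of smear-positive versus smear-negative positivity rates (21/53 vs. 3/19). It uses standard numpy/scipy routines rather than the original analysis software, and is offered only as an illustrative reconstruction.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Pulmonary TB patients (n = 72) and controls (n = 54), from the counts reported above
tp, fn = 24, 48    # antibody-positive / antibody-negative TB patients
tn, fp = 54, 0     # controls: none tested positive

sensitivity = tp / (tp + fn)   # 24/72  = 33.3%
specificity = tn / (tn + fp)   # 54/54  = 100%
npv = tn / (tn + fn)           # 54/102 = 52.9%

# Smear-positive vs smear-negative antibody positivity (21/53 vs 3/19), no continuity correction
table = np.array([[21, 53 - 21],
                  [3, 19 - 3]])
chi2, p, dof, expected = chi2_contingency(table, correction=False)

print(f"sensitivity {sensitivity:.1%}, specificity {specificity:.1%}, NPV {npv:.1%}")
print(f"chi-square = {chi2:.2f}, p = {p:.3f}")
```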
Fabrication of Nylon-6 and Nylon-11 Nanoplastics and Evaluation in Mammalian Cells

Microplastics (MPs) and nanoplastics (NPs) exist in certain environments, beverages, and food products. However, the ultimate risk and consequences of MPs and NPs for human health remain largely unknown. Studies involving the biological effects of small-scale plastics have predominantly used commercially available polystyrene beads, which cannot represent the breadth of globally dominant plastics. Nylon is a commodity plastic that is used across various industry sectors with substantial global production. Here, a series of well-characterized nylon-11 and nylon-6 NPs were successfully fabricated with size distributions of approximately 100 nm and 500 nm, respectively. The facile fabrication steps enabled the incorporation of fluorescent tracers in these NPs to aid the intracellular tracking of particles. RAW 264.7 macrophages were exposed to nylon NPs in a dose-dependent manner, and cytotoxic concentrations and cellular uptake were determined. These well-characterized nylon NPs support future steps to assess how the composition and physicochemical properties may affect complex biological systems and ultimately human health.

Introduction

The substantial reliance on plastics in modern society has resulted in an estimated worldwide production of plastic materials near 367 million tons in 2020 [1]. It was predicted that 79% of plastics resided in landfills or the environment as of 2015 [2], spurring solutions towards a circular economy of plastics through technical, economic, and social changes [3][4][5]. Despite these efforts, plastics remain ubiquitous, as evident from an emerging insight that small-scale plastics, termed nanoplastics (NPs) and microplastics (MPs), are present within the environment [6,7], foodstuffs [8][9][10][11], beverages [12][13][14], and drinking water [15,16]. The origins of NPs and MPs have been categorized as either primary sources that are intentionally manufactured (e.g., nurdles, microbeads) or secondary sources that result from the unintentional degradation of macroscale plastic [17]. These diverse origins, combined with varied environmental exposure during the life of the plastic, have resulted in small-scale plastics with a plethora of shapes (e.g., spheres, fibers, fragments), sizes (e.g., macro- to micron scale), and compositions. Many NPs and MPs comprise commodity plastics, such as polystyrene (PS), polyethylene (PE), polyethylene terephthalate (PET), polyamide, and polypropylene (PP) [18], but polymer formulations likely include other components, including additives and plasticizers. The heterogeneity of these small-scale plastics has called for a consensus on accepted definitions of MPs and NPs [19]. Key uncertainties exist about the potential effects on human health from the consumption of and exposure to NPs and MPs. For example, studies have shown the presence of MPs in human stool [20] and lung tissue [21], which raises concerns about the influence of these materials on human organs. In particular, the effects of plastics <1 µm on biological systems are critical to understand [22], given the propensity of nanomaterials to enter cells [23][24][25].

Fabrication of Nylon-11 NPs

Unless otherwise noted, all reagents for fabricating the nylon-6 and nylon-11 particles were purchased from Sigma-Aldrich (St. Louis, MO, USA). In a 40 mL scintillation vial containing a magnetic stir bar, a solution of nylon-11 was prepared by mixing 0.35 mg nylon-11 (Sigma-Aldrich, Cat. No.
181153) with 20 mL hexafluoroisopropanol (HFIP). Nylon-11 particles were prepared via the dropwise addition of nylon-11 solution (10 mL, 1 mL/min) to ultrapure deionized water (75 mL, 18.2 MΩ·cm resistivity) using a syringe pump (Model # NE-300, New Era Pump Systems, Inc., Farmingdale, NY, USA). Residual HFIP was removed by distillation, subjecting the precipitate to rotary evaporation under vacuum at 60 °C. Once the volume was reduced to ~30 mL, an additional volume (~75 mL) of ultrapure deionized water was added and rotary evaporation was continued. This process was repeated a total of five times. Particles containing Nile Red (NR) or Acryloxyethyl Thiocarbamoyl Rhodamine B (ATRB) were formulated using a similar approach as specified above. Stock solutions of both tracers in HFIP (1 mg/mL) were prepared. An aliquot (1 mL) was then added to the nylon-11 solution before initiating the precipitation protocol specified above. The fabricated particle suspension was centrifuged and resuspended to remove the excess formic acid and methanol. For each wash step, the suspension was centrifuged at 16,000 rcf for 5 min at room temperature, the supernatant was removed, and the pellet was resuspended in an equal volume of 0.5 mg/mL PVA. The resuspension of particles was achieved by a 30 s vortex step followed by discrete sonication in a cup horn sonicator (Ultrasonic Liquid Processor S-400, Misonix Inc., Farmingdale, NY, USA) delivering a total of 1680 J/mL. The particles were washed two times and then resuspended one final time. A similar procedure was adopted to fabricate NR- and ATRB-tagged nylon-6 particles. A solution of NR in formic acid (0.1 mg/mL) was prepared from a stock solution of 1 mg/mL. An aliquot of the 0.1 mg/mL Nile Red (1 mL) was mixed with the nylon-6/formic acid solution before it was added dropwise to the sonicated PVA mixture. Likewise, an aliquot (1 mL) of ATRB in formic acid (1 mg/mL) was mixed with the nylon-6/formic acid solution.

Fluorescence Leaching

An aliquot (400 µL) of fluorophore-loaded nylon NPs was added to a tared Amicon® Ultra centrifugal filter unit (regenerated cellulose, 100 K, MilliporeSigma, Burlington, MA, USA) and centrifuged at 16,000 rcf for 10 min. The filtrate collected from this spin was subjected to an additional centrifugation step for 10 min at 16,000 rcf using a tared Amicon® Ultra centrifugal filter unit (regenerated cellulose, 3 K). Samples were collected in duplicate at time 0, 7 days, and 30 days. The NPs were stored in the refrigerator at 4 °C over the 30-day time period. The fluorescence of the second filtrate was determined using a Synergy MX multi-mode plate reader. Calibration curves of NR and ATRB in ultrapure water were obtained by serial dilutions of the fluorophores (NR: 2.5 µg/mL stock solution, λex = 590 nm, λem = 667 nm; ATRB: 1.25 µg/mL stock solution, λex = 560 nm, λem = 590 nm).

Fourier-Transform Infrared Spectroscopy (FT-IR)

Samples of particles were analyzed using a Nicolet 6700 FTIR instrument equipped with a Smart Orbit™ single-bounce diamond crystal ATR accessory, a deuterated triglycine sulfate (DTGS) detector, and a potassium bromide (KBr) beam splitter. The method involved 32 scans over a region of 4000-400 cm⁻¹ and a resolution of 4. Prior to running each sample, a background was collected on the cleaned crystal, and the sample was introduced on the diamond crystal. Pressure was applied and the sample data were collected.
The suspension of nylon-6 particles used for FT-IR was washed with water instead of 0.5 mg/mL PVA.
Dynamic Light Scattering (DLS) and Zeta Potential
A Zetasizer Nano ZS (Malvern Instruments, Malvern, UK) equipped with a He-Ne laser (633 nm) was used to acquire DLS measurements (non-invasive backscatter method with a scattering angle of 173°). The hydrodynamic diameters (D_H), polydispersity indices (PDI), and zeta potentials of polymer particles were calculated by the instrument software (Zetasizer DTS). For these measurements, nylon-6 particles were suspended in 0.5 mg/mL PVA or cell media, and nylon-11 particles were suspended in ultrapure deionized (DI) water or cell media.
Formic Acid Protocol
The concentration of formic acid in purified nylon-6 NPs was determined using the formic acid assay kit (Megazyme, Wicklow, Ireland). Briefly, the supplied reagents were suspended in ultrapure water at the concentrations specified in the assay protocol. The blank samples comprised ultrapure water, solution 1 (buffer), and solution 2 (NAD+) combined at the specified ratios and mixed thoroughly. In a 96-well plate, sodium formate standard and nylon-6 NPs were added to the blank solution. The absorbance of the solutions (A1) at 340 nm was measured after approximately 5 min using the Synergy MX multi-mode plate reader. The absorbance was measured every 2-3 min after a 12 min incubation with the supplied enzyme, formate dehydrogenase. The absorbance values (A2) were recorded once they reached a plateau. Formic acid concentration was calculated using the equation C_formic acid = (ΔAbsorbance_sample / ΔAbsorbance_standard) × C_standard, where C_formic acid is the final concentration of formic acid in the sample, ΔAbsorbance_sample is A2 − A1 for the nylon-6 NP samples, ΔAbsorbance_standard is A2 − A1 for the sodium formate standard, and C_standard is the concentration of the sodium formate standard.
Field Emission Scanning Electron Microscopy (FE-SEM)
Samples of particles were characterized using FE-SEM (Auriga FIB/FESEM, Carl Zeiss Microscopy, Peabody, MA, USA). Before imaging, the samples were coated with a few nanometers of AuPd using a Leica ACE200 Sputter Coater (Leica Microsystems, Buffalo Grove, IL, USA).
In Vitro Sedimentation, Diffusion, and Dosimetry (ISDD) Modeling
The effective dose for the particles was calculated following a protocol by DeLoid et al. [53]. Briefly, the effective density for each particle formulation was determined in growth media using the volumetric centrifugation method. The particle pellet volume was measured using TPP packed cell volume (PCV) tubes and an easy-read measuring device for PCV tubes (TPP Techno Plastic Products, Midwest Scientific, Valley Park, MO, USA). A size 25 Cannon-Fenske tube viscometer (Sigma-Aldrich, St. Louis, MO, USA) was used to measure the dynamic viscosity of cell culture media. After calculating the effective density for each of the four particles in growth media, the effective dosimetry was determined by computational modeling using the volumetric centrifugation method-in vitro sedimentation, diffusion, and dosimetry (VCM-ISDD) or distorted grid (DG) model [53].
Cytotoxicity Assays
RAW 264.7 cells were seeded at a concentration of 1 × 10^5 cells/mL within a 96-well plate and then incubated for 24 h. In addition to nylon NPs, two types of commercially sourced PS nanoparticles (500 nm and 50 nm PS Yellow nanoparticles; Spherotech, Lake Forest, IL, USA) with size distributions similar to the nylon NPs were also evaluated in the cytotoxicity assays.
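For reference, the formic acid calculation above reduces to a single ratio against the sodium formate standard. The short sketch below illustrates it; the absorbance values in the example call are hypothetical placeholders, not measurements from this study.

```python
def formic_acid_concentration(a1_sample, a2_sample, a1_standard, a2_standard, c_standard):
    """Formic acid concentration from the coupled enzymatic assay.

    Implements C_formic = (dA_sample / dA_standard) * C_standard, where
    dA = A2 - A1 is the NADH absorbance change read at 340 nm.
    """
    d_sample = a2_sample - a1_sample
    d_standard = a2_standard - a1_standard
    return (d_sample / d_standard) * c_standard

# Hypothetical plate-reader values; result carries the units of c_standard.
print(formic_acid_concentration(0.08, 0.21, 0.07, 0.52, c_standard=0.10))
```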
Cells were exposed to particles suspended in fresh media in twofold dilutions ranging from 0.001 to 1.0 mg/mL. The medium was collected after 24 h of particle exposure and lactate dehydrogenase (LDH, TOX7, Sigma-Aldrich, St. Louis, MO, USA) release measurements were performed (according to the protocol specified by the manufacturer. All studies were conducted in at least biological duplicates and experimental triplicates with data expressed as a percentage of their corresponding controls. A control LDH assay was performed in the absence of cells to assess any interference from the particles with the assay. For 1.0 mg/mL 500 nm PS Yellow nanoparticles,~30% increase in the background value was observed, while no increase was observed for unlabeled nylon-11, unlabeled nylon-6, and 50 nm PS nanoparticles. Fluorescence Microscopy In glass bottom Petri dishes (MatTek, Ashland, MA, USA), 1 × 10 5 cells/mL cells were seeded for 24 h. The cells were then exposed to particle concentrations of 0.01 and 0.1 mg/mL for 16 h. For these microscopy studies, nylon-6 ATRB contained~2.8 µg ATRB/mg particles, nylon-6 NR contained~6.9 µg NR/mg particles, nylon-11 ATRB contained~4.5 µg ATRB/mg particles, and nylon-11 NR contained~14.1 µg NR/mg particles. The vehicle controls included in this study were 2.2 mg/mL PVA solution (control for nylon-6 NPs) and ultrapure deionized water (control for nylon-11 NPs). Cells were subsequentially fixed with 4% paraformaldehyde at room temperature for 30 min. The cells were washed thrice with PBS and then stained with 1:1000 DAPI (Life Technologies, Grand Island, NY, USA) for 20 min at room temperature. Prior to bright-field and fluorescence imaging with a 40× objective, the cells were washed an additional three times with PBS. Imaging was conducted using a Zeiss Observer Z1 3D inverted microscope with three channels. Data Analysis Data are expressed as mean ± standard deviation. To test for significant differences in LDH release due to particle exposure and to understand the role of plastic composition and concentration, a repeated measure two-way ANOVA test was conducted with Tukey's multiple comparisons test. The statistical analyses were performed using GraphPad Prism 7.04 (GraphPad Software, San Diego, CA, USA). Fabrication and Characterization of Nylon-11 NPs Nylon-11 particles were fabricated via a precipitation method similar to a previously published procedure [36]. Attempts at removing residual HFIP through centrifugation or dialysis were unsuccessful as a result of the low pelletization of the small-sized particles (Table 1) and the formation of aggregates. Since these methods of particle purification were ineffective for removing residual HFIP, multiple cycles of rotary evaporation were used to substantially reduce the fluorine signal via 19 F-NMR ( Figure S2, ESI †). An attempt to incorporate RhB into nylon-11 particles to trace NPs within cells resulted in negligible loading of fluorophore. Therefore, alternative fluorescent tracers NR and ATRB were incorporated into the NPs during fabrication. Based on fluorescence spectroscopy, the concentrations of fluorophore in the particle formulations were~4.5 µg ATRB/mg particles and~14.1 µg NR/mg particles. Although rotary evaporation effectively eliminated HFIP while retaining the size distributions of the NPs, this purification approach is not suitable for eliminating the potential presence of excess fluorophore in the solution. 
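The statistical workflow described in the Data Analysis section (repeated-measures two-way ANOVA with Tukey's multiple comparisons) was run in GraphPad Prism. A rough Python analogue is sketched below for readers who prefer a scriptable version; the synthetic dataframe, column names, and effect sizes are invented for illustration, and the Tukey step shown here is a generic post hoc test rather than Prism's exact procedure.

```python
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Synthetic long-format LDH data (% of control): 3 replicates x 4 compositions x 12 doses.
rng = np.random.default_rng(0)
reps, comps, doses = range(3), ["PS50", "PS500", "nylon6", "nylon11"], range(12)
rows = [{"replicate": r, "composition": c, "dose": d,
         "ldh_pct": 100 + 5 * d + rng.normal(0, 5)}
        for r in reps for c in comps for d in doses]
df = pd.DataFrame(rows)

# Repeated-measures two-way ANOVA (within factors: dose and composition).
print(AnovaRM(df, depvar="ldh_pct", subject="replicate",
              within=["dose", "composition"]).fit())

# Pairwise comparisons across compositions, pooled over doses.
print(pairwise_tukeyhsd(df["ldh_pct"], df["composition"], alpha=0.05))
```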
The nylon-11 NPs showed a spherical morphology via SEM ( Figure 1) and no morphological differences were apparent for the nylon-11 particles with the ATRB and NR tracers. Hydrodynamic diameters of the purified particles comprising nylon-11 (127 ± 51 nm), nylon-11 ATRB (142 ± 43 nm), and nylon-11 NR (137 ± 39 nm) were unchanged after multiple cycles of rotary evaporation (Table S1, Figure S3, ESI †). The size distributions for the unlabeled, nylon-11 ATRB, and nylon-11 NR particles were also calculated from the SEM images to be 55 ± 19 nm, 88 ± 22 nm, and 84 ± 12 nm, respectively. The slight differences between the size distributions obtained from DLS and SEM could be attributed to the use of dried samples to acquire SEM images. The high zeta potential (Table 1) across all three sets of nylon-11 particles ensured colloidal stability and eliminated the need for the incorporation of additional stabilizers [54]. As the particles were not exposed to particle stabilizers during the fabrication or purification processes, the surface charges of the particles pre-and post-purification were expectedly unchanged (Table 1). Fabrication and Characterization of Nylon-6 NPs A precipitation protocol was initially used to synthesize nylon-6 particles, but large visible aggregates of polymer formed during the precipitation step. The higher hygroscopicity of nylon-6, as compared to nylon-11 [55][56][57], could affect the precipitation step that relies on the self-assembly of the polymers through hydrophobic interactions. Therefore, to successfully prepare nylon-6 particles, a combination of ultrasonication and washing was implemented, similar to a previously published protocol [50]. The formation of particles occurs after the slow addition of a nylon-6/formic acid solution to a second solution containing PVA while exposed to ultrasonicating forces. After the completion of fabrication steps, nylon-6 particles were washed with 0.5 mg/mL PVA to remove residual organic solvents, to stabilize particles and to prepare the NPs for use in biological assays. After washing, the particles revealed low levels of formic acid (Table S2, ESI †), well below the cytotoxic threshold (~7.56-18.66 µmol/mL) [58,59]. Interestingly, the washing steps resulted in a reduction in the average diameter and improved PDI (Table 1). The decrease in the diameter of the nylon-6 NPs after the washing steps (Table S3, Figure S4, ESI †) may be attributed to the purification process as previously reported for other nanoparticle formulations [60]. The average diameter of 465 ± 132 nm after the final wash was also confirmed with SEM ( Figure 2). The SEM images also show that the surface morphology of nylon-6 NPs is irregular with the appearance of multiple nucleation sites. After the wash steps, the zeta potential increased from 5.09 ± 0.62 mV to 22.63 ± 0.05 mV, which could be attributed to the change in PVA concentration (2.2 mg/mL during fabrication; 0.5 mg/mL wash solution) [61]. This method of fabricating nylon-6 NPs enables the incorporation of fluorescent tracers into the particles by first dissolving the fluorophore within the polymer solution prior to the ultrasonication step. NR, ATRB and Texas Red (TR) maintain stable fluorescence at acidic pH, and are therefore compatible with the use of formic acid during the fabrication steps [62][63][64]. Similar trends in size and surface charge were observed for the fluorophoretagged nylon-6 particles as compared to the unlabeled NPs (Table 1). 
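One reason solution-phase DLS diameters can exceed the dried-sample SEM diameters discussed above is that DLS reports a hydrodynamic size derived from the measured translational diffusion coefficient via the Stokes–Einstein relation. A minimal sketch of that conversion follows; the diffusion coefficient is a hypothetical value and room-temperature water viscosity is assumed.

```python
import math

def hydrodynamic_diameter_nm(diff_coeff_m2_s, temp_K=298.15, viscosity_Pa_s=0.00089):
    """Stokes-Einstein relation used by DLS instruments: D_H = kT / (3*pi*eta*D)."""
    k_B = 1.380649e-23  # Boltzmann constant, J/K
    d_h_m = k_B * temp_K / (3 * math.pi * viscosity_Pa_s * diff_coeff_m2_s)
    return d_h_m * 1e9

# Hypothetical translational diffusion coefficient (m^2/s), not a measured value:
print(round(hydrodynamic_diameter_nm(3.5e-12), 1))  # ~140 nm, comparable to nylon-11
```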
To support microscopy studies in cell cultures, the fabrication of the fluorescent nylon-6 NPs was optimized by testing different loading concentrations of the fluorophore (Table S4, ESI †). For the ATRB tracer, the use of 1 wt% of fluorophore during fabrication was required to achieve a detectable fluorescence signal (~2.8 µg ATRB/mg particles). Using lower concentrations of ATRB during fabrication (i.e., 0.1 wt%) resulted in the undetectable fluorescence in the nylon-6 NPs. For the NR tracer, the use of 0.1 wt% of NR during fabrication was prioritized for cell exposure studies, which afforded~6.9 µg NR/mg particles. The use of higher concentrations of NR in the fabrication (i.e., 1 wt% NR) caused aggregated particles despite rigorous sonication and vortexing. For the TR tracer, the use of 0.1 wt% of fluorophore resulted in~1.65 µg TR/mg particles; however, negligible fluorescence was observed in cells ( Figure S5, ESI †). While TR is highly hydrophobic, the sulfonyl chloride moiety of unreacted TR molecules is susceptible to hydrolysis and converts into water-soluble sulfonate [65]. Since the particles are suspended in aqueous media over several days prior to cell imaging, it is possible that TR hydrolyzed to sulfonate during storage. Coupled with the low fluorophore loading, the extensive washing steps to remove unincorporated fluorophore could have removed the more hydrophilic TR sulfonate and hence, it might not have been detected during fluorescence imaging. Fluorescence Leaching Ensuring the stability of fluorescent-labeled NPs is an important consideration when conducting cell exposure and uptake studies. If the NPs undergo fluorophore leakage, the observed fluorescence in cells could be erroneously represented as the uptake and accumulation of NPs instead of free fluorophore [66][67][68]. The presence of residual fluorophore can also induce cytotoxicity and without the appropriate controls, the ability to differentiate toxicity as a function of particle or fluorophore uptake can be challenging [69,70]. Fluorophore desorption could additionally lead to altered particle characteristics (i.e., changes in dispersibility, size, surface charge, and morphology) [71,72]. The potential leaching of fluorophore across all nylon NPs was assessed by measuring the fluorescence of the solution after the sequential filtration of the NPs through 100 K and 3 K centrifugal filters over 30 days (Table S5, ESI †). The low fluorescence detected in the solution after removing the NPs indicates the fluorophore stability within the NPs and suggests that there is negligible fluorophore leaching from the NPs. FT-IR Characterization of Nylon-11 and Nylon-6 NPs The composition of all nylon NPs was evaluated using FT-IR (Figure 3). For nylon-6 NPs, some characteristic absorption bands include 3299 cm −1 (N-H stretching), 2940 cm −1 , and 2868 cm −1 (C-H stretch from ethylene groups), 1639 cm −1 (amide I), and 1544 cm −1 (amide II) [73]. The nylon-6 particles labeled with NR and ATRB showed similar absorption bands distinctive of the polymer; however, the bands associated with the fluorescent molecule were absent ( Figure S6, ESI †). For nylon-11 NPs, the characteristic absorption bands include 3301 cm −1 and 2919 cm −1 (N-H stretch), 2851 cm −1 (C-H vibration of the methylene groups), 1638 cm −1 (amide I), 1546 cm −1 (amide II), and 1467 cm −1 (C-H bending) [74,75]. The incorporation of NR and ATRB within the nylon-11 NPs did not result in detectable absorption bands from the fluorophore ( Figure S6, ESI †). 
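Converting the filtrate fluorescence from the leaching experiment into a free-dye concentration relies on the calibration curves described in the Methods. A sketch of that conversion is given below; the calibration points and the filtrate reading are hypothetical values, not data from the study.

```python
import numpy as np

# Hypothetical calibration points for Nile Red in ultrapure water
# (serial dilutions of a 2.5 ug/mL stock; fluorescence in arbitrary units).
conc_ug_ml = np.array([0.0, 0.156, 0.3125, 0.625, 1.25, 2.5])
fluorescence = np.array([3, 210, 405, 820, 1650, 3300])

slope, intercept = np.polyfit(conc_ug_ml, fluorescence, 1)

def filtrate_concentration(signal):
    """Convert a filtrate fluorescence reading to ug/mL via the linear fit."""
    return (signal - intercept) / slope

# A reading near the blank implies negligible free dye, i.e., little leaching.
print(round(filtrate_concentration(12.0), 4))
```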
Because the presence of fluorescent moieties for all nylon NPs was confirmed with a fluorescence assay, the absence of FT-IR bands was likely a result of the low concentration of fluorophore in the particles. Particle Characterization and Stability Prior to initiating cell exposure studies, the size and morphology of NPs in cell media were tested over a fixed time period. Since PS particles are widely used for studying the effects of plastics, two sets of commercially sourced PS particles with size distributions similar to nylon NPs were also evaluated. Although the size distributions between the PS and nylon particles slightly differed, the comparison can still provide useful insight into the differences in composition and size. The introduction of all particle types to cell culture media yielded particles with increased overall diameters, likely attributed to the formation of the protein corona or slight aggregation in cell media (Table 2) [76]. After a 24 h incubation in cell media, minimal changes in the diameters were observed, indicative of colloidal stability over this time. A wide range of zeta potential values was observed across the different types of particles suspended in water: −69 mV to 48 mV. The zeta potential for particles in cell media at both at 0 and 24 h changed significantly, as compared to the NPs in water, suggesting that the cell media components (e.g., proteins) shielded the original particle surface charge (Table 2) [77,78]. In Vitro Sedimentation, Diffusion, and Dosimetry (ISDD) Modeling To further elucidate the impact of particle size, density, and polydispersity in cell media, the effective dosimetry concentration dose for nylon and commercially available PS particles were assessed using ISDD modeling. This model was used to calculate the effective dose for the particles over the time course of 24 h and showed that the concentration and fraction of particles in the growth media immediately above the RAW 264.7 cells (expressed as the volume between the cell monolayer and 10 µm above) varied by orders of magnitudes between the different particle formulations (Table S6, ESI †). Both the smaller particle formulations (50 nm PS and nylon-11 NPs, Figure 4A,B) had comparable settling profiles with mean effective dose concentrations of 3.84 mg/mL and 7.50 mg/mL, respectively. These smaller sized particles measured similar effective densities in cell media (50 nm PS NP: 1.04 ± 0.0 g/cm 3 and nylon-11 NP: 1.03 ± 0.0015 g/cm 3 ) and likely experienced greater Brownian motion and slower rates of settling. For example, only 16.8% (50 nm PS) and 36.7% (nylon-11) particles accumulated at the bottom of the wells over 24 h. Unlike the smaller particles (nylon-11, 50 nm PS), there was a significant difference observed in the settling rates for these two larger sized particle formulations (nylon-6, 500 nm PS). The 500 nm PS particles with a measured effective density of 1.04 ± 0.0013 g/cm 3 exhibited the lowest mean concentration (1.12 mg/mL) along with the lowest fraction deposited in cells (7.2%, Figure 4C). Conversely and expectedly, nylon-6 NPs due to its higher measured effective density (1.46 ± 0.18 g/cm 3 ) rapidly reached a plateau within 4 h and had the highest calculated mean concentration (18.4 mg/mL) with 72.3% sedimented at the bottom of the wells ( Figure 4D). These data underscore the importance of considering both the size and chemical composition when evaluating the effects of microplastics and nanoplastics on biological systems. 
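The qualitative settling differences reported above can be rationalized with simple Stokes settling estimates; the full VCM-ISDD/DG model refines this by also accounting for diffusion, polydispersity, and the media column geometry. The sketch below uses the particle sizes and effective densities quoted above together with an assumed water-like viscosity at 37 °C (the study measured the actual media viscosity), so the numbers are only order-of-magnitude illustrations.

```python
import math

def stokes_settling_velocity_um_per_h(diameter_nm, particle_density, media_density=1.00,
                                      viscosity_Pa_s=0.00078):
    """Stokes settling velocity for a dilute sphere; densities in g/cm^3."""
    g = 9.81
    d_m = diameter_nm * 1e-9
    rho_diff = (particle_density - media_density) * 1000.0  # g/cm^3 -> kg/m^3
    v_m_s = rho_diff * g * d_m ** 2 / (18.0 * viscosity_Pa_s)
    return v_m_s * 1e6 * 3600.0  # micrometers per hour

# Sizes and effective densities reported above; media density and viscosity are assumptions.
for name, d_nm, rho in [("50 nm PS", 50, 1.04), ("nylon-11", 130, 1.03),
                        ("500 nm PS", 500, 1.04), ("nylon-6", 465, 1.46)]:
    print(f"{name:10s} {stokes_settling_velocity_um_per_h(d_nm, rho):8.2f} um/h")
```

The nylon-6 NPs, with the largest size and highest effective density, settle orders of magnitude faster in this estimate, consistent with their rapid accumulation at the cell monolayer in the ISDD results.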
The use of commercial PS particles to universally evaluate the impact of NPs on biological systems fails to represent the breadth of plastics and supports the need to develop materials that are more representative of ubiquitous commodity plastics.
Cytotoxicity Studies with NPs
Murine alveolar macrophages, RAW 264.7, were used to assess how different concentrations of NPs affect cell membrane integrity by using the LDH cytotoxicity assay (Figure 5). For PS nanoparticles (50 nm and 500 nm), concentrations of up to 0.25 mg/mL did not result in any significant cytotoxic effects. This finding aligns with a report from Florance et al., which showed that the viability of RAW 264.7 macrophages was unaffected when dosed with PS nanoparticles (208.63 ± 6.494 nm) at 0.2 mg/mL for 24 h [79]. Despite minimal cytotoxic effects, the authors interestingly showed that PS nanoparticles (0.1 mg/mL) affected cellular homeostasis by increasing the generation of reactive oxygen species (ROS) four hours post-dosing. Another study showed that PS nanoparticles (42 nm) dosed between 0.1 and 10 µg/mL to RAW 264.7 cells triggered ROS generation and proinflammatory cytokines [80]. In the current study, both sizes of the PS nanoparticles (50 nm and 500 nm) exhibited cytotoxicity at a threshold dose of 0.5 mg/mL of NPs, suggesting that alterations in cellular homeostasis may occur before the loss of cell membrane integrity. As indicated in the Materials and Methods section, control studies revealed that 500 nm PS nanoparticles at a concentration of 1.0 mg/mL interfered with the LDH assay, which may explain the higher level of apparent cytotoxicity for this particle type; nylon-6 NPs, nylon-11 NPs, and 50 nm PS nanoparticles did not interfere with the LDH assay. Figure 5 also shows that nylon-6 and nylon-11 NPs did not exhibit cytotoxicity until a dose of 1 mg/mL. These overall findings, in addition to statistical analyses, suggest that particle concentration (F(11,66) = 83.4, p < 0.0001) and particle composition (F(3,18) = 8.64, p = 0.0009), along with the interaction between the two parameters (F(33,198) = 15.5, p < 0.0001), were contributing sources to the measured outcome. To the best of the authors' knowledge, no studies have evaluated the effects of nylon-based NPs on RAW 264.7 cells. However, using dental pulp stem cells, Ma et al. showed that nylon-11 nanoparticles (approximately 50 nm) were cytocompatible at a dose of 400 µg/mL [51]. Nanoparticles comprising another commodity plastic, PET, were recently tested in multiple studies with RAW 264.7 cells [36,37,81]. At concentrations of 15 µg/mL, Aguilar-Guzmán and colleagues showed that PET nanoparticles affected RAW 264.7 cells by increasing the production of ROS, altering cell proliferation, and upregulating certain genes likely related to foreign particle responses and cell maintenance [81]. For nylon-based NPs, future studies are required to understand the potential effects of these particles on gene expression.
Figure 5. Cytotoxicity of 50 nm PS nanoparticles (black), 500 nm PS nanoparticles (green), nylon-11 NPs (blue), and nylon-6 NPs (red) after exposure to different doses of NPs for 24 h. The graphs show mean ± standard deviation. Significant differences between the corresponding vehicle control and dose treatment are indicated with stars for each particle. Two-way ANOVA showed that nanoparticle composition contributed to the variation.
Fluorescence Microscopy of Macrophages Exposed to NPs The uptake of fluorescently labeled nylon-6 and nylon-11 NPs by macrophages (RAW 264.7 cells) is shown in the overlay of bright-field and fluorescence microscopy images ( Figure 6). After exposing cells to nylon-11 and nylon-6 NPs for 16 h, the spatial distribution in the cellular cytoplasm was evident. The images in Figure 6 show a demarcation between the cell nucleus (blue; DAPI stain) and the presence of fluorescent clusters of ATRB-and NR-tagged nylon NPs in the cytoplasm. The presence of nanoparticles within the cytoplasm of mammalian cells was also reported for nanoparticles comprising other commodity polymers, such as PS [23], PVC [82], and PET [36]. As mentioned previously, TR-tagged nylon NPs did not exhibit any fluorescence across the tested particle concentrations ( Figure S5, ESI †). The uptake of nylon NPs in macrophages occurred in a dose-responsive manner, with negligible cellular uptake occurring after exposure to the lowest concentration 0.01 mg/mL of NPs and the highest particle uptake observed at 1 mg/mL of NPs. In agreement with the membrane integrity results in Figure 5, the microscopy images overall showed an intact cellular morphology after the exposure of NPs at lower concentrations (0.01 mg/mL and 0.1 mg/mL), but signs of membrane damage and instability are apparent at the highest 1.0 mg/mL concentration. Conclusions The existence of MPs and NPs in the environment, food, and beverages has raised key questions around the potential for downstream effects in human health. Given the diversity of polymer formulations in modern society, there is a need for well-characterized NPs comprising commodity polymers to systematically test the effects on complex biological systems. This manuscript describes the preparation of well-characterized nylon-6 NPs and nylon-11 NPs with hydrodynamic diameters of 465 ± 132 nm and 127 ± 51 nm, respectively, for unlabeled particles after purification. To facilitate studies in biological systems, NPs were also successfully labeled with NR or ATRB fluorescent tracer to aid with the in vitro visualization and intracellular tracking. A 30-day shelf stability study of nylon NPs in aqueous media showed no leaching of fluorescent tracer from the particles. The exposure of RAW 264.7 macrophages to nylon NPs over the concentration range of 0.001-1.0 mg/mL resulted in cytotoxicity at 1 mg/mL. Fluorescence microscopy images showed the uptake of NPs within the macrophages and indications of membrane damage at the highest 1.0 mg/mL concentration of nylon-6 and nylon-11 NPs. These well-characterized nylon NPs support future steps to fully understand the influence of these small-scale plastic materials on biological systems and ultimately human health. Supplementary Materials: The following supporting information can be downloaded at: https: //www.mdpi.com/article/10.3390/nano12152699/s1, Figure S1: Chemical structures of nylon-6 and nylon-11 and their corresponding images of raw pellets used in the fabrication of NPs; Figure S2: Stacked 19 F NMR spectra for known concentration of HFIP standard (black), nylon-11 NPs before purification (pre-rotovap, red), and after purification (post-rotovap 5×, blue). A strong HFIP peak was detected in the sample pre-rotovap, but a low peak associated with HFIP was detected in the sample after purification; Figure S3: DLS measurements of nylon-11 (A), nylon-11 ATRB (B), and nylon-11 NR (C) NPs after each rotary evaporation cycle. 
Each DLS profile is an average of three measurements; Figure S4: DLS measurements of nylon-6 (A), nylon-6 ATRB (B), and nylon-6 NR (C) NPs after each wash and re-suspension in 0.5 mg/mL PVA solution in water. Each DLS profile is an average of three measurements; Figure S5: Fluorescence microscopy images of RAW 264.7 cells exposed to different concentrations of nylon-6 0.1 wt% TR NPs, exhibiting lack of fluorescence visualization in cells. The blue color results from the DAPI stain for the cellular nucleus; Figure S6: FT-IR spectra of fluorescently labeled nylon NPs; Table S1: DLS and zeta potential of nylon-11 NP formulations post-fabrication and after each rotary evaporation cycle and re-suspension in ultrapure deionized water; Table S2: Concentration of formic acid in the washed nylon-6 NP formulations determined by formic acid assay; Table S3: DLS and zeta potential of nylon-6 NP formulations post-fabrication and after each wash and re-suspension in 0.5 mg/mL PVA solution in water; Table S4: Hydrodynamic diameter, PDI, zeta potential, and fluorophore concentration of fluorophore-loaded nylon-6 NP formulations pre-and post-final wash and re-suspension in 0.5 mg/mL PVA solution in water; Table S5: The fluorophore concentrations of solutions after removing NPs via centrifugation. The fluorophore concentration was tested immediately after fabrication of NPs (0 days), as well as 7 days and 30 days post-fabrication; Table S6: In vitro Sedimentation, Diffusion and Dosimetry (ISDD) Modeled value for the NPs at the bottom of the well cultured with RAW264.7 cells at 24 h. The concentration and fraction are calculated for the media column from the well bottom to 10 µm above the cell monolayer, reflecting the cell monolayer exposure. "Mean" refers to the mean value across the duration of the study 0-24 h, while "24 h" is the value at the 24 h time point. Data Availability Statement: The data presented in this study are available on request from the corresponding author. Certain data may be redacted or otherwise restricted for intellectual property reasons.
2022-08-11T15:17:10.600Z
2022-08-01T00:00:00.000
{ "year": 2022, "sha1": "613867c3dd11176dda2a99825cf324341d28ca9f", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2079-4991/12/15/2699/pdf?version=1659891643", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "82511a5d255d807fdd8e63b2b1b9311f53baf61e", "s2fieldsofstudy": [ "Biology", "Materials Science" ], "extfieldsofstudy": [] }
253802033
pes2o/s2orc
v3-fos-license
A Closed-loop Sleep Modulation System with FPGA-Accelerated Deep Learning Closed-loop sleep modulation is an emerging research paradigm to treat sleep disorders and enhance sleep benefits. However, two major barriers hinder the widespread application of this research paradigm. First, subjects often need to be wire-connected to rack-mount instrumentation for data acquisition, which negatively affects sleep quality. Second, conventional real-time sleep stage classification algorithms give limited performance. In this work, we conquer these two limitations by developing a sleep modulation system that supports closed-loop operations on the device. Sleep stage classification is performed using a lightweight deep learning (DL) model accelerated by a low-power field-programmable gate array (FPGA) device. The DL model uses a single channel electroencephalogram (EEG) as input. Two convolutional neural networks (CNNs) are used to capture general and detailed features, and a bidirectional long-short-term memory (LSTM) network is used to capture time-variant sequence features. An 8-bit quantization is used to reduce the computational cost without compromising performance. The DL model has been validated using a public sleep database containing 81 subjects, achieving a state-of-the-art classification accuracy of 85.8% and a F1-score of 79%. The developed model has also shown the potential to be generalized to different channels and input data lengths. Closed-loop in-phase auditory stimulation has been demonstrated on the test bench. I. INTRODUCTION Sleep plays a critical role in a vast array of physiological and pathophysiological processes, including in neurodegenerative diseases such as Huntington's and Alzheimer's disease [1]. New therapies will emerge through an improved understanding of sleep mechanisms. Sleep is composed of an alternation of rapid eye movement (REM) sleep and three non-REM (NREM) sleep stages (i.e., N1-3). Sleep stages can be classified using surface electroencephalogram (EEG). Causal investigations of sleep increasingly rely on closed-loop paradigms in which a real-time sleep stage classifier is used to deliver stage-specific stimulation [2]- [5]. For example, studies have shown that auditory stimulation can be applied in phase with the prominent slow wave activity of NREM to enhance longterm memory [6], [7]. However, closed-loop sleep investigations are only feasible if the sleep stages can be detected accurately in real time and the stimulus signal can be delivered in phase with the sleep oscillation. The system should also minimize adverse effects on the subjects' sleep process, which excludes the use of rack-mounted instrumentation. These requirements motivate the development of a miniature system that supports closedloop on-device sleep modulation and can be worn comfortably during sleep. In this work, we fill this important research gap by developing a closed-loop sleep modulation system that is selfcontained and can be miniaturized. Fig. 1 shows the highlevel block diagram of the system and its operational principle. The system uses a single-channel EEG as input. Sleep stage classification is performed using a novel light-weight deep learning (DL) model implemented in a low-power fieldprogrammable gate array (FPGA). Auditory stimulation is activated on the basis of the specific sleep stage and the detected sleep oscillation. The FPGA-accelerated DL model is a key component of the system. 
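At a high level, the closed-loop policy in Fig. 1 gates stimulation on the classified sleep stage and the detected oscillation phase. The sketch below is only a software illustration of that logic, assuming N3 as the target stage; every function here is a hypothetical stand-in for the corresponding hardware block (FPGA classifier, analog zero-crossing detector, auditory stimulator), not part of the actual firmware.

```python
import random
import time

# Minimal stand-ins for the hardware blocks (all hypothetical).
def classify_sleep_stage(eeg_window):       # FPGA-accelerated DL model in the real system
    return random.choice(["W", "N1", "N2", "N3", "REM"])

def slow_wave_zero_crossing_detected():     # analog band-pass filter + comparator path
    return random.random() < 0.5

def play_pink_noise_burst():                # class-D amplifier + speaker
    print("stim")

def closed_loop_step(eeg_window, target_stage="N3", phase_delay_s=0.0):
    """Deliver auditory stimulation only in the target stage, in phase with slow waves."""
    if classify_sleep_stage(eeg_window) != target_stage:
        return
    if slow_wave_zero_crossing_detected():
        time.sleep(phase_delay_s)           # programmable delay before triggering
        play_pink_noise_burst()

closed_loop_step(eeg_window=[0.0] * 2000)
```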
Although machine learning (including DL) algorithms have been developed to classify sleep stages, existing models have several common limitations: (1) demanding computational resources that are not available in energy-constrained wearable devices [8]; (2) using too many input channels, which results in high power dissipation for signal acquisition and causes inconvenience in electrode placement; (3) using long time series as input (for example, more than a minute [2]), which causes latency in real-time operation and thus is not suitable for closed-loop modulation. In this work, we develop a DL model that uses only one EEG channel as input with a segment of 20 or 30 seconds. A sliding window with overlap is used to further reduce inference latency. The overall model consists of only 1.28 M parameters, which makes it suitable for FPGA implementation. The block diagram of the proposed sleep modulation system is illustrated in Fig. 2. The key building modules include: an analog front-end (AFE) module for EEG acquisition and oscillation detection, an FPGA module for DL-based sleep stage classification and closed-loop control, a stimulator module for delivering auditory feedback, and peripheral modules such as Flash memory and power management units. The rest of the paper is organized as follows. Section III presents the DL model for sleep stage classification. Section IV discusses the FPGA implementation of the DL model and the design of analog modules. Section V shows the experimental results. Finally, Section VI concludes the paper.
II. DEEP LEARNING MODEL FOR SLEEP STAGE CLASSIFICATION
Machine learning models have been developed to classify sleep stages [9]-[12]. Conventional machine learning models rely upon handcrafted features, including features in the frequency domain (e.g., fast Fourier transform [9]), the time domain (e.g., change in slope sign [11], waveform length [12]), or the time-frequency domain (e.g., discrete wavelet transform [10]). Although machine learning models with hand-crafted features have proven their ability to automate sleep scoring, they often generalize poorly when applied to different subjects and electrode placements. Recently, DL models have shown promising results in classifying sleep stages without hand-crafted feature selection [13], [14]. Deep belief networks [13] and convolutional neural network (CNN) [14] models have been developed using time-domain signals directly as input. These models have shown the ability to extract time-invariant features, but miss time-variant features. To capture time-variant features, such as sleep stage transitions, recurrent neural networks (RNNs) can be used [15]. In this work, we developed a hybrid DL model that takes advantage of both CNN and RNN. Fig. 3 shows the model architecture. The model consists of three parts. The first part uses representation learning to capture time-invariant information from the input vector. The second part uses sequential learning to capture the sleep stage transition using features encoded in the first part. The third part consists of a dense network with residual connection to generate the prediction. The representation learning part consists of two CNN paths, which are trained to learn features with different time scales. One path has a large filter size to capture general shape characteristics (a_shape) with low-frequency content, and the other path has a small filter size to capture detail shape characteristics (a_detail) with high-frequency content.
Both CNN paths consist of four 1-D convolutional layers, two dropout layers, and one max-pooling layer. Each 1-D convolutional layer is followed by batch normalization and a rectified linear unit (ReLU) activation function. Two dropout layers are added to reduce overfitting. To demonstrate the generalizability of the model, we designed the representation learning part to accommodate input EEG signals with a length of 20 s or 30 s. This is made possible by adjusting the max-pooling layer and the dropout layers in the two CNN paths, providing a similar output data length for the sequential learning part. Transitions between sleep stages often occur in patterns. Therefore, we designed a sequential learning part using a bidirectional LSTM network to capture the sleep stage transition. Only a_detail is used as input to the LSTM model for sequential learning, instead of concatenating a_detail and a_shape. This allows us to obtain optimal performance at a low computational cost. The sequential learning part outputs the final forward hidden state h_f and the first reverse hidden state h_r of the extracted features. The last part of the model is a dense network that generates the final prediction. Both the outputs of the representation learning part (a_shape, a_detail) and the sequential learning part (h_f, h_r) are taken as input. a_shape and a_detail are given to the dense network as residual connections. They add frequency content that is degraded in sequential learning. h_f and h_r are concatenated to provide sufficient time-domain features in both the forward and reverse directions.
III. HARDWARE DESIGN
A. FPGA-Accelerated Deep Learning Model
The DL model was implemented on a Zynq®-7000 XC7Z020-CLG484-1 from AMD Xilinx. The blocks of the FPGA implementation are depicted in Fig. 2. A microcontroller unit (MCU) block integrated in the FPGA manages system control. The weights of the DL model are stored in the Flash memory and loaded to the data engine for processing. The MCU monitors the status of the data engine and loads the corresponding weights into the buffers. The convolution engine processes the EEG data from the MCU subsystem and generates a_detail and a_shape. Subsequently, the LSTM engine calculates h_f and h_r from a_detail. Then the convolution engine performs the dense operation with the above results and sends the result to the MCU through the output buffer. Finally, the MCU performs softmax and obtains the final detection result. To minimize memory access, the convolution engine shown in Fig. 2 processes 4 kernels in parallel, with ReLU and max-pooling operations. The address generator in the controller enables flexible data arrangement of input and output data, so that no additional data moving or reordering is required. The LSTM engine concatenates the input and hidden states of each layer, so that the convolution engine is used to generate intermediate results of the forget gate, input gate, cell gate, and output gate in scratch memory in working RAM. Then the LSTM datapath performs sigmoid and tanh operations, as well as the subsequent multiplications and additions, to generate the updated cell state and hidden state. An optimized interpolation algorithm with a 38-entry tanh lookup table is used to implement the tanh and sigmoid operations. The interpolation algorithm shares the same multiplier with other LSTM operations for lower hardware complexity. For simplicity of the memory system and better performance, memory blocks are designed with simple dual-port memory.
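As a software-level reference, the hybrid network described above (two CNN paths feeding a bidirectional LSTM, with a_shape and a_detail passed to the dense head as residual connections) can be sketched as follows. Kernel sizes, channel widths, pooling factors, and the assumed 100 Hz input rate are illustrative choices rather than the published configuration, so the parameter count will differ from the 1.28 M figure.

```python
import torch
import torch.nn as nn

class CNNPath(nn.Module):
    """Four Conv1d blocks (BN + ReLU), one max-pool, two dropouts; the kernel size
    distinguishes the 'shape' (large-kernel) and 'detail' (small-kernel) paths."""
    def __init__(self, kernel, pool):
        super().__init__()
        chans = [1, 32, 64, 64, 64]
        blocks = []
        for i in range(4):
            blocks += [nn.Conv1d(chans[i], chans[i + 1], kernel, padding=kernel // 2),
                       nn.BatchNorm1d(chans[i + 1]), nn.ReLU()]
            if i == 0:
                blocks += [nn.MaxPool1d(pool), nn.Dropout(0.5)]
        blocks += [nn.Dropout(0.5)]
        self.net = nn.Sequential(*blocks)

    def forward(self, x):                      # x: (batch, 1, samples)
        return self.net(x).mean(dim=2)         # global average pooling -> (batch, 64)

class SleepStager(nn.Module):
    def __init__(self, n_classes=5, hidden=64):
        super().__init__()
        self.shape_path = CNNPath(kernel=51, pool=16)   # low-frequency / general shape
        self.detail_path = CNNPath(kernel=7, pool=8)    # high-frequency / detail
        self.lstm = nn.LSTM(64, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(64 + 64 + 2 * hidden, n_classes)

    def forward(self, x_seq):                  # x_seq: (batch, seq_len=3, samples)
        b, s, n = x_seq.shape
        flat = x_seq.reshape(b * s, 1, n)
        a_shape = self.shape_path(flat).reshape(b, s, -1)
        a_detail = self.detail_path(flat).reshape(b, s, -1)
        out, _ = self.lstm(a_detail)           # bidirectional LSTM over the 3-segment sequence
        h_f, h_r = out[:, -1, :64], out[:, 0, 64:]   # final forward / first reverse states
        feats = torch.cat([a_shape[:, -1], a_detail[:, -1], h_f, h_r], dim=1)
        return self.head(feats)                # logits; softmax applied outside

model = SleepStager()
logits = model(torch.randn(2, 3, 20 * 100))    # two examples of three 20-s segments at 100 Hz
```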
The engine could process 20-sec input data within 1 sec when running at a 20 MHz clock. It provides flexibility to support multiple EEG inputs as well as future algorithm enhancements. To further reduce the hardware cost, we applied static quantization of both weights and activations to signed 8-bit values. We benchmarked three calibration methods: MinMax, entropy, and percentile. Appropriate data shifting and saturation operations are performed during computation. Table I summarizes the final resources used for the FPGA implementation.
B. Analog Front-end Module Design
The AFE consists of an EEG acquisition path and a sleep oscillation detection path. The EEG acquisition path uses a commercial neural amplifier (RHD2216, Intan Technologies) for signal amplification and digitization. The amplifier has a gain of 49.5 dB and a digitization resolution of 16 bits. The sleep oscillation detection path uses a 4th-order biquad filter, as shown in Fig. 4(a). The circuit uses only one operational amplifier per biquad core to save power consumption [16], [17]. The transfer function of the biquad filter is expressed in terms of R_eq = (1/R_1 + 1/R_2)^-1 and α = R_4/R_1, from which the center frequency and the quality factor follow. A comparator is used to detect the zero-crossing point of the filtered oscillation signal. The detection signal is sent to the FPGA and a programmable delay is added before triggering the stimulation.
C. Auditory Stimulation Module Design
The pink noise generation was implemented in the analog domain, as shown in Fig. 4(b). A 150 kΩ resistor is used as the source of white noise, which is amplified and filtered by a first-order low-pass filter to generate pink noise. The frequency characteristics can be further shaped by the filter. An energy-efficient class-D amplifier (TPA2005D1, Texas Instruments) was used to drive an 8 Ω piezo transducer speaker (AS01008MR-2-R, PUI Audio).
A. Validation of the DL Model
A public database from the Montreal Archives of Sleep Studies (MASS) [18] was used to train and test the DL model. The MASS database contains 5 subsets (SS1-5) of adult polysomnography recordings, which were labeled by experts. We evaluated our model on subsets SS2 [19] and SS3 [20]. To evaluate the model performance, we adopted a leave-one-subject-out cross-validation strategy for SS2, and leave-two-subjects-out for SS3 since it contains more data. 10% of the test subjects' data were used for fine-tuning per validation, and the remaining 90% of the data were used for testing. All test data were excluded from training. The Adam optimizer was used for training with lr = 10^-4, beta1 = 0.9, and beta2 = 0.999 for 100 epochs. L2 weight decay with a value of 10^-3 was adopted to prevent overfitting. We used a batch size of 256 for general training. A sequence length of 3 is used in sequential learning. The extracted features of the previous two segments and the current segment were used as input to the sequential learning section. We used overall accuracy (ACC), macro F1-score, Cohen's Kappa coefficient (k), and per-class accuracy to evaluate the performance of our model. Table II summarizes the performance of our model based on the evaluation in MASS-SS2. Performance before and after quantization is also reported.
B. Validation of Analog Modules and Closed-loop Operation
The analog front-end and auditory stimulation modules have been fully characterized on a bench. Fig. 5 shows the experimentally measured results for biquad filtering and pink noise generation.
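For the 8-bit static quantization applied to the model above, a per-tensor symmetric MinMax scheme is the simplest of the three calibration methods benchmarked. A minimal sketch is given below; the hardware pipeline additionally performs the shifting and saturation steps mentioned earlier, and entropy or percentile calibration would replace the max-abs scale estimate used here.

```python
import numpy as np

def quantize_minmax_int8(x):
    """Symmetric signed 8-bit quantization with a per-tensor MinMax-calibrated scale."""
    scale = np.max(np.abs(x)) / 127.0
    q = np.clip(np.round(x / scale), -128, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

w = np.random.randn(64, 64).astype(np.float32) * 0.1   # stand-in weight tensor
q, s = quantize_minmax_int8(w)
print("max abs error:", np.max(np.abs(w - dequantize(q, s))))
```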
Closed-loop auditory stimulation has also been demonstrated (at the moment without the DL model). Fig. 6 illustrates the experiment, in which the system successfully detected the sinusoid test signal of 1 Hz and triggered synchronized auditory stimulation (picked up by a microphone). V. CONCLUSION This paper presents a first-of-its-kind closed-loop auditory sleep modulation system, featuring a FPGA-accelerated DL model that delivers real-time sleep stage classification with state-of-the-art performance. This design holds great promise in enabling novel sleep research paradigms with potential for clinical translation.
2022-11-24T06:42:26.995Z
2022-11-19T00:00:00.000
{ "year": 2022, "sha1": "465934d66254b9669582e04435f6787f7e10b3ad", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "465934d66254b9669582e04435f6787f7e10b3ad", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science", "Engineering" ] }
225416060
pes2o/s2orc
v3-fos-license
Modulation of Frontal Oscillatory Power during Blink Suppression in Children: Effects of Premonitory Urge and Reward Abstract There is a dearth of studies examining the underlying mechanisms of blink suppression and the effects of urge and reward, particularly those measuring subsecond electroencephalogram (EEG) brain dynamics. To address these issues, we designed an EEG study to ask 3 questions: 1) How does urge develop? 2) What are EEG-correlates of blink suppression? 3) How does reward change brain dynamics related to urge suppression? This study examined healthy children (N = 26, age 8–12 years) during blink suppression under 3 conditions: blink freely (i.e., no suppression), blink suppressed, and blink suppressed for reward. During suppression conditions, children used a joystick to indicate their subjective urge to blink. Results showed that 1) half of the trials were associated with clearly defined urge time course of ~7 s, which was accompanied by EEG delta (1–4 Hz) power reduction localized at anterior cingulate cortex (ACC); 2) the EEG correlates of blink suppression were found in left prefrontal theta (4–8 Hz) power elevation; and 3) reward improved blink suppression performance while reducing the EEG delta power observed in ACC. We concluded that the empirically supported urge time course and underlying EEG modulations provide a subsecond chronospatial model of the brain dynamics during urge- and reward-mediated blink suppression. Introduction Clarifying the neural mechanisms of blink suppression in children is important for understanding how mental effort controls behavior, which may still be under developmental influences, unlike a comparable adult model. This understanding has critical value in child psychiatry, for example, in designing a clinical behavioral training program for treating children with Tourette's syndrome (Woods and Himle 2004;Greene et al. 2015). These studies reported clinically important findings that although tics had been considered as a result of biological disorder, operant contingencies using a reinforcer ($2 in Woods and Himle 2004) could suppress tic behavior in children. However, the underlying neural mechanism in children remains unclear. Neuroimaging studies during urge suppression help to localize neural substrata and elucidate their dynamics corresponding to the mental processes. One of the earliest studies investigated "air hunger" (or shortness of breath) and found activations in the mid to anterior right insula, a part of the limbic system (Banzett et al. 2000). The most well-studied experimental paradigm to date is blink suppression. Neuroimaging studies using PET on blink suppression reported activation in right insular cortex and anterior cingulate cortex (Lerner et al. 2009). Similarly, functional MRI activations in right insular cortex, right ventrolateral prefrontal cortex, and bilateral temporal gyri showed correlations with a hypothetical model for the time-course of urge. In the model, urge takes 60 s to build up to the peak, after which a blink occurs and is followed by another 15 s to release (Berman et al. 2012). The same group also studied the effect of neurofeedback training using a blink suppression task and reported changes in functional connectivity between anterior insula and medial frontal cortex (Berman et al. 2013). This study was one of several to support the now established relationship between blink suppression and the activation within the right insula. 
In addition to insula, other interacting regions, which are mostly distributed in the frontal lobe, have also been implicated in urge suppression. For example, the right ventrolateral prefrontal cortex is another well-established region in response inhibition, such as in the Go/No Go task and Stop Signal task (Aron et al. 2004, 2014). The right ventrolateral prefrontal and insular cortices are part of a circuit that maintains volitional suppression of behavior during an increasing sense of urge. A recent study on healthy adults reported the neural correlates of blink suppression to be in bilateral insula, sensorimotor, anterior prefrontal, and parietal cortices, as well as subcortical regions including putamen and caudate (van der Salm et al. 2018). Another study investigated cough suppression after inhaling capsaicin solution (Mazzone et al. 2011). Regions activated included bilateral insula, cingulate cortex, middle frontal gyrus, and posterior cingulate gyrus, which confirmed the involvement of insula in different types of suppression. Developmentally, adults showed more activation in widespread regions during blink suppression compared with children, but blink-suppression-related inhibition in posterior cingulate cortex was relatively comparable (Mazzone et al. 2010). Importantly, they reported bilateral dorsolateral prefrontal cortices (DLPFCs) and anterior cingulate cortex (ACC) to be key regions for both children and adults. While there is converging evidence from neuroimaging studies, there are still unanswered questions. One critical question remaining is the temporal relationship between increased urge and its associated brain dynamics. As reviewed above, one fMRI study attempted to study the temporal aspect of urge building (Berman et al. 2012). The major limitation in their study was that the hypothetical temporal model of building urge was heuristically determined without empirical data support. Moreover, the BOLD signal does not have good temporal resolution compared with electrophysiological measures. In order to answer the question of the relation between urge and brain dynamics, modalities with high temporal resolution such as EEG or MEG are natural choices. However, as far as we know, there have been no EEG studies on the temporal relation between urge and brain dynamics. The other critical question remaining is how reward-facilitated blink suppression is represented in the typically developing brain. It is reported that reward enhances successful tic suppression (Woods and Himle 2004; Greene et al. 2015). However, the neural mechanisms underlying this process are poorly understood. Clarification of this question is particularly important for enhancing behavioral interventions, which are often used for patients with Tourette disorder. In the present study, we conducted an EEG study of blink suppression performed by healthy control children. The following 3 questions were tested: 1) What is the time course of urge development? 2) What are the EEG-correlates of blink suppression? 3) How does reward change brain dynamics related to urge suppression? To investigate the temporal relation between building urge and brain dynamics, the children used a joystick as an "urgeometer" to indicate their subjective experience of urge. Also, to investigate the effect of reward on urge suppression, there were 3 experimental conditions: 1) Blink Freely/No Suppression (No Supp); 2) Verbal Suppression (Supp); and 3) Suppression for Reward (Supp Rwd).
The trials in the latter 2 conditions were subsequently separated into 2 subgroups based on urge (Urge High and Urge Low), and the interaction between urge and reward was tested. Sample Participants were 35 healthy control children between the ages of 8 and 12 years old who were recruited as a comparison group for a larger study on Tourette disorder; data on the patient group will be reported separately. In order to ensure enough trials for the event-related EEG analysis, a minimum threshold of more than 20 blinks in NoSupp condition and 10 blinks in Supp and Supp Rwd conditions (conditions will be described later) was used. The final sample for EEG analysis consisted of 26 children (12 males and 14 females) with a mean age of 9.6 years (SD 1.5, range 8-12). The children were recruited from the community through radio and newspaper advertisements, community organizations, local schools, primary care physicians, and local clinics. After receiving verbal and written explanations of study requirements, and prior to any study procedures, all parents/participants provided written permission and informed consent/assent as approved by the Institutional Review Board. Procedures Subjects were excluded from participation if they were positive for any of the following: presence of any major Diagnostic and Statistical Manual (American Psychiatric Association 2013) Axis I diagnosis or taking any type of psychoactive medication, head injury resulting in concussion, or estimated Full Scale IQ < 80. The absence of psychiatric diagnoses was confirmed using a semistructured diagnostic interview, the Anxiety Disorder Interview Schedule, Child Version (ADIS) (Silverman et al. 2001), which was administered by trained and carefully supervised graduate level psychologists. Estimated intelligence (IQ) was assessed using the Wechsler Abbreviated Scale of Intelligence (WASI) (Weschler 1999). Task There were 3 block-separated conditions: blink freely/no blink suppression (No Supp), verbal instruction for blink suppression (Supp), and blink suppression for reward (Supp Rwd). All children were instructed to blink freely during the No Supp block, while trying to suppress blinks during the 2 blink suppression blocks. During Supp Rwd, children were told that the computer would be counting how many blinks they were able to suppress, and that they would subsequently receive a reward for successful suppression. All children received $10 regardless of how many blinks they exhibited. During the 2 blink suppression blocks, children used a custom joystick to indicate their subjective experience of urge for blinking by moving the stick forward when they felt the urge to blink. The joystick would revert back to the neutral condition automatically once pressure was released. The order of the 3 conditions was counterbalanced across subjects. There were other types of cognitive tasks in between the blink freely/suppression blocks, which will be presented elsewhere. Each block length was between 5 and 7 min. EEG Recording EEG signals were recorded using the Electrical Geodesics Incorporated (EGI) hardware and software with 128 Hydrogel electrodes that were embedded in a hydrogel net in an International 10/10 location system. Data were sampled at 1000 Hz and initially referenced to Cz. Electrode-skin impedance threshold was set at 50 kΩ per manufacturer standard for the high input impedance amplifier. 
Eye movements were monitored by electrodes placed on the outer canthus of each eye for horizontal movements (REOG and LEOG) and by electrodes above the eyes for vertical eye movements. Facial electromyography (EMG) leads were placed on the cheeks bilaterally over the zygomaticus major muscles to assist with detection of facial movements. Key head landmarks (nasion, inion, and preauricular notches) and 3D electrode locations were recorded (Polhemus, Inc.) to allow reconstruction of electrode positions on the scalp. All EEG data were recorded using the Lab Streaming Layer (https://github.com/sccn/labstreaminglayer), which allows integration of multiple data streams including EEG, high-definition video, the joystick urgeometer, and experimental events.
EEG Preprocessing
Throughout the preprocessing, EEGLAB 14.1.2 (Delorme and Makeig 2004) running under Matlab 2017b (The MathWorks, Inc.) was used. Custom code was written as necessary. There were 2 central signal processing techniques: artifact subspace reconstruction (ASR) (Mullen et al. 2015; Chang et al. 2018, 2019; Gabard-Durnam et al. 2018; Blum et al. 2019; Plechawska-Wojcik et al. 2019), which is an offline version of the data cleaning suite from BCILAB (Kothe and Makeig 2013) (see Supplementary Material 7 for detail), and independent component analysis (ICA) (Bell and Sejnowski 1995; Makeig et al. 1996, 1997, 2002). These 2 approaches are complementary. ASR uses sliding-window principal component analysis (PCA)-based subspace rejection and reconstruction, so it can address data nonstationarity such as infrequent, short-lasting bursts caused by, for example, touching electrodes. ICA, in contrast, uses more sophisticated, physiologically valid assumptions than PCA to find stationary, temporally maximally independent sources, including brain EEG sources as well as nonbrain artifact sources such as blinks, eye movements, and facial and neck muscle activation (Onton and Makeig 2006; Delorme et al. 2012). After preprocessing the scalp recordings with these 2 algorithms, we analyzed event-related spectral perturbation (ERSP) on each anatomically defined source cluster to investigate time-frequency-space decomposed EEG power dynamics related to blink suppression. For full details, see Supplementary Material 1.
Identifying Blinks
We developed an EEGLAB plugin countBlinks() for this project (available from https://sccn.ucsd.edu/eeglab/plugin_uploader/plugin_list_all.php) to manually annotate all the blinks during the tasks by visually examining the time-series data of the independent component (IC) representing blink/vertical eye movement. The principle in this blink identification is peak detection in the EOG-IC time series; hence, the annotated markers refer to the highest-amplitude moment, rather than the onset, of a blink. The solution does not use an algorithm; an annotator judged whether each highlighted blink-like EOG waveform (typically 0.5-1.0 s long) should be labeled as a blink or not. When the data showed stereotypical blink-induced waveforms, annotating 2-3 blinks per second was possible due to the efficient GUI design. Several automated algorithms were tested before developing our own solution, but their performance often turned out to be unsatisfactory, particularly during blocks with suppression conditions. This was probably because participants' physical effort to suppress the blinks prevented generation of stereotypical blink-induced EOG waveforms.
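Although blinks were annotated manually in this study, the stated principle (markers at the highest-amplitude moment of the blink IC) maps directly onto standard peak detection. The sketch below illustrates that principle on synthetic data; the prominence threshold is an assumed value and is not a parameter of the countBlinks() plugin.

```python
import numpy as np
from scipy.signal import find_peaks

def candidate_blink_peaks(eog_ic, srate, min_separation_s=0.5):
    """Propose blink markers as prominent peaks in a blink/vertical-EOG IC time series."""
    prominence = 3.0 * np.std(eog_ic)            # assumed threshold, not from the paper
    peaks, _ = find_peaks(eog_ic, prominence=prominence,
                          distance=int(min_separation_s * srate))
    return peaks

# Synthetic example: 10 s of noise with two blink-like deflections at 3 s and 7 s.
srate = 1000
t = np.arange(10 * srate) / srate
sig = 0.2 * np.random.randn(t.size)
for onset in (3.0, 7.0):
    sig += 5.0 * np.exp(-((t - onset) ** 2) / (2 * 0.05 ** 2))
print(candidate_blink_peaks(sig, srate) / srate)   # approximately [3., 7.]
```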
Thus, we were motivated to instead use manual annotation in an efficient way.
Statistical Testing
The full factorial design of the current study was 3 suppression conditions (No Supp, Supp, Supp Rwd) × 2 urge levels (Urge High, Urge Low), all within subjects. However, because urge was measured only for the suppression conditions, the No Supp condition did not have urge data. We determined 3 contrasts of interest: Contrast 1, main effect of Suppression; Contrast 2, main effect of Urge; Contrast 3, main effect of Reward (Fig. 1). Repeated measures ANOVAs were performed for Contrasts 1 and 3, and paired t-tests for Contrast 2, on each time-frequency pixel of the calculated ERSP tensor with the dimensions of 100 (frequencies, 1 to 55 Hz) × 252 (latency to blink ERP peak, −4030 to 1000 ms) × number_of_ICs (this varies from IC cluster to cluster) for 12 IC clusters. For multiple comparison correction across the 100 × 252 time-frequency points, weak family-wise error rate (wFWER) control was used (Groppe et al. 2011). t- or F-statistic values were computed for all time-frequency points and thresholded at P < 0.001 and P < 0.005 for Contrast 1 and Contrast 2, respectively. The true cluster mass, which is the sum of absolute t- or F-statistics within a time-frequency point cluster, was computed for each cluster. Next, data labels were shuffled, the same procedure was applied, and the largest cluster mass was taken to build a distribution of surrogate cluster masses. Finally, the 99.9th and 99.5th percentiles of the surrogate cluster mass distribution were used as threshold values for omnibus correction. Those true cluster mass entries that exceeded the threshold values were declared statistically significant after wFWER control.
Figure 1. The factorial design of the current study. There were 3 contrasts for which statistical tests were performed. Note that Contrasts 2 and 3 include only 2 suppression conditions because urge data were not collected during the "No Supp" condition.
Behavioral Data
The number of blinks was counted for each block and normalized into average counts per minute for each subject. The results were as follows: No Supp, M = 17.8 (SD 8.9); Supp, M = 10.7 (SD 6.2); Supp Rwd, M = 8.4 (SD 4.4). Paired t-tests across the 3 conditions confirmed a significant reduction of blinks in the order of No Supp, Supp, and Supp Rwd (all P < 0.001, Fig. 2). The result confirmed the validity of the experimental control over blink suppression. The distribution of other blinks relative to a blink is shown in Supplementary Material 4. The grand-mean urgeometer time series (±1 SD) plotted separately for High and Low Urge conditions indicates that the urge peak was reached slightly earlier than the EOG-ERP peak latency. The peak latency for Urge High was found at −0.4 s relative to the blink EOG-ERP peak. Next, the elbow point of the rising curve up to the peak was obtained using a two-line fitting bisection method to find the point where the residual from the two-line fitting is minimized. Relative to the EOG-ERP peak, the elbow point was found at −1.8 s. The result indicated that the urge increase rate is nonlinear, and it became steeper after −1.8 s. Finally, Urge Low showed a flat pattern, indicating that about half of the blinks (i.e., suppression failures) may have occurred with little to no urge experienced by participants.
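The elbow-point estimate can be reproduced with a simple two-line fit. The sketch below uses an exhaustive breakpoint search as a stand-in for the bisection procedure described above, applied to a synthetic urge trace with a built-in break at −1.8 s; the trace and its noise level are invented for illustration.

```python
import numpy as np

def elbow_point(t, urge):
    """Elbow of a rising urge curve: fit two line segments and pick the breakpoint
    that minimizes the total squared residual (exhaustive stand-in for bisection)."""
    best_idx, best_err = None, np.inf
    for k in range(2, len(t) - 2):
        err = 0.0
        for sl in (slice(0, k + 1), slice(k, len(t))):
            coef = np.polyfit(t[sl], urge[sl], 1)
            err += np.sum((urge[sl] - np.polyval(coef, t[sl])) ** 2)
        if err < best_err:
            best_idx, best_err = k, err
    return t[best_idx]

# Synthetic urge trace: flat until -1.8 s, then rising toward a peak near -0.4 s.
t = np.linspace(-4.0, -0.4, 100)
urge = np.where(t < -1.8, 0.05, (t + 1.8) / 1.4)
print(round(elbow_point(t, urge + 0.01 * np.random.randn(t.size)), 2))  # close to -1.8
```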
The results are summarized in Supplementary Materials 2 and 3, respectively. General Descriptive Statistics about Preprocessing, ICA-Decomposed EEG, and Multiple Comparison Correction The total amount of variance reduction after all the preprocessing was percent variance accounted for (PVAF) reduction, M = 99.7%, SD = 0.3, and range 98.6-99.9. This PVAF difference is the result from the following 2 stages of signal processing: reduction to 1.5-55 Hz bandpass filtering and ASR (M = 98.4, SD = 2.5, and range 88.5-99.9); reduction due to the subsequent IC rejection (M = 74.4, SD = 13.3, and range 38.5-96.7). . Cluster-mean scalp topography, power spectral density, and event-related spectral perturbation (ERSP) for each of the 12 clusters determined by Silhouette analysis and averaged across all the conditions. This figure shows a general outline of the whole-brain data right after group-level independent component (IC) clustering. The graph scales are identical across the clusters. In the time-frequency plots, baseline period is indicated as a black line between −4 and −3 s relative to blink onset. Ss, subjects; ICs, independent components. For the group-level analysis, 910/3224 qualified brain ICs were selected from the final sample of 26 participants who showed more than 20 blinks for No Supp and 10 blinks for Supp and Supp Rwd blocks, respectively. The number of brain ICs contributed by individual subjects was M = 35.0 (SD = 13.8, range 10-61). The optimum numbers of IC clusters based on the spatial coordinates of the dipoles were 12 and 14 for Silhouette and Davies-Bouldin, respectively. Calinski-Harabasz did not show an optimum point. To increase the chance of obtaining a higher number of unique subjects per cluster, we chose to generate 12 IC clusters. Mean scalp topography, power spectral density, and event-related spectral perturbation (ERSP) within the cluster and across all the conditions are shown in Figure 3 to show a general outline of the group-level clustered ICs. Main Effect Suppression The statistical test on the main effect Suppression revealed that the IC cluster localized near the left prefrontal cluster differentiated No Supp versus Supp (with or without Rwd) (Fig. 4). The location corresponds with previously reported DLPFC activation during eye-blink inhibition (Mazzone et al. 2010). The timefrequency analysis revealed theta-band (4-8 Hz) power increase for suppression conditions that started approximately −1.5 s prior to blink, which is a failure of blink suppression. These results may reflect increased effort to suppress blinks against increasing urge. Thus, we replicated anatomical location from the previous fMRI study, and furthermore succeeded in characterizing the modulation of brain dynamics as elevation of theta power during suppression with subsecond time resolution. The same comparison for the rest of the IC clusters are shown in Supplementary Material 6. Main Effect Urge The statistical test on the main effect Urge revealed that the IC cluster localized near the anterior cingulate cluster differentiated Urge Low versus Urge High (Fig. 5). The location corresponds with a previously reported ACC activation during eye-blink inhibition (Mazzone et al. 2010). The time-frequency analysis revealed that subjective sense of urge was associated with power decrease in delta band (1-4 Hz) starting from cluster. The contour mask in the time-frequency plots indicates P < 0.001 after controlling weak family-wise error rate (wFWER). 
Top row, ERSP for No Supp, Supp, and Supp Rwd. Baseline period is indicated as a black line between −4 and −3 s relative to blink onset. Bottom left, cluster-mean IC scalp topography. Bottom center, clustermean dipole density with FWHM = 20 mm and the centroid coordinate in the MNI template head. Bottom right, the mean ERSP values with SE within the significance mask compared across conditions. * * * P < 0.001. 1 s prior to blinks. When we compare this delta-band ERSP suppression with the time-course of the urgeometer data for Urge High, we notice that the nonsignificant left tail of the delta-band suppression in Urge High starting from −3 s may be corresponding to a gradual increase of urgeometer values that started from −4 s. Also, the elbow point determined in the urgeometer data for Urge High (−1.8 s) seems to precede the ERSP difference (−1 s), but it is positioned in the middle of long left tail of this early nonsignificant portion of the continuum. Closer inspection of the significance mask indicates that the midpoint of the mask in time is not on zero but a few hundred milliseconds prior to zero, which may correspond to the fact that the urgometer peak was registered at −0.4 s. The significance threshold of P < 0.005 is arbitrary, and as such, exact agreement between the behavioral data and the EEG data in their timecourses may or may not occur; however, it is possible to see, in a general sense, the temporal correspondence between the urgeometer behavioral data and EEG modulations. The same comparison for the rest of the IC clusters is shown in Supplementary Material 7. Interaction Urge and Reward The same ACC cluster that showed the main effect of Urge reported above also showed significant interaction between Urge and Reward. While suppression of the delta band (1-4 Hz) power was associated with higher urge, the introduction of reward diminished this difference between Urge High and Urge Low; the results are shown in Figure 6. Interestingly, the significance masks from the urge (Fig. 5) and urge × reward (Fig. 6) do not overlap and the latency starts 2 s earlier in the latter analysis. This suggests that offering a Reward for successful suppression equalizes the response of the ACC region, regardless of urge intensity. It is also noteworthy that the significant interaction continued after blink onset, indicating that the ACC region may also be involved in postblink (i.e., suppression failure) processing, such as monitoring and evaluation. For interest, in order to minimize the effect of postblink brain dynamics, we truncated the mask at 0 s and performed the same statistics. The result still showed the same pattern as shown in Figure 6 bottom right, confirming that the obtained result is valid for the suppression period (data not shown). When using weak FWER correction, this operation violates the assumption of the cluster-level correction, so this test is limited to being a confirmatory process only. Discussion In the current study, we asked 3 research questions: 1) How does urge develop? 2) What are the EEG correlates of blink suppression? 3) How does reward change brain dynamics related to urge suppression? Let us describe the answers to each of these questions: 1) There are at least 2 subtypes of urge development, Urge High and Urge Low. 
Urge High trials showed a well-defined waveform that starts to rise −5 s relative to blink, while Urge Low trials did not show much modulation; 2) Blink suppression was associated with EEG theta band power increase near or in the left dorsolateral prefrontal cortex (DLPFC); and 3) Reward suppressed urge-related EEG delta band power decrease near or in the anterior cingulate cortex (ACC). Below, we will discuss details and significance of the results. Our results showed that trials grouped as Urge High showed a relatively slow time constant that started to rise from −5 s before a blink. At −1.8 s, the increase became steeper. At −0.4 s, the urge reached the peak. Around 2 s, it returned to baseline, and subsequently decreased below baseline (Fig. 2). In a prior fMRI study, a 1-min block-wise hemodynamic response model with linear increase toward the urge peak was used as a block-design regressor (Berman et al. 2012). However, this temporal model was designed heuristically and was not supported by empirical data. As far as we know, this is the first data to show the time course of building urge leading to a suppression failure. We also found nonlinearity of the urge increase, with which we may be able to model building urge more realistically. The result not only improves our understanding of urge time course but the temporal kernel we obtained in this study may be used in fMRI studies to estimate BOLD signal changes correlated with internal urge dynamics. The separation of Urge High and Urge Low, defined by singletrial correlation to their mean value, was an ad-hoc decision as a part of data mining. The validity of this decision can be argued for 2 reasons. The first reason is that the relation between urge and suppression failure is not necessarily established. In a study of tic suppression, which is considered an analogue of blink suppression (van der Salm et al. 2018), it was reported that subjective ability to self-monitor urge increased with age (Banaschewski et al. 2003). This suggests that younger children may not have developed the ability to monitor urge, and failure of suppression may suddenly happen before becoming aware of the urge. Under this hypothetical uncertainty on reliability of self-report, separating single trials of suppression failures into subgroups of with and without self-report seems a valid first step to analyze the behavioral variance. The second reason is that the urgeometer time-series data for Urge Low and High indeed became separated into 2 distinguishable curves. The result plot seems to support the possibility that the (hidden) distribution of urge across single trials is rather binary, urge present or absent, than Gaussian. Note that the joystick we used may have had relatively small range of angle between neutral and maximum stick tilt, which could have made analogue resolution of the selfreported urge value limited. However, even if the input from the analogue joystick was effectively used as binary input, their statistical distribution across trials and participants should still be able to be studied as continuous probabilistic distribution. The rather binary urge distribution, which seems to have effectively 2 status, namely on and off, also seems to explain why taking Figure 6. Event-related spectral perturbation (ERSP) plots for interaction Urge and Reward in the anterior cingulate independent component (IC) cluster. The contour mask in the time-frequency plots indicates P < 0.001 after controlling weak family-wise error rate (wFWER). 
Left 2 columns indicate ERSPs for the 2 × 2 conditions. Baseline period is indicated as a black line between −4 and −3 s relative to blink onset. Bottom left, cluster-mean IC scalp topography. Top right, cluster-mean dipole density with FWHM = 20 mm and the centroid coordinate in the MNI template head. Bottom right, the mean ERSP values with SE within the significance mask compared across conditions. * * * P < 0.001. UL, Urge Low; UH, Urge High. a simple mean across all the trials is not a good idea here. Our exploratory blink ERP analysis also showed lower peak amplitude in blink ERP, suggesting that blink behavior could be different when urge levels are different. It leads us to speculate that blinks with low urge may be produced in more involuntary and reflexive way, hence they were faster and lighter than blinks with higher urge. Future studies on heterogeneity of single-trial self-reported urge expression with different age groups is awaited. In the ERSP analysis, we focused on the preblink period during which blink suppression was still successfully maintained but about to collapse in a few seconds. The left prefrontal region showed distinctive EEG power increase prior to the blink during suppression conditions. The involvement of prefrontal regions (dorsolateral prefrontal cortex, DLPFC) in voluntary inhibition task has been reported repeatedly (Lerner et al. 2009;Mazzone et al. 2010;Aron et al. 2014). Our finding suggest that left prefrontal power increase is one of the EEG correlates of behavioral suppression, which is in line with these neuroimaging studies. Furthermore, our result provides rich time-frequency information. For example, we found this elevation started about −1.5 s to the blink with the present threshold. The urgeometer data showed that the urge increase rate changed at around −1.8 s, which seems to fit well with the ERSP time course. The data also showed that the EEG power increase was in the theta band (4-8 Hz), which suggests functional separation from, for example, the ACC region that showed EEG power decrease in the delta band (1-4 Hz) during the overlapping preblink period. Analysis on main effect Urge revealed involvement of regions near anterior cingulate cortex (ACC), and Urge High was associated with deeper EEG power suppression compared with baseline period than Urge Low. ACC has been associated with various types of urges such as itch (Hsieh et al. 1994), voiding of urine (Kuhtz-Buschbeck et al. 2005;Griffiths et al. 2007), coughing (Mazzone et al. 2007;Leech et al. 2013), and smoking (Brody et al. 2004). Importantly, the same ACC cluster showed that subjective urge was modulated by availability of reward. This result was in line with our prediction that ACC is involved in subjective feeling, response coordination, self-monitoring, assessment of motivational valence, and initiation of motor actions (Medford and Critchley 2010). ACC has been known to be a region where regulatory and executive processes interact (Paus 2001). Involvement of ACC was also reported in a previous blink suppression study (Lerner et al. 2009) and antisaccade study (Milea et al. 2003). Not only did our scalp-recorded EEG results replicate these findings, our results showed for the first time subsecond temporal dynamics of how reward availability changes brain dynamics during subjective urge in the ACC region. 
The pattern of the interaction indicated that when reward is available, the urgerelated ERSP power decrease was equalized between Urge Low and Urge High compared with the no reward condition. This may indicate that enhanced motivation by reward availability worked as a reinforcer of the top-down control over urge. This view seems to be in harmony with a network view of ACC together with insula, which we will discuss below. ACC and insula are functionally closely related to each other. Both ACC and insula commonly contain von Economo neurons (Allman et al. 2010), which are large bipolar neurons that are unique to these regions and also unique to great apes and humans. There is anatomical evidence that ACC, specifically Brodmann area 24 here, has reciprocal connection with insular cortex (Mesulam and Mufson 1982;Vogt and Pandya 1987), and this connection may be mediated by von Economo neurons (Craig 2009). Thanks to this reciprocity, not only does insula integrate sensory information to generate awareness, which is then transferred to ACC for evaluating with various other information, making decisions, and initiating motor commands (feedforward connection), but also the result of the processing in ACC may be back-projected to insula to modulate how subjective awareness is formed (Medford and Critchley 2010) (feedback connection). This view seems to be supported by empirical evidence that the placebo effect for antitussive therapies is generally substantial, but it turned out to be associated with modulation in activation of a cortical network including ACC and insula (Leech et al. 2013). The result indicates that the ACC-insula network was one of the major brain regions that received modulation just by top-down belief that lead to change in behavior. We speculate that reward availability in our study might be related to the same network, and ACC may have played a critical role in changing the behavior of blink suppression when reward was available. The current finding may have clinical value for, for example, designing a behavioral training program for children with Tourette's syndrome (Woods and Himle 2004;Greene et al. 2015). Our results provided evidence of neural substrata underlying the behavioral suppression. Together with other literature of neuroimaging studies, our results can provide spatio-temporally resolved neural mechanism of behavioral suppression. Particularly, the parallel time courses of behavioral and electrophysiological dynamics toward suppression failure we showed in this study seem capable of providing a spatio-temporal target for treatment using transcranial magnetic stimulation (TMS) for tic patients. Future studies toward this direction is awaited. Limitation The results of this study are the first of their kind and as such should be considered preliminary until independent replication occurs. Additional limitations are noted as follows. The presence of blink and related muscle artifact in EEG recording typically creates critical limitation. We addressed this issue by using 2 approaches. One of the approaches was to set the main time window of analysis prior to the blink onset. In fact, eye blink in this study indicates the offset of the time window of interest. The other approach was to employ independent component (IC) modeling approach (Onton and Makeig 2006) rather than scalp electrode signal analysis. We performed a post hoc simulation study, which can be found in Supplementary Material 5. The use of the urgeometer in the current study can be argued. 
The ability to self-monitor urge depends on age (Banaschewski et al. 2003). In addition, using the urgeometer may impose multitasking of self-monitoring and motor execution, which may have interfered with suppression performance and associated brain dynamics. Care needs to be taken when we interpret the ERSP differences between No Supp (no urgeometer use) and other Supp conditions (urgeometer used). We relied on suppression-breaking blinks to define the suppression period. For this reason, we needed to exclude several participants with fewer number of blinks. It may seem possible to define "successful suppression" when urgeometer showed a high value but no blink followed. But this approach has 2 problems: 1) we do not know whether urge could disappear with continued suppression and 2) an additional condition is necessary to consider in which the urgeometer is used irrespective of urge to counterbalance the brain dynamics related to motor planning and execution. Future studies with the suggested conditions may be helpful to validate the use of the urgeometer. Due to known dependencies of scalp EEG recording on cytoarchitecture, source distance and geometry (Nunez and Srinivasan 2006), our main results were limited to cortical sources close to the surface, and contribution of deeper sources such as insula was not detected. Moreover, in the case of insula, it is also reported that active source area spreads rapidly to the surrounding structures (Sun et al. 2015), which makes it difficult to form a temporally stable active cortical patch that can be detected at scalp recording. In epilepsy studies, differentiation of insular seizures from temporal, parietal, and frontal lobe seizures (Isnard et al. 2004;Nguyen et al. 2009) was not possible, suggesting that both interictal and ictal recordings might fail to display epileptiform discharges for insular seizures (Desai et al. 2011;Ryvlin and Picard 2017). Thus, it is generally hard to obtain insular activity with scalp EEG recording. For interpreting the current results, however, there are a large number of anatomical and neuroimaging studies on the insula and ACC-insula network. As we demonstrated above, using the wealth of literature to interpolate the lack of insular and basal contributions seems necessary to interpret the current EEG result in the context of neuroimaging studies on blink suppression. Conclusion We demonstrated that 1) blink suppression was associated with EEG theta band power increase near or in the left DLPFC; 2) reward improved suppression performance, and Reward suppressed urge-related EEG delta band power decrease near or in the ACC; and 3) real-time self-reported urge has single-peaked time-course longer than 7 s (and peaking at −.4 s), but this applies only to half of failed suppression trials. Supplementary Material Supplementary material can be found at Cerebral Cortex Communications online.
2020-08-13T10:09:56.273Z
2020-08-05T00:00:00.000
{ "year": 2020, "sha1": "bea23fc47452c3f383274b9e9e2d828b16178440", "oa_license": "CCBY", "oa_url": "https://academic.oup.com/cercorcomms/article-pdf/1/1/tgaa046/37949963/tgaa046.pdf", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "bf98fe53d249c77559f3784adad05da1dec0ea93", "s2fieldsofstudy": [ "Psychology" ], "extfieldsofstudy": [ "Medicine", "Psychology" ] }
249265598
pes2o/s2orc
v3-fos-license
Latest Advances in the Implementation and Characterization of High-K Gate Dielectrics in SiC Power MOSFETs Recently high-k gate dielectrics for SiC power MOSFETs attracted increasing research interest thanks to promising results related to improved specific channel resistances and threshold voltage stability. We investigated high-k gate stacks for 1.2kV and 3.3kV SiC power MOSFETs regarding on-state performance and stability during high temperature gate bias tests. Furthermore, we studied the high-k/SiC interface quality and the effect of burn-in pulses using SiC MOSCAPs. High-k SiC power MOSFETs show significant improvement in on-state performance and threshold voltage stability. We found that the burn-in pulses can be shorter for high-k gate dielectrics compared to SiO2-based devices. Introduction Nowadays, power electronics is undergoing an exciting and profound technology shift driven by the steadily growing demand for energy of our digital society and the urgent requirement for low carbon emission transport infrastructures. Si based power electronics reaches its performance limits regarding high energy-efficient power converters for e-mobility and renewable energy applications. SiC MOSFETs have entered the power devices arena and are the frontrunners to replace traditional Si IGBT technology due to their higher breakdown voltage and thermal conductivity. Despite their successful market entry, several challenges that are strongly connected to the state-of-the-art gate stack technology are still to be solved though in order to fully exploit the enormous potential of SiC power MOSFETs. Conventional SiO2 gate oxides for example suffer from highly defective oxide/SiC interfaces with interface state densities (DIT) of in the order of ∼10 13 eV −1 cm −2 [1][2][3]. These defect levels are significantly higher than what it is typically found in SiO2/Si systems (∼10 10 eV −1 cm −2 ) [4]. Thus, the field-effect mobility of SiC MOSFET, which is strongly related to the Dit, is limited by the carrier trapping effect of the interface states, even after nitridation [5]. The origin of such high DIT is not yet fully understood, but there is a general consensus that it might originate from the presence of C in the SiO2 or at the interface [6]. Additionally, the threshold voltage stability and drift are unsolved problems in SiC MOS technology. We have developed a novel MOS gate stack technology based on high-k dielectrics for power electronic devices between 1.2kV and 3.3kV. The concept of integrating high-k dielectrics in SiC power devices has been investigated since the late 1990s, however, it is only recently that the rather low temperature processes needed for high-k have been successfully combined with the high thermal budgets required for SiC. It is becoming increasingly evident that the predicted device performance gain has not only been experimentally confirmed, but is also featured by other concomitant phenomena, which, for instance, impact on burnin behavior and other effects caused by defects. In this paper, we review the performance gain and threshold voltage stability in low and medium voltage SiC power MOSFETs, while focusing on the key features of an improved dielectric interface. Experimental Details Thermally oxidized and high-k MOS capacitors were fabricated by thermal oxidation in O2 ambient and state-of-the-art dielectric deposition technique without oxidizing the underlaying SiC interface, respectively. 
Electrically characterization by C-V and deep level transient spectroscopy was performed. C-V measurements were performed at RT (1 MHz), whereas deep level transient spectroscopy, in constant capacitance mode (CC-DLTS), was performed in the 77-750 K temperature range. CC-DLTS measurements were carried out at a fixed frequency of 1 MHz, by using a reverse bias (VR) of -15 V and by keeping a constant pulse height (VH) value, with a filling pulse of 1 ms. Another set of measurements was done by replacing the electrical pulse with an UV optical pulse of 100 ms (30 mW, 365 nm), e.g. minority carrier transient spectroscopy (MCTS). The CC-DLTS/MCTS signal was then converted into an energy distribution of the density of interface traps (DIT). This was done assuming that the DIT is weakly dependent of the energy and that the capture cross section does not depend on the temperature and energy [5]. A gate stress bias (burn-in) of 20 V was applied to either thermal oxide and high-k MOS capacitors, at 150, 175 and 200 °C, for different time durations. The flat band voltage (VFB) was extracted from the C-V measurements, by using the second derivative of the relation between the capacitance C and the gate voltage VG [7]. Finally, we fabricated planar high-k SiC power MOSFETs for repetitive switching experiments to investigate the threshold voltage stability under harsh dynamic conditions. These devices were soldered on copper substrates, wire bonded and measured in a double pulse tester with a stray inductance of 60nH. Figure 1 shows the static gate voltage hysteresis comparing both dielectric layers, also indicating the effect of N2O annealing on the SiO2/SiC interface. Apparently, the N2O process does not improve the hysteresis effect of the device. The high-k, however, shows no hysteresis up to 20V that underlines the improved threshold stability still representing one of the greatest challenges for SiC devices and applications. Figure 2 indicates the comparison of the extracted density of interface states (Dit) using CC-DLTS of high-k (left) and SiO2 (right) gate stacks. Here, voltage pulses at different start voltages (3-30V) for the CV characteristics of the MOS capacitor samples were studied. Higher trap densities 384 Silicon Carbide and Related Materials 2021 are associated to larger voltage sweeps in both cases. However, the trap density for the high-k gate stacks is more than one order of lower compared to the SiO2-based stack. Notably, the pulse voltage dependence drops significantly towards mid gap. We also investigated whether the improved highk/SiC interface quality does also affect the burn-in behavior. Figure 3 shows the flat band voltage shift (∆V) as function of the burn-in time at various temperatures. The voltage shift saturates at approx. +2.5V for all temperatures for the SiO2 stack. This saturation level is reached faster, i.e. at shorter burn-in times, with increasing temperature. Similar behavior was observed for the high-k dielectric, although the saturation voltage is lower. Figure 4 and 5 display the interface density extracted using CC-DLTS and MCTS for a wide range of the bandgap. We examined as deposited devices and samples after burn-in pulses at 200°C, for thermal oxide and high-k dielectrics. As it can be seen, the DIT close to the conduction band is considerably higher for the thermal oxide compared to the high-k dielectric. 
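As a side note on the flat-band voltage extraction mentioned above, a minimal numerical sketch is given below. It is written in Python/NumPy, lightly smooths a 1 MHz C-V trace, and locates the extremum of the numerical second derivative of C with respect to VG. This is only one plausible implementation of the criterion quoted from [7]; the smoothing and peak-picking choices are assumptions, and the example curve is synthetic rather than measured data.

import numpy as np

def flatband_from_cv(v_gate, cap, smooth_pts=7):
    """Estimate the flat-band voltage from a C-V sweep via the second derivative of C(VG).
    v_gate : gate voltage samples (V), monotonically increasing
    cap    : measured capacitance (F), e.g. at 1 MHz
    Returns the gate voltage at which |d2C/dV2| is largest, taken here as a simple flat-band estimate.
    """
    kernel = np.ones(smooth_pts) / smooth_pts                 # light moving-average smoothing
    c_smooth = np.convolve(cap, kernel, mode="same")
    d1 = np.gradient(c_smooth, v_gate)                        # dC/dV
    d2 = np.gradient(d1, v_gate)                              # d2C/dV2
    core = slice(smooth_pts, len(v_gate) - smooth_pts)        # skip smoothing edge artifacts
    return v_gate[core][np.argmax(np.abs(d2[core]))]

# Synthetic accumulation-to-depletion transition (toy shape only, not a physical model)
v = np.linspace(-5.0, 15.0, 401)
c = 1e-10 / np.sqrt(1.0 + np.exp(-(v - 1.0) / 0.8))
print("estimated flat-band voltage:", round(flatband_from_cv(v, c), 2), "V")

Returning to the interface-trap spectra of Figures 4 and 5, the clear excess of near-conduction-band DIT in the thermal oxide relative to the high-k stack deserves comment.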
This might be due to the fact that in the thermal oxide MOS capacitor, the presence of defects in the SiO2 that give rise to a DIT distribution close to the conduction band [8] has to be taken into consideration. Close to midgap the two dielectrics show a broad DIT distribution. Despite the superior high-k/SiC interface quality, this peak was not influenced by the choice of dielectric and its corresponding process. Thus, we assume that the defect traps are in the SiC epi layer rather than at the interface. After the burn-in stress, it can be noted that the DIT close to the conduction band was not affected (Figure 4 and 5). On the other hand, we observe a slight decrease in DIT for the thermal oxide samples. The burn-in pulses do not show any effect on the DIT for the high-k samples. In the following we will provide a short discussion about the microscopic origin of such mid-gap defect distribution. First, it should be noted, that no final conclusions can be drawn based on the presented results and that further investigations are needed. Nonetheless, some hypotheses can be put forward to explain such broad distributions. In their study, Kobayashi et al. [9] have analyzed different defects, e.g., carbon dimers in the SiO2, at the interface and in SiC, in a SiO2/SiC system, by means of density functional theory. Among the investigated defects, the most promising might be the dicarbon antisite ((C2)Si), located in the SiC crystal giving rise to several electrical active levels in the band gap. As a matter of fact, (C2)Si is responsible for two very close electrically active levels at midgap, at EC-1.6 eV and EC-1.7 eV [9]. Since DLTS is generally not suitable for separating two (or more) similar emission rates, it can be suggested that such broad DIT might originate from these two closely spaced levels. Possibly, a Laplace DLTS investigation might be useful to separate both contributions. In the insets of Figure 4 and 5, we report the extracted depth profiles of the CC-DLTS signals at ~750K (thermal oxide) and ~400K (high-k). The depth distribution of both signals is located at least within the first hundreds of nanometers of the epitaxial layer, meaning that the defects responsible Silicon Carbide and Related Materials 2021 for this DIT are rather close to the surface. The concentration, however, is in the range of the drift doping. Thus, it must be assumed that a change due to burn in, as we have seen, has a significant effect on the threshold and flatband voltage of the device. Figure 6 shows the static on-state characteristics of 3.3kV and 1.2kV SiC power MOSFETs with high-k and standard silicon dioxide gate stacks and pitches of 14um. The channel length amounts to 250nm. While the high-k gate stack was deposited by a state-of-the-art deposition technique commonly used for example in CMOS technology, the silicon dioxide was formed by high temperature thermal oxidation of a deposited CVD layer and a high temperature post oxidation nitridation (PON) in N2O ambient. As can be seen in the graph, both voltage classes gain significantly in performance. Although the performance of higher voltage devices depends substantially on the epi resistance, the implementation of high-k leads to a reduction of the on-state by more than 35%. Figure 7 shows the blocking characteristics of the same representative devices from Figure 6. In both voltage classes the high-k dielectric efficiently reduces the reverse leakage by approx. one order of magnitude. 
Apparently, the channel suffers from drain-induced barrier lowering effects, which can be suppressed by the larger permittivity and thus the electric field in the SiC. Moreover, the Vth stability extracted from dynamic characterization is depicted in Fig. 8. The extracted dV/dt of the turn-on waveforms over up to 200k stress cycles at 2x INOM, using a wide gate voltage swing of VGS = +/-15V, does not show any signs of degradation. Finally, we performed high temperature gate bias (HTGB) tests on 1.2kV SiC power MOSFETs with high-k and SiO2-based gate stacks. The devices were tested for 100-120 hours at 175°C using a gate voltage of VGS = 20V. After each 20-hour gate voltage stress phase we measured the transfer curves, as can be seen in Figure 9, and extracted the threshold voltage drift (cf. insets of Figure 9). The threshold voltage drift of the high-k devices is considerably lower, i.e. below 100mV, compared to the SiO2-based devices. For the latter we observed a maximum threshold voltage drift of 500mV after an accumulated HTGB stress of 100 hours. Summary In this paper we presented the latest advances of our high-k gate stack technology for vertical SiC power MOSFETs, including a thorough study of the SiC/dielectric interface quality using C-V, DLTS and MCTS measurements. Additionally, we studied the effect of burn-in voltage pulses on the flatband voltage stability of high-k SiC MOSCAPs. We found that the flatband voltage saturates at shorter burn-in times for increasing temperature. This saturation is reached even earlier for the high-k gate stacks. 1.2kV and 3.3kV SiC power MOSFETs show a significantly improved on-state performance compared to SiO2-based control devices and indicate a lower threshold voltage drift. Furthermore, it was demonstrated that the applied stress voltage results in a change of the defect concentration.
2022-06-02T15:14:49.625Z
2022-05-31T00:00:00.000
{ "year": 2022, "sha1": "86d96dbb58fef0236269d8a77f155c4066c3aaeb", "oa_license": "CCBY", "oa_url": "https://www.scientific.net/MSF.1062.383.pdf", "oa_status": "HYBRID", "pdf_src": "ScientificNet", "pdf_hash": "caad5911383b8f65a12494f644e6592b086a54fe", "s2fieldsofstudy": [ "Engineering" ], "extfieldsofstudy": [] }
253224166
pes2o/s2orc
v3-fos-license
Semi-UFormer: Semi-supervised Uncertainty-aware Transformer for Image Dehazing Image dehazing is fundamental yet not well-solved in computer vision. Most cutting-edge models are trained in synthetic data, leading to the poor performance on real-world hazy scenarios. Besides, they commonly give deterministic dehazed images while neglecting to mine their uncertainty. To bridge the domain gap and enhance the dehazing performance, we propose a novel semi-supervised uncertainty-aware transformer network, called Semi-UFormer. Semi-UFormer can well leverage both the real-world hazy images and their uncertainty guidance information. Specifically, Semi-UFormer builds itself on the knowledge distillation framework. Such teacher-student networks effectively absorb real-world haze information for quality dehazing. Furthermore, an uncertainty estimation block is introduced into the model to estimate the pixel uncertainty representations, which is then used as a guidance signal to help the student network produce haze-free images more accurately. Extensive experiments demonstrate that Semi-UFormer generalizes well from synthetic to real-world images. INTRODUCTION Images captured in hazy weather often suffer from noticeable visibility degradation, color distortions, and contrast reduction, which further drop the performance of downstream vision-based systems such as object detection, autonomous driving, and traffic surveillance [1,2]. Therefore, restoring clean images from their hazy versions is quite important. Existing dehazing efforts can be roughly classified into prior-based [3,4] and learning-based approaches [5,6]. Traditional prior-based studies often exploit hand-crafted image priors to solve the image dehazing problem based on the atmospheric scattering model [7], such as dark channel prior (DCP) [3], non-local color prior [4], etc. Although these algorithms can improve the overall visibility of the image, they are not always reliable due to over-reliance on assumptions. Recent advances in deep learning open up huge opportunities for image dehazing tasks, and a large number of learning-based approaches have sprung up [5,6,8,9]. While these algorithms are efficient and can produce promising results on various popular benchmarks, most of them are trained on synthetic data, so they cannot generalize well to real-world scenes due to the existence of domain gap. Recently, several semi-supervised [10,11] and unsupervised [12,13] methods have attempted to solve the domain shift issue via training models on real-world images. Although these approaches alleviate the domain shift issue to some extent, their dehazing capacity is usually limited because the ground-truth of realworld hazy images cannot be used as the reconstruction loss to constrain the training of the network. In addition, most learning-based dehazing efforts only produce the final dehazed images without discussing the uncertainty of the results, which is important for recovering edge and texture regions in hazy images. Furthermore, current dehazing models typically treat all pixels equally, but pixels in edge and texture regions obviously contain more visual information than pixels in smooth regions [14], and such pixels tend to have a high degree of uncertainty. Therefore, accurate estimation and appropriate use of image uncertainty can guide the network to focus on pixels with large uncertainty, so that pixels in specific regions can be enhanced to improve the quality of the final dehazed images. 
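To make this idea concrete before introducing the method, a minimal PyTorch-style sketch of an uncertainty-guided reconstruction loss is given below. It is a generic illustration of weighting the per-pixel error by a predicted uncertainty map so that high-uncertainty (edge and texture) pixels contribute more; the weighting form, normalization, and tensor names are illustrative assumptions and are not the exact loss formulations of Semi-UFormer defined later in the paper.

import torch

def uncertainty_guided_l1(pred, target, theta, eps=1e-6):
    """Generic uncertainty-guided reconstruction loss.
    pred, target : dehazed and ground-truth images, shape (B, 3, H, W)
    theta        : positive per-pixel uncertainty map, shape (B, 1, H, W)
    Pixels with larger theta receive larger weight, pushing the network to focus
    on the regions it is least certain about (typically edges and textures).
    """
    weight = theta / (theta.mean(dim=(1, 2, 3), keepdim=True) + eps)   # per-image normalization
    return (weight * (pred - target).abs()).mean()

# toy usage with random tensors
pred = torch.rand(2, 3, 64, 64)
target = torch.rand(2, 3, 64, 64)
theta = torch.rand(2, 1, 64, 64) + 0.1
print(uncertainty_guided_l1(pred, target, theta))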
To resolve these issues, we propose a novel semi-supervised uncertainty-aware transformer network (Semi-UFormer) for image dehazing. Semi-UFormer builds itself on the knowledge distillation framework and benefits from uncertainty guidance information, thus producing much clearer images with well-preserved details. In summary, our contributions are three-fold: (1) A novel semi-supervised uncertaintyaware transformer network called Semi-UFormer is proposed for image dehazing, which leverages both real-world data and uncertainty guidance information to boost the model's dehazing ability. (2) An uncertainty estimation block is exploited to predict the epistemic uncertainty of the dehazed images, which is then used to guide the network to better reconstruct the image texture and edge regions. (3) We leverage knowledge distillation technology to align the feature distributions between synthetic and real data, which can help the network generalize well in real-world scenarios. SEMI-UFORMER Beyond existing image dehazing wisdom, Semi-UFormer fully explores knowledge distillation technology and uncertainty guidance information to help the network produce much clearer images with more confidence. Fig. 1 exhibited the overview of our Semi-UFormer, where the teacher and student network share the same architecture. We first train the teacher network on both synthetic and real-world data to produce the coarse dehazed results and pixel uncertainty map, θ. Then, the student network produces fine dehazed results with the help of uncertainty-guided information θ and knowledge distillation strategies. The specific dehazing process is described in the following. In the following, we will detail the individual network modules in our Semi-UFormer. Teacher-student Network with Knowledge Distillation Unlike existing image dehazing networks that are exclusively trained on synthetic data, we serve the teacher-student network as the semi-supervised framework and utilize knowledge distillation to migrate supervised dehazing knowledge to unsupervised dehazing. Teacher-student network. The training phase of Semi-UFormer can be divided into two stages, as depicted in Fig. 1. In stage 1, the teacher network is trained on both synthetic and real data to estimate uncertainty information and coarse dehazed images, where the supervised branch plays a leading role. In stage 2, we first leverage the weights of teacher network to initialize the student model. Then, with the help of the teacher network, the student model retrains on both data and exploits knowledge distillation for better generalization in real-world scenes, where the unsupervised branch plays a leading role. At the same time, the uncertainty θ supplied by the teacher model is used as additional guidance information to teach the student how to produce fine dehazed images, while the network for estimating uncertainty θ is frozen. Knowledge Distillation. Considering similar images tend to demonstrate correlation at the high-dimensional feature level, we leverage the teacher-student framework to extract the high-dimensional features of synthetic and real haze. Then, we minimize the KL divergence between these two features to reduce the gap between synthetic and real-world data [15], to help the student model apply the supervised dehazing knowledge to unsupervised haze removal. Transformer-based Dehazing Network Typically, real-world images follow a definite rule and reflect global properties such as contrast ratio and sparsity of dark channel. 
To capture global information and perform accurate dehazing, we introduced the Dehazeformer block into our network because of the excellent dehazing abilities of the Dehazeformer-Net [9]. Additionally, to reduce computational costs, the detailed parameters of the Dehazeformer block in this paper are referred to the Dehazeformer-Small in [9]. As exhibited in Fig. 2, the Transformer-based dehazing network is an enhanced 5-stage U-Net, which consists of three modules: a shallow feature extraction, a Mix DehazeFomer Block, and a reconstruction module. Conv layer Pixel shuffle Fig. 2. Overview of our Transformer-based dehazing network. Mix dehaze Block Shallow feature extraction. A 3 × 3 convolutional layer C 3 (.) is first applied to extract shallow feature information from the hazy image I R H×W ×3 : Mix DehazeFormer Block (MDB). Next, feature F shallow will be sent to Mix DehazeFormer Block for extracting image global features. In MDB, we first leverage several Dehazeformer blocks DF (.) (see Fig. 3) to extract the global information and then use the residual block without normalization layer RB(.) to fuse the global information, such a hybrid structure is more efficient than using only Transformer blocks. The global feature F global R H×W ×3 is: where n denotes the number of Dehazeformer blocks used in MDB. Reconstruction module. Finally, a 3 × 3 convolutional layer C 3 (.) and a pixelshuffle layer P (.) are used to produce the haze-free image J R H×W ×3 from the extracted F global : Uncertain Estimated Block and Uncertain Loss Theoretically, there are two types of uncertainty in Bayesian modeling: aleatoric uncertainty from the data and epistemic uncertainty from the model. The former is very common in dehazing models, but existing methods ignore exploring it. Therefore, we exploit an uncertainty estimation block (UEB) to predict the uncertainty of dehazing results, which enables the model to focus on regions with rich visual information (e.g., edge areas) to improve the final restoration results. The prediction process for uncertainty θ [14] can be expressed as:Ĵ whereĴ i , G 1 (I i ), and θ i denote the ground-truth, coarse dehazed image, Laplace distribution, and aleatoric uncertainty from the synthetic dehazed image, respectively. For a more accurate prediction of θ i , we introduce Jeffrey's prior [16] into the uncertainty estimation process. ForĴ i , G 1 (I i ), the Laplace distribution-characterized log-likelihood function and uncertainty estimation loss L ue can be expressed as: We employ L ue to estimate the θ more accurately. Then, through the guidance of θ, we apply the uncertainty-guided loss L ugs to push the network to concentrate more on the reconstruction error area with large uncertainty in the dehazed image, to obtain accurate and confident dehazed results. The formula is shown in (7). In addition, inspired by the identity loss [17], we incorporate the uncertainty-guided loss L ugu into the unsupervised branch, as shown in (8). where G 2 (.), J j , θ j represents the student network, realworld images, and uncertainty from real-world dehazed images, respectively. Loss Functions Teacher network. The overall loss functions for the teacher network are formulated as: where L ts , L tu refers to the loss functions of the supervised and unsupervised branches, respectively. L tu = λ 3 * L ide + λ 4 * L dc + λ 5 * L tv (11) where L a , L ide , L dc , L tv represent adversarial loss, identity loss [17], total variation loss and dark channel loss [10]. Student network. 
The overall loss functions for the student network are formulated as: where L ss , L su refers to the loss functions of the supervised and unsupervised branches, respectively. L su = λ 3 * L ugu + λ 4 * L dc + λ 5 * L tv + λ 6 * L kl (14) where L kl denotes the KL divergence loss. KL loss: Since the intermediate structure of the network tends to extract high-dimensional haze-related features, we choose the 3-th MDB to supply the synthetic hazy image embedding V syn and the real-world image embedding V real . And by taking V syn as the pseudo-label of V real , these two haze distribution features are transformed into high-dimensional vectors to calculate L kl [15], which can be expressed as: by which to enhance the similarity of haze distribution features between synthetic and real-world images. 3. EXPERIMENTS Implementation Details Experiments are implemented on Pytorch 1.7 with NVIDIA RTX 3090 GPU and Adam optimizer with parameters β 1 = 0.9, β 2 = 0.99, = 10 −8 to train the network. The teacher network is trained for 100 epochs, in which we update the unsupervised branch once after updating the supervised five times. The student network is trained for 60 epochs, in which we update the supervised branch once after updating the unsupervised five times. In each stage, the learning rate is set to 10 −4 for the first half and then decays linearly to 0 at the end. The batch size is set to 2. The Semi-UFormer is trained with 10,000 paired and 2000 unpaired samples from the OTS and URHI datasets [18]. The loss weights are set to: λ 1 = 1, λ 2 = 10 −2 , λ 3 = 2, λ 4 = 10 −2 , λ 5 = 10 −5 , λ 6 = 10 −6 . Results on Synthetic Datasets. Semi-UFormer is evaluated on the SOTS outdoor and HSTS sets [18] with nine SOTA dehazing algorithms. As exhibited in Table 1, our method achieves the highest PSNR and SSIM values on both datasets. Method Results on Real-world Images. To evaluate the dehazing performance on real-world images, we apply the blind image quality evaluation index SSEQ [21], color evaluation index σ [22] and HCC [23]. Our Semi-UFormer produces the cleanest dehazed images with high visual quality compared with other dehazing approaches, as depicted in Fig. 4 Table 3. Ablation Analysis on Semi-UFormer. Ablation Study In ablation studies, we first build the base network with the original Dehazeformer-S module trained with L 1 loss (replace the uncertainty loss). Then, we build base + MDB module → V 1 , V 1 + uncertainty → V 2 , V 2 + Knowledge Distillation → V 3 (full model). As exhibited in Table 3, our complete network scheme achieves the best dehazing performance. CONCLUSION In this work, a novel semi-supervised uncertainty-aware transformer network called Semi-UFormer is proposed for image dehazing, which leverages both real-world data and uncertainty guidance information to facilitate the dehazing tasks. To bridge the gap between synthetic and real data, we build our Semi-UFormer on top of a knowledge distillation framework and apply a two-branch network to train our model on both synthetic and real-world images. Moreover, we exploit an uncertainty estimation block (UEB) to predict the pixel uncertainty of the coarse dehazed results and then guide the network to better restore the image edges and structures. Experiments on both synthetic and real-world images fully validate the effectiveness of our Semi-UFormer.
2022-10-31T01:15:59.383Z
2022-10-28T00:00:00.000
{ "year": 2024, "sha1": "2954d6c313a7d228af4cdee12676124977087856", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "2954d6c313a7d228af4cdee12676124977087856", "s2fieldsofstudy": [ "Computer Science", "Engineering", "Environmental Science" ], "extfieldsofstudy": [ "Computer Science" ] }
247341921
pes2o/s2orc
v3-fos-license
Decentralized stability enhancement of DFIG-based wind farms in large power systems: Koopman theoretic approach This paper proposes a data-centric model predictive control (MPC) for supplemental control of a DFIG-based wind farm (WF) to improve power system stability. The proposed method is designed to control active and reactive power injections via power converters to reduce the oscillations produced by the WF during disturbance conditions. Without prior knowledge of the system model, this approach utilizes the states measurements of the DFIG subsystem for control design. Therefore, a data-driven optimal controller with a decentralized feature is developed. The learning process is based on Koopman operator theory where the unknown nonlinear dynamics of the DFIG is reconstructed by lifting the nonlinear dynamics to a linear space with an approximate linear state evolution. Extended dynamic mode decomposition (EDMD) is then applied to determine the lifted-state space matrices for the proposed Koopman-based model predictive controller (KMPC) design. The effectiveness of the proposed scheme is tested on New England IEEE 68-bus 16-machine system under three-phase fault conditions. The results ascertain the effectiveness of the proposed scheme to enhance the system damping characteristics. Proportional integral 1 , 1 PI gains of power regulator 2 , 2 PI gains of RSC current regulator 3 , 3 PI gains of grid voltage regulator I. INTRODUCTION Extensive studies have been conducted to damp the oscillatory modes using power system stabilizers (PSSs) [1], [2] and Flexible AC transmission systems (FACTS) that have good performance under a wide range of network conditions [3], [4] Nevertheless, PSS and FACTS are difficult to tune which may jeopardize their performance in damping the inter-area modes under varying network conditions induced by high penetration of intermittent renewable energy resources. More effective controllers can be derived utilizing modern control theory such as robust and optimal control [5] to enhance the PSS and FACTS efficacy in damping the inter-area modes. With the advent of new wide-area measurements (WAMs) technology, those devices can utilize remote signals and overcome the limitation of local measurements, which lack observing the interarea modes [6]- [8] However, most of these controllers are model-based and no real-time knowledge of the system is utilized. In controller design and prediction of future system behavior, an accurate and thorough model of the system is required, which can sometimes lead to low performance due to model discrepancy or uncertainty. For systems with significant wind penetration, the ability of wind generators to deliver ancillary services is vital. As such, transmission system operators (TSOs) have altered grid code criteria in response to the growing number of wind generation facilities around the world [9]. These grid regulations require wind farms to provide ancillary services such as inertial support, frequency regulation, and damping control. All these services are conventionally provided by synchronous generators. This large-scale integration of wind farms, however, delivers an extreme challenge to handle the time-varying characteristics of such resources and is usually accompanied by modeling uncertainties [10]. Therefore, it is difficult to guarantee a sufficient deterministic model of wind-dominated power systems. Usually, the uncertainties are managed employing robust or adaptive approaches. 
The value set technique, for example, was utilized in [11] to undertake robust stability analysis and parameter design in large power systems. Ref [12] suggested a robust design of multimachine PSSs based on the simulated annealing optimization approach. These solutions, are nevertheless analytical model-based, resulting in sophisticated designs and complex controls. We emphasize that, whereas model-based design gives an appropriate solution for oscillation occurrences in principle, optimality and resilience are seldom attained in practice due to: 1) The real parameters of the devices (e.g., HVDC stations and SGs (synchronous generator)) are difficult to determine due to operating conditions dependence and parameter uncertainty. 2) Because of multiple operating modes, uncertainties, and relaying, the grid model is constantly evolving and hence difficult to generate. 3) The model-based design might not adapt to the operating condition changes and may not handle plant changes after some time due to the unceasing variations of load profiles and unpredicted faults. The difficulty of having a precise model without any parameter uncertainties would pose some limitations [13]. With the advent of synchronized phasor measurement units (PMU) along with the rapid progress in model identification theory, offline models may be replaced with measurementbased online models that can be used to construct damping controllers [14]- [16]. Data-driven methods are not widely discussed in the literature. In [13], Xinze et. al. proposed a two-stage control structure comprising adaptive linear quadratic gaussian control (LQGC). The model is estimated by N4SID algorithm (subspace identification method) to capture the electromechanical characteristics of the system. Ref [17] addressed a method to design a model-free adaptive wide-area PSS. The controller can modify its parameters only based on the input/output measurements without requiring any complex model. The methods discussed in [13], [17] can typically be used to mitigate the impact of model uncertainty. Generally, adaptive controllers are self-tuning and sensitive to incoming state data. The adaptive control scheme can handle system changes (system parameters that drift gradually over time) [18]. Nevertheless, the need for an explicit uncertainty model restricts its usefulness in general settings for adaptive and robust control strategies. Moreover, in [13], [17] the constraints of the system are not incorporated e.g., hard limits on the control signal. In [15] Liu et al proposed an autoregressive moving average exogenous (ARMAX) method to identify a low-order transfer function model for the power system using ambient and ring-down measurements. However, this work did not consider designing a coordinated damping control scheme. Recently, Koopman operator theory has been introduced for robust identification, estimation, and control of electric power systems [19]. Koopman operator is an infinite-dimensional linear operator capable of reliably identifying the dynamic behavior of nonlinear systems. Koopman operator theory provides a general overview of the complex nonlinear dynamical systems in terms of the evolution of "observablefunctions" in the state space. At the expense of operating in high-dimensional spaces, a reasonable approximation of the observables dynamics is given. By finding the observables, Koopman operator can be approximated in a finitedimensional space in a data-driven fashion. 
In other words, a highly nonlinear power system can be represented as a linear dynamical system that contains all the nonlinear information (as opposed to a Taylor expansion of the dynamics centered at an equilibrium state) [20]. Koopman operator can provide a data-driven methodology for control and state estimation of nonlinear systems [19]. The controller is constructed based on [21], in which the system measured data is used to build a linear predictor that is embedded in a high-dimensional space. This predictor is mimicking the original system dynamics in the low-dimensional space. And thus, we can capture the nonlinear dynamics with a linear tool but in different space dimension. By implementing this form of high-dimensional predictors, one can apply effective linear model predictive control (MPC) tools to control the power system. The MPC optimization problem is not affected by this high dimensionality, in terms of computational ability and complexity [21]. Thus, the control sequence can be obtained very fast, opening a gate for real-time control development. MPC has been proven to be one of the most effective control strategies in numerous real-world control applications [22]. A mixture of prediction and regulation schemes are utilized by the MPC to maintain the system output at the required reference value by generating the optimal control sequence. The most notable element of the MPC compared with other control strategies such as LQG, is that it can produce the optimal control signal by minimizing a certain cost within a finite prediction horizon while considering explicitly the system constraints which are usually taken as a lower or upper limit of the control signal. Moreover, because of the timevarying gains acquired via online optimization, an MPC-based damping controller damps quicker than LQG-based damping controller. Owing to the optimal characteristics and the ability to deal with system constraints, MPC has been utilized in various power system applications [23], [24]. Generally, MPC has the capability to consider the system requirements such as 1) frequency deviation can be constrained in a certain range via MPC; thus, generator removal because of protective relay action can be avoided; 2) to meet the generator physical limitations, the control action can be bounded, this can be important for practical considerations. In general, the plant operational points are designed to achieve economic targets and sit at the intersection of certain restrictions. The control mechanism usually works near the boundaries, so violations of constraints are possible. The control system, particularly for long-range predictive control, must foresee and correct violations of the constraints in an acceptable manner, MPC control could achieve this purpose efficiently. A little effort has been done to implement Koopman theory in power system control problems. In [25] the first design application of data-driven MPC for power system control was introduced. The Koopman model was generated and used to design the MPC controller. The main downside of this work is that the model considered was a 2 nd order machine model. Furthermore, that work did not present any comparison between the identified model and the real one. In [26], Koopman-based PSS was presented where the 4 th order model was considered. The PSS was designed as a data-driven MPC controller and validated in a small-scale four-machine system. 
The measurements were collected in a decentralized manner to design a local Koopman-based model predictive controller (KMPC) for each generator. However, the generalization to a more complex and large-scale system is not considered. Furthermore, damping control via DERs has not been investigated up to now.
In this paper, we propose a novel data-driven scheme for supplemental damping based on Koopman operator theory. The main contributions of this paper can be summarized as follows:
1) The proposed damping control design for DFIG oscillations is a data-driven Koopman MPC, which is compared with an existing tuned PSS-DFIG.
2) The proposed design scheme is decentralized in both the identification and control stages and utilizes only the local measurements of the DFIG in the identification phase; therefore, no scalability issues are posed. In the learning phase, extended dynamic mode decomposition (EDMD) utilizes the local measurements and computes only the model associated with the DFIG. In the control stage, the proposed controller receives only local feedback signals from the WF, and no states are taken from the rest of the grid.
3) In the proposed design approach, neither the model nor the states of the rest of the grid are required for the design. Therefore, the MPC computes the optimal control sequence using the local model associated with the WF.
The remainder of this paper is organized as follows. In Section II, an overview of the DFIG model is introduced and the design problem is formulated. Section III presents the proposed scheme for prediction and control. The effectiveness of the proposed design is demonstrated by time-domain simulation in Section IV. Finally, Section V concludes the paper with the major findings and remarks.
II. MODELING OF DFIG-WIND FARM
The power system adopted in this study mainly comprises synchronous generators, loads, and wind farms. The dynamic structure of each element is well discussed in the literature [26]. Each wind farm is represented by the combination of a single aggregate turbine and an aggregate DFIG. A large WF typically has tens to hundreds of wind turbines. According to earlier studies, if the wind turbine controllers are well tuned, there is no mutual coupling between the turbines on a wind farm [29]. Thus, in this paper, the wind farm connected to the ith bus is a collection of several wind generators aggregated inside the wind farm and represented by one DFIG. The models of all the generators and turbines are similar [30]. The total power injected into the grid is the sum of the power outputs of the individual generators. The rated output of the aggregated wind turbine generator (WTG) is equal to the rated output of one WTG multiplied by the number of WTGs being aggregated. The structure of the DFIG and its control system is given in Figure 1. This topology is sophisticated and couples mechanical, electrical, and aerodynamic subsystems. This paper, however, is concerned with the electromechanical dynamics on the time scale of oscillation damping, where a detailed model is considered to sufficiently reflect the nature of those dynamics. This detailed description maintains all the oscillatory modes so that the various frequency dynamics in the wind farm can be identified and characterized.
Figure 1. DFIG structure including the MSC and GSC.
As shown in Figure 1, the stator is directly connected to the grid, whereas the rotor winding is fed through a back-to-back converter and an LCL filter. The control of the DFIG is performed by active- and reactive-power modulation [31], as shown in Figure 2.
The active- and reactive-power controls are decoupled via the vector control technique. The model comprises a back-to-back converter (B2BC), which itself consists of three parts (the rotor-side converter (RSC), the grid-side converter (GSC), and the dc-link capacitor), together with the generator electrical dynamics, the turbine mechanical dynamics, the turbine aerodynamics, and the LCL filter dynamics. Below is a description of each component.
Turbine aerodynamics: this component relates wind speed to mechanical torque. The following equation describes the turbine mechanical power considering the pitch angle, wind speed, and rotor speed:
P_m = 0.5 ρ π R² C_p(λ, β) v_w³   (1)
where ρ is the air density, R is the rotor radius, v_w is the wind speed, and the power coefficient C_p(λ, β) is an empirical function of the tip-speed ratio λ and the pitch angle β whose coefficients c_1, …, c_9 are determined by the manufacturer [32].
Drive-train: the drive-train consists of a low-speed and a high-speed shaft coupled through a gearbox. To account for the torsional mode associated with the shaft, the wind turbine and the DFIG rotational mass are represented by a two-mass model.
Remark 1: Normally, more detailed models are needed when dealing with sub-synchronous resonance and wind-turbine torsional oscillations. However, since the focus of the paper is on system dynamics and the interaction of the wind controls with the synchronous generators, we have determined that a simplified yet realistic two-mass model is sufficiently accurate for studying inter-area oscillations. This assumption is even more justifiable for the Type-3 wind generators used in this paper compared with Type-4 generators, which may be subject to modal resonance under certain conditions (see [33], [34] for more clarification).
Representation of the generator: the DFIG is an induction machine described in the d-q synchronous reference frame. The stator active power, rotor active power, stator reactive power, rotor reactive power, and electric torque are given by (12)-(16), respectively. Both power adjustments are carried out independently in two loops: the outer, slow loop creates the d- and q-axis rotor-current setpoints for the inner, fast loop to achieve the aforementioned purpose. Since we are concerned with the electromechanical dynamics, the converters' switching dynamics are assumed to be fast; as such, we represent only the associated converter control loops [32]. Figure 3 depicts the control scheme, and the control equations are described as follows.
Back-to-back capacitor (B2BC): the B2BC balances the active power flowing between the RSC and the GSC according to the dc-link power-balance equation (37). Ignoring the converter losses, the ac-side and dc-side powers of each converter are equal, and the B2BC dynamics follow from the capacitor energy balance.
LCL filter: the LCL filter removes the switching-frequency harmonic components and is described in the d-q frame by the corresponding filter state equations.
III. MODEL IDENTIFICATION
A. Koopman operator theory: an overview
Consider the following autonomous system that evolves on an n-dimensional finite manifold M:
ẋ(t) = f(x(t)),  t ∈ ℝ,  x(t) ∈ M ⊂ ℝⁿ   (48)
where x is the system state and f represents the nonlinear continuous function. The solution to (48) at time t, starting from the initial condition x₀ at time 0, is denoted by S_t(x₀), which is known as the "flow map". The represented system can be lifted to a higher-dimensional space ℱ that contains scalar-valued continuous functions on the manifold M ⊂ ℝⁿ and is invariant under the action of the Koopman operator. The flow in the lifted space is described by the action of the Koopman operator U^t : ℱ → ℱ, ∀t ≥ 0. For the continuous-time system, the Koopman operator acting on the space of observables is defined by
(U^t g)(x) = (g ∘ S_t)(x),
where ∘ denotes function composition.
Here g : ℝⁿ → ℝ is a scalar-valued function known as an observable, which carries the information of the state x; in particular, the coordinate mappings x → x_i belong to ℱ for all i ∈ {1, …, n}. ℱ is a linear space despite the nonlinearity of the system (48), since for g₁, g₂ ∈ ℱ and α₁, α₂ ∈ ℝ we have U^t(α₁g₁ + α₂g₂) = α₁ g₁ ∘ S_t + α₂ g₂ ∘ S_t = α₁ U^t g₁ + α₂ U^t g₂. As a result, in the infinite-dimensional space of observables, the Koopman operator offers a linear mapping of the nonlinear system. Equation (48) describes the evolution of the initial condition x₀ for t ≥ 0, in which the solution (the trajectory of the state) is obtained by solving the nonlinear differential equation analytically or numerically. The new representation, however, allows the operator to be applied to each component of the state to obtain the mapping of each one in a linear fashion. Figure 5 shows a representation of the Koopman operator mapping, where the right-hand side is the original nonlinear system with state denoted by x and the left-hand side represents the lifted linear system with lifted state denoted by z.
B. Koopman operator approximation
The infinite dimensionality of the Koopman operator makes it difficult to obtain an actual matrix representation of the operator; therefore, EDMD is utilized to compute a finite approximation of it [35], [16]. EDMD is a regression-based method used to find a finite-dimensional approximation of the operator. It is a data-driven approach and relies upon the availability of observations of the system states. Consider ℱ_N ⊂ ℱ to be a subspace of ℱ. Linearly independent basis functions that span ℱ_N are denoted {ψ_i : ℝⁿ → ℝ}, i = 1, …, N, and are stacked into the lifting map Ψ(x) = [ψ₁(x), …, ψ_N(x)]ᵀ; the image of Ψ is denoted ℛ_Ψ, which is equal to {z ∈ ℝᴺ | ∃x ∈ ℝⁿ, Ψ(x) = z}. For the sake of simplicity, we suppose that the first n basis functions are defined as ψ_i(x) = x_i, where x_i is the i-th element of x. Each observable f ∈ ℱ_N is constructed as a linear combination of the basis functions ψ_i. Collecting lifted snapshot pairs column-wise gives the lifted-state measurement matrices Ψ_X, Ψ_Y ∈ ℝᴺˣᴷ, where Ψ_Y contains the one-step-ahead lifted states. Using the EDMD least-squares regression, the best fit to the observed data is obtained as Ψ_Y Ψ_X^†, where Ψ_X^† denotes the pseudo-inverse of Ψ_X. With control, u[k] ∈ 𝒰 ⊂ ℝᵐ represents the system input at the k-th sample and 𝒰 is the space of control inputs. Importantly, the control input is preserved in its original (un-lifted) space; the inputs therefore appear linearly, linear input constraints remain linear, and the predictor keeps its linear dependence on the original input, rendering the predictor form suited to real-time application. The matrices A ∈ ℝᴺˣᴺ and B ∈ ℝᴺˣᵐ are a decomposition of the approximated operator, as given in [21]. C is a projection matrix that maps the lifted space back into the original one, and it is different from a state-space output matrix. The resulting structure, with lifted dynamics z_{k+1} = A z_k + B u_k and projection x̂_k = C z_k, is well known as the linear predictor form [21].
C. Linear predictors
In the same manner, the best fit to the measured data is offered by the least-squares regression over the lifted snapshots and inputs; the same problem can be written equivalently by noting that the transpose of the unknown matrix is the minimizer of the transposed least-squares problem.
D. Local computation of the Koopman operator
As explained previously, we may need to add basis functions to stretch the nonlinear system with an n-dimensional state space into a linear system in an N-dimensional space, where N ≫ n. However, for large systems with thousands of states (especially when a detailed model of a large power grid is considered), the size of the dictionary of functions grows exponentially as the number of states grows. Considering memory constraints, we may end up having difficulty learning a centralized Koopman model.
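Before turning to the local formulation, the EDMD regression described above can be illustrated with a short numerical sketch. This is not the implementation used in the paper: the dictionary chosen here (the state together with its element-wise squares), the function names, and the data shapes are illustrative assumptions only; in practice the basis functions, snapshot lengths, and any regularization would be tailored to the DFIG-WF measurements.

```python
import numpy as np

def lift(x):
    """Illustrative dictionary: the state itself plus its squares, so the
    first n lifted coordinates coincide with the original state."""
    x = np.atleast_1d(x)
    return np.concatenate([x, x**2])

def edmd_with_inputs(X, Xp, U):
    """Least-squares EDMD with control inputs.

    X  : (n, K) state snapshots x_k
    Xp : (n, K) successor snapshots x_{k+1}
    U  : (m, K) inputs u_k applied between the snapshots
    Returns the lifted matrices A (N,N), B (N,m) and the projection C (n,N).
    """
    PsiX  = np.column_stack([lift(x) for x in X.T])    # lifted snapshots (N, K)
    PsiXp = np.column_stack([lift(x) for x in Xp.T])   # one-step-ahead lifts (N, K)

    # Solve  PsiXp ≈ [A B] [PsiX; U]  in the least-squares sense.
    G = np.vstack([PsiX, U])
    AB = PsiXp @ np.linalg.pinv(G)
    A, B = AB[:, :PsiX.shape[0]], AB[:, PsiX.shape[0]:]

    # Projection back to the original coordinates:  x ≈ C psi(x).
    C = X @ np.linalg.pinv(PsiX)
    return A, B, C

def predict(A, B, C, x0, U_seq):
    """Roll the linear predictor forward from x0 under an input sequence (m, T)."""
    z = lift(x0)
    traj = [C @ z]
    for u in U_seq.T:
        z = A @ z + B @ u
        traj.append(C @ z)
    return np.array(traj)
```

A call such as `A, B, C = edmd_with_inputs(X, Xp, U)` followed by `predict(A, B, C, x0, U_test)` mirrors the validation idea used later in the paper: roll the linear predictor forward on inputs it has not seen and compare the result against the recorded trajectories.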
There is also the matter of basis-function selection, which may be straightforward for small systems; for large ones, kernel methods may be needed for an implicit expression of those functions [35]. However, the issue of memory limitation still poses a problem. Moreover, overfitting may occur because an enormous number of basis functions is integrated, and the generated model may then provide accurate predictions for the existing training data but fail to adapt well to new data sets. This motivates us to learn only a local Koopman model for the subsystem of concern (the DFIG-WF in our case) to avoid the aforementioned obstacles. The identification algorithm is summarized as follows:
Step 1: Collect local input and state measurement snapshots from the DFIG-WF.
Step 2: Lift the state snapshots with the chosen basis functions.
Step 3: Extract the lifted state-space matrices A, B, C.
E. Koopman MPC (KMPC)
A diagram describing the proposed control strategy is depicted in Figure 6, where u is the control action and u₀* is the first sample of the sequence {u₀*, u₁*, …, u_{Np-1}*} computed by the optimizer within the prediction-horizon window Np; only the first sample is applied and the rest are discarded. Note that the objective function is a convex quadratic function, minimized subject to the state and input constraint sets. This paper embraces discrete-time MPC since it is computationally less demanding than continuous-time MPC. The optimization problem can be solved as a convex optimization problem in the lifted space owing to the linearity of the obtained predictor and the freedom in choosing the mapping functions, no matter how nonlinear the system is in the original space [21]. The cost takes the standard quadratic tracking form
J = Σ_{k=0}^{Np-1} [ (ω̂_k - ω_ref,k)ᵀ Q (ω̂_k - ω_ref,k) + u_kᵀ R u_k ]   (60)
where ω̂_k = C z_k is the predicted rotor speed, ω_ref,k is the rotor-speed reference at the k-th sample, Np is the prediction horizon, and Q and R are tuning weighting matrices (positive semidefinite) of appropriate dimensions. The MPC controller aims at minimizing (60) at each prediction-horizon window while considering the state and control-signal constraints. Generally, linear MPC minimizes a convex quadratic cost function, which allows very quick generation of the control-input sequence, in contrast to nonlinear MPC, which solves a troublesome nonconvex problem [36] and thus requires a high computational effort.
IV. SIMULATION RESULTS
In the previous sections, we presented the mathematical model of the DFIG and an overview of the Koopman theory used to establish the basis of the data-driven MPC. In this section, simulations demonstrate the ability of the proposed scheme to improve the dynamic stability of wind-integrated power systems.
A. Study System
The wind-integrated IEEE 68-bus/16-machine system is considered to verify the effectiveness of the proposed KMPC. The system is built in MATLAB/Simulink on a desktop PC with an Intel Core i7-5500U CPU at 2.40 GHz and 16 GB of RAM. Figure 7 shows the single-line diagram of the test system. Power system stabilizers have been added to enhance the damping of the local modes of some SGs; the PSS settings are given in [37]. It should be noted that not all generators are equipped with PSSs; we placed PSSs at a limited number of SGs to ensure the system's stability while retaining room to demonstrate the performance of our concept, since adding PSSs across the whole system might overpower the proposed controller and overshadow its performance. The number of connected WFs depends on the concept to be illustrated: in the learning phase we use only one WF, whereas in the control phase we use two WFs.
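As a companion to the KMPC formulation in Section III-E, the sketch below shows a single receding-horizon step on the lifted predictor. It is a simplified stand-in, not the controller implemented in the paper: the paper solves the problem with a MATLAB optimization toolbox, whereas here SciPy's bound-constrained L-BFGS-B routine is used purely for illustration, and the horizon, weights, and input limits are placeholder values.

```python
import numpy as np
from scipy.optimize import minimize

def kmpc_step(A, B, C, z0, ref, Np, Q, R, u_min, u_max):
    """One receding-horizon step on the lifted linear predictor.

    A, B : lifted dynamics (N,N), (N,m);  C : projection to the tracked output (p,N)
    z0   : current lifted state;  ref : (Np, p) output reference
    Returns only the first input of the optimized sequence (applied; rest discarded).
    """
    N, m = B.shape

    def cost(u_flat):
        u = u_flat.reshape(Np, m)
        z, J = z0.copy(), 0.0
        for k in range(Np):
            z = A @ z + B @ u[k]              # lifted prediction
            e = C @ z - ref[k]                # output tracking error
            J += e @ Q @ e + u[k] @ R @ u[k]  # quadratic stage cost
        return J

    bounds = [(u_min, u_max)] * (Np * m)      # hard limits on the control signal
    res = minimize(cost, np.zeros(Np * m), method="L-BFGS-B", bounds=bounds)
    return res.x.reshape(Np, m)[0]
```

Because the predictor is linear in both the lifted state and the input, the cost in this sketch is convex quadratic in the decision variables, which is what allows the control sequence to be generated quickly; only the first element of the returned sequence would be applied before the optimization is repeated at the next sample.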
The intended DFIG-MPC will be installed at the wind farm's central control center, and the control signals will be relayed to each turbine via fiber-optic interconnections. For identification, the probing input Δ is a random perturbation bounded to ±10 %. A Simulink block called "random number" is used to generate Δ with different seeds; each seed generates a unique trajectory, which ensures the randomization of the signals generated for each trajectory. The system is discretized at a fixed sampling period. The prediction accuracy has been tested by comparing the true model with the identified one. The validation is conducted using an input signal that is not part of the learning data set (using a new seed) yet has the same range of Δ. The initial conditions are the same as those obtained from the load flow. The identification of some selected DFIG states is shown in Figure 8. The results demonstrate that the predicted model can capture the system dynamics and achieve a satisfactory prediction of the actual model. This allows us to use the predicted model to design an MPC controller for the actual plant, even though we have considered a black-box model.
Remark 2: By generating data matrices of proper dimensions, measurement noise may be viewed as part of the unknown system, and its influence is indirectly acknowledged in the proposed algorithm.
Remark 3: The choice of the WF location is determined in the planning phase. In this study, however, the focus is on the operational phase. In this work, the goal was to displace conventional generation with wind; therefore, no specific siting was performed. The penetration level is obtained as the ratio of the displaced generation at G7 and G10, delivering an active power of 560 MW and 700 MW, respectively, to the total generation, and is therefore around 22 %. We could easily increase the rate by displacing more generators, but this paper's focus is on control design using the Koopman model approach. We will later study whether the approach provides additional benefits at a high wind penetration rate.
C. Damping Control Design
In general, the typical control architecture of the DFIG-WF shown in Figure 1 does not provide enough control flexibility to accomplish the oscillation-damping functionality, necessitating the use of supplemental control. The auxiliary control signals can be actuated through different controllable subsystems, such as FACTS and HVDC, to provide a damping aid alongside the PSS. In this paper, the actuation is performed via the DFIG to enhance weakly damped oscillatory modes. It should be noted that this controller must be installed in each wind generator that forms the aggregated WF. The controller is designed considering two WFs located in different areas, replacing generators G7 and G10 and delivering an active power of 560 MW and 700 MW, respectively. Each WF is identified using the method discussed earlier. To mitigate system oscillations, an MPC supplementary controller is attached locally at each WF. The MPC computes the time-varying gain at each horizon by minimizing the objective function described in eq. (60) using an optimization toolbox [38]. The controller is built as a MATLAB S-function and linked with the Simulink platform. The local feedback signal is the DFIG rotor speed ω_dfig; the auxiliary control signals are added to the active- and reactive-power references, as shown in Figure 2. The auxiliary inputs are assumed to be zero when no controller is integrated. The MPC settings are as follows:
1) The sampling time is 0.01 s.
2) The prediction horizon is Np = 10 samples.
3) The weighting matrices of the cost function are Q = I and R = 100 × I (I is the identity matrix, the dimension of which can be deduced from the context).
D. Modal Analysis
Since the closed loop is nonlinear and the gain of the MPC varies at every prediction horizon of Np = 10 samples, the controller gain is time-varying. This inevitably means the transfer function of the closed-loop system changes with the operating condition, so a different state-space representation is obtained at every new horizon. Therefore, linear analysis tools would not reflect the dynamic characteristics, and it is not possible to apply linear tools to identify the linear modal traits, i.e., eigenvalues and damping ratios. A measurement-based technique is required instead; the Prony method is used to capture the variable frequency components of the measured signals [39]. In this part, two cases are considered: 1) the impact on the oscillatory modes of replacing conventional SGs with DFIGs of similar power delivery, which is a general study in which the proposed design is not introduced; and 2) the effect of the proposed scheme on the oscillatory modes and the damping enhancement.
Replacing the SG with a DFIG: In the first part of the linear analysis, we evaluate the effect of substituting the conventional SGs (with their local control, if any) with the WF-based DFIG (injecting the same amount of power) on the local and inter-area modes of oscillation. Table 1 provides the frequencies and the damping ratios of the different modes. The system contains four interarea modes, which we refer to as M1 to M4. The interarea oscillatory frequencies lie between 0.2 and 0.8 Hz. Furthermore, various local modes are also identified in the range of 1-2 Hz, denoted by L1 to L10.
Table 1. Local and interarea modes at different scenarios.
As shown in Table 1, the second interarea mode M2 is weakened slightly by the substitution of G7, and the associated damping is reduced. L5 has disappeared because the DFIG does not contribute to local modes, owing to the decoupling effect of the power converter, which separates the machine's mechanical dynamics from the legacy grid; for the same reason, the DFIG does not introduce a new electromechanical mode. However, the interarea mode can still be captured in the busbar frequency. The rest of the local modes were not subject to significant drifts. Because the DFIGs have lower shaft inertia than the SGs, the frequency of M2 rises slightly when G7 is substituted, due to the absence of WF inertial involvement. Wind turbines do not contribute to low-frequency electromechanical oscillations since they are not synchronously coupled to the power network, owing to the decoupling effect of the B2B converter. Furthermore, because grid-connected wind turbine technologies do not engage in power system oscillations, wind turbines do not introduce additional oscillatory modes into power systems [40].
The inclusion of the proposed controller: The modal analysis of the DFIG-integrated power system with and without auxiliary control is illustrated in Table 2. The interarea modes are poorly damped without a controller, which is why a disturbance may cause instability; however, the proposed scheme shows a significant improvement compared to the baseline case. This has resulted in the WT operating steadily for both techniques, but better with the proposed methodology. The system demonstrates a good performance without supplemental control.
However, with the addition of the conventional PSS and the proposed control, we notice a superior damping performance. In addition to the modal analysis, nonlinear time-domain simulation is performed. To assess the impact of the proposed design on the transient behavior, the system has been subjected to two significant disturbances: three-phase bus and line faults. In contrast to the bus-fault scenario, a change in network topology is considered in the line-fault scenario.
Three-phase bus fault: An instantaneous three-phase fault at bus 5 is applied and cleared after 0.1 s. Figure 9 to Figure 13 show the following quantities: DFIG rotor-speed deviation, DFIG active power, DFIG reactive power, dc voltage of the B2B capacitor, and the terminal voltage of the DFIG, in three cases: 1) the conventional PSS-DFIG [41], which is used as the baseline method (red line); 2) the proposed KMPC-DFIG (yellow line); and 3) no additional damping controller (gray line). The results indicate that using the conventional PSS improves the dynamics of the DFIG, although with clear consequences for the DFIG torsional dynamics in Figure 9. This is because the baseline method's sole goal is to dampen inter-area oscillations according to its tuning objective, without paying much attention to the WF internal dynamics [41]. It can be noticed that the torsional dynamics with the MPC have much better damping than the poorly damped oscillations with the PSS installed. It is evident that the proposed technique can not only effectively dampen inter-area oscillations but also ensure the optimal control of the WF dynamics. The interarea modes, represented by the relative speeds and rotor-angle deviations of G5 vs. G15 and G1 vs. G13, are shown in Figure 14 and Figure 15. With the proposed MPC design, the swings following a substantial system disturbance are damped much faster. It must be noted that the proposed controller cannot guarantee overall stability; it can only improve the local damping performance of the WF. Therefore, local PSSs are added at different locations. For instance, G15 is not equipped with a PSS, unlike G1, which explains why the oscillations between G5 and G15 are extremely large in the no-control case compared to those between G1 and G13. The results demonstrate that the damping has been enhanced successfully when a KMPC for each WF is introduced. The controller gives satisfactory action when the fault takes place and improves the damping greatly. Furthermore, the oscillation settling time is reduced significantly, which satisfies the system damping requirement. For clarity, the speed deviation of all conventional synchronous generators in all three control scenarios is depicted in Figure 16.
Three-phase line fault: In this case, a three-phase fault is applied in the middle of the line connecting bus 62 and bus 63 at t = 1 s and cleared after 0.1 s by opening the circuit breakers at both ends of the faulted line; thus, a topological change has occurred. Figure 17 and Figure 18 depict the DFIG speed deviation and the SGs' average speed, respectively. The torsional dynamics are again improved, and the inter-area oscillations settle within an acceptable timespan of 8 s for the post-fault topology, which was not included in the controller design. The proposed MPC can dampen oscillations significantly faster than the standard PSS, even though, due to the line tripping, the network differs from the one initially used to gather the data and build the Koopman model.
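The modal quantities discussed above are extracted from measured responses with the Prony method rather than from a linearized model. The following is a minimal sketch of such an extraction for a uniformly sampled ringdown; the model order, windowing, and pre-filtering actually used in the study are not specified, so the function below and its defaults are illustrative assumptions only.

```python
import numpy as np

def prony_modes(y, dt, order=10):
    """Classical least-squares Prony analysis of a uniformly sampled ringdown.

    y     : 1-D measured signal (e.g. a rotor-speed deviation after the fault)
    dt    : sampling period in seconds
    order : number of exponential terms assumed in the fit
    Returns (frequency in Hz, damping ratio) pairs for the oscillatory modes found.
    """
    y = np.asarray(y, dtype=float)
    p, K = order, len(y)

    # 1) Linear-prediction step: y[k] ≈ -(a1*y[k-1] + ... + ap*y[k-p]).
    M = np.column_stack([y[p - j:K - j] for j in range(1, p + 1)])   # (K-p, p)
    a, *_ = np.linalg.lstsq(M, -y[p:], rcond=None)

    # 2) Roots of the characteristic polynomial give the discrete poles z_i.
    z = np.roots(np.concatenate(([1.0], a)))

    # 3) Map to continuous-time poles s_i = ln(z_i)/dt, then to f and zeta.
    s = np.log(z.astype(complex)) / dt
    modes = []
    for si in s:
        if si.imag > 1e-6:                         # keep one of each conjugate pair
            modes.append((si.imag / (2 * np.pi),   # frequency in Hz
                          -si.real / abs(si)))     # damping ratio
    return sorted(modes)
```

Applying such a fit to, for example, a recorded rotor-speed or busbar-frequency deviation after a fault yields frequency and damping-ratio estimates of the kind reported in Tables 1 and 2.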
Note that the location of the fault impacts the performance of the decentralized MPC used: the closer a disturbance is to the WF bus, the greater the deviation from equilibrium in the WF initial states. Because of the poor observability of the WF states with respect to the various system oscillation modes, the MPC performs meaningful actions only when the feedback signal carries sufficient information about the modes of interest that are to be damped. In this paper, however, the selected local feedback signal is a mechanical state belonging to a mechanical part that is decoupled from the main grid via the converter. A fault that is electrically far away excites only an insignificant oscillation in the mechanically decoupled states; thus, the cost function to be minimized by the MPC carries less weight than the cost function resulting from a nearby fault (note that this function depends on the output feedback signal, as shown in eq. (60)). Therefore, any disturbance that occurs electrically far from the WF has little influence on its internal dynamic states, and so the contribution to interarea oscillation damping would be minimal.
V. CONCLUSION
In this paper, the design of a data-driven KMPC is presented for a wind-integrated power system. The methodology is decentralized in both the learning and control phases. The design scheme utilizes only the local measurements of the DFIG in the learning phase; thus, no scalability issues are posed. The controller receives only a local feedback signal from the WF, and no extra states are taken from the rest of the grid. As such, neither the model nor the states of the rest of the grid are required for the design. The learning process is based on Koopman operator theory, whereby the unknown nonlinear dynamics of the DFIG are reconstructed by lifting them to a linear space with an approximately linear state evolution. EDMD is then applied to determine the lifted state-space matrices, which are used to design the KMPC. The effectiveness of the proposed scheme is tested on the New England IEEE 68-bus, 16-machine system under three-phase faults. The results confirm the effectiveness of the proposed scheme in enhancing the damping performance. The design is also validated by small-signal analysis, by identifying the oscillatory modes associated with the DFIGs using the Prony method, and the findings coincide with the simulation results.
Ultrasound-triggered three dimensional hyaluronic acid hydrogel promotes in vitro and in vivo reprogramming into induced pluripotent stem cells Cellular reprogramming technologies have been developed with different physicochemical factors to improve the reprogramming efficiencies of induced pluripotent stem cells (iPSCs). Ultrasound is a clinically applied noncontact biophysical factor known for regulating various cellular behaviors but remains uninvestigated for cellular reprogramming. Here, we present a new reprogramming strategy using low-intensity ultrasound (LIUS) to improve cellular reprogramming of iPSCs in vitro and in vivo. Under 3D microenvironment conditions, increased LIUS stimulation shows enhanced cellular reprogramming of the iPSCs. The cellular reprogramming process facilitated by LIUS is accompanied by increased mesenchymal to epithelial transition and histone modification. LIUS stimulation transiently modulates the cytoskeletal rearrangement, along with increased membrane fluidity and mobility to increase HA/CD44 interactions. Furthermore, LIUS stimulation with HA hydrogel can be utilized in application of both human cells and in vivo environment, for enhanced reprogrammed cells into iPSCs. Thus, LIUS stimulation with a combinatorial 3D microenvironment system can improve cellular reprogramming in vitro and in vivo environments, which can be applied in various biomedical fields. Introduction Cellular reprogramming technology has expanded several aspects of stem cell research as somatic cells with different origins can be induced to produce different cells with desired features, and these reprogramming technologies have developed from in vitro culture method to in vivo [1,2].In particular, cellular reprogramming into a pluripotent state for induced pluripotent stem cells (iPSCs) is highlighted for its functional characteristics of self-renewal and differentiation into multiple cell lineages [3][4][5][6][7][8].Compared to other stem cells, iPSCs are favorable sources in that they can bypass ethical issues and immune rejections.Therefore, iPSCs have been utilized in various biomedical fields for disease screening, gene editing, and cell therapeutics [9][10][11][12].Although there still remains a necessity to increase iPSCs in number and purity, reprogramming of iPSCs is challenging as its efficiency is low (1-5%); this leads to incomplete reprogramming that lacks reproducibility and has a lengthy culture time [13].Several defined sets of transcription factors, small-molecule cocktails, and cell culture supplements have been suggested to improve reprogramming efficiency [14,15], but these biochemical methods have limitations for increasing the reprogramming efficiency while neglecting other considerable adverse effects caused by biochemical factors [16].Moreover, cellular reprogramming technology in vitro is often differently regulated in vivo due to differences in local microenvironment conditions.These circumstances cause more reduced reprogramming efficiencies in vivo.Therefore, development of alternative methods to enhance the reprogramming efficiencies are necessary for the increased production of iPSCs in therapeutic applications. 
Biophysical changes have garnered attention for regulating cell fates during differentiation and reprogramming [17].These biophysical changes include stiffness, topography, ligand patterning, stretch, compression, shear stress, and external stimulation [18,19].Cells are continuously exposed to biophysical stimuli as they are perturbed by various surfaces, stresses, and mechanical forces.These biophysical changes can be controlled to influence a myriad of cellular processes by switching the mechanical cues to biological responses during cell migration, differentiation, proliferation, and apoptosis [20][21][22][23][24].A few reprogramming studies have shown that biophysical changes to stiffness [25,26], topography [27,28], shear stress [29], mechanical stretching [30], and electromagnetic fields [31] can affect the reprogramming efficiencies of iPSCs.Especially, biophysical changes from 2D to 3D culture have shown increased cellular responses in migration, adhesion, interaction, and cytoskeletal organization [32,33].As physiological microenvironment of cell is cultured in 3D, efforts to understand how biophysical factors synergistically activate cell behaviors in dynamic 3D microenvironment form [34,35].In cell reprogramming studies, 3D culture system also showed improvement on producing more iPSCs [36,37].These changes modulate cellular behaviors during the reprogramming process by varying the cytoskeletal changes, extracellular matrix (ECM)-receptor interactions, mesenchymal to epithelial transitions (MET), and epigenetic modulations [38]. Ultrasound as a noninvasive acoustic wave force is another biophysical change that can apply mechanical stress in a noncontact manner.Among the various intensities and frequencies of ultrasound available, low-intensity ultrasound (LIUS) has been widely investigated as it is not lethal to cells yet provides active biophysical stimulus [39].Unlike other inaccessible, expensive, invasive stimuli, LIUS has deep tissue-penetrating properties that can efficiently target sites with easily accessible apparatus [40].Since the FDA has indicated that LIUS is generally safe with low risks, its safety profile is highly promising in various cell types [41].LIUS has been commonly used for diagnostic and therapeutic purposes [42], as well as for modulating physiological changes in the proliferation, migration, and differentiation potentials (chondrocytes [43,44], neurons [45,46], and osteoblasts [47,48]) at the cellular level.Number of studies presented the effect of ultrasound stimulation on directed differentiation of various cell types, but limited number of studies support the reprogramming potential of ultrasound stimulation and its mechanotransduction [49].Furthermore, mechanism of LIUS stimulation on cells and microenvironments are still unclear.A few studies have associated biophysical changes from LIUS with cytoskeletal rearrangement and fluidization of the cell membrane as well as transmembrane protein of the target cells [50].Also, regulatory effects of LIUS on cell and tissue are associated with the interaction of cell with adjacent substrates, such as extracellular matrices [51,52].Therefore, further investigations of dynamic culture with LIUS during cell reprogramming are necessary to examine cellular mechanisms of biophysical modulation and ECM interaction. 
Previous studies have reported that 3D hydrogel system can improve the cell reprogramming process [53], but how cells in 3D microenvironment regulate in response to dynamic biophysical cues during cell reprogramming process has not been clearly elucidated.Here, we present the LIUS stimulation of 3D microenvironment system for improved cell reprogramming of iPSCs and identify mechanical action of LIUS on cell interaction with 3D microenvironment.A previous study reported enhanced reprogramming efficiency of iPSCs under a hyaluronic-acid (HA) based 3D hydrogel niche with controlled stiffness, initially upregulating the HA mediated CD44 expression [53].In the present study, we apply the different duration of LIUS during cellular reprogramming of iPSCs within a HA based 3D hydrogel, and the reprogramming efficiency of the iPSCs is assessed.LIUS stimulation under 3D microenvironment culture is evaluated for pluripotency, mesenchymal to epithelial transition (MET), and histone modification.As LIUS induces acoustic wave force to modulate cytoskeletal rearrangement and cell surface change, cellular interaction with microenvironment may regulate increased reprogramming process in HA hydrogel.LIUS stimulated 3D microenvironment system is further confirmed its efficacy in human cell and in vivo environment.Our results indicate that LIUS stimulated microenvironment system induces more cell reprogramming process into iPSCs. LIUS enhances the reprogramming efficiency of iPSCs In this study, LIUS stimulation was carried out by modulating the duration of application on the 3D hydrogel.The frequency and intensity of the LIUS were set constant at 40 kHz and 300 mW cm − 2 , respectively, as these values ranged within the viable condition [42].Different durations of LIUS were induced on alternate days and analyzed throughout the study.Cell viability of the LIUS was first analyzed after stimulation for 7 days, and the fibroblasts stimulated with up to 20 min of LIUS showed high viability without dead cells (Supplementary Figs.2a and b).However, 30 min of LIUS stimulation caused cell death, reducing the viability of the fibroblasts to ~71 % on day 7.The results show that up to 20 min of LIUS can maintain cell viability for 14 days; thus, LIUS stimulations were limited to 20 min for further studies. Next, LIUS stimulation was evaluated for reprogramming iPSCs.For the reprogramming process, OCT4-GFP expressing mouse embryonic fibroblasts (OG-MEFs) were induced with Yamanaka's four factors (OCT4, SOX2, c-MYC, and KLF4) using retroviral transduction.The transduced cells were then encapsulated in HA hydrogel using photocrosslinking methods, and the hydrogels were stimulated with LIUS for a given time until analysis (Fig. 1a).Expression of OCT4-GFP at each time point showed that longer exposure to LIUS stimulation induced more OCT4-GFP-positive cells (Fig. 1b).While the unstimulated group started expressing OCT4-GFP on day 14 from transduction, 20 min of LIUS application accelerated GFP expression as early as 7 days after transduction.This observation is remarkable in that the reprogramming process of iPSCs can be further shortened to one week through LIUS stimulation.The quantification of OCT4-GFP-positive colonies on day 21 was also compared, and LIUS stimulation generated more GFPexpressing colonies in terms of numbers and intensities [20 min of LIUS produced ~3.2-fold increase in colony numbers and ~2.3-fold increase in GFP intensities compared with the unstimulated group] (Fig. 
1c and d). These results correlated with those of flow cytometry analysis in that the ratio of OCT4+/SSEA1+ cells improved to 15.56 % when compared with the conventional 2D condition (~1 %) and the 3D condition (~11 %) (Fig. 1e and Supplementary Fig. 3). The fluorescent intensities of other pluripotent markers, NANOG and SOX2, were also observed, and higher intensities were detected after LIUS exposure, confirming the increased reprogramming efficiency (Fig. 1f). As the expressions of pluripotency proteins and genes demonstrate fully reprogrammed iPSCs [54], total expression of the pluripotent markers was examined. The gene expressions of the pluripotent markers Oct4, Nanog, and Sox2 increased with longer LIUS stimulation (Fig. 1g). Compared to the unstimulated cells, 20 min of LIUS stimulation increased the expression of the Oct4, Nanog, and Sox2 markers by ~2.7-, ~6.4-, and ~2.0-fold, respectively. Indeed, the protein expressions correlated with the gene expressions, in that longer LIUS stimulation upregulated the pluripotent markers OCT4, NANOG, and SOX2 (up to 2.5, 5.1, and 4.0 times higher intensities of the OCT4, NANOG, and SOX2 bands were measured with LIUS, respectively) (Fig. 1h).
Fig. 1 (c-h): c, quantitative analysis of OCT4-GFP intensity and d, number of iPSC colonies derived from OG-MEFs within 3D hydrogels with various ultrasound stimulation times (D21; n = 3). e, representative flow cytometry profile of the OCT4+/SSEA1+ cells under LIUS stimulation and its quantified results (D21; n = 3). f, immunofluorescence images of NANOG- and SOX2-stained colonies of LIUS-stimulated OG-MEFs after reprogramming (D21; scale bar, 300 μm). g, qPCR analysis of the pluripotent markers (Oct4, Nanog, and Sox2) and h, western blot analysis of the pluripotency markers (OCT4, NANOG, and SOX2) in iPSCs derived from OG-MEFs under LIUS exposure in 3D hydrogels (D21; n = 3). All data are expressed as the mean ± s.d. The statistical significance was determined with one-way ANOVA followed by Tukey's multiple comparison post test. ****P < 0.0001.
LIUS accelerates initial changes in the iPSC reprogramming process
During the reprogramming process, cells undergo initial changes into fully reprogrammed iPSCs, such as METs and epigenetic modifications [55]. First, MET changes were analyzed by the gene expressions of the epithelial marker E-cadherin and the mesenchymal marker N-cadherin after LIUS stimulation (Fig. 2a). The results showed that LIUS stimulation significantly increased the expression of E-cad. On the contrary, N-cad expression was slightly reduced as LIUS was induced for 20 min. The ratio of E-cad to N-cad expression was more apparent, such that LIUS stimulation increased the MET during iPSC reprogramming (Fig. 2b). Consistently, protein expression of E-cadherin was increased and expression of N-cadherin was decreased as LIUS stimulation was induced (Fig. 2c). Epigenetic modification is another aspect of iPSC reprogramming that activates or silences genes. Specifically, histone modifications are extensively reported transitions that occur in the early reprogramming process and are key checkpoint molecules for iPSC reprogramming [56]. Here, we observed expressions of active H3 acetylation, H3K4 dimethylation, and H3K4 trimethylation during the early reprogramming process, and LIUS stimulation upregulated the expressions of all histone markers analyzed (Fig. 2d-g). Therefore, LIUS exposure during reprogramming enhanced the initial procedures by upregulating the METs and histone modifications.
Cytoskeletal rearrangement by LIUS alters membrane fluidity and promotes the CD44 signaling pathway
LIUS stimulation could potentially affect the mechanical forces on cells within the hydrogel system and adjust their physical properties. Thus, we examined the mechanisms by which LIUS modulates reprogramming efficiency through regulation of the cytoskeletal structures. F-actin staining was used to observe different expressions of actin stress fibers after LIUS stimulation. Interestingly, F-actin filaments started to lose their expression as LIUS stimulation became more extensive (Fig. 3a and b, and Supplementary Figs. 4a and b). The measured intensities of F-actin staining showed ~37 %, ~43 %, and ~75 % reductions for 5, 10, and 20 min of LIUS stimulation, respectively, compared to the unstimulated group. Aside from that, we did not observe any physical differences (modulus, swelling ratio) in the 3D hydrogels from LIUS stimulation (Fig. 3c and d). Actin stress-fiber dynamics were further analyzed to determine whether LIUS stimulation permanently degraded F-actin expression (Fig. 3e). After 20 min of LIUS application reduced F-actin expression, the fluorescence intensities remained at their reduced levels until 40 min after the initial LIUS stimulation. Interestingly, the expression of F-actin gradually reappeared and the cytoskeletal structure rapidly rearranged by 60 min. These results imply that the mechanical forces from LIUS cause only transient changes to the cytoskeletal arrangement of the reprogrammed cells. The electron microscope images also show the changes to cell morphology after LIUS stimulation (Supplementary Fig. 5). Consequently, F-actin remodeling induced by LIUS regulates focal adhesion signaling associated with mechanical changes. Following LIUS-induced remodeling, focal adhesion kinase phosphorylation increased up to 2.7-fold (Fig. 3f), indicating its regulatory role in focal adhesion signaling. To further elucidate the involvement of cytoskeletal structures in cell reprogramming, we modulated F-actin structures using actin-associated small molecules known to transiently alter cytoskeletal organization [57] and assessed their effects on cells. In this experiment, cytochalasin D, which depolymerizes F-actin [58], and phalloidin, which stabilizes the F-actin structure [59], were employed. The results showed that each small molecule regulated the expression of F-actin in fibroblasts within a 3D hydrogel, even under LIUS stimulation (Supplementary Fig. 6). Subsequently, these cells were evaluated for molecular changes in pluripotency, and significant alterations in the relative mRNA expression of the pluripotent markers (Oct4 and Nanog) were observed in response to each small molecule (Fig. 3g). F-actin depolymerization induced by cytochalasin D increased Oct4 and Nanog expression by about 2-fold compared to the untreated group. Conversely, phalloidin, which stabilized F-actin polymerization, reduced Oct4 and Nanog expression to about 60 % of the untreated group. These results emphasize the contribution of LIUS-induced cytoskeletal remodeling to the cell reprogramming process.
The cytoskeletal change induced by LIUS may adjust the fluidity and mobility of the cell membrane composition. Using the diffusion rate of lipid analog probes in the fluidized cell membranes, LIUS-stimulated cells were analyzed for their cell membrane fluidity (Fig.
3h).The relative fluorescence units (RFU) showed that more lipid probes were diffused in LIUS treatment, indicating highly fluidized cell membranes.Changes in the cell membrane fluidity from LIUS influence cell surface receptor mediated signaling to ECM interactions.A previous study reported the importance of HA/CD44 interactions facilitating the initial reprogramming processes [53], and enhanced membrane fluidity by LIUS could further improve CD44 expression.Fluorescence staining of CD44 showed up to 4 fold increased CD44-expressing cells with greater LIUS stimulation (Fig. 3i and j, and Supplementary Fig. 7).Likewise, up to 3 times high CD44 expressions from LIUS stimulation were found in the qRT-PCR (Fig. 3k) and western blot (Fig. 3l,m) results, confirming that LIUS stimulation increases CD44 expression.With increased expression of CD44, phosphorylation of downstream signaling molecules (STAT3 and AKT) was observed (Fig. 3l,n).Although all three signaling molecules show higher phosphorylation as LIUS was induced, significant increase in phosphorylation was observed from STAT3.Taken together, LIUS stimulation sequentially rearrange cytoskeletal fibers, modulate membrane fluidity, and regulate cell-ECM interaction to increase expression of CD44 and its downstream signals. Presence of HA/CD44 is necessary for the increased reprogramming process during LIUS stimulation LIUS stimulation has shown increased reprogramming efficiencies from interaction of HA/CD44, and reprogramming efficiencies were further analyzed under LIUS stimulation when accessibility of HA or CD44 is limited.First, cells were exposed to different ECM-based hydrogels (PEG, Gel, and HA) before and after the LIUS stimulation.PEG and Gel were selected, as PEG is a biologically inert material, and Gel is the most widely used natural material for cell culture.While all hydrogels showed decreased F-actin expressions after LIUS stimulation (Supplementary Fig. 8), immunofluorescence stained CD44 showed high expression in HA hydrogel only, and LIUS stimulation further increased the CD44 expression to two folds (Fig. 4a).On the contrary, notable increased expression of CD44 were not observed from PEG and GEL hydrogels before and after the LIUS stimulation, as the CD44 expressions were ~16 % from the HA hydrogel group.PEG hydrogel showed significant CD44 change after LIUS stimulation, but increased CD44 expression from PEG hydrogel with LIUS stimulus is too minimal (~20 % of HA).These different hydrogels were further analyzed for the reprogramming efficiency into iPSCs by protein expressions of pluripotency.Fluorescence intensity of OCT4-GFP showed that LIUS stimulation only increased from HA hydrogel, while PEG and Gel did not change (Fig. 4b).Similarly, immunofluorescence stained SOX2 and NANOG also showed that LIUS stimulation regulated almost 3.6 fold increase from HA hydrogel, while other hydrogels remained its expressions (Fig. 4c and d).These results imply that HA ECM is necessary during LIUS stimulation to regulate CD44 expression and ultimately increase reprogramming efficiencies into iPSCs. 
To further examine if LIUS solely improves the reprogramming efficiency of iPSCs without any CD44 modulations, we used downregulation of CD44 and performed the reprogramming with 20 min of LIUS application.In this experiment, shRNA was used to establish stably downregulated CD44-deficient OG-MEFs (shCD44) (Supplementary Figs.9a-c).After the cells were characterized with CD44 downregulation, they were reprogrammed into iPSCs under 20 min of LIUS stimulation.Compared to the control group (shCon), the shCD44 group showed a dramatic reduction in OCT4-GFP expressions throughout reprogramming under LIUS stimulation (Fig. 5a).At day 21, the shCD44 cells demonstrated reduced number of colonies and fluorescence intensities of OCT4-GFP (Fig. 5b and c).LIUS mediated phosphorylation of downstream signaling molecules (STAT3 and AKT) was observed to be downregulated as CD44 was downregulated (Fig. 5d).The analyzed pluripotent markers also showed reduced mRNA and protein expressions when CD44 was downregulated (Fig. 5e and f).These results suggest that LIUS stimulation regulates the reprogramming efficiency of iPSCs mediated by the expression of CD44 interacting with the HA microenvironment. iPSC reprogramming applications of LIUS stimulation in HA microenvironment in human cell and in vivo condition Results have shown that LIUS stimulation in HA microenvironment system enhances cellular reprogramming efficiencies in mouse cell, and we applied this system to human primary cells for increased iPSC production.For the reprogramming process of human primary cells, ASCs were isolated from patients and introduced retrovirally with Yamanaka's four factors.After encapsulation of transduced cells in HA hydrogel, LIUS was introduced every other day for a given time period (Fig. 6a).After the observation of viable LIUS conditions in human ASCs, up to 20 minutes of LIUS were stimulated for the analysis (Supplementary Fig. 10).Reprogrammed cells were stained with pluripotent protein markers to examine the effect of LIUS stimulation.Immunofluorescence images show that more pluripotent expressions were observed from the human ASCs as the LIUS stimulations increased (Fig. 6b and c).Along with the OCT4 and NANOG, pluripotent markers for both mouse and human iPSCs, SSEA4 and TRA-1-60 proteins, which solely present in human iPSCs [60], were also increased from LIUS stimulation.Protein expressions were quantified its intensities and showed that all the analyzed pluripotent markers are significantly increased at 20 min of LIUS stimulation (up to 3.27, 3.96, 3.89, and 2.22 times higher intensities of OCT4, SSEA4, NANOG, and TRA-1-60 proteins were quantified in LIUS, respectively) (Fig. 6d).Number of colonies formed were also indicated that 20 min of LIUS significantly increased in number (Fig. 6e), presenting that LIUS/HA hydrogel system can be applied to the reprogramming process in human cell. Common procedures for cellular reprogramming are performed in vitro, understanding the cell fate specification and plasticity under controlled environment.To further utilize the LIUS/HA hydrogel technique in regenerative medicine, we investigated the reprogramming efficiency of LIUS in vivo environment.For LIUS stimulation in vivo, ultrasound therapy device was used to deliver sound waves in vivo.First, the ultrasound therapy device showed significant cytotoxicity of OG-MEFs from 20 min (Supplementary Fig. 
11).Therefore, LIUS stimulation of the device was set to 10 min throughout the process.After the encapsulation of OG-MEFs under reprogramming process, HA hydrogel was transplanted to a mouse and further processed with LIUS stimulation until analysis (Fig. 6f).The immunofluorescent staining of pluripotent markers showed the elevated expression from LIUS stimulation (Fig. 6g-i).The presented result is meaningful that the LIUS/HA system can solely enhance the reprogramming process within in vivo condition.The relative fluorescence intensities also verify that expression of OCT4, NANOG, and SOX2 increased to 3.87, 1.77, and 2.33 times, respectively (Fig. 6j).Therefore, LIUS stimulation under HA hydrogel can be utilized in vivo reprogramming process to enhance reprogramming efficiency. Discussion Cell reprogramming technique has been highlighted as a promising tool for cell therapy and biomedical applications, yet key challenges still remain unsolved for its clinical use.Cell reprogramming is a complex process comprising some key events to convert somatic cells to iPSCs, and these key hallmarks include metabolic changes, inhibition of senescence, epigenetic modifications, morphological transitions, and pluripotency [55,61,62].Bioengineering technologies have offered multiple approaches to enhance cell reprogramming process into desired cell types and numbers.In specific, engineering biomaterials to modulate biochemical and biophysical cues in a microenvironment have expanded the possibility to further control cell fate and function.Importance of microenvironment is a key regulating factor for cell behavior that it mimics the in vivo-like extracellular environment producing optimal biochemical and biophysical cues.Previous finding has shown that composition of ECM can impact reprogramming process, and it can be further modulated with biophysical cues [53,63].Biophysical factors directly influence the cell surfaces, changing the phenotypic transitions by cytoskeletal rearrangements.Furthermore, biophysical changes can modulate the chromatin structure by regulating the size, shape, and stiffness of the nucleus [64][65][66].These biophysical modulations can lead to regulation of transmembrane localization, signaling pathway, and nuclear localization that ultimately impact expression of transcription factors.LIUS, another stimulus to regulate biophysical cues, is a noninvasive acoustic wave that is well known for its safety for various cells, including fibroblast [67,68] and adipose-derived stem cells [69][70][71].Although biophysical regulation impacting on cells or extracellular matrices are well-researched in tissue engineering and cell differentiation, there has been no studies exploring the reprogram cells into iPSCs clarifying the relationship of LIUS as biophysical regulation to microenvironments. 
Here, we have presented LIUS as a new biophysical factor that can stimulates cell to promote reprogramming process of somatic cells into iPSCs throughout the MET, epigenetic modulation, and pluripotency expressions.LIUS directly alters cells to temporarily modulate cellular plasticity in cytoskeletal structure and membrane fluidity to facilitate actin remodeling and provide higher interaction with ECM components around the 3D microenvironment.3D cell culture system is used in this study because it is advantageous over conventional 2D culture to observe the precise efficacy of LIUS stimulation, as the 3D system resembles cellular phenotypes, structures, and adhesion kinetics observed in vivo.Previous reports have indicated different regulatory mechanisms of mechanotransduction and cell stimulation between 2D and 3D [72,73].Specifically, 2D culture generates forced polarity of cells, inducing varied biophysical modulations of LIUS or ECM interactions [74].Through the LIUS stimulation, cytoskeletal remodeling and interaction between HA and CD44 has indicated as key factors to increase reprogramming efficiency of iPSCs, verifying that the effect of LIUS is highly influenced under optimal 3D microenvironment to improve cell reprogramming.As a result, biophysical modulation of LIUS stimulation within 3D microenvironment can substantially increase iPSC reprogramming efficiency up to 15-fold from traditional methods.The application potentials of LIUS are further confirmed with human primary cells isolated from human patients and in vivo condition that reprogramming efficiencies of iPSCs can be upregulated. We suggest that LIUS directly initiates the cellular plasticity of a cell during reprogramming process.While LIUS induces insufficient forces to affect the mechanical properties of microenvironments [75,76], LIUS stimulates the intracellular cytoskeletal structure as a biophysical change, which initiates cell transition and cell-matrix interactions surrounding its microenvironments [77,78].The report indicated that ultrasound-induced strains break down the intracellular cytoskeletal filaments under viable conditions [50], and the softening of the actin stress filaments affects cell mechanosensing and cell properties [79].Mechanical modulations alter cytoskeletal remodeling of cells and biochemically shift signaling pathways to regulate stem cell fate [80,81].Studies indicate that the stiffness of iPSCs is soft compared to those of fibroblasts and human-adipose-derived stromal cells that are not pluripotent [82].Our results showed that LIUS reduces cytoskeletal structures of stimulated somatic cells, followed by increased membrane fluidity to soften the cell rigidity.Cytoskeletal structures were then remodeled to signal the focal adhesion pathway, which is responsible for the activation of STAT3 [83,84] and highly influenced cell reprogramming process in cell transition and epigenetic modification.Choi et al. 
validated the relationship between F-actin stress fibers and MET during cell reprogramming in that F-actin stress fibers reorganize as cells form the epithelium in the morphology [25].Other studies have also indicated the role of mechanical modulation in epigenetic changes during cell reprogramming, and biophysical cues can induce actin depolymerization and reorganization to remodel cytoskeletal structure and transmit focal adhesion signaling for iPSC reprogramming [85,86].The presented results correlate that cytoskeletal rearrangement and cellular rigidity from LIUS promotes cell transitions towards epithelial-like cells and epigenetic modification towards the reprogramming acceleration into iPSCs. In addition to cell transitions by cytoskeletal disruption of LIUS, it is also known to influence the membrane fluidity of cells and mobility of the membrane receptor proteins (CD44) that interact with the surrounding biomaterials (HA).The acoustic force exerted by LIUS induces D. Kim et al. membrane fluidity of the cells, and the transmembrane receptors increase its mobility within the cell membrane.LIUS is already known to induce membrane fluidization by dynamic cortical F-actin cytoskeleton, allowing high cellular plasticity [87].Different lineages from pluripotent to differentiated cells show variable cell membrane fluidities.As pluripotent stem cells lose their pluripotency towards the differentiated state, membrane fluidization decreases and the lipid structure gets ordered [88].When cell membranes are disordered, the membrane receptors are less confined and have increased mobility, such that they encounter targets more easily [89].Our results showed the cells surrounded by the HA substratum further upregulated the CD44 expression as a result from the LIUS stimulation, and it enhances the reprogramming efficiency of iPSCs.Interestingly, LIUS stimulation under removed HA substratum or inhibited CD44 expression did not increase pluripotency after cell reprogramming.Thus, the mobility of CD44 in the cell membrane is affected by LIUS stimulation in that more HA engages with the less confined CD44.A related study also mentioned that HA/CD44 binding is dependent on the spatial alignments of HA and CD44, and increased flexibility could facilitate the binding process [90].Our investigations of LIUS uncovered the enhancement of cellular expression of CD44 under HA microenvironments that accelerates reprogramming efficiency, and it is possible to further modify LIUS stimulation for increased cellular interaction with other ECM microenvironments. 
In our system, the HA hydrogel improved the reprogramming efficiency of iPSCs through the interaction of HA with the CD44 transmembrane receptor, which initiates reprogramming via enhanced MET, epigenetic, and pluripotent expressions [53]. Reports have indicated that HA acts as an adhesive bridging molecule between cells that express CD44 [91], and CD44 expression is associated with several key signaling molecules, such as STAT3 and AKT, that are linked to transcription factors [92][93][94]. Our results also indicated that high expression of CD44 induced phosphorylation of STAT3 and AKT proteins for activation. Each signaling molecule is highly important for pluripotency transcription markers, as STAT3 and AKT drive transcriptional activation of the pluripotent markers KLF4, NANOG, SOX2, and OCT4 [95][96][97]. Phosphorylation of STAT3 at Ser727 has been found to regulate MET, and our results likewise indicate that CD44-induced STAT3 phosphorylation at Ser727 increased the epithelial marker over the mesenchymal marker. Therefore, increased CD44 expression by LIUS also upregulated MET, epigenetic, and pluripotent markers. Additional studies on other intracellular signaling pathways are necessary to clarify the effects of LIUS stimulation in a 3D microenvironment during cell reprogramming, but the presented results imply that HA-CD44 signaling activates the downstream molecules STAT3 and AKT to regulate transcription factors facilitating the reprogramming process.

Although the LIUS/HA hydrogel system showed enhanced cell reprogramming efficiencies in mouse somatic cells, reproducible effects in human primary cells and under in vivo conditions remained to be validated. Since cell reprogramming is intended to convert patient-derived cells into iPSCs, the reprogramming efficiency of LIUS stimulation in HA hydrogel should be verified in human cells. Human primary cells, isolated from patient tissue, also yielded a high number of cells reprogrammed into iPSCs. The larger number of reprogrammed cells from human tissue demonstrates the potential for utilizing this method in clinical therapy and tissue regeneration. Moreover, LIUS stimulation in HA hydrogel was applied in an in vivo environment. Recently, in vivo reprogramming has gained much attention as a way to apply the reprogramming technique directly to regenerative medicine applications [98]. However, generating reprogrammed cells in vivo still raises challenges related to safety considerations and the constraint of limited conversion efficiency. HA hydrogels and LIUS are well recognized for safety, as both are widely utilized in clinical applications. The functional ability of these biocompatible materials to increase reprogramming efficiency was further validated by transplanting HA-encapsulated mouse cells and stimulating them with LIUS. Pluripotency expression in mouse tissues showed more pluripotent proteins expressed after LIUS stimulation. The results are meaningful in that previous cell reprogramming methods often encounter reduced reprogramming efficiencies when translated in vivo [99]. In contrast, the LIUS/HA system showed increased reprogramming efficiencies under both in vitro and in vivo conditions. This result can best be explained by the HA hydrogel forming a surrounding microenvironment structure that mimics the in vivo setting of reprogrammed cells, so that equivalent regulatory effects were induced in either condition when stimulated with LIUS. Other regulatory factors may be involved in LIUS during the reprogramming process in vivo, given the complexity of
physiological responses. Nevertheless, the LIUS stimulation presented here can be proposed as a potential tool for enhancing cell reprogramming systems. LIUS stimulation can also be extended to other in vivo cell reprogramming studies to demonstrate its applicability to different cell types, such as neuronal and musculoskeletal cells, as LIUS has shown potential for regenerating musculoskeletal and nerve systems [100,101]. Therefore, the presented LIUS/HA cell reprogramming technique can be suggested as a significant platform for various reprogramming applications in both human primary cells and in vivo conditions.

In this study, we have suggested methods to increase the reprogramming efficiency of iPSCs via modulation of the biophysical factor LIUS in a 3D hydrogel system. While initial exposure to ultrasound has been reported to confer pluripotent potential [49], no clear understanding of the mechanisms underlying the relationship between ultrasound and reprogramming had been elucidated. The present study uses LIUS to modulate cellular responses within a 3D hydrogel system. Within viable conditions, increasing the LIUS exposure time facilitates the reprogramming efficiency of iPSCs by upregulating pluripotent expressions. Moreover, LIUS stimulation enhances the MET and epigenetic markers, which indicate the initial changes during reprogramming. These changes were caused by LIUS activating cytoskeletal rearrangements and the mobility of the cell membrane and transmembrane proteins, which then interact readily with surrounding molecules. Utilization of LIUS as a biophysical stimulus holds many advantages for cellular reprogramming studies, as the reprogramming efficiency achieved with LIUS is as high as those of other biophysical cues. Furthermore, LIUS can be applied in other reprogramming processes (direct reprogramming or in vivo reprogramming) under modified microenvironments, as other mechanical stimulants have already shown such potential in in vivo reprogramming [102,103]. This noninvasive biophysical LIUS stimulation represents a novel research area in bioengineering and mechanotransduction that can be expanded into tissue engineering and in vivo reprogramming.

Conclusions

We demonstrated a new reprogramming strategy involving noncontact ultrasonic stimulation in a 3D hydrogel system, whereby Yamanaka's four-factor-induced reprogramming efficacy was improved. Integration of LIUS stimulation with the established 3D hydrogel system enhances the reprogramming efficiency, as more ultrasound stimulation leads to higher cellular reprogramming into iPSCs. LIUS stimulation also degrades the cytoskeletal structure, followed by increased fluidity and mobility of the cell membrane. These changes lead to increased expression of CD44 only within the HA microenvironment, and the downstream signaling pathways of STAT3 and AKT led to the regulation of MET, epigenetics, and pluripotency during reprogramming. The established 3D microenvironment system integrated with combinatorial LIUS is thus suggested as a new platform for cell reprogramming studies, and its capability for cell reprogramming can be applied in both human primary cells and in vivo tissue as well. Accordingly, the strategy presented herein is expected to be beneficial for various biological and biomedical applications.
Preparation of photo-crosslinkable polymers

Photo-crosslinkable polymers were synthesized as previously reported [53]. In brief, methacrylated hyaluronic acid (MAHA) was synthesized by adding 1.5 % v/v of methacrylic anhydride to 1 % w/v hyaluronic acid (MW = 500 kDa, Bioland, South Korea) in aqueous solution (Supplementary Fig. 1a). While the mixture was stirred for 24 h in the dark at 4 °C, 1.5 % v/v of 5 N NaOH was added to adjust the pH. The mixture was dialyzed for 3 days against deionized water using a dialysis membrane with a molecular weight cutoff of 100 kDa. After removing the remaining impurities, the mixture was lyophilized before use. The synthesized MAHA was analyzed for the relative peaks of the methacrylate protons (peaks at ~6.1, 5.6, and 1.85 ppm) using proton nuclear magnetic resonance spectroscopy (1H-NMR; 500 MHz FT-NMR Spectrometry, Bruker, USA) (Supplementary Fig. 1b). Methacrylated gelatin was synthesized from 10 % w/v gelatin type B (Sigma Aldrich, St. Louis, USA) dissolved in phosphate buffer (25 mM potassium phosphate monobasic and 170 mM sodium phosphate dibasic) at 50 °C. While stirring, 10 % v/v methacrylic anhydride solution was added dropwise and reacted for 1 h at pH 7.5 in the dark. The reaction solution was then filtered with a 100 μm cell strainer, dialyzed against distilled water for 3 days in a 10 kDa membrane tube, and lyophilized to obtain the final product.

Hydrogel characterization

Swelling properties: The swelling ratio of the HA hydrogel was measured by immersing a sample in DPBS under LIUS at 37 °C for 3 days. Before weighing the swollen sample, the remaining water on the sample surface was blotted with paper. The swelling ratio at each time point was defined as the weight ratio of the net liquid uptake to the dried hydrogel.

Rheological characterization: To characterize the mechanical properties of the HA hydrogel, the sample was measured using a HAAKE Rheostress1 (Thermo Scientific, USA). The volume of each sample was 500 μL. The oscillatory frequency sweep was applied at a constant oscillatory stress of 0.1 Pa for frequencies from 0.05 to 20 Hz. For all tests, the temperature was maintained at 37 °C.

Cell culture

Mouse embryonic fibroblasts (MEFs) were modified for OCT4-GFP expression (OG-MEF). The OG-MEFs were obtained from Dr. TaeHee Lee (Sejong University, South Korea) and incubated in a growth medium composed of Dulbecco's modified Eagle's medium (DMEM, Hyclone, USA) with 10 % v/v fetal bovine serum (FBS, Hyclone) and 1 % v/v penicillin/streptomycin (P/S, Hyclone). The cells were maintained at 37 °C in a humidified 5 % CO2 incubator and passaged with 0.25 % trypsin/EDTA (Hyclone). For the F-actin modulation test, 2 μM cytochalasin D (Sigma Aldrich) or 0.05 μM phalloidin (Sigma Aldrich) was applied for 30 minutes. Human primary adipose-derived stem cells (ASCs) were obtained from the human infrapatellar fat pad of patients' knees, with approval of the Ethics Committee at Dongguk University (IRB No. DUIRB-202210-18). Briefly, the human infrapatellar fat pad was washed three times with DPBS containing 2 % P/S. The washed tissue was digested with 0.5 mg mL⁻¹ collagenase (Sigma Aldrich) and 1 % P/S diluted in low-glucose DMEM at 37 °C for 45 min. The digested tissue was then processed to isolate cell pellets by filtration (45 μm strainer) and centrifugation (1000×g for 5 min). The ASC pellet was resuspended and cultured in a growth medium composed of low-glucose DMEM with 10 % fetal bovine serum and 1 % P/S at 37 °C in a humidified 5 % CO2 incubator.
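The swelling ratio defined above (net liquid uptake relative to the dried hydrogel) reduces to a simple weight calculation. The short sketch below only illustrates that arithmetic; the function name and example weights are hypothetical and are not values from the study.

# Illustrative calculation of the swelling ratio as defined in the text:
# net liquid uptake (swollen weight minus dry weight) divided by the dry weight.

def swelling_ratio(swollen_weight_mg: float, dry_weight_mg: float) -> float:
    """Return the swelling ratio of a hydrogel sample."""
    if dry_weight_mg <= 0:
        raise ValueError("dry weight must be positive")
    return (swollen_weight_mg - dry_weight_mg) / dry_weight_mg

# Hypothetical example: a 12 mg dry gel that weighs 180 mg after swelling.
print(swelling_ratio(180.0, 12.0))  # 14.0, i.e., 14 g of liquid taken up per g of dry gel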
Generation of iPSCs

On the day before transfection, GP2-293 packaging cells (Clontech, Germany) were seeded at a density of 4 × 10⁶ cells per 100 mm dish; pMXs-hOCT4, pMXs-hSOX2, pMXs-hKLF4, and pMXs-hc-MYC (Addgene, USA) were transfected using the retroviral packaging vector VSV-G (Thermo Scientific) and Lipofectamine 2000 reagent (Thermo Scientific). The collected medium was centrifuged at 1,300 rpm for 3 min to remove debris, and the supernatant was filtered using a 0.45 μm syringe filter. Then, the filtered supernatant was loaded on an Amicon® Ultra-15 10 kDa Centrifugal Filter (Merck, USA) and centrifuged at 4,000×g and 4 °C for 20 min. The resulting solution was resuspended in fresh growth medium containing 8 μg mL⁻¹ polybrene (Sigma Aldrich). OG-MEFs and ASCs were pre-seeded at densities of 1 × 10⁶ and 2 × 10⁵ cells, respectively, in 100 mm dishes before transduction. About 10 mL of the growth medium containing the retrovirus and polybrene was added to each dish, and the medium was replaced with new growth medium after 24 h. After 48 h for complete expression, the transduction efficiencies of the OG-MEFs (>90 %) and ASCs (>50 %) were confirmed, and the detached cells were suspended in each hydrogel solution at a density of 2 × 10⁶ cells mL⁻¹. The medium of the transduced-cell-encapsulated hydrogel was replaced with iPSC medium every day.

Ultrasound stimulation

Ultrasonic stimulation was applied using a Digital Ultrasonic Set (Daehan, Korea). Each OG-MEF-encapsulated HA hydrogel was collected in a 50 mL tube with 2 mL of the medium and exposed to LIUS for 0, 5, 10, or 20 min at a frequency of 40 kHz and an intensity of 300 mW cm⁻² on alternate days during the culture period.

Live & dead staining

Live/dead fluorescence staining was performed to estimate the cytotoxicity of ultrasonic stimulation. First, the HA hydrogel was washed with DPBS and stained with 2 μM calcein AM (Thermo Scientific) and 4 μM ethidium homodimer-1 (Thermo Scientific) in DPBS solution for 30 min. Fluorescence images of the live (green) and dead (red) cells were then observed using a Cytation3 (Biotek, USA).

Gene expression analysis

For quantitative real-time polymerase chain reaction (qRT-PCR) analysis, the HA hydrogels were frozen in liquid nitrogen and disrupted with a homogenizer in 200 μL of TRIzol™ (Thermo Scientific). After complete homogenization, an additional 800 μL of TRIzol and 200 μL of chloroform were added. The samples were then vortexed and centrifuged at 13,000 rpm and 4 °C for 20 min. The supernatant was mixed with an equal amount of isopropanol, and the mixture was centrifuged at 13,000 rpm and 4 °C for 20 min. The supernatant was removed from the pellet, which was washed with 75 % ethyl alcohol, followed by another round of centrifugation at 13,000 rpm and 4 °C for 10 min. The pellets were completely dried and resuspended in RNase-free water (Thermo Scientific). RNA quantification was then performed using the Cytation3. For cDNA synthesis, complementary DNA was synthesized from 1 μg of total RNA using the PrimeScript™ RT reagent kit (Takara, Japan). Then, qRT-PCR was performed using the Power SYBR® Green PCR Master Mix (Applied Biosystems, UK); the PCR conditions were 95 °C for 20 s, followed by 40 cycles of 95 °C for 3 s and 60 °C for 30 s, and a melting curve stage of 95 °C for 15 s and 60 °C for 60 s. The primer sequences for the PCR are shown in Supplementary Table 1.
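The text above does not state how the qRT-PCR signals were converted into relative expression values, so the following is only a generic illustration of one common scheme, the 2^(-ΔΔCt) method against a housekeeping gene and an unstimulated control; the GAPDH reference and the example Ct values are assumptions, not data from the study.

def relative_expression(ct_target, ct_reference, ct_target_ctrl, ct_reference_ctrl):
    """Illustrative 2^(-ddCt) fold change of a target gene versus a control sample."""
    d_ct_sample = ct_target - ct_reference            # normalize to housekeeping gene
    d_ct_control = ct_target_ctrl - ct_reference_ctrl
    dd_ct = d_ct_sample - d_ct_control
    return 2.0 ** (-dd_ct)

# Hypothetical Ct values: Oct4 vs. GAPDH in a LIUS-stimulated and an unstimulated sample.
fold_change = relative_expression(ct_target=24.1, ct_reference=18.0,
                                  ct_target_ctrl=27.3, ct_reference_ctrl=18.2)
print(round(fold_change, 2))  # a value > 1 indicates upregulation relative to the control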
Western blot

Before western blotting, all hydrogel samples were washed three times with DPBS and frozen with liquid nitrogen. Then, the samples were disrupted with a homogenizer using 50 μL 5X RIPA buffer (Sigma Aldrich) supplemented with a protease inhibitor (Merck) and phosphatase inhibitor (Sigma Aldrich). The extract was centrifuged at 13,000 rpm and 4 °C for 20 min, and the supernatant was collected. The total protein concentration was quantified by bicinchoninic acid protein assay using the Pierce BCA Protein Assay Kit (Thermo Scientific). About 20 μg of each protein sample was separated by denaturing 10 % polyacrylamide gel electrophoresis. The separated proteins were transferred to nitrocellulose membranes and blocked with 5 % skim milk in tris-buffered saline with 0.05 % Tween-20 (TBS-T) for an hour. The membranes were incubated overnight at 4 °C with primary antibodies in TBS-T with 5 % bovine serum albumin (BSA). The membranes were then washed three times with TBS-T and incubated for 2 h at room temperature with appropriate horseradish-peroxidase-conjugated secondary antibodies diluted 1:5000 in 5 % skim milk TBS-T. After washing with TBS-T three times, the protein bands were detected using a Chemi-doc detection system (Bio-Rad, USA), and the images were visualized using Image Lab (Bio-Rad) software. Details of the primary and secondary antibodies are shown in Supplementary Table 2.

Immunofluorescence staining

The HA hydrogels were washed three times with DPBS, and the washed gels were fixed for an hour at room temperature with 4 % paraformaldehyde (Biosesang, Korea). The fixed hydrogels were permeabilized with 0.3 % (v/v) Triton X-100 in DPBS (PBS-T) at room temperature for 30 min. After blocking with 1 % (w/v) BSA in PBS-T for an hour at room temperature, the hydrogels were incubated in the primary antibody solution (1:200) overnight at 4 °C. The samples were then washed with DPBS and incubated in fluorescein-conjugated secondary antibodies (Thermo Scientific) diluted 1:200 in 1 % BSA in PBS-T for 2 h at room temperature under dark conditions. For F-actin staining, Texas Red-X phalloidin (Thermo Scientific) was used. The unbound antibodies were washed away with DPBS, and the samples were counterstained with 4′,6-diamidino-2-phenylindole (DAPI; Thermo Scientific) to observe the cellular nuclei. The fluorescence images were observed using the Cytation3.

Flow cytometry

The surface antigens on the cells were analyzed via flow cytometry. Cells encapsulated in the hydrogels were washed twice with DPBS and dissociated using hyaluronidase from bovine testes Type I-S (Sigma Aldrich) before collection by centrifugation at 1,300 rpm for 3 min. The cell pellets were washed twice with DPBS and blocked with 2 % FBS in DPBS solution (FACS buffer). Specific antibodies (diluted 1:100 in FACS buffer) were incubated at 4 °C for 30 min and washed three times with FACS buffer. Finally, fluorescence was detected using a BD Accuri C6 (BD Bioscience, Japan). The percentage of expressed cell surface antigens was calculated for 10,000 gated cell events. The antibodies used in these experiments were anti-CD44 (103011, Biolegend, UK) and anti-SSEA1 (125608, Biolegend).
Cell membrane fluidity

The cells were analyzed for membrane fluidity via a membrane fluidity assay (Abcam, UK) according to the manufacturer's instructions. In brief, cells stimulated with LIUS were labeled with pyrenedecanoic acid (PDA) for an hour at 25 °C under dark conditions. The unincorporated PDA was washed away, and the ratio of monomer (Em 400 nm) to excimer (Em 470 nm) fluorescence was normalized to that of the unstimulated cells.

CD44 knockdown

CD44 RNA was downregulated in the OG-MEFs by retroviral transduction to introduce a predesigned CD44 mouse shRNA in a retroviral vector (Origene, USA) targeting CD44. The CD44 mouse shRNA retroviral vector was purchased in annealed and purified form, ready to be transfected into the GP2-293 packaging cells for retroviral synthesis. For the transduction, 1 × 10⁶ cells were plated in a 100 mm dish, and the adhered cells were transduced with the synthesized retrovirus using polybrene. The OG-MEFs were transduced with CD44 mouse shRNA for 24 h, after which the medium was replaced with new growth medium.

Electron microscopy

Cells cultured on the HA hydrogel with LIUS stimulation were harvested and washed in PBS, fixed with 2.5 % glutaraldehyde solution, and washed with distilled water. The hydrogel samples were lyophilized and coated with a 10-nm-thick layer of platinum/palladium using a sputter coater (E−1010, Hitachi, Japan). The morphology of the adhered cells on the hydrogels was observed by scanning electron microscopy (SEM; S-3000 N, Hitachi).

Ultrasound stimulation in vivo

Generation of iPSCs with LIUS stimulation under in vivo conditions was also analyzed, as approved by the Institutional Animal Care and Use Committee of Dongguk University (IACUC-2021-018-3), in accordance with ARRIVE guidelines (https://arriveguidelines.org/arrive-guidelines). The transduced-OG-MEF-encapsulated hydrogel was transplanted into a single subcutaneous space of randomly selected 6-8-week-old male BALB/c nude mice (Orient Bio, Inc., Korea). Each hydrogel was stimulated with ultrasound therapy (ST-10A, StraTek Co., Ltd., Korea). Ultrasound was applied on top of the hydrogel-transplanted skin at a frequency of 1 MHz and an intensity of 300 mW cm⁻² on alternate days. On the day of analysis, each hydrogel was isolated from the skin and analyzed.

Statistical analysis

In all figures, exact n values represent independent experiments performed for each condition. Statistical analysis of the results was performed using GraphPad Prism ver. 8.1.0 (GraphPad Software, USA). All data shown in this study are presented as means ± standard deviation (SD). Two-tailed Student's t-tests were used for comparisons between two experimental groups. One-way ANOVA with Tukey's multiple comparison post-test was implemented to compare the samples. Two-way ANOVA with Bonferroni post hoc testing was used for comparisons among more than two experimental groups with two varying parameters. The statistical significance values were set at n.s., not significant, *P < 0.05, **P < 0.01, ***P < 0.001, and ****P < 0.0001.
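The group comparisons described above (two-group t-tests and one-way ANOVA followed by Tukey's test) map directly onto standard statistical routines. The sketch below is only a generic Python illustration of that workflow with scipy and statsmodels; the group labels and measurement arrays are hypothetical stand-ins, not data from the study, and the original analysis was performed in GraphPad Prism.

import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical fluorescence-intensity replicates (n = 3) for four LIUS exposure times.
groups = {
    "0 min": np.array([1.00, 1.05, 0.97]),
    "5 min": np.array([1.32, 1.28, 1.40]),
    "10 min": np.array([1.75, 1.69, 1.81]),
    "20 min": np.array([2.10, 2.22, 2.05]),
}

# Two-group comparison: two-tailed Student's t-test (0 min vs. 20 min).
t_stat, p_two_group = stats.ttest_ind(groups["0 min"], groups["20 min"])

# Multi-group comparison: one-way ANOVA followed by Tukey's multiple comparison post-test.
f_stat, p_anova = stats.f_oneway(*groups.values())
values = np.concatenate(list(groups.values()))
labels = np.repeat(list(groups.keys()), [len(v) for v in groups.values()])
tukey = pairwise_tukeyhsd(values, labels, alpha=0.05)

print(f"t-test p = {p_two_group:.4f}, ANOVA p = {p_anova:.4f}")
print(tukey.summary())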
Fig. 3 | LIUS modulates cytoskeletal rearrangement and CD44 interaction. a, Immunofluorescence staining of F-actin after LIUS stimulation (scale bar, 50 μm) and b, its graphically quantified intensity (n = 3). Mechanical properties of the hydrogel under LIUS application were analyzed by c, shear modulus and d, swelling ratio (n = 3). e, Time-dependent expression of F-actin under LIUS was analyzed by the fluorescence intensities of F-actin (scale bar, 50 μm). f, Protein expression of phosphorylated FAK and total FAK under LIUS (D1; n = 3). g, mRNA expression of Oct4 and Nanog after F-actin modulation by cytochalasin D and phalloidin treatment (D1; n = 3). h, Cell membrane fluidity under LIUS was measured by the relative fluorescence intensity of the excimer to monomer ratio (n = 3). i, Immunofluorescence staining of CD44 after LIUS stimulation (D1; scale bar, 100 μm) and j, its intensities (n = 3). k, Expression of CD44 under LIUS was analyzed by qPCR (D1; n = 3). l, Protein expression of CD44 and signaling molecules was analyzed with western blotting. Relative protein expressions of m, CD44 and n, the signaling molecules STAT3 and AKT were measured (D1; n = 3). All data are expressed as the mean ± s.d. Statistical significance was determined with one-way ANOVA followed by Tukey's multiple comparison post test. n.s., not significant, ****P < 0.0001.

Fig. 4 | LIUS stimulation under HA hydrogel mediates initial expression of CD44 and affects reprogramming efficiency. a, Immunofluorescence staining and intensities of CD44 after LIUS stimulation under different microenvironment systems (scale bar, 300 μm; n = 3). The reprogramming efficiency of the LIUS system under different hydrogels was analyzed by the immunofluorescence expression and intensities of the pluripotent markers b, OCT4-GFP, c, NANOG, and d, SOX2 (scale bar, 300 μm; n = 3). All data are expressed as the mean ± s.d. Statistical significance was determined with two-sided t-tests for comparisons between two experimental groups. n.s., not significant, ****P < 0.0001.

Fig. 6 | LIUS stimulation enhances reprogramming efficiencies into iPSCs in different applications. a, Schematic representation of the reprogramming of human adipose-derived stem cells (hASCs) into iPSCs under ultrasound stimulation. Immunofluorescence staining of b, OCT4 (green) and SSEA4 (red), and c, NANOG (green) and TRA-1-60 (red) of hASCs under LIUS stimulation after reprogramming (scale bar, 300 μm). d, Quantification of fluorescence intensity in each immunofluorescence-stained cell sample, and e, number of iPSC colonies derived from hASCs (n = 3). f, Schematic representation of the reprogramming of OG-MEFs into iPSCs under ultrasound stimulation in vivo. g, Fluorescence images of OCT4-GFP and its quantitative intensities from iPSC reprogramming of OG-MEFs in vivo (scale bar, 100 μm; n = 3). Immunofluorescence images of the pluripotent markers h, NANOG and i, SOX2 from iPSCs derived from OG-MEFs under in vivo LIUS stimulation in HA hydrogel (scale bar, 100 μm), and j, their quantified fluorescence intensities (n = 3). All data are expressed as the mean ± s.d. Statistical significance was determined with one-way ANOVA (d,e) followed by Tukey's multiple comparison post test or two-sided t-tests (f,i). n.s., not significant, ****P < 0.0001.
2024-05-11T15:04:55.241Z
2024-05-09T00:00:00.000
{ "year": 2024, "sha1": "10255d1bd78ac5e03db1905d19dfec014c76119c", "oa_license": "CCBYNCND", "oa_url": "https://doi.org/10.1016/j.bioactmat.2024.05.011", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "7f23dc55430fb08d376d521ce7f756157d292cb4", "s2fieldsofstudy": [ "Medicine", "Engineering", "Materials Science" ], "extfieldsofstudy": [ "Medicine" ] }
13881326
pes2o/s2orc
v3-fos-license
The myofilament lattice: studies on isolated fibers. II. The effects of osmotic strength, ionic concentration, and pH upon the unit-cell volume.

The effects of osmotic concentration, ionic strength, and pH on the myofilament lattice spacing of intact and skinned single fibers from the walking leg of crayfish (Orconectes) were determined by electron microscopy and low-angle X-ray diffraction. Sarcomere lengths were determined by light diffraction. It is demonstrated that the interfilament spacing in the intact fiber is a function of the volume of the fiber. It is also shown that the interfilament spacing of the skinned (but not of the intact) fiber is affected in a predictable manner by ionic strength and pH insofar as these parameters affect the electrostatic repulsive forces between the filaments. From these combined observations it is demonstrated that the unit-cell volume of the in vivo myofilament lattice behaves in a manner similar to that described for liquid-crystalline solutions.

INTRODUCTION

Loeb (32) noted that even in extremely hyperosmotic solutions muscles lost a maximum of 20 % of the initial weight. Later, Overton (36) postulated that muscles behave as osmometers if allowances are made for dry weight and extracellular space. This osmotic behavior has been extensively studied in frog whole muscle (11, 22, 25, 44) and in frog single fibers (9, 16, 38, 41). Worthington (45) showed by X-ray diffraction that the separation between the myofilaments of insect muscle fibers increased upon exposure to hyposmotic media. It was suggested by Harris (24), from electron microscope observations, that osmotic swelling in the muscle occurred primarily between myofibrils and within cytoplasmic organelles; however, no reference to lattice volume was included. H. E. Huxley et al. (30) observed that frog fibers fixed for electron microscopy after exposure to hypertonic solutions appeared to have less space between the thick myofilaments than control fibers. Brandt et al. (12), also from an electron microscope study, reported that the spacing between the thick myofilaments in crayfish muscle fibers was doubled when the fibers were placed in a hypotonic medium before fixation, whereas the myofilaments became more closely packed in a hypertonic medium. Rome (40) demonstrated by X-ray diffraction that the myofilament lattice of frog muscle behaves as an osmometer with 38 % of the lattice volume osmotically inactive. This report demonstrates, using electron microscope and X-ray diffraction procedures, that the myofilament lattice of intact muscle fibers from the walking legs of crayfish (Orconectes) reflects the osmotic behavior of the whole muscle fiber. Experiments demonstrating that the interfilament spacing of skinned fibers is dependent upon the ionic composition and the pH of the internal medium are also described. Brief accounts of these results have already appeared (1, 3, 6, 7).

MATERIALS AND METHODS

Most of the experiments described herein were carried out in conjunction with or subsequent to those reported by April et al. (4), in which the apparatus and basic experimental procedure are described.
For fixation in NaCl or urea media, either the control saline solution was used throughout (a modified van Harreveld's crustacean saline solution) or it was replaced for an equilibration period of 30 min by a solution made hypo- or hypertonic by adjustment of the NaCl concentration or of the urea concentration. Subsequently, to prevent contraction, an identical solution containing 1 mg/ml procaine hydrochloride was substituted before fixation with 1 % OsO4, as described by April et al. (4). For fixation in KCl media the control saline solution was replaced for 3 min by an isosmotic 30 mM/liter KCl saline solution to lower the membrane potential so that no large contraction occurred when isosmotic 200 mM/liter KCl solution was subsequently perfused through the chamber. The muscle fibers were fixed 30 min later. Procaine pretreatment was unnecessary in this instance, for the membrane is depolarized below the contraction threshold by the KCl. The preparative technique and electron microscopy, as well as the techniques utilized to determine the unit-cell volumes of the myofilament lattice of the fixed and living fibers, are described in a previous paper (4). In the experiments undertaken to determine the effects of pH upon the in vivo myofilament lattice of the intact and skinned fibers, the pH of the medium was adjusted to within 0.02 of a pH unit with propionic acid or KOH. The medium for some of the intact fibers was 200 mM/liter potassium propionate, 13.5 mM/liter CaCl2, and 5 mM/liter Tris hydroxide. The medium for the remaining intact fibers and for all the skinned fibers was 200 mM/liter potassium propionate, 5 mM/liter MgCl2, 10 mM/liter K2-EGTA, 1 mM/liter adenosine triphosphate (ATP), and 20 mM/liter Tris hydroxide. In this and all following tables Ls means sarcomere length, and d(m-m) the distance between myosin filaments. All numbers in brackets refer to the number of experiments.

The osmotic behavior of the lattice is described by the Boyle-van't Hoff relation, P(V − b) = φnRT, where P is the osmolarity of the medium, V is the volume of the unit cell, b is the minimal unit-cell volume, φ is the osmotic coefficient, and nRT has the usual significance. From the calculated volume of the unit cell of the fixed fibers and the assumption that the lattice equivalent to the osmotic dead space is 29 % of the unit-cell volume, a curve can be generated which relates the reciprocal of the osmolarity to the interfilament distance. The experimental points closely approximate this theoretical curve (Fig. 1, lower curve). At the minimal unit-cell volume (6.2 × 10⁻³ µ³) the thick filaments would be spaced 273 Å apart (center-to-center) at the standard sarcomere length of 9.6 µ. The control unit-cell volume is 21.5 (±1.1) × 10⁻³ µ³, representing an interfilament separation of 507 (±13) Å at the standard sarcomere length used. The interfilament distances of single intact fibers also vary with additions or isosmotic substitutions of urea and KCl to the bathing medium. The unit-cell volumes of fibers after 4 min in a solution made hyperosmotic by addition of 400 mM/liter urea average 14.4 × 10⁻³ µ³, 67 % of the control unit-cell volume. The lattice volume averages 17.5 × 10⁻³ µ³ when the fibers are fixed 30 min after the introduction of the urea solution. Although the 30 min value of the unit-cell volume is still less than that of the control volume, the lattice tends to return to the control level with time.
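At a fixed sarcomere length, the unit-cell volume and the center-to-center thick-filament spacing quoted above are linked by the geometry of the hexagonal lattice, and the Boyle-van't Hoff relation then gives the spacing expected at any osmolarity. The Python sketch below is only an illustrative reconstruction of that arithmetic; it assumes a hexagonal unit cell with cross-sectional area (√3/2)·d² per thick filament, an assumption that reproduces the paired values given in the text (for example, 6.2 × 10⁻³ µ³ at 273 Å and 21.5 × 10⁻³ µ³ at about 507 Å for Ls = 9.6 µ).

import math

SARCOMERE_LENGTH_UM = 9.6          # standard sarcomere length used in the paper (µ)
HEX_FACTOR = math.sqrt(3.0) / 2.0  # cross-sectional area of a hexagonal unit cell = (√3/2) d²

def unit_cell_volume(d_angstrom: float, ls_um: float = SARCOMERE_LENGTH_UM) -> float:
    """Unit-cell volume (µ³) from center-to-center spacing d (Å) at sarcomere length ls (µ)."""
    d_um = d_angstrom * 1e-4
    return HEX_FACTOR * d_um**2 * ls_um

def spacing_from_volume(v_um3: float, ls_um: float = SARCOMERE_LENGTH_UM) -> float:
    """Center-to-center spacing (Å) from unit-cell volume (µ³)."""
    return math.sqrt(v_um3 / (HEX_FACTOR * ls_um)) * 1e4

# Boyle-van't Hoff behavior: V(P) = b + (V0 - b) * (P0 / P), with dead-space fraction b/V0.
def spacing_vs_osmolarity(rel_osmolarity: float, v0_um3: float, dead_fraction: float) -> float:
    v = dead_fraction * v0_um3 + (1.0 - dead_fraction) * v0_um3 / rel_osmolarity
    return spacing_from_volume(v)

print(round(spacing_from_volume(6.2e-3)))    # ~273 Å, the minimal lattice of fixed fibers
print(round(spacing_from_volume(21.5e-3)))   # ~507 Å, the control lattice of fixed fibers
# Living fiber at twice the control osmolarity, with 53 % of the lattice osmotically inactive:
print(round(spacing_vs_osmolarity(2.0, 24.1e-3, 0.53)))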
The unit-cell volumes average 28.5 × 10⁻³ µ³ in cells fixed after 30 min exposure to a hyposmotic (less 95 mM/liter NaCl plus 95 mM/liter KCl) saline solution, 133 % of the control volume.

LIVING FIBERS: The interfilament distances of living single fibers were determined by low-angle X-ray diffraction in bathing media of different osmolarities and tonicities. The results are listed in Table II. The equilibrium unit-cell volumes decrease when fibers are bathed in solutions made hyperosmotic and increase in solutions made hyposmotic by adjusting the NaCl concentration. The interfilament spacing is plotted against the osmolarity of the medium in Fig. 1. The relative unit-cell volume (V/V0) is plotted against the reciprocal of the relative osmolarity (P0/P) in Fig. 2 (solid circles). The line is the least squares regression fit for the X-ray data below P0/P = 1.3. The ordinate intercept (b/V0) is 53 % (±3 % SE of estimate). A curve relating the reciprocal of the osmolarity to the distance between the thick myofilaments can be calculated as before from the Boyle-van't Hoff equation, using the computed unit-cell volume of the living fiber at a sarcomere length of 9.6 µ, 24.1 (±1.7) × 10⁻³ µ³, and an assumed lattice equivalent to an osmotic dead space of 53 %. The experimental results at the osmolarities studied (Fig. 1, upper curve) fall within 1 SD of the mean of the control points. Thus, the equivalent nonosmotic volume of the myofilament lattice corresponds to an in vivo minimal unit-cell volume of 12.9 × 10⁻³ µ³ and represents a minimal in vivo interfilament spacing of 393 Å at the standard sarcomere length of 9.6 µ. When fibers are equilibrated in solutions made hypotonic either by deletion of 95 mM/liter NaCl or by substitution of 190 mM/liter urea for 95 mM/liter NaCl, the unit-cell volumes are 29.3 (±2.5) × 10⁻³ µ³ and 29.6 (±1.5) × 10⁻³ µ³, respectively, which is about 124 % of the control unit-cell volume. When fibers are equilibrated for 30 min in a solution made hypotonic, but kept isosmotic, by substitution of 95 mM/liter KCl for an equivalent amount of NaCl, the average unit-cell volume is 35.8 (±2.0) × 10⁻³ µ³, representing about 144 % of the volume of the control (Table II).

SKINNED FIBERS: The interfilament distances of skinned fibers bathed in media of various osmolarities were determined by low-angle X-ray diffraction and the data are presented in Tables III A and B. In fibers bathed in the potassium propionate medium there is an approximate 22 % increase in the unit-cell volume (29.8 × 10⁻³ µ³), which is followed by an additional 38 % increase in the unit-cell volume.

Lattice Parameters Determined by X-Ray Diffraction as a Function of Ionic Strength in Skinned Fibers

In one series of experiments the effects of ionic strength upon the myofilament lattice were investigated. A pair of single fibers isolated from the same bundle and attached to the same tendon were mounted together in the chamber, and one fiber was skinned. The interfilament distances of the intact and skinned fibers were determined sequentially by X-ray diffraction, and the results are listed, respectively, in Tables III A and B. The interfilament distances are plotted against the relative ionic strength of the medium in Fig. 3. Since the propionate is nonpermeant, ionic strength with respect to the intact fibers is practically equivalent to osmolarity, and indeed the behavior of the lattice of the intact fiber is accordant with the Boyle-van't Hoff relation.
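The 53 % ordinate intercept quoted above comes from a straight-line fit of relative unit-cell volume against the reciprocal of relative osmolarity, since the Boyle-van't Hoff relation predicts V/V0 = b/V0 + (1 − b/V0)(P0/P). The sketch below only illustrates that fitting step with numpy; the data points are hypothetical stand-ins, not the measurements behind Table II.

import numpy as np

# Hypothetical (P0/P, V/V0) pairs mimicking X-ray measurements below P0/P = 1.3.
rel_inverse_osmolarity = np.array([0.50, 0.67, 0.80, 1.00, 1.15, 1.30])
rel_unit_cell_volume   = np.array([0.77, 0.85, 0.90, 1.00, 1.07, 1.14])

# Least-squares line V/V0 = slope*(P0/P) + intercept; the intercept estimates b/V0,
# the osmotically inactive ("dead space") fraction of the lattice.
slope, intercept = np.polyfit(rel_inverse_osmolarity, rel_unit_cell_volume, 1)
print(f"estimated nonosmotic fraction b/V0 ~ {intercept:.2f}")
print(f"osmotically active fraction       ~ {slope:.2f}")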
The curve for the intact fibers is that calculated from the pressure-volume equation, using the volume of the unit cell in 200 mM/liter potassium propionate solution (29.8 × 10⁻³ µ³) and the in vivo nonosmotic volume of 53 %. The effect of the ionic strength upon the lattice of the skinned fiber is not only markedly less than that upon the intact fiber, but it is in the opposite direction. The data approximate a straight line fitted by the method of least squares. The lattice spacing of the skinned fiber is therefore directly proportional to the ionic strength, which is quite unlike the much stronger inverse dependence in the intact fiber, which is accountable to the osmolar concentration of the external medium. It is thus apparent that the myofilament lattice of the skinned fiber no longer behaves as an osmometer. The relative unit-cell volumes are plotted against the reciprocal of the relative ionic strength in Fig. 4. The lattice of the intact fiber behaves as an osmometer whereas that of the skinned fiber is independent of the osmolarity. At low osmolarities a limit to the capacity of the lattice of the intact fiber to expand is reached and the interfilament spacing becomes similar to that of the skinned fiber. The results are listed in Table IV A, and the interfilament spacing is plotted against the pH of the external medium in Fig. 5.

Figure caption: Unit-cell volume as a function of osmolarity and ionic strength. The relative unit-cell volumes of intact fibers (circles) and skinned fibers (triangles) as determined by X-ray diffraction are plotted against the same conditions expressed, respectively, as relative osmolarity and relative ionic strength. The disparity in the behavior as a function of osmolarity and ionic strength is evident. The curve for the intact fibers is fitted by the method of least squares, while that for the skinned fibers is fitted by eye. It is also evident that there is a restriction upon the amount to which the lattice can swell. The open symbols represent mean values, with the number of experiments and the standard deviation from the mean indicated.

Comparing the electron microscope and X-ray results, it is interesting to note that while the myofilament lattice volume shrinks during fixation by 11 % in the control medium, the minimal unit-cell volume shrinks by 51 %, determined by extrapolation to infinite osmolarity. It seems that the minimal unit-cell volume shrinks more than the cytoplasm as a whole shrinks. This fixation shrinkage, which appears to be mainly located within the osmotic dead space, might be due to several possible effects of the fixative. First, the fixation shrinkage might involve a conformational change in the structure of the myofilaments so that the diameters decrease, but the surface-to-surface distance between the myofilaments remains unchanged. There is some evidence in support of this from the work of Elliott et al. (19), who found that they could best account for the equatorial distribution of X-ray intensity for living frog and rabbit muscle by taking the effective filament diameters in vivo to be about twice those inferred from measurements from fixed tissue, sectioned and examined in the electron microscope. A second possibility involves a change in the minimum surface-to-surface distance because of a possible change in the forces acting between the myofilaments or a change in the amount of water bound to the myofilament surfaces. It is also quite possible that a combination of all of these effects might occur.
One general conclusion at least emerges from these experiments. In a fiber with the sarcolemma intact the lattice volume invariably follows the fiber volume, which in turn depends upon the osmotic constraints (Table III). When the sarcolemma is removed from the fiber there is usually a volume increase. This increase is least when the fiber is initially swollen and greatest when the fiber is shrunken (cf. Figs. 2 and 5). As a working hypothesis we suggest that the in vivo lattice is normally constrained from expansion by the sarcolemma, which is consistent with the observed expansion upon removal of that organelle. This implies that there may not be a true osmotic balance across the sarcolemma and that the relatively large minimal unit-cell volume of the in vivo lattice corresponds to the point at which the residual outwardly directed forces of the lattice are balanced by the inwardly directed osmotic forces and the intrinsic elasticity of the sarcolemma. This relationship would hold within the structural limits of the fiber lattice. It is plausible that the fiber volume is a primary factor in the interfilament spacing. At low osmolarities the maximum expansion of the lattice of the intact fiber appears to be similar to the limit in the skinned fiber (cf. Figs. 3 and 4). It is generally thought that the constraint to extreme muscle swelling is the elasticity of the sarcolemma (23). With respect to the myofilament lattice, however, the constraint in the extreme swollen state seems to be independent of the sarcolemma. The fiber can swell to 300 % of the control volume (37) while the lattice only expands to about 175 % (Fig. 4). This could be due to a mechanical limitation such as Z line or M line structure; but it could also represent the true balance between the van der Waals and electrical double-layer forces which are thought to stabilize the hexagonal lattice (17, 18). Again, further experiments in this area are needed. One further point worthy of note is that Rome (40) found that the lattice spacing in glycerinated rabbit psoas muscle was, over most of the sarcomere length range, greater than that in living whole muscle. Rome also found that in frog muscle something appeared to prevent the filament lattice from swelling maximally in hypotonic Ringer's solution (see Fig. 6 in reference 40). In the case of the skinned fibers, the sarcolemmal constraint is no longer operative and, since ample ATP is present, the filaments are free to separate. These observations support the role which the sarcolemma plays in our tentative hypothesis. The observation and quantitation of the effects of osmotic pressure upon the myofilament lattice of the intact muscle fiber is of further interest, for not only can this behavior be explained in terms of the proposed (20, 21, 34, 42) liquid-crystalline nature of the lattice, but the limiting role of the sarcolemma is defined as it contributes to the phenomenon of interfilament spacing (2).

Ionic Strength Effects

In a number of our experiments the osmolarity of the medium was varied independently of the ionic strength, and the effects upon the myofilament spacing were studied in fixed fibers, in living fibers, and in "living" fibers with the sarcolemma removed.
The results from the fixed and in vivo studies indicate that changes in the lattice volume of fibers with intact membranes are not due to changes in the ionic strength of the internal medium, since an isosmotic, but hypotonic, KCl solution causes the lattice to swell even though the internal ionic strength is thought to remain constant (5, 10, 31, 33). Similar volume changes induced with urea (14) show that, in all cases where the cell membrane is intact, the lattice volume follows the fiber volume (cf. Tables I and II). The X-ray diffraction experiments in intact and skinned fibers also distinguished between the effects of osmolarity and ionic strength. After removal of the sarcolemma the myofilament lattice volume expands to a new volume and henceforth responds markedly less, and in the opposite direction, to the same osmolar changes, expressed as ionic strength, which greatly modify the volume of the intact fiber (cf. Fig. 3). It is evident that it is the osmolarity of the external medium, not the ionic strength of the internal medium, which has the major effect upon the in vivo interfilament spacing. It further seems likely that the effects upon the myofilament lattice spacing in the living frog muscle bundle attributed partly to ionic strength and to the special nature of potassium ions (40) are in fact due to osmotic pressure changes across the sarcolemma. The ionic strength effect is more equivocal. There is no doubt, however, that ionic strength does exert a distinct effect upon the myofilament spacing in the living fiber, which can only be observed when the sarcolemma is removed. These results support to some degree the observations reported by Rome (40), in which changes in the ionic strength at sarcomere lengths longer than rest length result in similar variations in the lattice spacing of glycerol-extracted rabbit psoas muscle. Our results do not cover the lower range of ionic strengths as thoroughly as do Rome's, but over the normal to higher ionic strengths the trend appears to be similar. Rome (40) has ascribed the effects of ionic strength upon the filament separation of glycerinated muscle to the effect of the counterions on the electric double-layer associated with the filaments. This explanation was initially proposed by Bernal and Fankuchen (8) for similar behavior observed in the liquid-crystalline system of tobacco mosaic virus (TMV). In the case of both TMV and the glycerinated muscle a distinct saturation effect is observed at higher ionic strengths. There is, in the skinned fibers as well as in the glycerinated muscle, a slight reversal of this saturation effect, which in the latter case has been shown to be somewhat dependent upon sarcomere length. This effect of ionic strength in this range is in a direction that is opposite to that given by the common-sense argument, which would hold that an increased concentration of ions in solution would increase the electrostatic shielding and thus decrease the repulsive double-layer forces, and would lead to a decrease of the interfilament spacing. A point to consider, however, is the possibility that the ionic strength in the operative region of the electrical double-layer may not be related to the medium in a straightforward manner.
Additionally, the theoretical work of Nineham and Parsegian (35) opens the possibility that the van der Waals (dispersive) forces, to which the attraction between filaments has been ascribed (17, 18, 20, 39, 40, 43), may also be a function of the ionic composition of the interfilament medium. We intend to make further calculations with respect to these effects. Obviously, more experiments must also be undertaken with respect to the effects of sarcomere length in the skinned fiber. Preliminary light microscope observations on skinned crayfish fibers (April, unpublished) suggest that, under some conditions, at low ionic strengths the whole fiber swells dramatically, as Rome (40) also observed with glycerinated muscles at short sarcomere lengths. These data on the effect of ionic strength upon the myofilament lattice of living fibers stripped of the sarcolemma appear to support some aspects of the electrostatic hypothesis for filament separation proposed by Elliott (17, 18) and Rome (40). This hypothesis has been modified and extended by April (2) to account for the differences between intact and skinned fibers.

pH Effects

Experiments on living fibers involving change in the pH of the bathing medium suggest that long-range electrostatic forces are involved in the maintenance of the myofilament separation. The filament lattice of the intact single fiber exhibited very little change, if any, when the pH was varied over one unit on either side of the physiological control point (cf. Fig. 5). It had been shown that the internal pH of living muscle is relatively independent of the pH of the external medium (13, 26), although Rome (40) did observe a definite decrease in lattice spacing when the pH of the medium was changed by bubbling CO2 through it. In the living skinned fibers in our experiments, the pH of the bathing medium was found to have definite effects upon the myofilament separation at all values of pH tested (cf. Fig. 5). These results in living fibers support and confirm the observations of Rome (40), who found that the lattice dimensions of glycerinated rabbit psoas muscle were inversely proportional to the pH of the medium, with some indication that the same may be true in the living muscle. The pH effects are not simple, and our results cannot be considered conclusive because they were generally nonreversible beyond a narrow range. The denaturing effect of low pH upon protein is well established, and it might be assumed that this is the cause of the nonreversibility of the lattice shrinkage at low pH values. Rome (40) ascribed the lattice behavior in response to pH to alteration of the net amount of negative charge on the myofilaments. Again, this explanation was originally proposed by Bernal and Fankuchen (8) for similar behavior observed in liquid-crystalline solutions of TMV. In general, the pH data appear to support the postulation that interfilament spacing is determined to some extent by electrostatic repulsive forces originating from the negative charge on the myofilaments. The results of the experiments reported in this and a previous paper (4) appear to support a hypothesis for the liquid-crystalline nature of the resting myofilament lattice (as defined by Elliott and Rome, 20). Several papers by H. E. Huxley and his coworkers (27, 28, 29) have illustrated the role of "cross-bridge" interaction in the generation of muscle tension.
While this paper has described the morphological behavior of the myofilament lattice under specific physical conditions, a subsequent paper will deal with the dynamics of the active myofilament lattice by correlating these morphological data with the generation of tension. We wish to thank Mr. Robert Demarest for the graphics and Drs. Harry Grundfest, Jean Hanson, John P. Reuben, Robert V. Rice, Elizabeth Rome, and Edgar Smith for their valuable discussions. This work was supported in part by National Institutes of Health Grants (NB-03728, NB-05328, NS-05910, and GM-00256) and in part by a grant from the Muscular Dystrophy Association of America. Dr. E. W. April is also grateful to the Grass Foundation for a Grass Fellowship in Neurophysiology. Dr. Elliott is grateful to the Marine Biological Laboratory for the Rand Fellowship and to Carnegie-Mellon University for a visiting professorship. Received for publication 14 July 1971, and in revised form 9 December 1971.
2014-10-01T00:00:00.000Z
1972-04-01T00:00:00.000
{ "year": 1972, "sha1": "073263c1145d3a862d3608e68a2bdf1b482f56a1", "oa_license": "CCBYNCSA", "oa_url": "https://rupress.org/jcb/article-pdf/53/1/53/1070560/53.pdf", "oa_status": "BRONZE", "pdf_src": "PubMedCentral", "pdf_hash": "073263c1145d3a862d3608e68a2bdf1b482f56a1", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
118909816
pes2o/s2orc
v3-fos-license
Economics from a Physicist's point of view: Stochastic Dynamics of Supply and Demand in a Single Market. Part I Proceeding from the concept of rational expectations, a new dynamic model of supply and demand in a single market with one supplier, one buyer, and one kind of commodity is developed. Unlike the cob-web dynamic theories with adaptive expectations that are made up of deterministic difference equations, the new model is cast in the form of stochastic differential equations. The stochasticity is due to random disturbances ("input") to endogenous variables. The disturbances are assumed to be stationary to the second order with zero means and given covariance functions. Two particular versions of the model with different endogenous variables are considered. The first version involves supply, demand, and price. In the second version the stock of commodity is added. Covariance functions and variances of the endogenous variables ("output") are obtained in terms of the spectral theory of stochastic stationary processes. The impact of both deterministic parameters of the model and the random input on the stochastic output is analyzed and new conditions of chaotic instability are found. If these conditions are met, the endogenous variables undergo unbounded chaotic oscillations. As a result, the market that would be stable if undisturbed loses stability and collapses. This phenomenon cannot be discovered even in principle in terms of any cobweb deterministic model. Introduction It is universally accepted wisdom that supply and demand interaction is the main driving force of a market economy. The word "interaction" means that supply and demand, as well as the price they depend upon, change with time and are therefore not static phenomena but dynamic processes. These price-quantity processes can develop in different ways. They can converge to an equilibrium state, oscillate, collapse or explode, all in a regular or chaotic manner. To gain insight into this complex behavior, supply-and-demand problems should be treated not only statically, as in economics textbooks, but also dynamically. Yet surprisingly, the dynamic consideration of these problems has, almost completely, given way to a static analysis. As a result, the inadequate static approach has led to a limited and sometimes even incorrect understanding of the price-quantity interaction. A case in point is the single one-commodity market with one seller and one buyer. There are some quasi-dynamic models, to be briefly discussed below, expressly designed for a study of stability of supply-and-demand equilibrium price in such a market. No wonder that such specific models fail to deal with the whole interactive process of supply, demand, and price. Instead, this process is routinely discussed from the static point of view based on the well-known oversimplified assumptions: 1. Supply S and demand D are functions only of a price P and do not depend on time t. The variables S, D, P , which are included in the analysis, constitute an endogenous set (a market) E = {S, D, P }. 2. The plots of S(P ) and D(P ) have respectively positive ("upward") and negative ("downward") slopes everywhere. These curves intersect at a point of equilibrium E e = {S e , D e , P e } ⊂ E where supply equals demand (S e = D e ) and the equilibrium price P e clears the market. Suppose that a set G of all exogenous variables which can disturb the market equilibrium is excluded from the analysis. This is equivalent to an assumption that G = ∅ where ∅ is an empty set. 
Yet there still may be an endogenous force, say, the seller who raises the price P, that can displace the market E from the equilibrium point E_e to a non-equilibrium state E_n = {S_n, D_n, P_n} ≠ E_e. In the end, however, as the static analysis shows, the market E will automatically return from E_n to E_e. Hence, according to the static consideration, the equilibrium of the simplest market E = {S, D, P} is always stable provided G = ∅. This discovery stands so high in economics that it is frequently called the "law" of supply and demand (e.g., [1]). Even its introductory part, the existence of an equilibrium price P_e, a fact already known in the 18th century, was praised by Thomas R. Malthus as the "first, greatest, and most universal principle" of economics. To a certain extent this is true, since the "law" of supply and demand can not only give us information about the equilibrium state E_e, but also predict in some instances the impact of the exogenous variables G ≠ ∅ on the market E = {S, D, P}. For example, a static analysis of an aggregate one-commodity market (macroeconomics) enabled one to gain an initial insight into the enigmatic phenomena of unemployment, inflation, etc., and explain how they appear when either the aggregate supply (AS) or the aggregate demand (AD) curve shifts in response to variations in the exogenous set G. Despite its merits, however, the famous "law" of supply and demand has to be taken with a grain of salt, because it is actually not a universal principle. Firstly, a change in the exogenous variables G often leads to "shifts on both the demand and supply side at the same time, so the simple rules [the above explanations] ... don't work" [2]. What happens in this case to, say, macroeconomics, or a market of commodities, depends on the shifts' magnitude, which the famous "law" cannot predict. Secondly, contrary to the conventional assumption, the AS and AD static curves can be "perverse." This means that "AS, at least in some situations, slope downward and/or AD may slope upward" and "the usual implications of the macro AS − AD analysis may be misleading" [3]. Thirdly, and most importantly, even if the set of exogenous variables G = ∅, the static "law" of supply and demand does not work either, as long as it disregards the time-dependent nature of the endogenous variables E. Now, since all three variables change with time, so will both the curves of supply and demand as functions of the price. Consequently, at different points the curves will have, generally speaking, different slopes, which can be positive, negative, or zero. These dynamic phenomena undermine the fundamental feature of the static "law" of supply and demand, the fixed positive (negative) slope of the supply (demand) curve, and hence invalidate the "law's" assertion that the market equilibrium is always stable if the exogenous variables G = ∅. Because of this fault, the static "law" becomes impractical and can perhaps serve, as it does, only as a first step in understanding supply-and-demand problems. Such "myopic preoccupations of traditional equilibrium analysis" [4] persisted for a long time. Only the last few decades have seen a monotonic increase in studies of price-quantity dynamics. The respective dynamic models have been developed on the basis of (1) differential and (2) difference (cobweb) time-dependent equations. The first ("differential") direction of research goes back to Hicks [5] and Samuelson [6].
They focused on the study of stability of supply-and-demand equilibrium in single and multiple markets in terms of differential equations. Because of such a narrow objective, Hicks' and Samuelson's models were too limiting. They were incapable of dealing with the general problem of price-quantity evolution, although the time-dependent differential equations did make it possible. Consequently, by a strange coincidence, the "differential" direction in price-quantity dynamics gave way to the cobweb models. Such models were introduced in the 1930's (e.g., [7]) and have since then become the prevailing theoretical tools for studying price-quantity dynamics. One of the first who employed a special cobweb model for analyzing price stability was Samuelson [6]. His model came to be known as the naive expectations theory. Later on a more sophisticated (and up to now widely used) cobweb model was proposed based on the concept of adaptive expectations. The model is reviewed in detail in our forthcoming paper. Here we only point out some of the shortcomings of the adaptive expectations theory (see, e.g., [8]). 1. In this theory, the curves of supply f_S and demand f_D are supposed to be "rigid": their shape is fixed and time-independent. This brings us back to the simplistic static assumption adopted in textbooks. But in price-quantity dynamics there are no "rigid" supply and demand curves. As has already been mentioned, both curves generally depend on and change with time. 2. The governing equations of the adaptive expectations theory are explicitly deterministic because no stochastic components are included. Only by a special choice of the key deterministic functions f_S and f_D may the cobweb model be able to reveal some stochastic features of the price-quantity dynamics. But because there are in fact no "rigid" functions f_S and f_D, this particular approach is in general overly restrictive. Therefore a better way to study the chaotic behavior of the price-quantity process is to address it directly, by incorporating stochastic functions into the governing equations. Now we can proceed with our new theory.

The Stochastic Dynamic Model

We begin with two assumptions. 1. It is clear that a profit-minded supplier should be acutely aware of his/her costs. He/she will therefore control the actual output of commodity S(t) in view of the difference P_St = P(t) − P_S between the market price P(t) and some characteristic price P_S which includes all costs and the desired profit. It is reasonable to believe that the actual supply S(t) will increase if the net price P_St > 0, or decrease if P_St < 0. So, P_S can be interpreted as the seller's borderline price above which the supply of commodity will rise or below which it will fall. 2. It is also clear that a sensible buyer will adjust his/her actual demand for commodity D(t) based on the difference P_Dt = P(t) − P_D between the market price P(t) and a characteristic price P_D. The latter is formed by the buyer's needs and financial opportunities, as well as by his/her tastes and other unspecified psychological, physiological, etc. factors. The higher the price P_D, the higher is the buyer's willingness to purchase the commodity. In other words, the actual demand D(t) is likely to increase if the net price P_Dt < 0, or decrease if P_Dt > 0. Hence, P_D can be viewed as the buyer's borderline price below which the demand for commodity D(t) will rise or above which it will fall.
All the factors influencing the price P_D (the buyer's needs, financial opportunities, tastes, etc.) are, to different degrees, random by nature. We may therefore express P_D as a sum of two components, P_D = ⟨P_D⟩ + P'_D, where ⟨P_D⟩ ≠ 0 is the deterministic part (the mean) of P_D, while P'_D is a stochastic disturbance of P_D with zero mean, ⟨P'_D⟩ = 0. Besides, the seller's borderline price P_S is similar to P_D and is therefore also stochastic. Yet in the present paper, for the sake of simplicity, these borderline prices are taken to be deterministic quantities.

Applying these two assumptions (as well as some others) to a rather inclusive rational expectations model (e.g., [9]), we have derived the governing relations of our dynamic theory, Eqs. (2.1). They form a system of three stochastic first-order linear differential equations for the set of three endogenous variables E(t) = {S(t), D(t), P(t)}. In Eqs. (2.1), dots denote differentiation with respect to time t; the quantities a, b, c, k are non-negative constants; and φ_S(t), φ_D(t), φ_P(t) represent exogenous deterministic functions influencing supply, demand, and price.

Now a natural question arises: what is the economic interpretation of the governing equations (2.1)? Let us first look at the first of these equations, which we designate as (2.1)_1. Its meaning seems quite clear: (i) If the market price exceeds the seller's borderline price, supply will rise. If the seller's borderline price exceeds the market price, supply will fall. (ii) If demand exceeds supply, supply will rise. If supply exceeds demand, supply will fall. The meaning of the next equation, designated (2.1)_2, is no less clear: (iii) If the buyer's borderline price exceeds the market price, demand will rise. If the market price exceeds the buyer's borderline price, demand will fall. Lastly, let us consider the third of Eqs. (2.1), designated (2.1)_3. Its meaning has been given by Samuelson's dictum [6]: (iv) "If at any price demand exceeds supply, price will rise. If supply exceeds demand, price will fall."

We thus see that the paraphrase of Eqs. (2.1) is very simple. If we had started from this literal interpretation (that is, by inverting our approach), Eqs. (2.1) could easily have been written tout de suite. Yet we have avoided this path and derived the dynamic model (2.1), as mentioned before, differently: in terms of the more fundamental rational expectations model. This has been done on purpose, in order to show that our model (2.1) not only differs from the rational expectations model, but also has a certain affinity with it, which would otherwise have been difficult, if not impossible, to see. There is also the other side of the coin. Once the stochastic equations (2.1) are obtained, they lend themselves well to developing more inclusive dynamic models of supply and demand, to be dealt with elsewhere. It is important to note that, unlike Eqs. (2.1), such advanced models are not directly derivable from the rational expectations model without them (cf. [9]).

A Closed Deterministic Market

First we consider a closed deterministic market, that is, a market without exogenous variables and stochastic disturbances. This means that on the right-hand side of the model (2.1) we should omit the second and the third vector components, thus obtaining Eqs. (3.1). This is a system of three deterministic first-order linear differential equations which can easily be solved by any traditional technique (see, e.g., [10], [11]). Analytic solutions of Eqs. (3.1) are simple but rather cumbersome.
Therefore, in what follows, we restrict ourselves to illustrations of typical results.

Case 1. Suppose that there is no damping, that is, the constant k = 0. In this particular case, the solution of (3.1) is comparatively compact and is given by Eqs. (3.2)-(3.5), where S(0), D(0), P(0) stand for the initial values of supply, demand, and price, and A_i (i = 1, 2, 3) and β are derived parameters. We see that, according to Eqs. (3.2)-(3.5):

• If P_D > P_S, then A_2 > 0. Consequently, both supply and demand increase in time, and the market is booming.
• If in addition S(0) ≠ D(0) and/or P(0) ≠ A_1, then the monotonic increase of supply and demand is accompanied by undamped oscillations.
• If P_D < P_S, then A_2 < 0. As a result, both supply and demand decrease in time and the market goes south.
• If simultaneously S(0) ≠ D(0) and/or P(0) ≠ A_1, then the monotonic decrease of supply and demand is followed by undamped oscillations.
• If both P(0) = A_1 and S(0) = D(0), then the price P(t) does not change and remains equal to the initial value P(0) = A_1.

The foregoing brief analysis leads to important conclusions.

1. The supply-and-demand dynamics is mainly influenced by the ratio α = P_D/P_S. This ratio defines what may be called the market asymmetry.
2. If α = 1, the market is symmetric, since both the seller and the buyer adhere to the same borderline price P_D = P_S. As a result, the market will neither expand nor collapse, i.e., it is in a sense stable.
3. If α ≠ 1, the market is asymmetric, because the seller and the buyer adhere to different borderline prices. Consequently, the asymmetric market is, in the same sense, unstable: it will either boom or collapse.

The above new criteria of stability are drastically different from the corresponding stability conditions established by such figures as L. Walras, A. Marshall, J.R. Hicks, and P.A. Samuelson [12]. The new criteria also show that, contrary to the static "law" of supply and demand, the equilibrium of a single market is not always stable even if the set of exogenous variables is empty [G(t) = ∅], as has already been mentioned in the Introduction. It is also interesting to observe that the seller's borderline price P_S is usually hidden from the buyer and, vice versa, the buyer's borderline price P_D is generally unknown to the seller. The obvious inference is that incomplete (asymmetric) information about the characteristic prices P_S and P_D, available respectively to the seller and to the buyer, may have either a beneficial or an adverse effect on the market. This phenomenon has a direct bearing on the theory of markets with asymmetric information [13]. Particular examples of the closed deterministic market described by (3.1) will be considered in the next paper.
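The qualitative dependence of the closed deterministic market on the asymmetry ratio α = P_D/P_S can be illustrated numerically. The sketch below integrates a linear supply-demand-price system whose right-hand sides are one plausible reading of interpretations (i)-(iv); since Eqs. (3.1) are not reproduced here, the exact couplings, the coefficients a, b, c, and all numerical values are assumptions, not the model's actual form.

```python
import numpy as np

def simulate_market(P_S, P_D, a=1.0, b=1.0, c=1.0,
                    S0=10.0, D0=10.0, P0=5.0, dt=0.01, T=40.0):
    """Euler integration of an illustrative linear supply-demand-price system.

    The right-hand sides are one plausible reading of rules (i)-(iv):
    supply tracks the net seller price and excess demand, demand tracks
    the net buyer price, and price tracks excess demand. They are NOT
    the authors' Eqs. (3.1), whose exact form is not reproduced here.
    """
    n = int(T / dt)
    S, D, P = (np.empty(n) for _ in range(3))
    S[0], D[0], P[0] = S0, D0, P0
    for t in range(n - 1):
        dS = a * (P[t] - P_S) + b * (D[t] - S[t])   # rules (i) and (ii)
        dD = -c * (P[t] - P_D)                       # rule (iii)
        dP = b * (D[t] - S[t])                       # rule (iv)
        S[t + 1] = S[t] + dt * dS
        D[t + 1] = D[t] + dt * dD
        P[t + 1] = P[t] + dt * dP
    return S, D, P

# Symmetric market, alpha = P_D / P_S = 1: supply and demand settle down.
S_sym, D_sym, P_sym = simulate_market(P_S=5.0, P_D=5.0)
# Asymmetric market, alpha > 1: supply and demand grow without bound (a "boom").
S_asym, D_asym, P_asym = simulate_market(P_S=4.0, P_D=6.0)
```

Under these assumptions the symmetric run settles while the asymmetric run grows, mirroring conclusions 2 and 3 above.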
Evidence for a probabilistic, brain-distributed, recursive mechanism for decision-making

Decision formation recruits many brain regions, but the procedure they jointly execute is unknown. To characterize it, we introduce a recursive Bayesian algorithm that makes decisions based on spike trains. Using it to simulate the random-dot-motion task, based on area MT activity, we demonstrate it quantitatively replicates the choice behaviour of monkeys, whilst predicting losses of usable sensory information. Its architecture maps to the recurrent cortico-basal-ganglia-thalamo-cortical loops, whose components are all implicated in decision-making. We show that the dynamics of its mapped computations match those of neural activity in the sensory-motor cortex and striatum during decisions, and forecast those of basal ganglia output and thalamus. This also predicts which aspects of neural dynamics are and are not part of inference. Our single-equation algorithm is probabilistic, distributed, recursive and parallel. Its success at capturing anatomy, behaviour and electrophysiology suggests that the mechanism implemented by the brain has these same characteristics.

Introduction

Decisions rely on evidence that is collected for, accumulated about, and contrasted between available options. Neural activity consistent with accumulation over time has been reported in parietal and frontal sensory-motor cortex (Roitman and Shadlen, 2002; Huk and Shadlen, 2005; Churchland et al., 2008; Ding and Gold, 2012a; Hanks et al., 2015), and in the subcortical striatum (Ding and Gold, 2010, 2012b). What overall computation underlies these local snapshots, and how it is distributed across cortical and subcortical circuits, is unclear. Multiple models of decision making reproduce aspects of recorded choice behaviour, associated neural activity, or both (Wong et al., 2007; Mazurek et al., 2003; Ditterich, 2006; Lo and Wang, 2006; Grossberg and Pilly, 2008). While successful, they lack insight into the underlying decision procedure (but see Beck et al. (2008)). In contrast, other studies have shown how exact inference algorithms may be plausibly implemented by a range of neural circuits (Bogacz and Gurney, 2007; Larsen et al., 2010; Lepora and Gurney, 2012; Caballero et al., 2015); however, none of these has been demonstrated to reproduce experimental decision data. Here we close that gap by showing that a recursive quasi-Bayesian algorithm, whose components naturally map to recurrent cortico-subcortical loops, can originate the choice behaviour observed in primates during the random dot motion task, as well as its known neural activity correlates, as a direct transformation of the information in sensory cortex. Introducing this algorithm enables us to predict which aspects of neural activity are necessary for inference in decision making, and which are not. Our analysis predicts that evidence accumulation occurs over the entire feedback loop and, in particular, that it is not exclusive to increasing firing rates but, counter-intuitively, can even take the form of rate decrements. Our algorithm explains the decision-correlated experimental data more comprehensively than preceding models, thus introducing a formal, systematic framework to interpret these data. Collectively, our analyses and simulations indicate that mammalian decision-making is implemented as a probabilistic, recursive and parallel procedure distributed across the cortico-basal-ganglia-thalamo-cortical loops.

Figure 1: Random dot motion task and MT data.
(a) Fixed duration task for MT recordings (Britten et al., 1992a). (b, c) Reaction time task for LIP recordings, N = 2, 4 alternatives . (d, e) Smoothed population moving mean and variance of the firing rate in MT during the fixed duration dot motion task, aligned at onset of the dot stimulus (Stim), for a variety of coherence percentages (colour-coded as in the legend in panel f). Solid lines are statistics when dots were moving in the preferred direction of the MT neuron. Dashed lines are statistics when dots were moving in the opposite, null direction. Data from Britten et al. (1992a), re-analysed. (f) Lognormal density functions for the inter-spike intervals (ISI) specified by the statistics over the approximately stationary segment of (d, e) before smoothing (parameter set Ω in table S1). Preferred and null motion directions by line type as in (d, e). Figure 2: The MSPRT and rMSPRT in schematic form. Circles joined by arrows are the Bayes' rule. All C evidence streams (data) are used to compute every one of the N likelihood functions. The product of the likelihood and prior probability of every hypothesis is normalized by the sum ( ) of all products of likelihoods and priors, to produce the posterior probability of that hypothesis. All posteriors are then compared to a constant threshold. A decision is made every time with two possible outcomes: if a posterior reached the threshold, the hypothesis with the highest posterior is picked, otherwise, sampling from the evidence streams continues. The MSPRT as in Baum and Veeravalli (1994) and Bogacz and Gurney (2007) only requires what is shown in black. The general recursive MSPRT introduced here re-uses the posteriors ∆ time steps in the past for present inference; hence the rMSPRT is shown in black and blue. If we are to work with the negative-logarithm of the Bayes' rule and its computations -as we do in this article-all relations between computations are preserved, but products of computations become additions of their logarithms and the divisions for normalization become their negative logarithm. Eq. 13 shows this for the rMSPRT. The rMSPRT itself is formalized by Eq. 14. Recursive MSPRT Normative algorithms are useful benchmarks to test how well the brain approximates an optimal algorithmic computation. The family of the multi-hypothesis sequential probability ratio test (MSPRT) (Baum and Veeravalli, 1994) is an attractive normative framework for understanding decision-making. However, the MSPRT is a feedforward algorithm. It cannot account for the ubiquitous presence of feedback in neural circuits and, as we show ahead, for slow dynamics in neural activity that result from this recurrence during decisions. To solve this, we introduce a recursive generalization, the rMSPRT, which uses a generalized, feedback form of the Bayes' rule we deduced from first principles (Eq. 9). Here we schematically review the MSPRT and introduce the rMSPRT (Fig. 2), giving full mathematical definitions and deductions in the Supplemental Experimental Procedures. The (r)MSPRT decides which of N parallel, competing alternatives (or hypotheses) is the best choice, based on C sequentially sampled streams of evidence (or data). For modelling the dot-motion task, we have N = 2 or N = 4 hypotheses -the possible saccades to available targets (Fig. 1b,c)-and the C uncertain evidence streams are assumed to be simultaneous spike trains produced by visual-motion-sensitive MT neurons (Roitman and Shadlen, 2002;Mazurek et al., 2003;Caballero et al., 2015). 
Every time new evidence arrives, the (r)MSPRT refreshes 'on-line' the likelihood of each hypothesis: the plausibility of the combined evidence streams assuming that hypothesis is true. The likelihood is then multiplied by the probability of that hypothesis based on past experience (the prior). This product for every hypothesis is then normalized by the sum of the products from all N hypotheses; this normalisation is crucial for decision, as it provides the competition between hypotheses. The result is the probability of each hypothesis given current evidence (the posterior) -a decision variable per hypothesis. Finally, posteriors are compared to a threshold, and a decision is made to either choose the hypothesis whose posterior probability crosses the threshold, or to continue sampling the evidence streams. Crucially, the (r)MSPRT allows us to use the same algorithm irrespective of the number of alternatives, and thus aim at a unified explanation of the N = 2 and N = 4 dot-motion task variants. The MSPRT is a special case of the rMSPRT (in its general form in Eqs. 9 and 14) when priors do not change or, equivalently, for an infinite recursion delay; this is, ∆ → ∞. Also, the previous recurrent extension of MSPRT (Larsen et al., 2010;Ditterich, 2010) is a special case of the rMSPRT when ∆ = 1. Hence, the rMSPRT generalizes both in allowing the re-use of posteriors from any given time in the past as priors for present inference. This uniquely allows us to map the rMSPRT onto neural circuits containing arbitrary feedback delays, in particular solving the problem of decomposing the decision-making algorithm into distributed components across multiple brain regions. We show below how this allows us to map the rMSPRT onto the cortical-basal-ganglia-thalamo-cortical loops. Inference using recursive and non-recursive forms of Bayes' rule gives the same results (e.g. see Sivia and Skilling (2006)), and so MSPRT and rMSPRT perform identically. Thus, like MSPRT (Baum and Veeravalli, 1994;Bogacz and Gurney, 2007), for N = 2 rMSPRT also collapses to the sequential probability ratio test of Wald (1947) and is thereby optimal in the sense that it requires the smallest expected number of observations to decide, at any given error rate (Wald and Wolfowitz, 1948). This is to say that the (r)MSPRT is quasi-Bayesian in general: the physical limit of performance or ideal (Bayesian) observer for two-alternative decisions (N = 2), and an asymptotic approximation to it for decisions between more than two (N > 2) (Baum and Veeravalli, 1994;Bogacz and Gurney, 2007). Upper bounds of decision time predicted by the (r)MSPRT We first use a particular instance of rMSPRT (Eqs. 13 and 14) to determine bounds on the decision time in the dot motion task. We can then ask how well monkeys approximate such bounds. This results from using a minimal amount of sensory information, by assuming as many evidence streams as alternatives; that is, C = N . Thus, it gives the upper bound on optimal expected decision times (exact for N = 2 alternatives, approximate for N = 4) per condition (given combination of coherence and N ); assuming C > N would predict shorter decision times (see Caballero et al. (2015)). Following Caballero et al. (2015), we assume that during the random dot motion task ( Fig. 1a-c), the evidence streams for every possible saccade come as simultaneous sequences of inter-spike intervals (ISI) produced in MT. 
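As a concrete illustration of this update cycle, the sketch below runs one (r)MSPRT-style decision over N simulated ISI streams. It uses the per-step normalized form (which, as noted above, yields the same inference as the delayed-feedback recursive form) and lognormal ISI statistics; the means, standard deviations and threshold are illustrative placeholders rather than the fitted MT parameter set, so only the structure of the computation should be read from it.

```python
import numpy as np

rng = np.random.default_rng(0)

def logspace_params(mu, sigma):
    """Convert an ISI mean/sd into the lognormal's log-space location and scale."""
    k = np.log(mu**2 / np.sqrt(sigma**2 + mu**2))
    t = np.sqrt(np.log(sigma**2 / mu**2 + 1.0))
    return k, t

def logpdf(x, k, t):
    """Log-density of a lognormal with log-space location k and scale t."""
    return -np.log(x * t * np.sqrt(2.0 * np.pi)) - (np.log(x) - k) ** 2 / (2.0 * t**2)

def msprt_trial(pref=(8.0, 6.0), null=(12.0, 9.0), true_hyp=0,
                N=2, theta=0.99, max_steps=5000):
    """One (r)MSPRT-style decision over N simultaneous ISI streams.

    Channel `true_hyp` draws ISIs (ms) from the preferred-direction statistics,
    the rest from null statistics; hypothesis i states that channel i is the
    preferred one. All numbers here are illustrative placeholders.
    """
    kp, tp = logspace_params(*pref)
    k0, t0 = logspace_params(*null)
    log_post = np.full(N, np.log(1.0 / N))          # flat initial priors
    for step in range(1, max_steps + 1):
        x = rng.lognormal(k0, t0, size=N)           # null-direction ISIs
        x[true_hyp] = rng.lognormal(kp, tp)         # preferred-direction channel
        # log-likelihood of "channel i carries the preferred stream";
        # the common sum of null log-densities cancels under normalization
        log_like = logpdf(x, kp, tp) - logpdf(x, k0, t0)
        log_post = log_post + log_like              # Bayes' rule in log space
        log_post -= np.logaddexp.reduce(log_post)   # normalize -> log-posteriors
        if log_post.max() >= np.log(theta):         # posterior threshold test
            return int(np.argmax(log_post)), step
    return int(np.argmax(log_post)), max_steps

choice, n_observations = msprt_trial()
```

Here hypothesis i states that channel i carries the preferred-direction stream, matching the channel-indexing convention used later in the Supplemental definitions.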
On each time step, fresh evidence is drawn from the appropriate (null or preferred direction) ISI distributions extracted from MT data (Fig. 1f). By repeating the simulations for thousands of trials per condition, we can compare model and monkey performance. Using these data-determined MT statistics, the (r)MSPRT predicts that the mean decision time on the dot motion task is a decreasing function of coherence (Fig. 3a). For comparison with monkey reaction times, the model's reaction times are the sum of its decision times and estimated non-decision time, encompassing sensory delays and motor execution. For macaques 200-300 ms of non-decision time is a plausible range (Resulaj et al., 2009;Drugowitsch et al., 2012). Within this range, monkeys tend not to reach the predicted upper bound of reaction time. The (r)MSPRT framework suggests that decision times depend on the discrimination information in the evidence. Discrimination information here is measured as the divergence between the ISI distributions of MT neurons for dots moving in their preferred and null directions. Intuitively, the larger this divergence, the easier and hence faster the decision. We can estimate how much discrimination information monkeys used by asking how much the (r)MSPRT would require to obtain the same reaction times on correct trials as the monkeys, per condition. We thus find that monkeys tended to use less discrimination information than that in ISI distributions in their MT when making the decision (Fig. 3b). In contrast, the (r)MSPRT uses the full discrimination information available. This implies that the decision-making mechanism in monkeys lost large proportions of MT discrimination information (Fig. 3c). Since these (r)MSPRT decision times are upper bounds, this in turn means that this loss of discrimination information is the minimum. (r)MSPRT with depleted information reproduces monkey performance To verify if this information loss alone could account for the monkeys' deviation from the (r)MSPRT upper bounds, we depleted the discrimination information of its input distributions to exactly match the estimated monkey loss in Fig. 3c per condition. We did so only by modifying the mean and standard deviation of the null direction ISI distribution, to make it more similar to the preferred distribution (exemplified in Fig. 3d). Using these information-depleted statistics, the mean reaction times predicted by the (r)MSPRT in correct trials closely match those of monkeys (Fig. 4a). Strikingly, although this information-depletion procedure is based only on data from correct trials, the (r)MSPRT now also matches closely the mean reaction times of monkeys from error trials (Fig. 4b). Moreover, for both correct and error trials the (r)MSPRT accurately captures the relative scaling of mean reaction time by the number of alternatives (Fig. 4a,b). The reaction time distributions of the model closely resemble those of monkeys in that they are positively skewed and exhibit shorter right tails for higher coherence levels ( Fig. 4c-f). These qualitative features are captured across both correct and error trials, and 2 and 4-alternative tasks. Together, these results support the hypothesis that the primate brain approximates an algorithm similar to the rMSPRT. , but expressed as a percentage of information lost by monkeys with respect to the information available in MT for the three assumed non-decision times (solid lines and shadings). 
The information lost if the reaction time match is perfected (see Supplemental note S1.2.1) is shown as dashed lines (assuming 250 ms of non-decision time). (d) Example ISI density functions before (blue) and after (solid blue and dashed red) information depletion; N = 2, 51.2 % coherence and 250 ms of non-decision time. The null distribution was adjusted to become the 'new null' by changing its mean and standard deviation to make it more similar to the preferred distribution. After adjustment, the discrimination information between the preferred and 'new null' distributions matches that estimated from the monkeys performance (panel b). Figure 4: Monkey reaction times are consistent with (r)MSPRT using depleted discrimination information. (a, b) Mean reaction time of monkeys (lines) with 99 % Chebyshev confidence intervals (shading) and (r)MSPRT predictions for correct (a; Eq. 1) and error trials (b; Eq. 2) when using information-depleted statistics (MT parameter set Ω d ). (r)MSPRT results are means of 100 simulations with 3200, 4800 total trials each for N = 2, 4, respectively. Confidence intervals become larger in error trials because monkeys made fewer mistakes for higher coherence levels. (c-f) 'Violin' plots of reaction time distributions (vertically plotted histograms reflected about the y-axis) from monkeys (red; 766-785, 1170-1217 trials for N = 2, 4, respectively) and (r)MSPRT when using information-depleted statistics (blue; single example Monte Carlo simulation with 800, 1200 total trials for N = 2, 4). rMSPRT maps onto cortico-subcortical circuitry Beyond matching behaviour, we then asked whether the rMSPRT could account for the simultaneously recorded neural dynamics during decision making. To do so, we first must map its components to a neural circuit. Being able to handle arbitrary signal delays means the rMSPRT could in principle map to a range of feedback neural circuits. Because cortex (Roitman and Shadlen, 2002;Huk and Shadlen, 2005;Churchland et al., 2008;Ding and Gold, 2012a;Hanks et al., 2015), basal ganglia (Redgrave et al., 1999;Ding and Gold, 2010) and thalamus (Watanabe and Funahashi, 2004) have been implicated in decision-making, we sought a mapping that could account for their collective involvement. In the visuo-motor system, MT projects to the lateral intra-parietal area (LIP) and frontal eye fields (FEF) -the 'sensory-motor cortex'. The basal ganglia receives topographically organized afferent projections (Heimer et al., 1995) from virtually the whole cortex, including LIP and FEF (Petras, 1971;Saint-Cyr et al., 1990;Hamani et al., 2004). In turn, the basal ganglia provide indirect feedback to the cortex through thalamus (Alexander et al., 1986;Middleton and Strick, 2000). This arrangement motivated the feedback embodied in rMSPRT. Multiple parallel recurrent loops connecting cortex, basal ganglia and thalamus can be traced anatomically (Alexander et al., 1986;Middleton and Strick, 2000). Each loop in turn can be sub-divided into topographically organised parallel loops (Alexander and Crutcher, 1990;Middleton and Strick, 2000). Based on this, we conjecture the organization of these circuits into N functional loops, for decision formation, to simultaneously evaluate the possible hypotheses. Our mapping of computations within the rMSPRT to the cortico-basal-ganglia-thalamo-cortical loop is shown in Fig. 5. 
Its key ideas are, first, that areas like LIP or FEF in the cortex evaluate the plausibility of all available alternatives in parallel, based on the evidence produced by MT, and join this to any initial bias. Second, that as these signals traverse the basal ganglia, they compete, resulting in a decision variable per alternative. Third, that the basal ganglia output nuclei uses these to assess whether to make a final choice and what alternative to pick. Fourth, that decision variables are returned to LIP or FEF via thalamus, to become a fresh bias carrying all conclusions on the decision so far. The rMSPRT thus predicts that evidence accumulation happens in the overall, large-scale loop, rather than in a single site. Lastly, our mapping of the rMSPRT provides an account for the spatially diffuse cortico-thalamic projection (McFarland and Haber, 2002); it predicts the projection conveys a hypothesis-independent signal that does not affect the inference carried out by the loop, but may produce the increasing offset required to facilitate the cortical re-use of inhibitory fed-back decision information from the basal ganglia (see Supplemental note S1.2.2). Electrophysiological comparison With the mapping above, we can compare the proposed rMSPRT computations to recorded activity during decisionmaking in area LIP and striatum. We first consider the dynamics around decision initiation. During the dot motion task, the mean firing rate of LIP neurons deviates from baseline into a stereotypical dip soon after stimulus onset, possibly indicating the reset of a neural integrator (Roitman and Shadlen, 2002;Furman and Wang, 2008). LIP responses become choice-and coherence-modulated after the dip (Roitman and Shadlen, 2002). We therefore reasoned that LIP neurons engage in decision formation from the bottom of the dip and model their mean firing rate from then on. After this, mean firing rates "ramp-up" for ∼ 40 ms, then "fork": they continue ramping-up if dots moved towards the response (or movement) field of the neuron (inRF trials; Fig. 6a, solid lines) or drop their slope if the dots were moving away from its response field (outRF trials; dashed lines) (Roitman and Shadlen, 2002;Churchland et al., 2008). The magnitude of LIP firing rate is also proportional to the number of available alternatives (Fig. 6a,b) . The model LIP in rMSPRT (sensory-motor cortex) captures each of these properties: activity ramps from the start of the accumulation, forks between putative in-and out-RF responses, and scales with the number of alternatives (Fig. 6c). Under this model, inRF responses in LIP occur when the likelihood function represented by neurons was best matched by the uncertain MT evidence; correspondingly, outRF responses occur when the likelihood function was not well matched by the evidence. The rMSPRT provides a mechanistic explanation for the ramp-and-fork pattern. Initial accumulation (steps 0-2) occurs before the feedback has arrived at the model sensory-motor cortex, resulting in a ramp. The forking (at step 3) is the point at which the posteriors from the output of the model basal ganglia first arrive at sensory-motor cortex to be re-used as priors. By contrast, non-recursive MSPRT (without feedback of posteriors) predicts wellseparated neural signals throughout (Fig. 6e). Consequently, the rMSPRT suggests that the fork represents the time at which updated signals representing the competition between alternatives -here posterior probabilitiesare made available to the sensory-motor cortex. 
Sensory cortex (e.g. MT) produces fresh evidence for the decision, delivered to sensory-motor cortex in C parallel channels (e.g. MT spike trains). Sensory-motor cortex (e.g. LIP or FEF) computes in parallel the simplified log-likelihoods of all hypotheses given this evidence and adds log-priors -or fed-back log-posteriors after the delay ∆ has elapsed. It also adds a hypothesis-independent baseline comprising a simulated constant background activity (e.g. from LIP before stimulus onset) and a timeincreasing term from the interaction with the thalamus. The basal ganglia bring the computations of all hypotheses together into new negative log-posteriors that are then tested against a threshold. Negative log-posteriors will tend to decrease for the best-supported hypothesis and increase otherwise. This is consistent with the idea that basal ganglia output selectively removes inhibition from a chosen motor program while increasing inhibition of competing programs (Basso and Wurtz, 2002;Humphries et al., 2006;Bogacz and Gurney, 2007;Redgrave et al., 1999). Further details of this computation are in Supplemental Fig. S1. Finally, the thalamus conveys the updated logposterior from basal ganglia output to be used as a log-prior by sensory-motor cortex. Thalamus' baseline is given by its diffuse feedback from sensory-motor cortex. (b) Corresponding formal mapping of rMSPRT's computational components, showing how Eq. 13 decomposes. All computations are delayed with respect to the basal ganglia via the integer latencies δ pq , from p to q; where p, q ∈ {y, b, u}, y stands for the sensory-motor cortex, b for the basal ganglia and u for the thalamus. ∆ = δ yb + δ bu + δ uy with the requirement ∆ ≥ 1. The rMSPRT further predicts that the scaling of activity in sensory-motor cortex by the number of alternatives is due to cortico-subcortical loops becoming organized as N parallel functional circuits, one per hypothesis. This would determine the baseline output of the basal ganglia. Until task related signals reach the model basal ganglia, their output codes the initial priors for the set of N hypotheses. Their output is then an increasing function of the number of alternatives (Fig. 6f). This increased inhibition of thalamus in turn reduces baseline cortical activity as a function of N . The direct proportionality of basal ganglia output firing rate to N recorded in the substantia nigra pars reticulata (SNr) of macaques during a fixed-duration choice task (Basso and Wurtz, 2002) lends support to this hypothesis. The rMSPRT also captures key features of dynamics at decision termination. For inRF trials, the mean firing rate of LIP neurons peaks at or very close to the time of saccade onset (Fig. 6b). By contrast, for outRF trials mean rates appear to fall just before saccade onset. The rMSPRT can capture both these features (Fig. 6d) when we allow the model to continue updating after the decision rule (Eq. 14) is met. The decision rule is implemented at the output of the basal ganglia and the model sensory-motor cortex peaks just before the final posteriors have reached the cortex. The rMSPRT thus predicts that the activity in LIP lags the actual decision. This prediction may explain an apparent paradox of LIP activity. The peri-saccadic population firing rate peak in LIP during inRF trials (Fig. 6b) is commonly assumed to indicate the crossing of a threshold and thus decision termination. 
Visuo-motor decisions must be terminated well before saccade to allow for the delay in the execution of the motor command, conventionally assumed in the range of 80-100 ms in macaques (Mazurek et al., 2003;Resulaj et al., 2009). It follows that LIP peaks too close to saccade onset (∼ 15 ms before) for this peak to be causal. The rMSPRT suggests that the inRF LIP peak is not indicating decision termination, but is instead a delayed read-out of termination in an upstream location. LIP firing rates are also modulated by dot-motion coherence ( Fig. 7a,b,i,j). Following stimulus onset, the response of LIP neurons tends to fork more widely for higher coherence levels ( Fig. 7a,i) (Roitman and Shadlen, 2002;Churchland et al., 2008). The increase in activity before a saccade during inRF trials is steeper for higher coherence levels, reflecting the shorter average reaction times (Fig. 7b,j) (Roitman and Shadlen, 2002;Churchland et al., 2008). The rMSPRT shows coherence modulation of both the forking pattern (Fig. 7c,k) and slope of activity increase (Fig. 7d,l). rMSPRT also predicts that the apparent convergence of LIP activity to a common level in inRF trials is not required for inference and so may arise due to additional neural constraints. We take up this point in the Discussion. Similar modulation of population firing rates during the dot motion task has been observed in the striatum (Ding and Gold, 2010). Naturally, the striatum in rMSPRT, which relays cortical input (Supplemental Fig. S1), captures this modulation (Supplemental Fig. S2). Electrophysiological predictions Our proposed mapping of the rMSPRT's components (Fig. 5) makes testable qualitative predictions for the mean responses in basal ganglia and thalamus during the dot motion task. For the basal ganglia output, likely from the oculomotor regions of the SNr, rMSPRT (like MSPRT) predicts a drop in the activity of output neurons in inRF trials and an increase in outRF ones. It also predicts that these changes are more pronounced for higher coherence levels ( Fig. 7e,f,m,n). These predictions are consistent with recordings from macaque SNr neurons showing that they suppress their inhibitory activity during visually or memory guided saccade tasks, in putative support of saccades towards a preferred region of the visual field (Handel and Glimcher, 1999;Basso and Wurtz, 2002), and enhance it otherwise (Handel and Glimcher, 1999). For visuo-motor thalamus, rMSPRT predicts that the time course of the mean firing rate will exhibit a rampand-fork pattern similar to that in LIP (Fig. 7g,h,o,p). The separation of in-and out-RF activity is consistent with the results of Watanabe and Funahashi (2004) who found that, during a memory-guided saccade task, neurons in the macaque medio-dorsal nucleus of the thalamus (interconnected with LIP and FEF), responded more vigorously when the target was flashed within their response field than when it was flashed in the opposite location. Predictions for neural activity features not crucial for inference Understanding how a neural system implements an algorithm is complicated by the need to identify which features are core to executing the algorithm, and which are imposed by the constraints of implementing computations using neural elements -for example, that neurons cannot have negative firing rates, so cannot straightforwardly represent negative numbers. 
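Because the posterior is a normalized quantity, any term added uniformly to all hypotheses drops out of the inference; this is the sense in which the "workaround" parameters discussed next leave the computation untouched. A minimal numerical check, with made-up numbers, is:

```python
import numpy as np

log_unnorm = np.array([-2.3, -1.1, -3.0, -0.7])     # arbitrary unnormalized log-posteriors
offset = 5.0                                         # any hypothesis-independent term

posterior = np.exp(log_unnorm - np.logaddexp.reduce(log_unnorm))
shifted = log_unnorm + offset
posterior_shifted = np.exp(shifted - np.logaddexp.reduce(shifted))

assert np.allclose(posterior, posterior_shifted)     # normalized posteriors unchanged
```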
The three free parameters in the rMSPRT allow us to propose which functional and anatomical properties of the cortico-basal-ganglia-thalamo-cortical loop are workarounds within these constraints, but do not affect inference. Mean population firing rate of LIP neurons during correct trials on the reaction-time version of the dot motion task. By convention, inRF trials are those when recorded neurons had the motion-cued target inside their response field (solid lines); outRF trials are those when recorded neurons had the motion-cued target outside their response field (dashed lines). (a) Aligned at stimulus onset, starting at the stereotypical dip, illustrating the "ramp-andfork" pattern between average inRF and outRF responses. (b) Aligned at saccade onset (vertical dashed line). (c, d) Mean time course of the model sensory-motor cortex in rMSPRT aligned at decision initiation (c; t = 1) and termination (d; Term; dotted line), for correct trials. Initiation and termination are with respect to the time of basal ganglia output. Note the suggested saccade time "Sac?", close to the peak of inRF computations. Simulations are a single Monte Carlo experiment with 800, 1200 total trials for N = 2, 4, respectively, using parameter set Ω d . We include an additional step at −1 determined only by initial priors and baseline, where no inference is carried out (y i (t + δ yb ) = 0 for all i). Conventions as in (a). (e) Same as in (c), but for the standard, non-recursive MSPRT (defined as Eq. 14 using only the first case of Eqs. 12 and 13). (f) Baseline output of the model basal ganglia increases as a function of the number of alternatives, thus increasing the initial inhibition of thalamus and cortex. For uniform priors, the rMSPRT predicts this function is: − log P (H i ) = − log (1/N ). Coloured dots indicate N = 2 (blue) and N = 4 (green). One free parameter enforces the baseline activity that LIP neurons maintain before and during the initial stimulus presentation (Fig. 7a,i). Varying this parameter, l, scales the overall activity of LIP, but does not change the inference performed (Fig. 8a). Consequently, this suggests that the baseline activity of LIP depends on N but does not otherwise affect the inference algorithm implemented by the brain. The second free parameter, w yt , sets the strength of the spatially diffuse projection from cortex to thalamus. Varying this weight changes the forking between inRF and outRF computations but does not affect inference (Fig. 8b). The third free parameter, n, sets the overall, hypothesis-independent temporal scale at which sampled input ISIs are processed; changing n varies the slope of sensory-motor computations, even allowing all-decreasing predicted mean firing rates (Fig. 8c). By definition, the log-likelihood of a sequence tends to be negative and decrease monotonically as the sequence lengthens. Introducing n is required to get positive simplified log-likelihoods, capable of matching the neural activity dynamics, without affecting inference. Hence, n may capture a workaround of the decision-making circuitry to represent these whilst avoiding signal 'underflow', by means of scaling the input data. Traditionally, evidence accumulation is exclusively associated with increasing firing rates during decision, and previous studies have questioned whether the often-observed decision-correlated yet non-increasing firing rates (e.g. in outRF conditions in Fig. 7a Kira et al. (2015)) are consistent with accumulation (Park et al., 2014). 
The diversity of patterns predicted by rMSPRT in sensory-motor cortex (Fig. 8) solves this by demonstrating that both increasing and non-increasing activity patterns can house evidence accumulation. Discussion We sought to characterize the mechanism that underlies decisions by using the knowable function of a normative algorithm -the rMSPRT-as a systematic framework. With the currently available range of experimental studies giving us local snapshots of cortical and sub-cortical activity during decision-making tasks, the rMSPRT shows us how, where, and when these snapshots fit into a complete inference procedure. While it is not plausible that the brain implements exactly a specific algorithm, our results suggest that the essential structure of its underlying decision mechanism includes the following. First, that the mechanism is probabilistic in nature -the brain utilizes the uncertainty in neural signals, rather than suffering from it. Second, that the mechanism works entirely 'on-line', continuously updating representations of hypotheses that can be queried at any time to make a decision. Third, that this processing is distributed, recursive and parallel, producing a decision variable for each available hypothesis. And fourth, that this recursion allows the mechanism to adapt to the observed statistics of the environment, as it can re-use updated probabilities about hypotheses as priors for upcoming decisions. We have pushed the analogy between a single-equation statistical test and the neural decision-making algorithm a long way before it broke down. We find it remarkable that, starting from data-constrained spike-trains, a monolithic statistical test in the form of the rMSPRT can simultaneously account for much of the anatomy, behaviour and electrophysiology of decision-making. Why implement a recursive algorithm in the brain? Prior work proposed that the cortex and basal ganglia alone could implement the non-recurrent MSPRT (Bogacz and Gurney, 2007;Lepora and Gurney, 2012). However, the looped cortico-basal-ganglia-thalamo-cortical architecture implies a recursive computation. This raises the question of why such a complex, distributed feedback architecture exists. First, recursion makes trial-to-trial adaptation of decisions possible. Priors determined by previous stimulation (fed-back posteriors), can bias upcoming similar decisions towards the expected best choice, even before any new evidence is collected. This can shorten reaction times in future familiar settings without compromising accuracy. Second, recursion provides a robust memory. A posterior fed-back as a prior is a sufficient statistic of all past evidence observations. That is, it has taken 'on-board' all sensory information since the decision onset. In rMSPRT, accumulation happens over the whole cortico-subcortical loop, so the sensory-motor cortex only need keep track of observations in a moving time window of maximum width ∆ -the delay around the loop-rather than keeping track of the entire sequence of observations. For a physical substrate subject to dynamics and leakage, like a neuron in LIP or FEF, this has obvious advantages: it would reduce the demand for keeping a perfect record (e.g. likelihood) of all evidence, from the usual hundreds of milliseconds in decision times to the ∼ 30 ms of latency around the cortico-basal-ganglia-thalamo-cortical loop (adding up estimates from Nambu et al. (1988); Hikosaka et al. (1993); Gerfen and Wilson (1996)). 
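The identity behind this memory argument, namely that a posterior fed back after a delay Δ, combined with the likelihood of only the last Δ observations, reproduces the full-history posterior (Eq. 9), can be checked numerically; the per-step log-likelihood terms below are arbitrary stand-ins for y_i(t).

```python
import numpy as np

rng = np.random.default_rng(1)
N, T, delta = 4, 60, 3
log_like = rng.normal(size=(T, N))          # arbitrary per-step log-likelihood terms
log_prior = np.full(N, np.log(1.0 / N))     # flat initial priors

# Full-history posterior at time T
full = log_prior + log_like.sum(axis=0)
full -= np.logaddexp.reduce(full)

# Recursive form: posterior from delta steps ago re-used as the prior,
# plus the likelihood of only the last delta observations
past = log_prior + log_like[:T - delta].sum(axis=0)
past -= np.logaddexp.reduce(past)
recursive = past + log_like[T - delta:].sum(axis=0)
recursive -= np.logaddexp.reduce(recursive)

assert np.allclose(full, recursive)          # identical inference either way
```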
Lost information and perfect integration The rMSPRT predicts that monkeys do not make full use of the discrimination information available in MT (Fig. 3b). Here rMSPRT needs only C = N MT spike-trains to outperform monkeys: as this is the upper bound of rMSPRT mean decision time, this implies the monkey's predicted loss (Fig. 3c) is a minimum. This gap arises because rMSPRT is a generative model of the task, which assumes initial knowledge of coherence, which in turn determines appropriate likelihoods for the task at hand. Any deviation from this ideal will tend to degrade performance (Beck et al., 2012), whether it comes from one or more of the inherent leakiness of neurons, correlations, or the coherence to likelihood mapping (Caballero et al., 2015). LIP neurons change their coding during learning of the dot motion task and MT neurons do not (Law and Gold, 2008), implying that learning the task requires mapping of MT to LIP populations by synaptic plasticity (Law and Gold, 2009). Consequently, even if the MT representation is perfect, the learnt mapping only need satisfice the task requirements, not optimally perform. Excellent matches to performance in both correct and error trials were obtained solely by accounting for lost information in the evidence streams. No noise was added within the rMSPRT itself. Prior experimental work reported perfect, noiseless integration by both rat and human subjects performing an auditory task, attributing all effects of noise on task performance to the variability in the sensory input (Brunton et al., 2013). Our results extend this observation to primate performance on the dot motion task, and further support the idea that the neural decision-making mechanism can perform perfect integration of uncertain evidence. Neural response patterns during decision formation Neurons in LIP, FEF (Ding and Gold, 2012a) and striatum (Ding and Gold, 2010) exhibit a ramp-and-fork pattern during the dot motion task. Analogous choice-modulated patterns have been recorded in the medial premotor cortex of the macaque during a vibro-tactile discrimination task (Hernández et al., 2002) and in the posterior parietal cortex and frontal orienting fields of the rat during an auditory discrimination task (Hanks et al., 2015). The rMSPRT indicates that such slow dynamics emerge from decision circuits with a delayed, inhibitory drive within a looped architecture. Together, this suggests that decision formation in mammals may approximate a common recursive computation. A random dot stimulus pulse delivered earlier in a trial has a bigger impact on LIP firing rate than a later one . This highlights the importance of capturing the initial, early-evidence ramping-up before the forking. However, multiple models omit it, focusing only on the forking (e.g. Mazurek et al. (2003); Ditterich (2006); Beck et al. (2008)). Other, heuristic models account for LIP activity from the onset of the choice targets, through dots stimulation and up until saccade onset (e.g. Wong et al. (2007); Grossberg and Pilly (2008); Furman and Wang (2008); ). Nevertheless, their predicted firing rates rely on two fitted heuristic signals that shape both the dip and the ramp-and-fork pattern. In contrast, the ramp-and-fork dynamics emerge naturally from the delayed inhibitory feedback in rMSPRT during decision formation (see Supplemental note S1.2.3 for more details on the relation of the rMSPRT to prior models of decision-making). 
rMSPRT replicates the ramp-and-fork pattern for individual coherence levels and given N (Fig. 6). However, comparing its predictions across N (Fig. 6d) or over coherence levels (Fig. 7d,l) reveals that the model sensorymotor cortex does not converge to a common value around decision termination in inRF trials, as the LIP does (Fig. 6b, Fig. 7b,j and Churchland et al. (2008)). Here we have reached the limits where the direct comparison of the single-equation statistical test to neural signals breaks down. Our mapping of rMSPRT onto the cortico-basalganglia-thalamo-cortical circuitry suggests the core underlying computation contributed per site during decision formation (Fig. 5); that is, what is required for inference, such as the negative log-posterior probability at the basal ganglia. Even if these suggestions are accurate, we would still expect other factors to influence the dynamics of recorded neural signals. These regions after all engage in multiple other computations, some of which are likely orthogonal to decision formation. One possibility is that the convergence of LIP activity to a common value just prior to saccade onset may result from monotonic transformations of these core computations. For instance, the successful fitting of previous computational models to neural data has been critically dependent on the addition of heuristic signals Grossberg and Pilly, 2008;Furman and Wang, 2008;. The incorporation of similar heuristic signals may also convert the present qualitative resemblance of the recordings, by rMSPRT, to a quantitative reproduction. This is, however, beyond the scope of this study, whose aim is to test how closely a normative mechanism can explain behaviour and electrophysiology during decisions. Emergent predictions Inputs to the rMSPRT were determined solely from MT responses during the dot-motion task, and it has only three free parameters, none of which affect inference. It is thus surprising that it renders emergent predictions that are consistent with experimental data. First, our information-depletion procedure used exclusively statistics from correct trials. Yet, after depletion, rMSPRT matches monkey behaviour in correct and error trials (Fig. 4), suggesting a mechanistic connection between them in the monkey that is naturally captured by rMSPRT. Second, the values of the three free parameters were chosen solely so that the model LIP activity resembled the ramp-andfork pattern observed in our LIP data-set (Fig. 6a,c). As demonstrated in Fig. 8, the ramp-and-fork pattern is a particular case of two-stage patterns that are an intrinsic property of the rMSPRT, guaranteed by the feedback of the posterior after the delay ∆ has elapsed (Eq. 9). Nonetheless, the model also matches LIP dynamics when aligned at decision termination (panels b and d). Third, the predictions of the time course of the firing rate in SNr and thalamic nuclei naturally emerge from the functional mapping of the algorithm onto the cortico-basal-gangliathalamo-cortical circuitry. These are already congruent with existing electrophysiological data; however, their full verification awaits recordings from these sites during the dot motion task. These and other emergent predictions are an encouraging indicator of the explanatory power of a systematic framework for understanding decision formation, embodied by the rMSPRT. 
Experimental paradigms Behavioural and neural data was collected in two previous studies (Britten et al., 1992a;Churchland et al., 2008), during two versions of the random dots task (Fig. 1a-c). Detailed experimental protocols can be found in such studies. Below we briefly summarize them. Fixed duration Three rhesus macaques (Macaca mulatta) were trained to initially fixate their gaze on a visual fixation point (cross in Fig. 1a). A random dot kinematogram appeared covering the response field of the MT neuron being recorded (grey patch); task difficulty was controlled per trial by the proportion of dots (coherence %) that moved in one of two directions: that to which the MT neuron was tuned to -its preferred motion direction-or its opposite -null motion direction. After 2 s the fixation point and kinematogram vanished and two targets appeared in the possible motion directions. Monkeys received a liquid reward if they then saccaded to the target towards which the dots in the stimulus were predominantly moving (Britten et al., 1992a). Reaction time Two macaques learned to fixate their gaze on a central fixation point (Fig. 1b,c). Two (Fig. 1b) or four (Fig. 1c) eccentric targets appeared, signalling the number of alternatives in the trial, N ; one such target fell within the response (movement) field of the recorded LIP neuron (grey patch). This is the region of the visual field towards which the neuron would best support a saccade. Later a random dot kinematogram appeared where a controlled proportion of dots moved towards one of the targets. The monkeys received a liquid reward for saccading to the indicated target when ready . Data analysis For comparability across databases, we only analysed data from trials with coherence levels of 3.2, 6.4, 12.8, 25.6, and 51.2 %, unless otherwise stated. We used data from all neurons recorded in such trials. This is, between 189 and 213 visual-motion-sensitive MT neurons (see table S1; data from Britten et al. (1992a,b)) and 19 LIP neurons whose activity was previously determined to be choice-and coherence-modulated (data from Churchland et al. (2008)). The behavioural data analysed was that associated to LIP recordings. For MT, we analysed the neural activity between the onset and the vanishing of the stimulus. For LIP we focused on the period between 100 ms before stimulus onset and 100 ms after saccade onset. To estimate moving statistics of neural activity we first computed the spike count over a 20 ms window sliding every 1 ms, per trial. The moving mean firing rate per neuron per condition was then the mean spike count over the valid bins of all trials divided by the width of this window; the standard deviation was estimated analogously. LIP recordings were either aligned at the onset of the stimulus or of the saccade; after or before these (respectively), data was only valid for a period equal to the reaction time per trial. The population moving mean firing rate is the mean of single-neuron moving means over valid bins; analogously, the population moving variance of the firing rate is the mean of single neuron moving variances. For clarity, population statistics were then smoothed by convolving them with a Gaussian kernel with a 10 ms standard deviation. The resulting smoothed population moving statistics for MT are in Fig. 1d,e. In Figs. 6a,b and 7a,b,i,j smoothed mean LIP firing rates are plotted only up to the median reaction time plus 80 ms, per condition. 
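A simplified sketch of the moving-rate estimate just described (spike counts in a 20 ms window sliding every 1 ms, averaged over trials, divided by the window width, and smoothed with a 10 ms s.d. Gaussian kernel) is given below; the per-trial handling of valid bins around stimulus and saccade alignment is omitted, and the input format is an assumption.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def moving_rate(spike_times_per_trial, t_grid, win=0.020, smooth_sd=0.010):
    """Moving mean firing rate across trials (simplified).

    spike_times_per_trial: list of 1-D arrays of spike times (s), one per trial.
    t_grid: window start times (s), spaced 1 ms apart.
    """
    counts = np.zeros((len(spike_times_per_trial), len(t_grid)))
    for i, spikes in enumerate(spike_times_per_trial):
        s = np.asarray(spikes)
        for j, t0 in enumerate(t_grid):
            counts[i, j] = np.count_nonzero((s >= t0) & (s < t0 + win))
    rate = counts.mean(axis=0) / win                       # mean count / window width -> Hz
    dt = t_grid[1] - t_grid[0]
    return gaussian_filter1d(rate, sigma=smooth_sd / dt)   # 10 ms s.d. smoothing

# Example: two fake trials, rate over the first 200 ms
t_grid = np.arange(0.0, 0.2, 0.001)
trials = [np.array([0.012, 0.045, 0.051, 0.130]), np.array([0.020, 0.060, 0.140])]
rate = moving_rate(trials, t_grid)
```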
Analogous procedures were used to compute the moving mean of rMSPRT computations, per time step. These are shown up to the median of termination observations plus 3 time steps in Figs. 6c-e and 7c-h,k-p. Spikes and continuous time out of discrete time We have defined rMSPRT to operate over a discrete time line; however, the brain operates over continuous time. Caballero et al. (2015) introduced a continuous-time generalization of MSPRT that uses spike-trains as inputs for decision. Thence, the length of ISIs was random and their sum up until decision is, by definition, a continuously distributed time. With all other assumptions equal, Caballero et al. (2015) demonstrated that, as an average, the traditional discrete-time MSPRT requires the same number of observations to decision, as the ISIs required by the more general spike-based MSPRT. This means that the number of observations to decision, T , in (r)MSPRT has an interpretation as continuously-distributed time. So, the expected decision sample size for correct choices, T c , required by the simpler discrete-time (r)MSPRT, can be interpreted as the mean decision time predicted by the more general continuous-time, spike-based MSPRT, where µ * n is the mean ISI produced by a MT neuron whose preferred motion direction was matched by the stimulus and was thus firing the fastest on average. When the mean firing rate to a preferred characteristic of the stimulus is larger than that to a non-preferred one (µ * < µ 0 ) -as in MT (Britten et al., 1992a), middle-lateral and anterolateral auditory cortex (Tsunada et al., 2016)-the hypothesis selected in error trials is that misinformed by channels with mean µ 0 n which intuitively happened to fire faster than those whose mean was actually µ * n. Hence, the mean decision time predicted by rMSPRT in error trials would be: where T e is the mean decision sample size for error trials. An instance of rMSPRT capable of making choices upon sequences of spike-trains is straightforward from the formal framework above and that introduced by Caballero et al. (2015); nevertheless, for simplicity here we choose to work with the discrete-time rMSPRT. After all, thanks to Eqs. 1 and 2 we can still interpret its behaviour-relevant predictions in terms of continuous time. We use these to compute the decision times that originate the reaction times in Figs. 3 and 4. Estimation of lost information We outline here how we use the monkeys' reaction times on correct trials and the properties of the rMSPRT, to estimate the amount of discrimination information lost by the animals. This is, the gap between all the information available in the responses of MT neurons, as fully used by the rMSPRT in Fig. 3a, and the fraction of such information actually used by monkeys. The expected number of observations to reach a correct decision for (r)MSPRT, T c , depends on two quantities. First, the mean total discrimination information required for the decision, I ( , N ), that depends only on the error rate , and N . Second, the 'distance' between distributions of ISIs from MT neurons facing preferred and null directions of motion (Caballero et al., 2015). This distance is the Kullback-Leibler divergence from f * to f 0 which measures the discrimination information available between the distributions. Using these two quantities, the decision time in the (r)MSPRT is (Caballero et al., 2015): The product of our Monte Carlo estimate of T c in the rMSPRT (which originated Fig. 3a) and D from the MT ISI distributions (Fig. 
1f), gives an estimate of the limit I(ε, N) in expression 3, denoted by Î(ε, N). The 'mean decision sample size' of monkeys (hence the superscript m) within this framework corresponds to T̂^m_c = (τ̂^m_c / µ*n) − 0.5 (from Eq. 1). Here, τ̂^m_c is the estimate of the mean decision time of monkeys for correct choices, per condition; that is, the reaction time from Fig. 3a minus some constant non-decision time. With T̂^m_c and Î(ε, N), we can estimate the corresponding discrimination information available to the monkeys in this framework as D̂^m = Î(ε, N) / T̂^m_c (from expression 3). Fig. 3b compares D (red line) to D̂^m (blue/green lines and shadings) for monkeys, using non-decision times in a plausible range of 200-300 ms. Fig. 3c shows the discrimination information lost by monkeys as a percentage of D: (1 − D̂^m/D) × 100 %.

Information depletion procedure

Expression 3 implies that the reaction times predicted by rMSPRT should match those of monkeys if we make the algorithm lose as much information as the monkeys did. We did this by producing a new parameter set that brings f_0 closer to f* per condition, assuming 250 ms of non-decision time; critically, simulations like those in Fig. 4 will give about the same rMSPRT reaction times regardless of the non-decision time chosen, as long as it is the same one assumed in the estimation of lost information and in this information-depletion procedure. An example of the results of information depletion in one condition is in Fig. 3d. We start with the original parameter set extracted from MT recordings, Ω ('preferred' and 'null' densities in Fig. 3d), and keep µ* and σ* fixed. Then, we iteratively reduce/increase the differences |µ_0 − µ*| and |σ_0 − σ*| by the same proportion, until we get new parameters µ_0 and σ_0 that, together with µ* and σ*, specify preferred ('preferred' in Fig. 3d) and null ('new null') density functions that bear the same discrimination information estimated for monkeys, D̂^m; hence, they exactly match the information loss in the solid lines in Fig. 3c. Intuitively, since the 'new null' distribution in Fig. 3d is more similar to the 'preferred' one than the 'null', the Kullback-Leibler divergence between the first two is smaller than that between the latter two. The resulting parameter set is dubbed Ω_d and reported in table S1.

Simulation procedures

For rMSPRT decisions to be comparable to those of monkeys, they must exhibit the same error rate, ε ∈ [0, 1]. Error rates are taken to be an exponential function of coherence (%), s, fitted by non-linear least squares (R² > 0.99) to the behavioural psychometric curves from the analysed LIP database, including 0 and 72.4 % coherence for this purpose. This resulted in the exponential fit given by Eq. 4. Since monkeys are trained to be unbiased regarding choosing either target, initial priors for rMSPRT are set flat (P(H_i) = 1/N for all i) in every simulation. During each Monte Carlo experiment, rMSPRT made decisions with error rates from Eq. 4. The value of the threshold, θ, was iteratively found so as to satisfy the required error rate, ε, per condition (Caballero et al., 2015); an example of the θs found is shown in Fig. S3a in the Supplement. Decisions were made over data, x_j(t)/n, randomly sampled from lognormal distributions specified for all channels by means and standard deviations µ_0 and σ_0, respectively. The exception was a single channel where the sampled distribution was specified by µ* and σ*.
This models the fact that MT neurons respond more vigorously to visual motion in their preferred direction compared to motion in a null direction, e.g. against the preferred. Effectively, this simulates macaque MT neural activity during the random dot motion task. The same parameters were used to specify likelihood functions per experiment. All statistics were from either the Ω or Ω d parameter sets as noted per case. Note that the statistics actually used in the simulations are those from MT in table S1, divided by the scaling factor n. Author Contributions JAC deduced the rMSPRT, and performed all simulations and data analyses; all authors discussed the results and wrote the article. S1 Supplemental Information S1.1 Supplemental Experimental Procedures S1.1.1 Definition of rMSPRT Let x(t) = (x 1 (t) , . . . , x C (t)) be a vector random variable composed of observations, x j (t), made in C channels at time t ∈ {1, 2, . . .}. Let also x (r : t) = (x(r)/n, . . . , x(t)/n) be the sequence of i.i.d. vectors x(t)/n from r to t (r < t). Here n ∈ {R > 0} is a constant data scaling factor. If n > 1, it scales down incoming data, x j (t). Note that this is only effective from the likelihood on and does not affect the format or time interpretation of the data. This is key to reveal that the dynamics in rMSPRT computations match those of cortical recordings (Fig. 8c). Crucially, since n is hypothesis-independent, it does not affect inference. Assume there are N ∈ {2, 3, . . .} alternatives or hypotheses about the uncertain evidence, x (1 : t) -say possible courses of action or perceptual interpretations of sensory data. The task of a decision maker is to determine which hypothesis H i (i ∈ {1, . . . , N }) is best supported by this evidence as soon as possible, for a given level of accuracy. To do this, it requires to estimate the posterior probability of each hypothesis given the data, P (H i |x (1 : t)), as formalized by Bayes' rule. The mechanism we seek must be recursive to match the nature of the brain circuitry. This implies that it can use the outcome of previous choices to inform a present one, thus becoming adaptive and engaged in ongoing learning. Formally, P (H i |x (1 : t)) will be initially computed upon starting priors P (H i ) and likelihoods P (x (1 : t) |H i ); however, after some time ∆ ∈ {1, 2, . . .}, it will re-use past posteriors, P (H i |x (1 : t − ∆)), ∆ time steps ago, as priors, along with the likelihood function P (x (t − ∆ + 1 : t) |H i ) of the segment of x (1 : t) not yet accounted by P (H i |x (1 : t − ∆)). A mathematical induction proof of this form of Bayes' rule follows. If say ∆ = 2, in the first time step, t = 1: By t = 2: P (H i |x(2)/n, x(1)/n) = P (x(2)/n, x(1)/n|H i ) P (H i ) P (x(2)/n, x(1)/n) Note that we are still using the initial fixed priors P (H i ). Now, for t = 3: P (H i |x(3)/n, x(2)/n, x(1)/n) = P (x(3)/n, x(2)/n, x(1)/n|H i ) P (H i ) P (x(3)/n, x(2)/n, x(1)/n) According to the product rule, we can segment the probability of the sequence x (1 : t) as: And, since x(t) are i.i.d., the likelihood of the two segments is: If we substitute the likelihood in Eq. 6 by Eq. 8, its normalization constant by Eq. 7 and re-group, we get: It is evident that the rightmost factor is P (H i |x(1)/n) as in Eq. 5. Hence, in this example, by t = 3 we start using past posteriors as priors for present inference as: So, in general: where the normalization constants are We stress that by t > ∆, Eq. 9 starts making use of a past posterior as prior. 
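The two-case recursion of Eq. 9 is easy to state in code. The following is a minimal sketch, not the authors' implementation: it assumes the simplest case of one channel per hypothesis, it keeps the hypothesis-independent factors (a(t), d(t)) rather than dividing them out, which leaves the normalised posteriors unchanged, and f_pref/f_null stand for the lognormal ISI densities defined below with n the data scaling factor.

```python
import numpy as np
from scipy.stats import lognorm
from scipy.special import logsumexp

def lognorm_from_moments(mu, sd):
    """Frozen lognormal with the given mean and standard deviation (moment matching)."""
    theta2 = np.log(sd**2 / mu**2 + 1.0)
    kappa = np.log(mu**2 / np.sqrt(sd**2 + mu**2))
    return lognorm(s=np.sqrt(theta2), scale=np.exp(kappa))

def recursive_log_posteriors(x, f_pref, f_null, delta=3, n=40.0):
    """Log-posteriors over N hypotheses for a T x N data array x, following the
    two cases of Eq. 9: flat priors and the whole sequence while t <= delta, then
    the posterior from delta steps back as prior together with the likelihood of
    only the last delta observations."""
    T, N = x.shape
    log_post = np.zeros((T, N))
    for t in range(T):
        lo = 0 if t + 1 <= delta else t - delta + 1
        seg = x[lo:t + 1] / n
        loglik = np.array([f_pref.logpdf(seg[:, i]).sum()
                           + f_null.logpdf(np.delete(seg, i, axis=1)).sum()
                           for i in range(N)])
        log_prior = np.full(N, -np.log(N)) if t + 1 <= delta else log_post[t - delta]
        unnorm = loglik + log_prior
        log_post[t] = unnorm - logsumexp(unnorm)     # normalisation over hypotheses
    return log_post
```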
It is apparent that the critical computations in Eq. 9 are the likelihood functions. The forms that we consider ahead are based on the simplest shown by Caballero et al. (2015), where the number of evidence streams equals the number of hypotheses (C = N ); however, as discussed by them, more complex (C > N ), biologically-plausible ones can be formulated if necessary. Although not required, to simplify the notation when C = N , data in the channel conveying the most salient evidence for hypothesis H i will bear its same index i, as x i (j) (see Caballero et al. (2015)). When t ≤ ∆ we have: this is, the likelihood that x i (j) /n was drawn from a distribution, f * , rather than from f 0 , that is assumed to have originated x k (j) /n (k = i) for the rest of the channels. In Eq. 10, a (t) = t m=1 N k=1 f 0 (x k (m) /n) is a hypothesis-independent factor that does not affect Eq. 9 and thus needs not to be considered further. When t > ∆ only the observations in the time window [t − ∆ + 1, t] are used for the likelihood function because data before this window is already considered within the fed-back posterior, P (H i |x (1 : t − ∆)). Then, the likelihood function is: where again d (t) = t m=t−∆+1 N k=1 f 0 (x k (m) /n) need not to be considered further. Now, for our likelihood functions to work upon a statistical structure like that produced by neurons in MT we need to be more specific. Caballero et al. (2015) showed that ISIs in MT during the random dot motion task are best described as lognormally distributed and we assume that decisions are made upon the information conveyed by them. Thus, from now on we assume that f * and f 0 are lognormal and that they are specified by means µ * and µ 0 and standard deviations σ * and σ 0 , respectively. We can then conflate the logarithms of Eqs. 10 and 11 as the log-likelihood function, y i (t), substituting the lognormal-based form of it reported by Caballero et al. (2015): with where κ = log µ 2 / σ 2 + µ 2 and Θ 2 = log σ 2 /µ 2 + 1 with appropriate subindices * , 0 . The computations in Eq. 12 have been shown by Caballero et al. (2015) to be neurally plausible. The terms g 0 ∆ in Eq. 12 are hypothesis-independent, can be absorbed into a (t) and d (t), correspondingly, and thus will not be considered further. As a result of this, the y i (t) used from now on is a "simplified" version of the log-likelihood. We now take the logarithm of Eq. 9, define − log P i (t) ≡ − log P (H i |x (1 : t)) and substitute the simplified log-likelihood from Eq. 12 in the result, giving: The term c (t) models a hypothesis independent baseline: importantly, as this term is uniform across all hypotheses, it has no effect on inference. It is defined in detail below. The rMSPRT itself (shown schematically in Fig. 2) takes the form: where D (t) is the decision at time t, θ ∈ (0, − log (1/N )] is a constant threshold, and T is the decision termination time. S1.1.2 Cortical and thalamic baselines The cortical baseline c (t) in Eq. 13, delayed as mapped in Fig. 5 is It houses a constant baseline l and the thalamo-cortical contribution h (t − δ bu − δ uy ), which in turn is the delayed cortical input to the thalamus N Here we have chosen h (t − δ bu ) to be a scaled average of cortical contributions; nevertheless, any other hypothesisindependent function of them can be picked instead if necessary, rendering similar results. Ω, Ω d Ω Ω d , N = 2 Ω d , N = 4 Coherence % No. neurons µ * n σ * n µ 0 n σ 0 n µ 0 n σ 0 n µ 0 n σ 0 n 3.2 206 54. 
Table S1: Population ISI statistics (ms) in MT per coherence (first column). Second column: number of neurons for which data was available per coherence. µ: mean. σ: standard deviation. n: scaling factor. The parameter set (Ω or Ω d ) to which each value corresponds is noted above them. Note that, due to the information depletion required to produce Ω d , µ 0 n and σ 0 n take different values for N = 2, 4. The definitions above introduce two free parameters l ∈ R + and w yu ∈ [0, 1) that have the purpose of shaping and revealing the dynamics of the computations within rMSPRT during decision formation. The range of w yu ensures that the value of computations in the cortico-thalamo-cortical, positive-feedback loop is not amplified to the point of disrupting inference in the overall loop. Crucially, since both parameters are hypothesis-independent, none affects inference. S1.1.3 Model parameters To parameterize the input stochastic processes and likelihood functions of rMSPRT, we estimated the means µ * n and µ 0 n and standard deviations σ * n and σ 0 n as those over the activity between 900 and 1900 ms after stimulus onset in the MT population, per condition (shown smoothed in Fig. 1d,e). The subscript * indicates the condition when dots were predominantly moving in the direction preferred by the neuron. The subscript 0 indicates when they were moving against it. We dub the resulting parameter set Ω and report it in table S1. Fig. 1f shows the lognormal ISI distributions specified by Ω; solid ones are f * , in our notation, and dashed ones are f 0 , per coherence. We use l = 15, w yu = 0.4, n = 40 and δ yb , δ yu , δ bu , δ uy = 1 (hence ∆ = 3) for all of our simulations. The value of latencies was set to 1 for simplicity. The values of the first three free parameters come from a manual tuning exercise with the aim of revealing a pattern in the model LIP akin to the ramp-and-fork one in LIP recordings; note that such a two-segment pattern is already guaranteed by the two parts of Eq. 9. S1.2 Supplemental items S1.2.1 Information loss for a perfected match of monkey reaction times The slight deviation of the mean reaction times of rMSPRT vs those of monkeys in Fig. 4a stems from the expression 3 being an inequality. Due to this,Î ( , N ) is a likely over-estimate of I ( , N ). DividingÎ ( , N ) byˆ T m c hence gives an over-estimate of the monkey discrimination information,D m . If then rMSPRT uses statistics consistent with this over-estimatedD m , it renders under-estimated reaction times. This minor discrepancy can be corrected by further multiplyingD m , per condition, by the corresponding ratio of the decision time of the model over that of the monkey, from Fig. 4a. Repeating the experiments with the implied parameter set would trivially render rMSPRT reaction times that more perfectly match those of monkeys (not shown). This will likely carry with it a better match in error trials which is, again, unconstrained in the procedure. Nonetheless, this exercise gives us the full information loss associated to such perfected match, shown in Fig. 3c as dashed lines for a 250 ms non-decision time (compare to solid lines); this constitutes a further refined measure of the minimum information lost by the animals according to this framework. S1.2.2 Function of a diffuse cortico-thalamic projection rMSPRT proposes a role for the spatially diffuse cortico-thalamic projection (McFarland and Haber, 2002). 
The thalamic baseline, contributed by cortex, h (t − δ bu ), is predicted to constitute the offset within which updated inference results (posteriors made priors) from basal ganglia are conveyed back to cortex. This is another unique prediction of rMSPRT, not accounted for by previous probabilistic algorithms. Fig. S3b shows h (t − δ bu ) along with the corresponding output of the model thalamus in inRF and outRF settings. Without the increasing range facilitated by a diffuse cortico-thalamic contribution (red in model) the biological thalamic output (blue in model) would be at risk of saturating at 0, especially in outRF trials where the basal ganglia inhibition is strongest, Figure S1: Mapping of rMSPRT computations to the basal ganglia. Parallel computations for N hypotheses -indexed by j-mapped onto the basal ganglia nuclei, within the grey dashed box (see Bogacz and Gurney (2007); Larsen et al. (2010); Bogacz and Larsen (2011)). Same conventions and notation as in Fig. 5. All computations are delayed with respect to the substantia nigra pars reticulata. log N k=1 exp (y k (t + δ yb ) + log P k (t − δ bu − δ uy ) + c (t + δ yb )): normalization term (from Eq. 13), putting together the cortical computations for all hypotheses into a hypothesis-independent contribution. Note that the model striatum represents a copy of the cortical signal (as in Fig. 5) but its influence on the substantia nigra pars reticulata is the negative of such cortical input. wasting then the segregation of hypotheses carried by it, represented by the fed-back posteriors in the model. If the biological thalamus were thus saturated, rMSPRT also predicts that the cortex would be limited to produce likelihood-proportional responses only on the basis of the immediately incoming data, instead of making use of the whole sequence. Lastly, note that the increasing h (t − δ bu ) becomes h (t − δ bu − δ uy ) at the model sensory-motor cortex. This is the time changing component of the cortical baseline, c (t + δ yb ) (Fig. 5). Thus, we argue that h (t − δ bu − δ uy ) may form part of the increasing choice-independent drive, dubbed "urgency signal", revealed by Drugowitsch et al. (2012) when averaging population responses of LIP neurons during the dot motion task across choices, for separate coherence levels. Figure S3: Threshold in basal ganglia and influence of cortico-thalamic contribution. (a) Position of the threshold, θ, set at the model basal ganglia in simulations like those of Figs. 4, 6 and 7, per condition. (b) Example mean cortico-thalamic contribution, h (t − δ bu ) (red), compared to the mean thalamic output in inRF settings (solid blue) and outRF ones (dashed blue) for 25 % coherence and N = 2. Both (a) and (b) come from single Monte Carlo experiments with 800, 1200 trials for N = 2, 4, respectively. S1.2.3 Relation of rMSPRT to existing decision models The predictions of rMSPRT on the anatomical mapping of computations, neural dynamics, behaviour and the structure of the decision procedure in the brain, as explained here, are more comprehensive than those of prior models of decision-making. Here we explain the relationship of rMSPRT to the major classes of decision models to date. As said before, our rMSPRT generalizes the MSPRT (Baum and Veeravalli, 1994;Bogacz and Gurney, 2007) in allowing the re-use of posteriors at any given distance in the past as priors for present inference, via recursion. 
The MSPRT collapses (Baum and Veeravalli, 1994) to the sequential probability ratio test (SPRT) of Wald (1947) for N = 2 alternatives. The non-recursive Bayes' rule in the first case of Eq. 9 is the MSPRT's test statistic (used by Eq. 14). Making a ratio of two such posteriors (i = 1, 2) gives a ratio of likelihoods times priors. If priors are flat, this gives a likelihood ratio, the original test statistic of SPRT; appropriate fixed thresholds and an equation analogous to Eq. 14, complete the specification of SPRT. For N = 2, MSPRT is thus the physical, optimal limit of performance. The rMSPRT performs identically to MSPRT and so enjoys this same privilege. However, if made to collapse in the same manner, the rMSPRT would produce a more flexible, recursive version of the SPRT allowing for a corresponding re-use of posteriors as priors. The popular drift diffusion model (DDM; e.g. Ratcliff (1978); Mazurek et al. (2003); Palmer et al. (2005); Ding and Gold (2010, 2012a; Brunton et al. (2013); Tsunada et al. (2016)) is a non-recursive version of the SPRT (strictly for N = 2). It is thus optimal and is related to rMSPRT in the aforementioned manner. The popularity of the DDM stems from it incorporating continuous time and stochasticity in its input -a time-continuous Gaussian process. Adding time-continuity added plausibility to the hypothesis that the brain somehow implements the SPRT for decision (e.g. see Kira et al. (2015)), a problem also faced by the traditional discrete-time MSPRT. However, the existence of a Gaussian process in the brain to supply evidence for decisions, is unproven; since the the decision times predicted by DDM critically hinge on this process and its statistics (typically disconnected from the statistics of sensory neural activity), the interpretation of the DDM's behavioural predictions is obscured. To solve all this, Caballero et al. (2015) introduced a continuous-time generalization of the MSPRT whose inputs consist of stochastic spike-trains -the natural format of data in the brain. Caballero et al. also demonstrated that, as an average, this spiking MSPRT requires the same number of observations (inter-spike intervals) to decision as an equivalent instance of the discrete-time, traditional MSPRT. This established a solid, formal link between the discrete neural information used for decisions and the continuously distributed reaction times observed during behaviour. It also enabled us to interpret the discrete number of observations to decision in the simpler, discrete-time (r)MSPRT, as continuous decision times. This link from spike-trains to behaviour is the first advantage of rMSPRT over DDM. A second, related advantage is the rMSPRT's capability to relate choice behaviour to the statistics of the sensory data (spike trains) that determines it, as showcased here. A third is that the rMSPRT predicts multiple decision variables, one per hypothesis; all decision variables drop (or grow in a linear space) towards a single threshold (although a threshold per decision variable can be set, see Baum and Veeravalli (1994); Caballero et al. (2015)) when their associated hypothesis is well supported by input data. This contrasts with the single decision variable drifting towards one of two thresholds in the DDM and SPRT which has proven difficult to interpret neurobiologically. A fourth advantage is that (r)MSPRT is multi-alternative by design (N > 2), allowing us to analyse all realistic decision settings with the same mechanism.
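To make the relationship between these tests concrete, the following is a minimal, non-recursive MSPRT trial built from the same lognormal channel model; for N = 2 with flat priors the accumulated quantity is simply the log-likelihood ratio of the SPRT. This is an illustrative sketch only: the ISI statistics and threshold are hypothetical, and the data scaling factor n is omitted since it does not affect inference.

```python
import numpy as np
from scipy.stats import lognorm
from scipy.special import logsumexp

rng = np.random.default_rng(7)

def lognorm_from_moments(mu, sd):
    theta2 = np.log(sd**2 / mu**2 + 1.0)
    return lognorm(s=np.sqrt(theta2), scale=mu**2 / np.sqrt(sd**2 + mu**2))

def msprt_trial(mu_p, sd_p, mu_n, sd_n, N=4, theta=0.1, max_t=5000):
    """One simulated decision with the (non-recursive) MSPRT statistic: accumulate
    per-hypothesis log-likelihoods, normalise to log-posteriors under flat priors,
    stop when the smallest negative log-posterior falls to the threshold theta."""
    f_p, f_n = lognorm_from_moments(mu_p, sd_p), lognorm_from_moments(mu_n, sd_n)
    target = 0                                   # channel 0 matches the stimulus
    acc = np.zeros(N)                            # hypothesis-dependent log-likelihood part
    for t in range(1, max_t + 1):
        x = f_n.rvs(size=N, random_state=rng)    # null-distributed observations
        x[target] = f_p.rvs(random_state=rng)    # the preferred channel fires faster
        acc += f_p.logpdf(x) - f_n.logpdf(x)     # common terms cancel across hypotheses
        neg_log_post = -(acc - logsumexp(acc))
        if neg_log_post.min() <= theta:
            return int(neg_log_post.argmin()), t  # decision and sample size T
    return int(neg_log_post.argmin()), max_t

choice, T = msprt_trial(mu_p=30.0, sd_p=25.0, mu_n=55.0, sd_n=45.0)
print(choice, T)
```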
2016-11-01T19:18:48.349Z
2016-03-17T00:00:00.000
{ "year": 2016, "sha1": "44e0affece3227700f5da439423a18484d2b4269", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/ploscompbiol/article/file?id=10.1371/journal.pcbi.1006033&type=printable", "oa_status": "GREEN", "pdf_src": "BioRxiv", "pdf_hash": "302158a8e1c917c3067e13498760e499119e9e0a", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology", "Computer Science" ] }
4712857
pes2o/s2orc
v3-fos-license
In Vitro Degradation Behaviors of Manganese-Calcium Phosphate Coatings on an Mg-Ca-Zn Alloy In order to decrease the degradation rate of magnesium (Mg) alloys for the potential orthopedic applications, manganese-calcium phosphate coatings were prepared on an Mg-Ca-Zn alloy in calcium phosphating solutions with different addition of Mn2+. Influence of Mn content on degradation behaviors of phosphate coatings in the simulated body fluid was investigated to obtain the optimum coating. With the increasing Mn addition, the corrosion resistance of the manganese-calcium phosphate coatings was gradually improved. The optimum coating prepared in solution containing 0.05 mol/L Mn2+ had a uniform and compact microstructure and was composed of MnHPO4·3H2O, CaHPO4·2H2O, and Ca3(PO4)2. The electrochemical corrosion test in simulated body fluid revealed that polarization resistance of the optimum coating is 36273 Ωcm2, which is about 11 times higher than that of phosphate coating without Mn addition. The optimum coating also showed the most stable surface structure and lowest hydrogen release in the immersion test in simulated body fluid. Introduction Orthopedic surgery in recent times depends profoundly on the development of biomaterials used for fixation of fractures and joint replacement [1]. Among the three main kinds of biological implant materials, metallic materials, ceramic materials, and polymeric materials, biodegradable metals and polymers have gained interest for their advantages of being gradually dissolved, absorbed, consumed, or excreted in the human body, so there is no need for the secondary surgery to remove implants after the surgery regions have healed [2]. Current biodegradable implants made of polymers have an unsatisfactory mechanical strength [3] and therefore limited applications. Thus, magnesium (Mg) and its alloys have been attracting growing attention as next-generation medical material suitable for biodegradable bone implant and stent due to their well physical and mechanical properties, such as excellent biocompatibility, high strength, and similar elastic modulus to human bone [4][5][6][7]. However, the rapid decomposition speed of Mg alloys hinders the implants to fulfill their surgical function before being discharged, the inhomogeneous local corrosion starting from the surface of Mg alloys makes the corrosion behavior uncontrollable, and too much hydrogen evolved can be accumulated in gas pockets next to the corroding Mg implant, which will delay healing of the surgery region and lead to inflammatory reaction. Attempts to improve the corrosion resistance performance have been made on various Mg alloys, such as on AZ91 [8,9], AZ31 [10,11], AM60 [12], ZM21 [13], Mg-Ca [14], Mg-Zn and Mg-Mn [15], Mg-Zr-Sr [16], and Mg-Zn-Ca. Among these Mg based alloys, AZ91 and AM60 contain Al which is indicated in several pathological conditions in the human, the most commonly associated ones include dementia and Alzheimer's disease [17]. Ca was a preferable addition element for its capability of refining microstructure and biomimetic mineralization behavior during alloy biodegradable process [18]; Zn is next to aluminum in strengthening effectiveness as an alloying element in Mg, and adding Zn can improve both the tensile strength and the corrosion resistance of Mg alloys 2 Scanning [19]. In vitro cytotoxicity assessments indicated that Mg-Zn-Ca alloys did not induce toxicity in L-929 cells and are suitable for biomedical applications [20]. 
Surface treatment considered in previous studies includes chemical conversion coatings [21][22][23], electrochemical plating [24], anodizing [25], microarc [26], polymer coatings [27], and sol-gel coatings [28]. Among all the treatment methods, phosphate conversion coating was easier to acquire and more environmentally friendly. Phosphate conversion coating consists of crystalline or amorphous surface metal phosphates or of metal phosphate ions in the passivating solution and is usually carried out by immersion of the metal samples into the phosphating baths at a certain range of bath temperature and pH value of the bath solution [29,30]. Besides, when phosphate coating is used as a pretreatment layer between the substrate and a top layer, the pores in the coating may improve the bonding ability between substrate and top coating [31]. Calcium phosphate (CaP) coatings enhance cellular adhesion, proliferation, and differentiation to promote bone regeneration [22,[32][33][34][35]. Studies also referred to the fact that increase of calcium phosphate could maintain homeostasis and reduce the level of pH in physiological system [36]. Manganese-containing coatings induced higher bone-related gene expression in a culture of osteoblastic cells. This indicates that manganese could improve cell mediated mineralization [37]. CaP coating was reported in our previous work and coatings containing Mn element were reported in studies elsewhere that contributed to corrosion resistance on Mg alloys [24,[38][39][40]. Calcium can significantly influence the stability of Mg-Zn alloy, the crystallization of alloy can be refined [41], and a precipitate that has a complex structure can form. The precipitation has good high temperature stability, which can significantly improve the creep resistance of Mg alloy. The addition of trace amounts of Ca can have a substantial positive influence on the precipitation, process during artificial ageing in the Mg-Zn system [41]. The microstructure of the modified alloy is more complex and has a much refined precipitate structure that is very stable for prolonged periods, leading to a significant improvement in creep resistance. Mg-5Zn-1.5Ca alloy was found to be nontoxic and have good mechanical properties (the comparison is in another unpublished article of our study); thus Mg-5Zn-1.5Ca alloy was applied as Mg matrix substrate in this study. The aim of this study was to develop a manganesecalcium phosphate conversion coating on Mg-5Zn-1.5Ca alloy which only employed MnCl 2 and Ca (NO 3 ) 2 as the essential precursors that further improve the corrosion resistance property. Effect of Mn addition's amount in the phosphate solution to the coating's corrosion resistance was investigated; the in vitro biodegradation behavior and hydrogen evolution in SBF were evaluated. The morphology and composition of the coatings were also examined by X-ray diffraction (XRD) and scanning electron microscope (SEM) with energy dispersion spectroscopy (EDS). Furthermore, we also analyzed the polarization measurement, degradation behavior, impedance, and anticorrosion ability of a conversion coating on the Mg alloy. Sample Preparation and Coating Process. In this work, a home-made extruded Mg-5Zn-1.5Ca alloy with a dimension of 10 mm × 10 mm × 5 mm and compositions of 5 wt% Zn, 1.5 wt% Ca, and Mg balance was used as the substrate material in the following coating processes. 
Firstly, the samples were subsequently abraded by 500, 1000, 1500, and 2000 grit sandpapers to remove the oxide layer and to obtain similar surface roughness not more than RA 0.05 (the phosphate coatings would be uneven if the surface roughness of the Mg alloy substrate is high) and then ultrasonically rinsed in ethanol in order to remove grease on the alloy surface. For comparison purpose, 3 sets of samples were prepared and tested, respectively, and the average values were calculated. In this study, a "calcium salts pretreatment and phosphate coating" phosphate process was introduced to prepare phosphate coatings. The purpose of calcium salts pretreatment was to form denser and homogenous calcium activated particle on the sample surface which contributes to the uniform and complete phosphate coatings. Samples were pretreated in a calcium solution with 1.1 mol/L CaO and 0.20 mol/L HNO3 liquor for 0.5 minutes at pH 1.8. Then the samples were rinsed with deionized water and were coated in the phosphate solution for 10 minutes under 37 ∘ C at pH 2.7. The composition of phosphate solution was listed in Table 1. The pH of the phosphating solution was adjusted using nitric acid (HNO3) or sodium hydroxide (NaOH). All the chemical reagents were of reagent grade and purchased from Sinopharm Chemical Reagent Co. (Shanghai, China). Analysis of Composition of Phosphate Coating and Morphology Analysis. The phases of the coatings were analyzed by a X-ray diffractometer (XRD, Rigaku Dymax, Japan) with a Cu K radiation ( = 0.154178 nm) and a monochromator at 40 kV and 100 mA with the scanning rate and step being 4 ∘ /min and 0.02 ∘ , respectively. The surface morphologies of the samples were examined using a JSM-5310 scanning electron microscope (SEM, ZEISS EV018, Germany), with the voltage of 20 kV and working distance (WD) of 10.0 mm. The surface chemical elements were analyzed using an attached INC250 energy dispersive X-ray analyzer (EDS, X-Max, Oxford Instruments, UK) with the acquisition duration of 60 s. The surface morphologies of the samples were examined using a JSM-5310 scanning electron microscope (SEM, ZEISS EV018, Germany; WD = 12.0, EHT = 20 kV). After performing immersion test in SBF for 168 hours, the surface morphologies of the samples were characterized and analyzed. Potentiodynamic Polarization. The electrochemical measurements were performed by using a potentiostat (Ver-saSTAT 3, Princeton Applied Research, US) with a threeelectrode system. In this system, a platinum electrode was treated as the counter electrode, and a saturated calomel electrode (SCE, +0.242 V versus SHE) was regarded as the reference electrode, while the sample with an exposed area of 1 cm 2 was regarded as the working electrode. Prior to [42]. The impedances of the samples were evaluated by the electrochemical impedance spectroscopy (EIS) analysis at an OCP of 10 mV from 100 kHz down to 10 MHz additionally; the potentiodynamic polarization test proceeded at a scanning rate of 5 mV/s and in the range of OCP ±1.5 V. The data curves of the tested samples for the potentiodynamic polarization and EIS experiments were analyzed through CorrView and ZSimpWin software, respectively. Test of Hydrogen Evolution. All the samples were immersed in the SBF solution at 37 ∘ C for 168 hours for analyzing the corrosion behaviors of coatings on Mg alloys. The hydrogen evolution volumes of the samples in the SBF solutions were recorded every 12 hours. Three sets of samples were prepared and tested, respectively. 
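Hydrogen evolution can be related to the amount of magnesium dissolved, since the usual overall corrosion reaction is Mg + 2H2O → Mg(OH)2 + H2, i.e. one mole of H2 per mole of Mg. The following is a rough back-of-envelope sketch of that conversion, not part of the paper's own analysis; it assumes ideal-gas behaviour at the test temperature, 1 atm collection pressure, and a density close to pure Mg.

```python
R = 82.06          # cm^3 atm / (mol K), ideal gas constant
T = 310.15         # K, the 37 degree C SBF test temperature
P = 1.0            # atm, assumed collection pressure
M_MG = 24.305      # g/mol, molar mass of Mg
RHO_MG = 1.74      # g/cm^3, density of Mg (alloy assumed close to pure Mg)

def corrosion_rate_mm_per_year(h2_volume_ml_per_cm2, hours):
    """Approximate average corrosion rate implied by the H2 volume per unit area."""
    mol_h2_per_cm2 = h2_volume_ml_per_cm2 / (R * T / P)    # mol of H2 (and Mg) per cm^2
    mass_loss_g_per_cm2 = mol_h2_per_cm2 * M_MG            # g of Mg dissolved per cm^2
    depth_cm = mass_loss_g_per_cm2 / RHO_MG                # equivalent uniform depth
    return depth_cm * 10.0 * (24 * 365 / hours)            # scaled to mm per year

# Example with the value reported for the optimum coating: 1.4 mL/cm^2 after 168 h.
print(f"{corrosion_rate_mm_per_year(1.4, 168):.3f} mm/year")
```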
All the tests in this paper have been repeated at least three times and five times for the data with obvious errors. Phosphate Coating with the Calcium Salts Pretreatment Film. The preparation of samples was shown in Materials and Methods. Mg-5Zn-1.5Ca alloy pieces, after removing all of the oxide film, were treated by calcium salt treatment in the pretreatment solution. All of the samples were pretreated in the same solution: 1.2 mol/L CaO and 0.21 mol/L HNO 3 at pH 1.8. SEM images of the composition of calcium salts pretreatment solution of the film are shown in Figure 1. After water rinse, samples were coated with phosphate coating in different solution; their chemical components are shown in Table 1. The manganese content in the phosphate coating was obtained by EDS. The details of the other elements of EDS are listed in Figure 3. In Table 1, the manganese content in the phosphate coating obtained from solution containing 0.03 mol/L Mn 2+ was less (0.96%). The manganese content in the phosphate coating obtained from solution containing 0.05 mol/L Mn 2+ was up to 13.83%. The manganese content of phosphate coatings was increased with the Mn 2+ addition in the phosphating solution. Figure 1 showed SEM image of the calcium pretreatment film on the Mg-5Zn-1.5Ca alloy. After pretreatment, it can be seen that a base film with uniform and dense microstructure formed on the substrate. The pretreatment film was not sufficient to fully cover the Mg alloy substrate, but the small crystal nucleus provided a rough surface to promote the formation of the later phosphate conversion coatings and offered basic protection for the Mg alloy. Figure 2 illustrated the XRD patterns phosphate coatings with various Mn 2+ contents on Mg-5Zn-1.5Ca alloy substrates. Dicalcium dihydrate (DCPD, CaHPO 4 ⋅2H 2 O) and Ca 2 Mg 6 Zn 3 were detected in CaP coating in Sample 1, and DCPD and Ca 2 Mg 6 Zn 3 characteristic peaks remained in the phosphate coatings, but their peaks' intensity became weak in Sample 4, and they were rarely observed in Sample 5. A small quantity of tricalcium phosphate (TCP, Ca 3 (PO 4 ) 2 ) that was often reported to appear in CaP coatings [27,43] was also detected in all of the coatings. In coating of Sample 2, 0.01 mol/L Mn 2+ was added to the phosphate coating, and manganese hydrogen phosphate trihydrate (MHPT, MnHPO 4 ⋅3H 2 O) can be weakly detected, and its diffraction peak intensity increased with the increase of the Mn 2+ addition in phosphate coating solution of Sample 3, Sample 4, and Sample 5. Microstructure of Phosphate Coatings before Immersed Test. The SEM surface morphology of coatings on samples was shown in Figure 3. Along with the gradual increase of Mn 2+ content, the morphologies of the coatings show an obvious trend of variation. In all of the figures, the scattered flakes reduced on the surface morphology of samples and lumpy crystals started to appear. The flakes have been reported in researches as a typical CaP coating morphology, and the lumpy crystals could be considered a typical MnHPO 4 structure as reported in studies of manganese phosphate coatings on Mg alloy [44]. As revealed in Figure 3(a), the surface of a CaP coating was uniform and dense, while the size of a flake approximated 10 m. From Electrochemical Corrosion Behavior of the Samples. Potentiodynamic polarization curves and their corresponding electrochemical parameters of the samples were shown in Figure 4 and Table 2, respectively. 
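The E_corr and i_corr values reported in Table 2 are conventionally extracted from such polarization curves by Tafel extrapolation. The sketch below is a much-simplified illustration of that analysis, not the paper's fitting procedure: the fitting window is an assumed parameter, and real data usually need careful selection of the linear Tafel regions.

```python
import numpy as np

def tafel_extrapolation(E, i, window=(0.05, 0.15)):
    """Very simplified Tafel analysis of a potentiodynamic polarization curve.
    E in V vs SCE, i in A/cm^2 (signed). Fits log10|i| vs E on the anodic and
    cathodic branches within `window` volts of the zero-crossing and intersects
    the two lines to estimate (E_corr, i_corr)."""
    E = np.asarray(E, float)
    i = np.asarray(i, float)
    E0 = E[np.argmin(np.abs(i))]                     # potential of minimum |i|
    lo, hi = window

    def branch_fit(mask):
        # slope a and intercept b of log10|i| = a*E + b on the selected branch
        return np.polyfit(E[mask], np.log10(np.abs(i[mask])), 1)

    a_an, b_an = branch_fit((E > E0 + lo) & (E < E0 + hi))      # anodic branch
    a_ca, b_ca = branch_fit((E < E0 - lo) & (E > E0 - hi))      # cathodic branch
    E_corr = (b_ca - b_an) / (a_an - a_ca)                      # line intersection
    i_corr = 10 ** (a_an * E_corr + b_an)
    return E_corr, i_corr
```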
Figure 4 shows the corrosion potentials (E_corr) and corrosion current densities (i_corr) of the phosphate coatings. Samples with a more positive E_corr would have a greater potential of corrosion. As shown in Table 2, the E_corr of Sample 1 and Sample 2 was more positive, while the E_corr of Sample 4 was the least positive, being 204 mV/SCE more negative than that of Sample 1. Thus, it can be deduced that, in the corrosion process, Sample 4 tended less to be corroded. The i_corr was in direct proportion to the corrosion rate. For further evaluation of the corrosion behaviors of the coatings, the EIS test was employed. Figure 5 and Table 3 show the Nyquist plots and the main fitting results for the coated samples, respectively. The equivalent circuits and detailed description for the Nyquist plots in this study were similar to those in our prior research [45]. The Nyquist plots for the coatings of Samples 1, 2, and 3 contained one high frequency (HF) capacitive loop and one medium frequency (MF) capacitive loop. These diagrams were analyzed using the equivalent circuit in Figure 6(a), which reveals the presence of different interfacial areas with different characteristics. The HF capacitive loop was related to the dielectric properties of the coating (characterized by its capacitive and resistive elements), and R_ct was the charge transfer resistance. The second equivalent circuit, shown in Figure 6(b), was used to describe the Nyquist plots for the coatings of Sample 4 and Sample 5. These plots only contained one capacitive loop, which refers to the simple process at the solution/coating interface and implies that the coatings have full coverage and could supply good corrosion protection for the substrate Mg alloy. C_dl represents the electric double layer capacity at the solution/substrate interface at pinholes [1]. Compared to the samples coated in the base solution, the samples coated in the Mn2+-containing solutions had smaller C_dl values, especially the Sample 4 coated one. This implies that the coating of Sample 4 has a denser surface structure than the other coatings. The total surface resistance (R_total) of the coating of Sample 4 was 36273 Ωcm2, which was 11.5 times the R_total of the coating of Sample 1. Thus, both the potentiodynamic polarization and the EIS tests indicate that the coating of Sample 4 on the Mg-5Zn-1.5Ca alloy possesses better anticorrosion ability than the other coatings in this study.

Corrosion Behaviors of the Samples in Immersion Tests

The SEM photos of the surface morphologies of the samples after being corroded by the SBF solution for 168 hours are shown in Figure 7. After being immersed in the SBF solution, the flake-like coatings of Sample 1 and Sample 2 were indistinguishable, cracked, and covered by corrosion products, as illustrated in Figures 7(a) and 7(b). During the immersion process, the second phase particles Ca2Mg6Zn3 acted as cathodes and caused the dissolution of the coating and the exposure of the Mg alloy substrate. On the surface of Sample 3 in Figure 7(c), the flakes were still observable and they covered most of the cracks so as to provide better protection for the Mg alloy. The coating of Sample 4, as shown in Figure 7(d), is intact and almost free of pitting corrosion. The three components CaHPO4·2H2O, Ca3(PO4)2, and MnHPO4·3H2O were homogenized in the coating, which may cause the change of corrosion mechanism from local corrosion to uniform corrosion. This indicates that the lumpy-crystal manganese compound offered the most improved corrosion protection for the Mg alloy.
However, a small amount of corrosion cracking occurs in Figure 7(e) with the further increasing Mn content. The content of MnHPO4⋅3H2O in the XRD analysis in Figure 2 showed that the concentration of MnHPO4⋅3H2O was the highest. This is possibly caused by the stress corrosion of the lumpy coating in the simulated body fluid. Therefore, these phenomena indicate that the coating of Sample 4 provided a complemented structure that would protect Mg alloy durably. Analysis from Figure 7 and the XRD results in Figure 2 can be made as follows: (1) The second phase particles Ca 2 Mg 6 Zn 3 containing Ca and Zn are used as cathodes in the film layer which caused the anode of the film to dissolve and the Mg alloy matrix was exposed. Therefore, the corrosion is more severe (11.25 wt% Mg exposed) in Figure 7(a). (2) Corrosion cracking also is shown in Figure 7(b) and Figure 7 (3) The film layer in Figure 7(d) was intact and free of corrosion. CaHPO 4 ⋅2H 2 O, Ca 3 (PO 4 ) 2 , and MnHPO 4 ⋅3H 2 O homogenized the film, the corrosion current was evenly dispersed, and the film was more resistant to corrosion. The relative potential of these three components and the Mg alloy matrix changed, which is the cause of the change of local corrosion mechanism. (4) A small amount of corrosion cracking is shown in Figure 7(e). The content of MnHPO 4 ⋅3H 2 O in the XRD analysis in Figure 2 showed that the concentration of MnHPO 4 ⋅3H 2 O was higher than that of the other four samples. Maybe when concentration of the higher hard MnHPO 4 ⋅3H 2 O in the coating surface is more than a certain value, stress corrosion crack is generated on the coating in the simulated body fluid. The mechanism would be further studied in our next work. The samples were, respectively, immersed in the SBF solution for 168 hours, while the hydrogen (H 2 ) evolution volumes variation was shown Figure 8. Results in Figure 8 indicate that the H 2 evolution volume of all the coatings containing Mn 2+ was relatively lower than that of the coating of Sample 1 in the SBF, which implies better anticorrosion reaction. Coating of Sample 4 had the lowest H 2 evolution volume in this test, which remained almost 0 ml for the first 96 hours and increased very slowly to 1.4 ml/cm 2 measured at the 168th hour. Standard deviation in Figure 8 also demonstrates that H 2 evolution volume of Sample 4 was more stable than that of other coatings. Hence, these results stated that the coating of Sample 4 had the promising property of lower H 2 evolution volume applied on Mg alloy surface when served as body implant materials. Conclusions (1) Manganese-calcium phosphate coatings were prepared on an Mg-5Zn-1.5Ca alloy in calcium phosphating solutions with different addition of Mn2+. A calcium salt pretreatment film was applied to homogenize the subsequent manganese-calcium phosphate coating. (2) Mn content in the phosphate coatings has significant influences on the coating morphology and degradation behavior. With the increasing Mn addition, the degradation resistance of the manganese-calcium phosphate coatings was gradually improved. (3) The optimum coating prepared in solution containing 0.05 mol/L Mn2+ had a uniform and compact microstructure and was composed of MnHPO4⋅3H2O, CaHPO4⋅2H2O, and Ca3(PO4)2. The electrochemical and immersion corrosion test in simulated body fluid revealed that the optimum coating had a greatly improved surface stability and degradation resistance compared to the calcium phosphate coating without Mn addition. 
Conflicts of Interest

The authors declare no conflicts of interest.

Authors' Contributions

Yichang Su and Yingchao Su conceived and designed the experiments; Yichang Su and Wei Zai performed the experiments; Yichang Su analyzed the data and wrote the paper.
2018-04-26T23:46:28.678Z
2018-02-13T00:00:00.000
{ "year": 2018, "sha1": "201d0c2730714587970a1a1662cb7bfc66128e17", "oa_license": "CCBY", "oa_url": "http://downloads.hindawi.com/journals/scanning/2018/6268579.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "8bd83d221bf0d8c28e5a78b953a61839d987c7fc", "s2fieldsofstudy": [ "Materials Science", "Medicine" ], "extfieldsofstudy": [ "Materials Science", "Medicine" ] }
259817462
pes2o/s2orc
v3-fos-license
Emerging clinical applications in oncology for non‐invasive multi‐ and hyperspectral imaging of cell and tissue autofluorescence Hyperspectral and multispectral imaging of cell and tissue autofluorescence is an emerging technology in which fluorescence imaging is applied to biological materials across multiple spectral channels. This produces a stack of images where each matched pixel contains information about the sample's spectral properties at that location. This allows precise collection of molecularly specific data from a broad range of native fluorophores. Importantly, complex information, directly reflective of biological status, is collected without staining and tissues can be characterised in situ, without biopsy. For oncology, this can spare the collection of biopsies from sensitive regions and enable accurate tumour mapping. For in vivo tumour analysis, the greatest focus has been on oral cancer, whereas for ex vivo assessment head‐and‐neck cancers along with colon cancer have been the most studied, followed by oral and eye cancer. This review details the scope and progress of research undertaken towards clinical translation in oncology. | INTRODUCTION The characterisation of cells and tissues in oncology may be undertaken for a wide variety of reasons, notably for tumour diagnosis and monitoring, disease prognosis, and surgical margin definition. Conventional diagnostic methods, for example, non-invasive radiological imaging, or tissue sampling via surgical access, are often not sufficiently informative for optimal clinical decision making [1,2]. Additionally, they can require the use of exogenous agents (e.g., radiographic contrast agents, 5-aminolevulinic acid) which can have drawbacks or contraindications, or the collection of tissue biopsies, which, depending on context, may have diagnostic limitations or lead to negative patient outcomes [3][4][5]. As such, there is a clinical demand for genuinely label free, non-invasive methods of assessment for cells and tissues, able to noninvasively provide the complex biochemical information needed for tumour characterisation and mapping. Assessment of the native fluorescence of endogenous fluorophores-autofluorescence imaging-has received growing attention as a potential solution to this challenge. Numerous metabolites exist that can be excited to emit light at specific, sometimes uniquely characteristic wavelength ranges [6,7]. Although often regarded as interfering noise in bioimaging technologies that rely on tagging molecules of interest with fluorescent markers, autofluorescence directly reflects cell and tissue biochemistry and metabolic changes without biopsy, fluorescence probe staining or fixation. Amongst the most prevalent autofluorophores are key indicators of cellular metabolism and its redox state, reduced nicotinamide adenine dinucleotide (NADH) and flavin adenine dinucleotide (FAD), whose relative concentrations give the optical redox ratio [8]. NADH and FAD are the principal electron donors and acceptors of oxidative phosphorylation, respectively. NADH has excitation maxima at 290 and 351 nm and emission maxima at 440 and 460 nm, while FAD has a single excitation maximum at 450 nm with its corresponding emission maxima at 535 nm [9] (Figure 1). 
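Since the optical redox ratio is the central quantity derived from NADH and FAD autofluorescence, a per-pixel redox map is a natural first computation on co-registered channel images. The sketch below is only an illustration: definitions vary across the literature (NADH/FAD, FAD/(NADH + FAD), and others), this version uses FAD/(FAD + NADH), and the channel images are hypothetical stand-ins for emission collected around 460 nm (NADH-weighted) and 535 nm (FAD-weighted).

```python
import numpy as np

def redox_ratio_map(fad_img, nadh_img, eps=1e-6):
    """Per-pixel optical redox ratio, here taken as FAD / (FAD + NADH)."""
    fad = np.asarray(fad_img, float)
    nadh = np.asarray(nadh_img, float)
    return fad / (fad + nadh + eps)     # eps avoids division by zero in dark pixels

# Hypothetical co-registered channel images of equal shape.
rng = np.random.default_rng(1)
nadh_channel = rng.random((256, 256))
fad_channel = rng.random((256, 256))
rr = redox_ratio_map(fad_channel, nadh_channel)
print(rr.mean(), rr.min(), rr.max())
```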
Other related endogenous fluorophores, such as NADPH and other members of the flavin family, have sufficiently similar excitation/emission profiles that they are not spectrally distinct from NADH and FAD, but differences in their decay rate (assessable by fluorescence-lifetime imaging microscopy (FLIM) [10]) can allow them to be discriminated if needed. By imaging the same area with a series of different excitation and/or emission combinations (channels) it is possible to obtain spectral profiles of the cells or tissue under examination. This imaging strategy is termed multi or hyperspectral imaging-depending on the number of channels ultimately utilised-multispectral imaging generally refers to a small number of bands (e.g., [3][4][5][6][7][8][9][10] while hyperspectral images have more bands (up to hundreds, typically with much narrower spectral bandwidth). Here, these terms are used interchangeably. The hyperspectral/ multispectral images can then be mapped to the characteristics of known autofluorophores with greater precision than techniques that only target select key wavelengths of individual autofluorophores [7,[11][12][13]. In this process, adjustments typically need to be made for background fluorescence [14,15] from other autofluorophores. Hyper-and multispectral imaging quantify autofluorescent molecules at a single pixel level, making it possible to produce spatial maps of these informative disease markers in cells and tissues [7,11,16]. In hyper and multispectral imaging, every pixel is characterised by its own spectral profile reflecting the distributions of fluorophores. The hyper-and multispectral imaging data sets contain spatial and molecular information which can be assessed through Big Data-driven modern image analysis methodologies for the direct prediction of highly informative cell and tissue characteristics [17,18]. The spectral and spatial features extracted via image analytics provide an additional dimension of assessment than standard spectrometry without image information [19]. Importantly, hyper-and multispectral imaging of autofluorescence can yield non-invasive methodologies for rapid assessment [20], requiring very limited amounts of sample, down to the single cell level [21]. Some of these use single-photon fluorescence which has low-cost instrumentation whereas two-photon autofluorescence requires specialised instrumentation and highly trained personnel [22,23]. Beyond all other conditions, research attention has been focused on the application of hyper and multispectral imaging of cell autofluorescence to oncology. The native fluorescence from cancerous tissue/cells has been utilised to examine their physiological processes and changes at the cellular level. Changes in the metabolic environment of cells (e.g., normal vs. cancer cells, changes after treatment) can be explored through the fluorescence spectra and abundance of fluorophores. Endogenous fluorophores including NADH, NADPH, FAD and porphyrins have generally been explored to analyse metabolic changes [24,25], whereas elastin [26] and collagen have been used to look at structural changes [27]. The metabolic profile between healthy cells and cancer cells differs due to their varying genetics and metabolic microenvironment. Tumours create a hypoxic environment where oxygen levels are approximately 1%-2% lower compared to healthy tissue [28]. In this case, glycolysis is the main form of ATP production compared to oxidative phosphorylation in normal cells. 
In glycolysis, the cellular redox status changes where NAD + is reduced to NADH. It also supports the pentose phosphate pathway (PPP), aiding in the production of NADPH [29]. Consequently, this increase in production of NADH and NADPH is important for cancer cells to maintain metabolic homeostasis [29]. The changes in metabolism in cancerous and noncancerous tissue have been studied by analysing the fluorescence spectra of fluorophores [24,25]. NADH and FAD are the main coenzymes involved in cellular metabolism and are generally the predominate fluorescent signals present, thus are often used as metabolic markers [30]. Dramicanin et al. found increased NADH concentration in malignant breast tissue, suggesting that this was due to damaged mitochondrial metabolism and a shift to anaerobic metabolism [25]. They also found an increase in intensity of PpIX in the malignant tissue, due to the tissue having better vascularity [25]. Similar results were found by Pu et al. who observed an increase in NADH and FAD content in breast tissue compared to normal tissue, however, there was a decrease in collagen [24]. The main fluorophore in the ECM of breast tissue is type 1 collagen. For cancerous tissue to metastasise, they degrade the ECM, consequently leading to the loss of collagen [31,32]. Therefore, by analysing the fluorescent intensity/profile of these autofluorophores, biochemical and physiological information can be collected. Tryptophan is another autofluorophore that has been studied in cancer research. Tryptophans are amino acids and are important for protein synthesis. Zhang et al. found a higher concentration of tryptophan in MDA-MB-231 breast cancer cells compared to MCF-7 (nonaggressive breast cancer cells) or normal fibroblast cells [33]. Cancer cells are known to take up more tryptophan as there have larger amino acid transporters on the cell membrane, thus leads to the suppression of the immune response against the cancer cells [33]. As such it has been clear from the earliest stages of research on endogenous fluorescence that there was great potential to address the major oncological priorities of accuracy in detection (minimising false negative results which result in treatment delay and false positives which can cause further invasive testing and unnecessary therapy) as well as speed of tissue characterisation (intraoperative assessment being extremely valuable and rapid diagnoses improving treatment planning and sparing patients potentially unnecessary periods of stress). In this review, we explore all potential clinical applications of hyper and multispectral imaging of cell and tissue autofluorescence that have been investigated for the field of oncology, and summarise the evidence currently supporting their translation. | HYPER AND MULTISPECTRAL AUTOFLUORESCENCE IN ONCOLOGY A scoping review strategy [34] was employed to identify all contemporary studies on hyper and multispectral imaging of autofluorescence with a focus on addressing an oncologic, clinical issue. We utilised title, key and indexing terms (e.g., multispectral, hyperspectral, spectral, multi-modal, endogenous fluorescence, native fluorescence, autofluorescence) joined by Boolean operators in Pubmed, Scopus, and Embase, from 2010 to September 2021. An initial 3307 potential studies were identified, which was reduced to 213 on review of the titles and abstracts with a final 32 studies included after consideration of the full texts. 
These showed that studies have been carried out for multiple forms of cancer, with diverse applications considered. The approaches taken and resulting findings of these studies are detailed in this review to provide an overview of the field and inform future research and translation. [Table 1 (fragment): significant differences in intensities between high- and low-grade tumours; redox ratio higher in healthy tissue than in tumour, and higher in low-grade than in high-grade tumours.] It is readily apparent that work has focused on accessible tumours that can be reached endoscopically (e.g., colon and oral) or external tumours (e.g., ocular and skin). This is unsurprising as this optical imaging technology is best suited for application in areas of the body which do not require surgery to access. As such, it is meaningful to consider studies in terms of which were applied to live tumours still in the body (in vivo; Section 3) and which assessed excised and surgical specimens (ex vivo; Section 4).

| IN VIVO ASSESSMENT OF CANCER

The characterisation of tissues by autofluorescence can be carried out non-invasively, without necessitating their removal for processing and histology staining. As such, one of the key areas of interest for the application of multi and hyperspectral microscopy to oncology has been cancer screening. Many forms of cancer diagnosis involve the invasive collection of biopsies, which creates a difficult balance for clinicians who must judge when suspicious tissue warrants further investigation.

| Oral cancer

Oral cancers, which have good accessibility for in vivo assessment and screening, have received significant attention for the application of hyperspectral imaging of autofluorescence. Early research by Roblyer et al. [40] used a surgical microscope for the assessment of non-neoplastic, dysplastic and cancerous oral tissue. Excitation was at 365, 380, 405 and 450 nm (each with 50 nm bandwidth) and 18 emission features were obtained using colour channels, including the mean values of the red, green, and blue channels as well as the ratios of the mean red-to-green, red-to-blue, and green-to-blue pixel values. They found that excitation at 405 nm gave the best image contrast, and the ratio of red-to-green fluorescence intensity computed from these images provided the best classification of dysplasia/cancer versus non-neoplastic tissue, with a sensitivity of 100% and a specificity of 85% in the validation, although the ability to separate precancerous lesions from cancerous ones was more limited. A multispectral FLIM handheld endoscope [41] was used to excite clinically suspicious oral lesions at 355 nm, collecting emission bands at 390 ± 20, 452 ± 22.5, and >500 nm; this approach diagnosed mild dysplasia and early-stage oral cancer with AUC > 0.9. Furthermore, with a negative predictive value (the ratio of true negatives to total negative test results) of 98%, this gives strong support that the multispectral assessment of oral lesions could be used to avoid unnecessary biopsies in this region. In vivo, hyperspectral imaging of autofluorescence (455 nm excitation, 500-720 nm emission) applied to a mouse model of oral cancer detected tongue neoplasia with an AUC of 0.84 ± 0.06 [42]. Classification of specific neoplastic transformations was also undertaken, with accuracies of 75%, 76%, 83% and 91% for normal, dysplasia, carcinoma in situ and squamous cell carcinoma, respectively. [Table 1, continued: one listed study discriminated normal and cancerous ovaries with 100% sensitivity and 51% specificity, with specificity increased to 69% by dividing the autofluorescence data by green reflectance values. Note: Detection = studies with a focus on the detection or grading of neoplasia at the lesion level (potential application to screening or diagnosis); Delineation = studies with a focus on defining tumour regions within tissue (potential application to the definition of surgical margins); Characterisation = studies with a focus on investigating the properties of neoplastic tissues without applying those findings to differentiation.]
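The colour-channel features described for the study by Roblyer et al., mean red, green and blue values over a lesion and their pairwise ratios, are straightforward to compute. The sketch below is only an illustration of that kind of feature extraction; the image, mask and any downstream classifier are hypothetical and not taken from the study.

```python
import numpy as np

def rgb_ratio_features(rgb_img, lesion_mask):
    """Mean R, G, B over a lesion mask plus the red-to-green, red-to-blue and
    green-to-blue ratios, computed from an (H x W x 3) autofluorescence image."""
    pix = rgb_img[lesion_mask.astype(bool)]      # N x 3 array of in-lesion pixels
    r, g, b = pix.mean(axis=0)
    return {"R": r, "G": g, "B": b, "R/G": r / g, "R/B": r / b, "G/B": g / b}

# Hypothetical example: a random image and a circular lesion mask.
rng = np.random.default_rng(0)
img = rng.random((128, 128, 3))
yy, xx = np.mgrid[:128, :128]
mask = (yy - 64) ** 2 + (xx - 64) ** 2 < 30 ** 2
print(rgb_ratio_features(img, mask))
```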
In an in vivo study using a hamster model, Pal et al. [43] used multiphoton autofluorescence microspectroscopy (780 nm excitation, emissions collected at 400-650 nm) to characterise oral epithelial squamous cell carcinoma (OSCC). A red shift of a blue-green (480-520 nm) peak and a prominent peak for OSCC and some high-grade dysplasia at 635 nm (which was attributed to PpIX) were observed. The fluorescence intensity of the PpIX peak and the ratio of this peak to the blue-green peak had statistically significant differences between control and neoplastic tissues. Other works have focused more on the characterisation of neoplastic properties than on the development of diagnostic algorithms. Another study of FLIM endoscopy (emission 390 ± 20, 452 ± 22.5, and >500 nm; excitation 355 nm) [44] found several autofluorescent features with statistically significantly different distributions between precancerous and cancerous oral lesions and normal oral tissue, while a single device which combined assessment of reflectance and autofluorescence (with an emission range of 471-667 nm and a 405 nm LED used for excitation) was used to create snapshot images of normal controls and patients with oral cancer [45] and demonstrated that abnormal tissues had autofluorescence spectra with low intensity, a relative decrease in the blue/green wavelength region and an increase in the red wavelength region.

| Skin cancer

Skin cancers have similarly ready accessibility for in vivo assessment to oral cancers. Fluorescence lifetime imaging (FLIM) dermoscopy was used by Romano et al. [54], with a triple emission band of 390 ± 20, 452 ± 22, and >496 nm and excitation at 355 nm, to discriminate nodular basal cell carcinoma (BCC) from healthy tissue, achieving an AUC of 0.82. Interestingly, in Lihachev et al. [55], the advancing quality of smartphone cameras was exploited to develop a translatable system suitable for remote primary evaluation of suspicious skin lesions. Here, skin autofluorescence was assessed at 405 nm excitation, with emissions captured in the red and green spectral bands by a Samsung Galaxy Note 3 smartphone-integrated CMOS RGB image sensor, with BCC shown to have lowered intensity compared to surrounding healthy skin.

| Cervical cancer

Another in vivo study was carried out in a mouse model of cervical cancer (TC-1 cells transformed with human papillomavirus, injected into the flank) [58]. Here, multispectral imaging, with a 638 ± 3 nm LED and a multiband-pass filter with the visible spectral range tailored to detect NADH and flavin autofluorescence (used for the background image) and 700 nm for PpIX, helped to visualise and highlight tumours by distinguishing them from normal areas. This study also found that their system was sensitive to real-time dynamic photochemical reactivity through assessment of PpIX photobleaching, raising its potential application for treatment monitoring.
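The in vivo studies above are compared largely through sensitivity, specificity and AUC. The sketch below is a generic illustration of how such figures are derived from a classifier's scores; the scores and labels are synthetic and do not correspond to any of the cited studies, and the Youden-index threshold choice is just one common convention.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

# Synthetic scores and labels standing in for any lesion-level classifier
# (1 = neoplastic, 0 = non-neoplastic).
rng = np.random.default_rng(0)
labels = rng.integers(0, 2, 200)
scores = labels * 0.5 + rng.random(200)          # informative but noisy scores

auc = roc_auc_score(labels, scores)
fpr, tpr, thresholds = roc_curve(labels, scores)
# Sensitivity = TPR; specificity = 1 - FPR; here the operating point maximises Youden's J.
best = np.argmax(tpr - fpr)
print(f"AUC = {auc:.2f}, sensitivity = {tpr[best]:.2f}, specificity = {1 - fpr[best]:.2f}")
```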
| EX VIVO ASSESSMENT OF CANCER Despite its advantages for clinical translation applications, the in vivo assessment of tumours can be difficult to achieve, especially for areas less accessible than oral and skin cancers. As such, the majority of research on the application of hyper and multi-spectral assessment of cancer autofluorescence has been carried out ex vivo on tissue biopsies or, in some cases, in vitro cultured cells from neoplastic cell lines. Even here, the accelerated nature of this form of imaging, which does not require periods of fixation and staining, has major potential for guiding surgeries and some of the research has direct potential for translation. | Head and neck cancer One group of authors has published several studies investigating the use of hyperspectral imaging of autofluorescence for assisting surgeries for head and neck cancer. In Halicek et al. [49], they applied their system, which for fluorescence microscopy used a 455 nm excitation source with emissions collected every 10 nm from 500 to 720 nm at 10 nm increments, to the detection of cancer in thyroid and salivary glands using normal tissues, tissue from the primary tumour and tissue from the tumour margin from patients undergoing resection. For thyroid tumours, they obtained AUCs of 0.85 ± 0.2 for all thyroid tumours, 0.81 ± 0.03 for papillary thyroid carcinoma, 0.86 ± 0.06 for medullary and insular thyroid carcinomas, 0.95 ± 0.02 for follicular adenoma And carcinoma, and 0.98 ± 0.01 for poorly differentiated carcinoma. For salivary tumours, AUC was 0.60 ± 0.30 for parotid, and 0.80 ± 0.14 for other tumours. The same system was applied in [50] for the investigation of head and neck squamous cell carcinoma (SCC) margin detection in surgical specimens. For their conventional SCC cohort, inter-patient experiments had a median AUC of 0.93 for discriminating tumour only from normal only, while for HPVpositive SCC the median was 0.86. Their study showed that autofluorescence hyperspectral imaging (and also reflectance) outperformed fluorescent dye-based imaging methods, with the capacity to accurately detect cancer margins in ex-vivo specimens within minutes. In two further studies, hyperspectral imaging of autofluorescence was compared to hyperspectral imaging of reflectance [52,53]. As above, cancers were delineated from normal tissue in fresh surgical specimens of people with head and neck cancer using 455 nm excitation and collecting emissions at 10 nm intervals from 500 to 650 nm [53]. Overall, they obtained AUCs of 0.82 ± 0.20 for oral cavity, 0.72 ± 0.31 for gland, and 0.74 ± 0.26 for larynx and pharynx and 0.81 ± 0.11 for paranasal and nasal. In another study aiming to apply hyperspectral imaging, with autofluorescence contrasted to reflectance for tumour margin assessment in surgical tissue specimens [52], normal tissues were discriminated from cancerous tissue in the oral cavity (AUC = 0.83 ± 0.19) and the thyroid gland (AUC = 0.74 ± 0.33). In both cases, hyperspectral imaging of reflectance was noted to have outperformed autofluorescence. Using an inverted multiphoton microscope with timecorrelated single photon counting (enabling FLIM imaging) a different group investigated hyperspectral imaging of ex vivo head and neck cancer patient tissues and showed that adenocarcinoma tissues had higher redox ratios coupled with lower NADH and FAD fluorescence lifetimes compared to squamous cell carcinoma tissue [51]. 
NADH signal was isolated with excitation wavelength at 750 nm and a 440/80 nm bandpass emission filter, while FAD signal was isolated with an excitation wavelength at 890 nm and a 550/100 nm bandpass emission filter. Adenocarcinoma samples additionally had higher collagen content than squamous cell carcinoma tissues (identified through second harmonic generation imaging). | Oral cancer In addition to the in vivo studies of multispectral imaging of oral cancer autofluorescence (Section 3.1) several works have also investigated excised tissue sectionsgenerally with a view towards developing the technology towards in vivo application. Two such studies, Yan 2017 and Yan 2021 applied different constructions of their LED-Induced Autofluorescence (LIAF) multispectral imager to ex vivo sections of tissue from oral cancer patients and healthy controls. The first study [46] investigated using 365 or 405 nm excitation and emission filters at 490-590, 590-690 and 650-750 nm to discriminate tissue sections from patients with oral cancer from those without. Optimum discrimination (sensitivity 84.68%, specificity 76.24% and accuracy 80.66%) was achieved by illumination with the 365 nm LED and no filters. In a follow-up study [47], 365 and 405 nm excitation LEDs were applied with emission filters with centre wavelengths at 470, 505, 525, 532, 550, 595, 632, 635, and 695 nm. Single-layer network processing was used to select six classifiers using the 470, 505, 532 and 550 nm emission filters which could predict the presence of oral cancer with a sensitivity 96.15%, specificity 69.55% and accuracy 82.85%. Based on the high sensitivity and nonreliance on expert interpretation the authors concluded that the LIAF multispectral imager would be useful for rapid screening and early detection of oral cancer. Another study applied independent content analysis to separate spectral mixtures in hyperspectral images of keratinised tissues from oral cancer patients [48]. They used two excitation wavelengths (330-385 and 470-490 nm) with emissions collected by a 'hyperspectrometer' [48] spanning 400-1000 nm, and obtained good correlation coefficients with the known characteristics of autofluorophores (0.92 ± 0.09 and 0.97 ± 0.03, respectively). The authors noted that the detection of keratinised tissue was of no particular diagnostic value for the early diagnosis of cancer (occurring as it does in the later stages), but the study was undertaken as an early pilot of the application of the technology to diagnostics. | Colon cancer Most studies described in this review studied systems with an increased number of emission wavelengths assessed, stimulated by a comparatively limited number of excitation wavelengths. In contrast, Deal et al. used a very broad range of excitation wavelengths (360-550 nm) in 5 nm increments, with a long-pass emission filter and dichroic beamsplitter used to separate excitation and emission light at 550 nm [35]. They demonstrated that their technology was able to separate signals of endogenous fluorophores (collagen, elastin, PpIX, FAD and NADH) in order to detect relative differences in concentrations of fluorophores between normal and neoplastic colon tissue. 
The Identification of specific fluorophores as biomarkers in colon cancer was also the focus of a study by Banerjee et al., who undertook hyperspectral imaging with a wide field Xenon-lamp based spectral imager capable of illumination from 260 to 650 nm and detection from 340 to 650 nm [36] to target tryptophan (excitation 280 nm, emission 300-410 nm), collagen (excitation 370 nm, emission 410-500 nm as well as excitation 440 nm, emission 600-680 nm), FAD (excitation 440 nm, emission 500-600 nm). They showed that by dividing the image intensity of tryptophan-pixel by pixel-with the image intensities of FAD and collagen they achieved superior contrast for enhancing the visibility of colonic neoplasms. This built on work from the same group in colon cancer where they found that hyperspectral imaging with excitation from 260 to 650 nm and emission collection from 340 to 650 nm resulted in cellular emission spectra with a peak at 330-340 nm when excited at 280 nm-consistent with the emission of tryptophanand that peak emission from cancerous cells was about twice that of normal cells [37]. Meyer et al. investigated the discrimination of 2D cultured cancerous and normal epithelial colon cells by redox ratio using two-photon excited fluorescence (TPEF) and an algorithm for the selection of optimised bandpass filters for the detection of autofluorophores without reliance on prior knowledge of their characteristics [38]. Their hyperspectral multiphoton microscopy system (which applied a 785 nm laser and spectrometer detection of emission) achieved a 31.5% improvement of cancer-non-cancer discrimination compared to the use of previously researched values from published literature. An endoscopic system which combined four imaging modalities-white light imaging, high-frequency ultrasound brightness-mode imaging, integrated backscattering coefficient (IBC) imaging and also multispectral autofluorescence imaging (375 nm excitation, emission collected 420-700 in 20 nm steps)-was used to characterise excised colon tissue [39]. The multispectral assessment achieved sensitivity 0.86 and specificity 0.85 for discriminating between normal and cancerous tissue. | Brain cancer Fluorescence-guided surgery is already in common use for brain surgery, typically making use of 5-aminolevulinic acid (5-ALA) stimulation of the accumulation of the autofluorophore PPIX in order to better define tumour margins and optimise the safety of the resection of tumours [65]. A study was undertaken on fixed biopsy tissues of primary (glioblastoma), secondary (metastasis) tumour and control cortex to define biomarkers to aid in the surgical resection of high-grade brain tumours without the administration of exogenous factors [62]. Excitation was performed in deep UV (275 nm) and near infra-red (690-1040 nm) with the detection modalities including fluorescence imaging, spectroscopy and fluorescence lifetime imaging. These were used to define ratios for tyrosine-tryptophan, tryptophan-collagen, and tryptophan-NADH which enabled discrimination with 90% sensitivity and 73% specificity as well as NADH-FAD, and Porphyrin-NADH ratios which enabled 97% sensitivity and 100% specificity. A multiscale algorithm which used the three most effective markers. Porphyrin-NADH ratio, Tryptophan collagen ratio and average lifetime at 890 m separated primary tumours from healthy regions with only 1.8% overlap, secondary tumours and healthy regions with 0% overlap, and primary tumours from secondary tumours with 6.7% overlap. 
The same group used two photon microscopy to discriminate normal brain tissue from glioblastomas and brain metastasis in fresh biopsies [63]. Excitation at 890 nm with emission measured from 380 to 780 nm in 10 nm steps was used to define used NADH/FAD, fitted SHG intensity and the average lifetime which resulted in 100% sensitivity and 50% specificity. | Breast cancer The real-time detection of breast cancer was investigated by Carvar et al. [56] who created single colour-coded images for the assessment of surgical margins and needle-based biopsies using data cubes with excitations at 375, 405 and 488 nm with 10 spectral bins bounded by two of the emission wavelengths. By assessing cellular concentrations of NADH and FAD they found definable differences between cancer and a benign condition fibroadenoma for which differential diagnosis is needed. These authors built a high-speed set-up which could generate 10 data cubes per second, which was sufficiently rapid to be able directly to monitor surgical margins at the cellular level during lumpectomies. An earlier study investigated the use of autofluorescence (excitation 340 nm, emission detected from 400 to 720 nm) combined with diffuse reflectance spectroscopy to evaluate tumour margins in tissue masses removed during partial mastectomies [57]. They achieved 85% sensitivity and 96% specificity for the classification of negative and positive margins. Additionally, neo-adjuvant chemotherapytreated and non-treated tissue could be discriminated with 100% accuracy. | Eye cancer Despite its accessibility and the desirability of technology which would minimise the collection of biopsies, relatively little work was found that applied the hyperspectral assessment of autofluorescence to the detection of ocular cancers. Habibalahi et al. investigated ocular surface squamous neoplasia (OSSN) in tissue biopsies and mapped the samples' multispectral profiles to expert histological assessment [17]. They applied a high number of LED excitation wavelengths (340,368,373,378,382,388,391,394,405,413,432,441,455,460,470,491 and 510 ± 5 nm) with emissions collected across filters at 420-460, 454-496, 573-613 and 575-650 nm. They obtained a pixel-wise correlation between histology assessment and multispectral analysis of 78% for inter-patient classification and 94% for intra-patient classification. As their methodology was fully automated with the potential to produce diagnostic maps rapidly and in quasi-real time, they noted its potential for translation for intraoperative assessment for defining tumour boundaries for OSSN. This was reinforced in a related work where the application of 10 or 5 select channels from their set could detect OSSN with 1% and 14% misclassification errors, decreasing imaging times by 75% and 80%, respectively [20]. In an additional work published since this review's primary search, the same group used a similar technology with 59 channels to automatically discriminate pterygium and/or OSSN from 50 patients from normal tissue with an accuracy of 88%, and also defined boundaries in close agreement with hematoxylin and eosin stained sections [66]. | Gastric cancer The potential application of hyperspectral imaging of autofluorescence to the early diagnosis of gastric cancer was studied by Li et al. [59]. 
Here, samples from patients pathologically diagnosed as non-atrophic gastritis, premalignant lesions or gastric cancer were collected and imaged using hyperspectral technology (361 nm excitation, emission 450-680 nm, collected every 2 nm). They showed that the average spectra of the investigated forms of gastric cancer differed at 496, 546, 640 and 670 nm emission wavelengths. A diagnostic model which used the hyperspectral data achieved accuracy, specificity and sensitivity above 94%, which the authors concluded supported the application of hyperspectral imaging of autofluorescence for the non-invasive, sensitive, real-time diagnosis of early gastric cancer. | Bladder cancer In Pradère et al., the potential of multispectral imaging of autofluorescence for the detection of bladder cancer was investigated using fixed samples of healthy and cancerous urothelium [60]. Excitation was at 870 nm with emission detection covering 380-780 nm in 10 nm steps. Significant differences in intensities of emission were observed for high and low-grade tumours. Further, the calculation of the redox ratio indicated that healthy tissues had a higher ratio compared to tumour samples, and low-grade tumours had higher ratios than high. | Lung cancer Using a three-dimensional in vitro model of lung cancer (reconstructed human epithelium with human lung fibroblasts and lung adenocarcinoma cell lines) a twophoton laser-induced autofluorescence microscopy system (excitation at 720 m, emission 400-650 nm in 10 nm steps) was able to detect differences in spectral and intensity heterogeneity at the edges of tumours [61]. Noncancerous tissue had twice the intensity of cancerous, with autofluorescence decaying in the direction of the main body of the tumour-indicating potential sensitivity of multispectral assessment of autofluorescence to the impact of tumours on their microenvironment. | Ovarian cancer Renkoski et al. imaged freshly resected human ovaries with excitation at 365 nm and emission collected on eight spectral bands from 400 to 600 nm as a step towards developing a tool for screening for ovarian cancer [64]. Linear discriminant analysis was used to define a model which was able to classify normal and cancerous ovaries with 100% sensitivity and 51% specificity, with specificity being able to be increased to 69% by dividing autofluorescence data with green reflectance values to correct for variations in tissue absorption. The same algorithm classified ovaries with benign neoplasms as non-malignant. | CONCLUSIONS Hyper and multispectral imaging of autofluorescence has been trialled in vivo and ex vivo for the non-invasive characterisation of neoplastic tissue and suspect lesions. A primary driver of in vivo human translation seems to be tissue accessibility, as the non-invasive nature of the technology means that it can be trialled on surface contexts (e.g., skin and oral cancer) with no real potential for negative patient outcomes. Despite its potential to enable non-destructive assessment, ex vivo applications of multi and hyperspectral imaging dominate the field. Colon cancer stands out as cancer with relatively good in vivo accessibility that has not been investigated in this context. At the same time, we highlight brain tumours as cancer which, despite the comparative difficulty of accessing in vivo, would benefit greatly from a reduction in the clinical burdens created by the biopsy. Most studies, in vivo and ex vivo, used human clinical samples, with cell and animal models being relatively uncommon. 
Although this is an advantage to the field, sample sizes were relatively low, with studies positioned as pilots or proofs-of-concept, and consequently little consideration was given to the generalisability of clinically meaningful comparisons (e.g. discriminating visibly suspicious lesions from fully healthy tissue). Additionally, many works limited themselves to identifying whether quantifiable differences in autofluorescence could be established, without investigating these characteristics' actual utility for cancer identification. Other studies which did investigate accuracy would have been improved by comparisons to current standard methods. Greater translational work is needed for this technology to achieve its potential. Desirable optical characteristics for the non-invasive assessment of cells and tissues by hyperspectral imaging are highly specific to the intended application, with areas of functionality often lying in direct conflict with one another. Speed of image acquisition, for instance, generally comes at the cost of image resolution and/or the number of spectral channels assessed. Increasing the intensity of light used can mitigate this specific trade-off-enabling high-resolution images to be captured quickly-but this creates the potential for photobleaching and damage to sensitive tissues in vivo. High-sensitivity cameras for capturing emissions represent another mitigatory strategy, but their sensitivity applies equally to background light-difficult to control for in in vivo, clinical applications-and they can be very costly with diminishing returns. Similarly, high-magnification objectives can improve resolution without necessitating increased exposure times or higher-intensity light, but are also expensive and give a more limited field of view, slowing data collection. However, with a considered focus on necessary characteristics, the process of optimisation can enable the development of hyperspectral devices able to perform their target function even under highly restrictive circumstances. A good example from outside of oncology is given by a series of works [67][68][69][70][71][72][73] describing the development and validation of a hyperspectral catheter able to generate real-time images inside a beating heart to guide tissue ablation for the treatment of atrial fibrillation. Two main technological strategies were observed: the application of one or very few excitation wavelengths with broad (often spectroscopic) assessment of emissions, and the use of a higher number of excitation wavelengths with a lower number of emission wavelengths assessed, often using wavelength-specific filters. Generally, the former was applied when specific fluorophores were being targeted as disease biomarkers and the latter was used when the intention was to develop a discriminatory spectral signature. Where the objective was tissue characterisation with real-time videography, both excitation and emission wavelengths were constrained relative to other applications. The overwhelming focus has been on the discrimination of neoplastic tissue from normal tissue or suspicious but benign lesions. However, there is some indication that autofluorescence can indicate tumour characteristics, including metastatic potential [74] and drug response [75,76], and future works should consider novel applications for the technology.
2023-06-06T06:17:50.432Z
2023-06-05T00:00:00.000
{ "year": 2023, "sha1": "0ca4cdef7bd609721485749bff81b97de7d889d7", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1002/jbio.202300105", "oa_status": "HYBRID", "pdf_src": "Wiley", "pdf_hash": "50798bc3d62a6197d7cd27491db115b5c4a2c7c5", "s2fieldsofstudy": [ "Biology", "Medicine", "Environmental Science" ], "extfieldsofstudy": [ "Medicine" ] }
119289307
pes2o/s2orc
v3-fos-license
Fracture size effects in nanoscale materials: the case of graphene Nanoscale materials display enhanced strength and toughness but also larger fluctuations and more pronounced size effects with respect to their macroscopic counterparts. Here we study the system size-dependence of the failure strength distribution of a monolayer graphene sheet with a small concentration of vacancies by molecular dynamics simulations. We simulate sheets of varying size encompassing more than three decades and systematically study their deformation as a function of disorder, temperature and loading rate. We generalize the weakest-link theory of fracture size effects to rate and temperature dependent failure and find quantitative agreement with the simulations. Our numerical and theoretical results explain the crossover of the fracture strength distribution between a thermal and rate-dependent regime and a disorder-dominated regime described by extreme value theory. Nanomaterials have remarkable mechanical properties, such as enhanced strength and toughness [1,2], but display considerable size effects and sample-to-sample fluctuations, which represent an issue for engineering applications. Our current understanding of fracture size effects in macroscopic disordered media relies on extreme value theory, which relates the strength to the statistics of the weakest region in the sample [3,4]. While the theory does not consider the effect of stress concentrations and crack interactions, numerical models for the failure of elastic networks with disorder show that an extreme value distribution describes failure at large enough scales, although the form usually deviates from the standard Weibull distribution [5][6][7][8]. Understanding size effects in nanomaterials is still an intriguing open issue, also because of the presence of rate-dependent thermal effects that would invalidate the weakest-link hypothesis [9]. Yet, the Weibull distribution is commonly used to fit experimental data in carbon-based nanomaterials [10], although the tensile strength is observed to depend on the strain rate [11]. Testing the fracture properties of graphene is quite challenging due to the difficulty of applying high tensile stresses in a controlled fashion on nanoscale objects [12][13][14]. Therefore numerical simulations represent a viable alternative to understand the size dependence of its mechanical behavior [15][16][17][18][19]. Numerical simulations of defected carbon nanotubes suggest that failure is described by the Weibull distribution in quasistatic, zero-temperature conditions [20]. Finite temperature molecular dynamics simulations reveal, however, that the average tensile strength of nanotubes [21] and graphene [19,22,23] depends on temperature and loading rate. Despite these insightful results, a comprehensive theory describing the size dependent fracture strength distribution of carbon nanomaterials, elucidating the role of thermal fluctuations and strain rate, is still lacking. Here we perform large scale molecular dynamics simulations of the deformation and failure of defected monolayer graphene sheets for a wide range of sample sizes, vacancy concentration, temperature and strain rate. To explain the observed temperature and rate dependence of the tensile strength distribution, we generalize extreme value theory to the case of thermally activated rate dependent fracture.
The resulting theory is shown to be in excellent agreement with our simulations and provides a general framework to explain rate-dependent thermal effects in the failure of disordered nanomaterials. Based on our theory, we derive a simple criterion that allows us to assess the relative importance of structural disorder and thermal fluctuations in determining failure. Using this rule, one can readily show that nanoscale samples are more prone to thermally induced failure, while the fracture of macroscopic samples is more likely to be ruled by quenched disorder. This confirms previous results showing that in the limit of very large samples failure is ruled by extreme value statistics (although not necessarily by the Weibull law) [8,24]. The paper is organized as follows. In section I we describe the molecular dynamics simulation model and in section II discuss the numerical results. The theory is described in detail in section III, where we also compare its prediction with experiments. Section IV discusses the general implications of our work to understand size effects in materials at different scales. Appendix A provides details on the choice of interatomic potential and appendix B discusses the fitting method. I. MODEL We perform numerical simulations of the deformation and failure of defected monolayer graphene using the LAMMPS molecular dynamics simulator package [25]. The carbon-carbon atom interaction is modeled with the "Adaptive Intermolecular REactive Bond Order" (AIREBO) potential [26]. In order to simulate a realistic bond failure behavior, the shortest-scale adaptive cutoff of the AIREBO potential has to be fine-tuned [15,22], as detailed in appendix A. The simulated system consists of single layer, monocrystalline graphene sheets, composed of a variable number N of atoms: N varies from approximately 10^3 to 50 × 10^3 atoms. The sheets are prepared by placing the atoms on a hexagonal lattice; the characteristic lattice length scale λ = 1.42 Å is chosen so that the system is initially in an equilibrium configuration. The sheets have an almost square shape lying on the XY coordinate plane; their lateral size depends on N and varies between 50 and 360 Å (5 and 36 nm). When placing defects on the sheets, a fixed fraction of atoms is randomly removed; this corresponds to vacancy concentrations P = 0.1, 0.2 and 0.5%. While the graphene layer is essentially 2D, the atom positions are integrated in all three spatial directions; also, the layers have no periodic boundary conditions. The simulations are performed by stretching the samples along the X coordinate axis, corresponding to the "armchair" direction of the graphene hexagonal structure. We select two boundary strips of atoms at the opposite X-ends of the sheet. These strips are 3.5 Å wide, corresponding to 4 atom layers. Hence, the atoms are free to move in the Y and Z directions, but follow an imposed motion along the stretching direction (X). This constraint induces an initial pre-stress on the sheet that is visible in the stress-strain curve (see Fig. 1b). The Y-end boundaries are left free. The system is thermostated by means of a Berendsen [27] thermostat with a temperature ranging from 1 K to 800 K, and a characteristic relaxation time equal to 0.1 ps; the simulation timestep is set to 0.5 fs to ensure a correct time integration of the atomic dynamics. These parameters lead to a slightly underdamped atom dynamics.
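The sample-preparation step just described - placing carbon atoms on a honeycomb lattice with bond length 1.42 Å and randomly deleting a fraction P of them - can be sketched in a few lines of Python. The snippet below is a generic illustration of that procedure, not the authors' actual setup script; the patch size and random seed are arbitrary.

```python
# Sketch of the sample preparation described above: build a small honeycomb
# (graphene) patch with bond length 1.42 A and remove a random fraction P of
# atoms to create vacancies.  Generic illustration only.
import numpy as np

def honeycomb(nx, ny, bond=1.42):
    """Return an (N, 2) array of atom positions for an nx-by-ny graphene patch."""
    a = bond * np.sqrt(3.0)                          # triangular-lattice constant
    a1 = np.array([a, 0.0])
    a2 = np.array([a / 2.0, a * np.sqrt(3.0) / 2.0])
    basis = [np.array([0.0, 0.0]), np.array([0.0, bond])]   # two-atom unit cell
    pts = [i * a1 + j * a2 + b for i in range(nx) for j in range(ny) for b in basis]
    return np.array(pts)

def add_vacancies(positions, P, seed=0):
    """Remove a randomly chosen fraction P of atoms (vacancy concentration P)."""
    rng = np.random.default_rng(seed)
    n_remove = int(round(P * len(positions)))
    removed = rng.choice(len(positions), size=n_remove, replace=False)
    keep = np.setdiff1d(np.arange(len(positions)), removed)
    return positions[keep]

sheet = honeycomb(40, 40)                            # ~3200 atoms, a small test patch
for P in (0.001, 0.002, 0.005):                      # P = 0.1%, 0.2%, 0.5% as in the text
    defected = add_vacancies(sheet, P)
    print(f"P = {P:.1%}: removed {len(sheet) - len(defected)} of {len(sheet)} atoms")
```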
Before the stretching protocol is started, the system is allowed to relax to thermal equilibrium from the initial constrained state. Afterwards, one of the lateral strips is set in motion, so that the sample is subject to a constant engineering strain rate ε̇ independent of the system size. The strain rates lie between 1.28 × 10^7 s^-1 and 1.28 × 10^9 s^-1. [Figure 1: a) The crack nucleates from one of the defects already present in the material (not necessarily the most stressed) and rapidly grows until the complete failure of the graphene sheet. b) The stress-strain curve displays a temperature-dependent fracture strength. The pre-stressed initial condition (ε = 0) is due to the constraint applied to the atoms belonging to the 4 outermost layers of the sheet, which are subject to the stretching along X.] As for other molecular dynamics simulations, the simulated strain rates are much higher than those applied experimentally, but the deformation speed is still much lower than the sound speed in graphene. The chosen strain rate is reached by adiabatically ramping up ε̇, in order to minimize the creation of shock waves in the material. As a matter of fact, visual inspection of velocity fields shows that shock waves are rapidly damped and do not significantly influence the system dynamics. Simulations are carried out until the graphene sheet fractures. Failure statistics are sampled over 100 realizations for each condition in which we vary vacancy concentration P, temperature T, strain rate ε̇ and system size N. The only exception is provided by systems characterized by T = 300 K, ε̇ = 0.128 × 10^8 s^-1, N = 20 × 10^3 and N = 50 × 10^3 atoms, where 50 samples were simulated. II. SIMULATIONS An example of the fracture process is shown in Fig. 1a, where the graphene structure is seen from above at four different times during the nucleation and ensuing growth of the crack (see also Video 1). The color code represents the XX component of the symmetric per-atom stress tensor σ_xx, including both potential and kinetic terms. Typical stress-strain curves are reported in Fig. 1b, showing that the tensile strength depends on temperature T. Our results provide a clear indication that it also depends on system size N, vacancy concentration P and strain rate ε̇, as we discuss below. Fig. 2a reports the average failure stress as a function of system size for different values of the porosity P, showing that the larger and more defective a sample is, the weaker it is. A more complete description of the failure statistics is obtained by the survival distribution S(σ), defined as the probability that a sample has not yet failed at a stress σ. The numerical results for S(σ) are reported in Fig. 2b. If a system of volume V fails according to extreme value statistics, the survival distribution should depend on the volume as S(σ) = [S_0(σ)]^(V/V_0), where S_0(σ) is the survival distribution of a representative element of volume V_0, the smallest independent unit in the sample [28]. If we express the volume in terms of the number of atoms N and their atomic volume V_a, the survival probability can be written as S(σ) = exp[-(N V_a/V_0) f(σ)], where f(x) is a suitable function which is a power law x^κ in the case of the Weibull distribution [4] and an exponential e^x for the Gumbel distribution [3]. Fig. 2 shows that the N-dependence of the survival distribution follows the prescriptions of extreme value theory, but f(x) is not a power law, indicating that the Weibull distribution does not represent the data.
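The extreme-value rescaling used above is easy to check numerically: if S(σ) = [S_0(σ)]^(N V_a/V_0), then -ln S(σ)/N is independent of N, and its shape distinguishes the Weibull (power-law f) from the Gumbel (exponential f) case. The short Python sketch below is only an illustration of this bookkeeping with arbitrary made-up parameters; it does not use the simulation data.

```python
# Numerical check of the weakest-link rescaling: S_N(sigma) = S_0(sigma)^(N*Va/V0)
# implies that -ln(S_N)/N collapses onto a single N-independent curve,
# (Va/V0)*f(sigma).  f is a power law for the Weibull form and an exponential
# for the Gumbel form.  All parameter values below are arbitrary illustrations.
import numpy as np

Va_over_V0 = 0.05
sigma = np.linspace(20.0, 100.0, 9)             # stress in arbitrary units

def f_weibull(s, s0=80.0, kappa=4.0):
    return (s / s0) ** kappa                    # power-law f(x) = x^kappa

def f_gumbel(s, s0=20.0, A=1e-3):
    return A * np.exp(s / s0)                   # exponential f(x) ~ e^x

def survival(N, f):
    return np.exp(-N * Va_over_V0 * f(sigma))   # S_N = exp[-(N*Va/V0) f(sigma)]

for N in (1_000, 10_000, 50_000):
    collapsed = -np.log(survival(N, f_gumbel)) / N
    print(N, np.round(collapsed, 8))            # rows are identical: perfect collapse

# On a semi-log plot, -ln(S)/N versus sigma is a straight line for the Gumbel
# form, whereas the Weibull form gives a straight line on a log-log plot instead.
```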
The failure of the Weibull form is confirmed by the size scaling of the average failure stress, which does not follow a power law, as would be expected for the Weibull distribution. The survival distribution depends also on temperature and strain rate, as shown in Fig. 3, which is hard to reconcile with the weakest-link hypothesis underlying the Weibull distribution. [Figure 2: Graphene fracture size effects. a) The average failure stress for defected graphene depends on the system size N and on the vacancy concentration P. Simulations are carried out with T = 300 K and ε̇ = 0.128 × 10^8 s^-1. The lines are the theoretical prediction as discussed in the supporting information. They do not arise as a direct fit of the numerical curves, but result from the analytical evaluation of the integral expression of ⟨σ⟩_n. b) The failure stress survival distribution at T = 300 K and ε̇ = 0.128 × 10^8 s^-1 for different system sizes with vacancy concentration equal to P = 0.1% (blue), P = 0.2% (green) and P = 0.5% (red). When the survival probability distributions are rescaled by N according to the predictions of extreme value theory, the data collapse onto a single curve that only depends on the vacancy concentration P.] Indeed, by monitoring the local stress field σ_xx before failure, we estimate that only in less than 20% of the samples (for N = 50 × 10^3) does the final crack nucleate in the most stressed region. In 50-60% of the cases, the final crack is nucleated in regions that ranked fourth or more in terms of stress. This is a clear indication that failure is not dictated by the weakest link. To understand our simulation results, we generalize extreme value theory taking into account thermal fluctuations. We describe the system as a set of n representative elements of volume V_0 (slabs) such that the thermally activated failure of a single element induces global failure. Each representative element i obeys linear elasticity up to a critical strain ε_c^i, so that the elastic energy of the sample under an external stress σ is given by the quadratic form E_el = V_0 σ^2/(2E) (Eq. 1), where E is the Young modulus. The sample is loaded at constant strain rate ε̇ so that σ(t) = E ε̇ t, and the critical strains are distributed according to a probability density function ρ(ε_c). Assuming the slabs to be non-interacting and identical, the survival probability for the entire sample is given by the product of the survival probabilities of each representative element, S_n(σ|T, ε̇) = [S_0(σ|T, ε̇)]^n, according to the theory of breaking kinetics [29]. The representative volume survival probability is defined as S_0(σ|T, ε̇) = ∫_{σ/E}^{∞} dε_c ρ(ε_c) Σ_0(σ|ε_c, T, ε̇) (Eq. 2), where Σ_0(σ|ε_c, T, ε̇) represents the survival probability of a single slab characterized by a failure strain ε_c. Eq. 2 reduces to the standard extreme value theory when Σ_0(σ|ε_c, T, ε̇) = 1, but otherwise depends on temperature and strain rate. In general, however, the theory predicts that log(S_n)/n should not depend on the system size, as verified by our simulations (see Fig. 2b). To estimate the survival distribution of the single slab Σ_0(σ|ε_c, T, ε̇) we make the phenomenological hypothesis that the material failure arises as a thermally activated process. Historically, the idea that solid failure can be described by means of Kramers' theory, where the intrinsic energy barrier is reduced proportionally to the applied field, first appeared in materials science to treat the kinetic fracture of solids under applied stresses, and dates back to the works of Tobolsky and Eyring [30] and, later, of Zhurkov [31].
More recently it has been successfully applied to the study of the failure of fibers [32], gels [33], wood and fiber-glasses [34], where the potential energy barrier is given by the Griffith crack nucleation energy [35]. Most previous work focused on the thermal dependence of the average strength or of the failure time in creep experiments, and did not address the survival distribution and its size dependence. To this end, we start from recent theories developed for single-molecule pulling, where the molecule rate coefficient for rupture (or unbinding) is modified by the presence of an external time-dependent force [36][37][38][39][40][41][42][43]. In our case, the stress-dependent failure rate k(σ|T, ε_c) of a single element characterized by a failure strain ε_c is given by an Arrhenius-like form (Eq. 3) [39,40,43], in which the energy barrier of the potential well described in Eq. 1 is lowered by the applied stress; here k_0 is the Kramers escape rate from that potential well [37,44], with a characteristic frequency ω_0. In our numerical simulations one end of the graphene sheet is held fixed, while the other is pulled at constant strain rate ε̇: this can be interpreted as the action of a stiff device [40,43], for which Eq. 3 has been derived. Σ_0(σ|ε_c, T, ε̇) obeys the following first-order rate equation [41]: dΣ_0(σ|ε_c, T, ε̇)/dt = -k(σ(t)|T, ε_c) Σ_0(σ|ε_c, T, ε̇), where σ(t) = E ε̇ t. The survival probability is then readily obtained by integration as Σ_0(σ|ε_c, T, ε̇) = exp[-(1/(E ε̇)) ∫_0^σ dσ' k(σ'|T, ε_c)] (Eq. 6). Notice that Eq. 6 only holds for σ < E ε_c, since otherwise the element fails with probability one (when σ ≫ E ε_c, Kramers' theory incorrectly predicts k(σ|T, ε_c) ≃ 0, since it only holds for energy barriers much larger than k_B T [39]). Finally, inserting Eq. 6 into Eq. 2 and, in turn, into the constitutive equation of the theory of breaking kinetics, we obtain the survival distribution of the whole sample, S_n(σ|T, ε̇) = [∫_{σ/E}^{∞} dε_c ρ(ε_c) Σ_0(σ|ε_c, T, ε̇)]^n (Eq. 7). B. Limiting behavior of the theoretical survival distribution The survival distribution reported in Eq. 7 is written as a convolution of the disorder distribution ρ(ε_c) with a temperature and rate dependent kernel. It is instructive to study its limiting behaviors, since this allows one to assess the relevance of thermal and rate dependent effects for fracture statistics. Our starting point is the expression for the conditional survival probability Σ_0(σ|ε_c, T, ε̇) reported in Eq. 6. It is convenient to study its behavior in terms of the dimensionless parameter λ ≡ (V_0 E)/(k_B T), the ratio between the elastic energy of a representative volume element and the thermal energy. In terms of λ we can write Σ_0(λ) ≡ exp(-G(λ)), where G(λ) is the rate- and stress-dependent exponent defined in Eq. 8. Thermal fluctuations can be neglected when G(λ) → 0, yielding the usual disorder-induced survival probability distribution S_n(σ) = [∫_{σ/E}^{∞} dε_c ρ(ε_c)]^n (Eq. 9). It is interesting to consider first the limit of λ → ∞, corresponding to very low temperature and large representative volume elements. In this limit, the exponential factors in G(λ) dominate and the function goes to zero even for small strain rates. More generally, thermal fluctuations become negligible when the strain rate exceeds the threshold given in Eq. 10: there is therefore a temperature and stress dependent critical strain rate above which we can neglect thermal fluctuations. Another interesting limit is the low-stress regime (i.e. σ ≪ E⟨ε_c⟩). In this case, thanks to Eq. 2, the survival distribution for a representative element reduces to a limiting form that involves only the mean failure strain ⟨ε_c⟩ = ∫_0^∞ dε_c ε_c ρ(ε_c). Therefore, the survival probability distribution function for the entire system can be recast in a form displaying a linear dependence on the applied stress, irrespective of the failure strain distribution function ρ(ε_c).
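As a concrete illustration of how Eqs. 2, 6 and 7 combine, the sketch below evaluates S_n(σ|T, ε̇) numerically for a Gumbel ρ(ε_c). Since Eq. 3 is not reproduced in this extract, the explicit barrier entering the Kramers rate is an assumed quadratic form, ΔU = V_0 E (ε_c - σ/E)^2/2, and all parameter values are illustrative order-of-magnitude choices rather than the fitted ones.

```python
# Numerical sketch of the thermally activated weakest-link theory:
#   Eq. 6: Sigma_0(sigma|eps_c) = exp[-(1/(E*rate)) * integral_0^sigma k(s') ds']
#   Eqs. 2/7: S_n(sigma) = [ integral_{sigma/E}^inf rho(eps_c) Sigma_0 d eps_c ]^n
# The barrier inside k() is an ASSUMED quadratic form used only for illustration;
# the paper's Eq. 3 is not reproduced in this extract.
import numpy as np

kB = 1.380649e-23        # J/K
E = 1.0e12               # Young modulus (Pa), order of magnitude from the text
V0 = 0.1e-27             # representative volume (m^3), ~0.1 nm^3
w0 = 1.0e8               # attempt frequency (1/s), within the fitted range
T = 300.0                # temperature (K)
rate = 0.128e8           # strain rate (1/s)
n = 200                  # number of representative elements
A, eps0 = 1.0e-5, 0.02   # illustrative Gumbel parameters for rho(eps_c)

def rho(eps_c):
    # Gumbel (minimum-type) failure-strain density, as in appendix B
    z = np.exp(eps_c / eps0)
    return (A / eps0) * z * np.exp(-A * z)

def k_rate(sigma, eps_c):
    # ASSUMED Kramers rate with barrier V0*E*(eps_c - sigma/E)^2 / 2
    barrier = 0.5 * V0 * E * (eps_c - sigma / E) ** 2
    return w0 * np.exp(-barrier / (kB * T))

def sigma0(sigma, eps_c, steps=300):
    s = np.linspace(0.0, sigma, steps)
    return np.exp(-np.trapz(k_rate(s, eps_c), s) / (E * rate))      # Eq. 6

def S_n(sigma, steps=300):
    eps = np.linspace(sigma / E, sigma / E + 12 * eps0, steps)
    integrand = rho(eps) * np.array([sigma0(sigma, ec) for ec in eps])
    return np.trapz(integrand, eps) ** n                            # Eqs. 2 and 7

if __name__ == "__main__":
    for stress in np.linspace(0.6e11, 1.6e11, 6):                   # Pa
        print(f"sigma = {stress:.2e} Pa   S_n = {S_n(stress):.4f}")
```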
C. Fit of the numerical data Eq. 7 provides an excellent fit to the results obtained from numerical simulations of defected graphene at different defect concentrations P, temperatures T and loading rates ε̇. To fit the numerical simulations with Eq. 9, we first need to establish the form of ρ(ε_c). This is a phenomenological function describing the distribution of failure strains of representative volume elements at zero temperature. A reasonable estimate of its functional form can be obtained from simulations at low temperature (i.e. T = 1 K), where thermal fluctuations are negligible, as discussed in detail in appendix B. The numerical outcomes indicate that ρ(ε_c) follows the Gumbel distribution [3] (see Fig. 4). We then insert the resulting form of ρ(ε_c) into Eq. 7, which we adopt as a fitting function for the numerical survival probability S(σ), with ω_0 and V_0 as fitting parameters. The representative volume V_0 ranges between 0.1 nm^3 and 0.25 nm^3, while the characteristic frequency is found in the range between ∼6 × 10^6 s^-1 and ∼10^8 s^-1 (see Fig. 5). Moreover, from the survival distribution we also calculate, without additional fitting, the system size dependence of the average tensile strength ⟨σ⟩_n, which displays an excellent agreement with the simulation results, as shown in Fig. 2a. Further details on the fitting methodology and the analytical expressions used in our model are reported in appendix B. IV. DISCUSSION In conclusion, we have performed extensive numerical simulations of the tensile failure of defected graphene, focusing on the size effects of the strength distribution for a wide range of vacancy concentrations, temperatures and strain rates. At present it is not possible to compare our numerical and theoretical predictions directly to experiments. Experimental measurements of the strength of graphene sheets are mostly based on indentation tests [12], while tensile tests have only recently appeared in the literature [14], and thermal, rate and size effects have not been studied. Furthermore, most experimental studies focus on the strength of graphene in pristine conditions [12], without defects or pre-existing cracks. Our theory is, however, very general, yielding predictions that should also be applicable to other carbon nanomaterials, and it allows us to formulate general considerations on the relevance of thermal effects for fracture. Eq. 7 suggests that thermal fluctuations can be neglected for large enough strain rates, since in this limit Σ_0 ≃ 1 and the sample fails according to weakest-link statistics. In our simulations, we have E ≃ 10^12 Pa and V_0 ≃ 0.1 nm^3, so that at room temperature we estimate λ ≃ 10^5. If we use this value in Eq. 8, we find that thermal activation remains non-negligible for stresses close to failure (i.e. for σ > 0.9 E ε_c), and thermal effects should therefore be relevant. Indeed, using ω_0/ε̇ ≃ 10^-2 in Eq. 10, one can readily show that thermal effects start to become relevant for T > 10 K, in agreement with our simulations. The same argument suggests that in macroscopic samples, with larger representative volume elements, thermally activated failure can often be ignored, even at room temperature. Consider for instance a ceramic material, like sintered α-alumina [45], with E = 10^11 Pa and a typical tensile strength of σ = 10^8 Pa. Assuming that the representative volume element corresponds to a grain size of V_0 ≃ 1 µm^3, we can estimate λ ≃ 10^14. Now the exponential factors impose that G(λ) → 0 even at low strain rates, implying that the strength distribution should be described by conventional extreme value theory.
Indeed, experiments show that the strength distribution is described by Weibull statistics with parameters that are largely temperature independent [45]. Our theory thus provides a simple way to estimate the relevance of thermal and rate dependent effects for fracture. This result could have important implications for applications to micro-and nano-mechanical devices whose reliability may crucially depend on the control of thermally activated failure. The carbon-carbon atom interactions were modeled using the "Adaptive Intermolecular REactive Bond Order" (AIREBO) potential [26], which was originally developed as an extension of the "REactive Bond Order" potential (REBO) [46]. In turn, the REBO potential was developed to describe covalent bond breaking and forming with associated changes in atomic hybridization within a classical potential; it has proven an useful tool for modelling complex chemistry in large many-atom systems. The AIREBO potential improves the REBO potential with an adaptive treatment of non-bonded and dihedral angle interactions that is employed to capture the bond breaking and bond reformation between carbon atom chains. The analytical form of the AIREBO potential (as discussed in the documentation [25]) is written as: The E REBO term has the same functional form as the hydrocarbon REBO potential developed in [46]. We will not cover here the details of the energetic terms which are thoroughly discussed in the mentioned reference. In short, the REBO term gives the model its short to medium range reactive capabilities, describing short-ranged C-C, C-H and H-H interactions (r < 2Å). These interactions have strong coordination-dependence through a bond order parameter, which adjusts the attraction between the i, j atoms based on the position of other nearby atoms and thus has 3-and 4-body dependencies. A more detailed discussion of formulas for this part of the potential are given in [26]. The E LJ ij term adds longer-ranged interactions (2 < r < r cutof A) using a form similar to the standard Lennard-Jones potential. It contains a series of switching functions so that the short-ranged LJ repulsion (1/r 12 ) does not interfere with the energetics captured by the E REBO ij term. The extent of the E LJ ij interactions is determined by a cutoff argument; in general the resulting E LJ ij cutoff is approximately 10Å, in this work we consider a cutoff of approximately 14Å. Finally, the E Torsion ijkl term is an explicit 4-body potential that describes various dihedral angle preferences in hydrocarbon configurations. The AIREBO potential has been extensively used to simulate and predict mechanical properties of carbonbased materials, i.e. fullerene, carbon nanotube and graphene [22]. Furthermore, it offers a valid tradeoff between accuracy and computational efficiency; a realistic fracture of large system sizes can be simulated in reasonably short time scales (a few hours on recent computers). Other interaction models can offer little improvement to the actual realism of the simulation, at the cost of much larger computational costs: for example, the ReaxFF potential, or DFT semiclassical approaches could describe more accurately the fast time scales of chemical reactions, but this would not change the ultimate failure length of the C-C bond: the expected maximum elongation for a C-C bond in graphene is around 0.178 nm. On the other hand, the use of faster but too simplistic models (e.g. 
Lennard-Jones potentials, mass and spring systems, or other elastic models) fail to reproduce a realistic behavior. However, in order to simulate a realistic bond failure behavior, the short-scale C-C adaptive cutoff (r_c) of the AIREBO potential has to be tuned. In fact, it has been observed [15,47] that, during simulations of fracture of covalent bonds and without cutoff tuning, the shortest-scale potential introduces a sharp increase of bond forces near the cutoff distances, which in turn causes a spurious increase in fracture stress and strain [22]. It should also be noted here that this phenomenon is specifically relevant for perfect graphene and CNT lattices, while it is much less pronounced in defected samples, due to the disorder induced in the lattice by the atom vacancies. This issue has been solved in the past by incrementing the short-scale cutoff length of the potential; the cited papers increase this parameter to 2.0 Å. This, however, has the side effect of leading to a singular behavior in the atomic pair potential when the atom-atom distance is exactly 2 Å. We performed stretching simulations varying the cutoff parameter from r_c = 0.17 nm (default value) to r_c = 0.2 nm in both the armchair (X) and zigzag (Y) directions of the graphene sheet with no vacancies (P = 0). The stress-strain curves obtained from the numerical simulations are shown in Fig. 6. For r_c < 0.195 nm, a sharp increase in tensile stress for large strains is observed, leading to an unphysical ultra-high failure stress and corresponding failure strain. Increasing r_c in the range 0.195 ≤ r_c ≤ 0.2 nm strongly suppresses this phenomenon. Moreover, the stress-strain data reported in Fig. 6 clearly display that the failure strain varies from 0.13 to 0.25 when r_c is in the range 1.95 Å < r_c ≤ 2 Å, whereas the failure stress exhibits a much weaker variation (from 85 × 10^9 Pa to 95 × 10^9 Pa). Finally, we notice that for defected samples like those investigated in the present article, i.e. P ≠ 0, the values of the failure stresses and strains show a much less marked dependence on the choice of r_c, whenever 1.95 Å < r_c ≤ 2 Å. Appendix B: Details of the fitting method To fit the numerical simulations with Eq. 7, we first obtain ρ(ε_c) from simulations at low temperature (i.e. T = 1 K). As shown in Fig. 4, the numerical survival distribution function -ln S(σ)/N, obtained at T = 1 K and ε̇ = 0.128 × 10^8 s^-1, can be nicely fitted with an exponential form, A e^(σ/(E ε_0)). The theoretical prediction for the survival probability distribution furnished by Eq. 9 requires -ln ∫_{σ/E}^{∞} dε_c ρ(ε_c) = A e^(σ/(E ε_0)), once we assume that V_0 ≡ V_a when T → 0. Hence, we obtain ρ(ε_c) = (A/ε_0) e^(ε_c/ε_0) exp(-A e^(ε_c/ε_0)), which corresponds to a Gumbel distribution of failure strains [3]. The numerical values of the fitting parameters A and ε_0 are reported in the caption of Fig. 4 for the three vacancy concentrations P. We notice that the simulated samples for T = 1 K are 250 in the case of P = 0.1%, 850 for P = 0.2% and 800 for P = 0.5%. We then perform a least-squares fit of the numerical survival probabilities -ln S(σ)/N, obtained for different values of T, ε̇ and porosities P (see Figs. 3a,b, 7, 8), with the fitting function given by Eq. B2, where the fitting parameters are the representative volume element V_0 and the characteristic frequency ω_0. The atomic volume V_a has been evaluated by considering a density of 38.18 atoms per nm^2 and a sheet thickness equal to 0.335 nm, yielding V_a = 8.744 × 10^-3 nm^3.
For any value of P, the corresponding values of A and ε_0 obtained from the best fit of the data in Fig. 4 are then kept fixed. The mean rupture stress is computed from the survival distribution as ⟨σ⟩_n = ∫_0^∞ dσ S_n(σ|T, ε̇) (Eq. B4). This quantity can be analytically calculated and plotted as a function of N, setting n = N V_a/V_0, as shown in Fig. 2a for T = 300 K, ε̇ = 0.128 × 10^8 s^-1 and three values of the vacancy concentration P. We emphasize that in this case no fit is involved: only the numerical evaluation of the integral expression of ⟨σ⟩_n in Eq. B4 is provided, making use of the proper values of A, ε_0, V_0 and ω_0 obtained by fitting the survival probabilities displayed in Fig. 4 and Fig. 8. The fitted values are reported in Fig. 5 for P = 0.2% and P = 0.5%; for P = 0.1% the least-squares fit gives V_0 = 0.3806 ± 0.0003 nm^3 and ω_0 = 4.1416 ± 0.0006 × 10^7 s^-1. The set of parameters A, ε_0, V_0 and ω_0 which uniquely characterize the theoretical expression B2 (dashed lines) are inserted into Eq. B4 to calculate the average rupture stress ⟨σ⟩_n as a function of N, shown in Fig. 2a.
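The first step of this two-stage fit (extracting A and ε_0 from the low-temperature curves) is easy to prototype. The sketch below fits the exponential per-atom form quoted above to synthetic placeholder data with scipy; the numbers are invented for illustration and do not come from the paper's simulations.

```python
# Sketch of the first fitting step of appendix B: at T = 1 K the per-atom
# quantity -ln S(sigma)/N is fitted with A*exp(sigma/(E*eps0)) to fix the Gumbel
# parameters A and eps0 of rho(eps_c).  The "data" are synthetic placeholders.
import numpy as np
from scipy.optimize import curve_fit

E = 1.0e12                          # Young modulus (Pa), order of magnitude from the text
A_true, eps0_true = 1.0e-5, 0.02    # values used to generate the fake data

def neg_log_S_per_atom(sigma, A, eps0):
    return A * np.exp(sigma / (E * eps0))

sigma = np.linspace(0.6e11, 1.6e11, 25)                   # Pa
rng = np.random.default_rng(0)
data = neg_log_S_per_atom(sigma, A_true, eps0_true)
data *= 1.0 + 0.05 * rng.standard_normal(sigma.size)      # 5% multiplicative noise

popt, _ = curve_fit(neg_log_S_per_atom, sigma, data, p0=(1e-4, 0.03))
A_fit, eps0_fit = popt
print(f"A = {A_fit:.2e} (true {A_true:.1e}),  eps0 = {eps0_fit:.4f} (true {eps0_true})")

# With A and eps0 fixed, rho(eps_c) is the Gumbel form quoted above; V0 and
# omega_0 would then be fitted to the finite-temperature curves via Eq. B2.
eps_c = np.linspace(0.0, 0.35, 400)
rho = (A_fit / eps0_fit) * np.exp(eps_c / eps0_fit) * np.exp(-A_fit * np.exp(eps_c / eps0_fit))
print("rho(eps_c) integrates to ~", round(float(np.trapz(rho, eps_c)), 3))
```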
2015-08-24T08:29:07.000Z
2015-08-19T00:00:00.000
{ "year": 2015, "sha1": "b8436eaf7244120bfcd587127368e7453a067aa6", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1508.05720", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "b8436eaf7244120bfcd587127368e7453a067aa6", "s2fieldsofstudy": [ "Materials Science", "Physics" ], "extfieldsofstudy": [ "Materials Science", "Physics" ] }
257466842
pes2o/s2orc
v3-fos-license
Chorea as a Manifestation of Systemic Lupus Erythematosus Systemic lupus erythematosus (SLE) is an autoimmune disease with multisystemic manifestations, including central nervous system involvement. Chorea is a hyperkinetic movement disorder, characterized by involuntary, dance-like and poorly coordinated movements. Acute-onset chorea is a rare neuropsychiatric inaugural manifestation of SLE. This presentation is frequently associated with positive antiphospholipid antibodies, and it usually improves with immunosuppressive treatment. We report the case of a 20-year-old female, who presented with acute onset left hemichorea and fever. Analysis showed active urine sediment. A detailed anamnesis and evaluation revealed several clinical manifestations suggestive of SLE with multiorgan involvement: neurological, renal, cardiac, hematological, joint and mucocutaneous. This case emphasizes the importance of keeping a high clinical awareness for rarer presentations of common autoimmune disorders, such as SLE, which can be severe and should be promptly treated. Furthermore, the relevance of SLE in the differential diagnosis of acute-onset movement disorders in young patients is highlighted in this report. Introduction Systemic lupus erythematosus (SLE) is a chronic autoimmune disorder, characterised by the presence of autoantibodies, systemic inflammation and multiorgan involvement, including, less frequently, the central and peripheral nervous system [1][2][3][4]. Neuropsychiatric symptoms of SLE (NPSLE) are particularly difficult to identify and diagnose but account for a considerable percentage of manifestations, with an estimated prevalence ranging from 37% to 95%. The lack of uniformity in definitions, nomenclature, and diagnostic criteria contributes to the considerable variability in the estimated prevalence of NPSLE [5][6][7]. In 1999, the American College of Rheumatology issued a proposal for the nomenclature and definition of NPSLE [4]. The proposal defined 19 syndromes, including movement disorders. Chorea is a hyperkinetic movement disorder characterised by involuntary, dance-like, and unpredictable movements. Chorea, although rare, is well described in SLE, with a cumulative incidence of 0.6% [7]. Other causes may be genetic, vascular, inflammatory, infectious, drug-induced or metabolic. We report a case of acute onset hemichorea as a form of NPSLE. Case Presentation A 20-year-old female presented to the emergency department with 24 hours onset of involuntary movements of the left hemibody. She denied being pregnant and had no relevant past medical history or chronic medication, including use of anticonception medication. On admission, she had exuberant choreiform movements of the left hemibody, afflicting the face and limbs, with predominant involvement of the upper limb (Video 1). The remaining neurological examination was unremarkable. Physical examination showed discrete malar erythema, millimetric papular lesions on the right forearm extensor surface, painful oral ulcers, episcleritis of the right eye and signs of arthritis of the right joint. Initial workup revealed pancytopenia, an increased erythrocyte sedimentation rate (ESR), a positive direct Coombs test and leukoerithrocyturia, with normal renal function, as shown in Table 1. Brain computed tomography scan was normal. Haloperidol 0.5mg twice daily was started to manage the disabling movement disorder. 
A thorough history revealed a vespertine fever in the preceding eight weeks, initially attributed to cystitis due to the presence of leukoerythrocyturia and treated with antibiotics. Later, due to the emergence of asthenia, weight loss, odynophagia, an inflammatory asymmetric polyarthralgia of the small and large joints, and pancytopenia, the case was interpreted as a possible infectious mononucleosis. The patient was monitored and no specific treatment was initiated. The diagnostic approach included an autoimmune workup that showed an anti-nuclear antibody (ANA) titer of 1:1,280 (normal <1:40), with cytoplasmic staining suggestive of anti-ribosome antibodies; a positive anti-double-stranded DNA; and complement consumption with C3 35 mg/dL (normal range [NR] 83-193) and C4 8.4 mg/dL (NR 15-57). Antiphospholipid antibodies were negative. Urinalysis showed nephrotic-range proteinuria (4.1 g/24 h). Chest radiography exhibited an increased cardiothoracic ratio, and a moderate-volume pericardial effusion was confirmed by transthoracic echocardiogram. Magnetic resonance imaging (MRI) of the brain showed small T2 hyperintense areas in the subcortical bihemispheric and periventricular white matter (Figures 1A, 1B), compatible with ischemic lesions, so aspirin was introduced. The patient was started on immunosuppressive treatment -methylprednisolone 1 g/day for three days, followed by prednisone 1 mg/kg/day and mycophenolate mofetil, up to a dose of 3 g/day- with a favourable response. The choreic movements ceased after the first days of hospitalisation, and haloperidol was progressively tapered. Discussion The lack of specificity of imaging abnormalities and the absence of specific biomarkers make the diagnosis of NPSLE difficult, relying essentially on clinical judgement. There is no complete comprehension of the pathogenesis and the consequent pathophysiological changes that explain CNS involvement in SLE; it is likely that multiple mechanisms coexist: immune, inflammatory, and thrombotic/ischemic [8]. Chorea has a wide list of differential diagnoses. In this patient, the acute-onset chorea could be due to vascular, infectious, endocrine/metabolic, toxic or autoimmune causes, even though the asymmetric onset would normally suggest a structural lesion [8,9]. Nonetheless, a careful anamnesis and adequate interpretation of clinical data and blood analysis soon unveiled a disease with multiple organ involvement, which was most likely of autoimmune origin, considering the patient's age and clinical presentation. Chorea affects mainly women (in almost 90% of cases) and may precede the onset of other findings of SLE [10]. The presence of antiphospholipid antibodies is more frequent in patients who develop chorea than in other SLE patients (92% vs 25%, respectively) [11]. Most patients have antiphospholipid antibodies, which may be associated with hormonal variations involving increased levels of oestrogen, such as pregnancy or the use of contraceptives (not the case here), independently of the presence of a vascular event. The pathophysiology of chorea in lupus is still not well understood. Manifestations can be unilateral even in the absence of structural injury on imaging exams [12]. Concerning imaging studies, MRI is the most common exam undertaken in the investigation of these patients, since it allows the differential diagnosis with other causes -however, it is nonspecific for the diagnosis of SLE.
Although chorea/hemiballism is classically associated with lesions of the subthalamic nucleus, presentations with hemichorea/hemiballism have been described in the international literature in patients with white-matter stroke, provided the lesions affect the pathways linking the basal ganglia [13]. There are neither SLE-specific changes nor correlations between imaging findings and clinical manifestations, and MRI is normal in 40% of all NPSLE cases [14]. Thus, hemichorea/chorea is a classic manifestation of neurolupus that may occur with lesions outside the basal ganglia -as already described- possibly through injury of the pathways connecting the basal ganglia. There is no consensus about therapeutic strategies in NPSLE, but the recommendation is a therapy based on the primary pathogenic mechanism: either immunosuppressive treatment or anticoagulation/antiplatelet treatment, depending on the most likely involved pathophysiologic mechanisms. The combination of both treatment options is preferred when both mechanisms are considered a possibility [7]. In the case of persistent chorea, patients may be symptomatically treated with haloperidol or other neuroleptics [3]. Conclusions The causal relationship between neuropsychiatric symptoms and SLE is still a challenge and requires the exclusion of other potential causes. In this case, a urinary tract infection is by far more likely in a young and healthy woman than a systemic disease, so the diagnosis was significantly delayed in this patient. Neurological manifestations, although a sign of severe disease, are usually manageable and respond well to immunosuppressive therapy, as observed in our patient. Our case highlights the crucial importance of a well-conducted interview and clinical observation, and the value of a thorough differential diagnosis, underlining the importance of maintaining a high index of suspicion. This case illustrates the relevance of SLE in the differential diagnosis of acute-onset movement disorders in young patients. The engagement of a multidisciplinary team is crucial for the successful management of these patients. Additional Information Disclosures Human subjects: Consent was obtained or waived by all participants in this study. Conflicts of interest: In compliance with the ICMJE uniform disclosure form, all authors declare the following: Payment/services info: All authors have declared that no financial support was received from any organization for the submitted work. Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work. Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work.
2023-03-12T15:47:10.954Z
2023-03-01T00:00:00.000
{ "year": 2023, "sha1": "381efbec7d74d4dfd852296b913a95d2f27dc5b5", "oa_license": "CCBY", "oa_url": "https://assets.cureus.com/uploads/case_report/pdf/133749/20230308-18437-1jgr2dw.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "a775473f062674d2ec46bda72e1fe17e05d46080", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
229350949
pes2o/s2orc
v3-fos-license
Efficacy and safety of montelukast sodium combined with fluticasone in the treatment of adult bronchial asthma Abstract Background: Bronchial asthma (BA) is a chronic airway inflammatory disease with reversible airflow limitation as the main clinical manifestations, such as wheezing, cough, shortness of breath, chest tightness, etc, mediated by a variety of inflammatory cells, which can be recurrent. Clinical can improve symptoms, but cannot be cured; glucocorticoid is the most important first-line medication. Clinical practice has shown that montelukast sodium combined with fluticasone in the treatment of adult BA can improve clinical efficacy and reduce adverse reactions. The purpose of this study is to systematically study the efficacy and safety of montelukast sodium combined with fluticasone in the treatment of adult BA. Methods: The Chinese databases (CNKI, VIP, Wanfang, Chinese Biomedical Database) and English databases (PubMed, the Cochrane Library, Embase, Web of Science) were searched by computer, for the randomized controlled clinical studies of montelukast sodium combined with fluticasone in the treatment of adult BA from establishment of database to October 2020. Two researchers independently extracted the relevant data and evaluated the quality of the literatures, and used RevMan5.3 software to conduct meta-analyze of the included literatures. Results: This study assessed the efficacy and safety of montelukast sodium combined with fluticasone in the treatment of adult BA through total effective rate, pulmonary function (FEV1, FVC, PEF, FEV1/FVC), and adverse reactions. Conclusion: This study will provide reliable evidence-based evidence for the clinical application of montelukast sodium combined with fluticasone in the treatment of adult BA. OSF Registration number: DOI 10.17605/OSF.IO/CKQFM Introduction Bronchial asthma (BA) is a chronic inflammatory disease of the airway mediated by many inflammatory cells, such as eosinophils, mast cells, T lymphocytes, [1,2] which often leads to airway hyperresponsiveness, extensive and variable reversible airflow limitation, and causes recurrent symptoms such as wheezing, expiratory dyspnea, chest tightness, or cough. It affects 10% of the world's population and creates a serious social and economic burden. Children and adolescents are more likely to suffer from, [3] adults are also vulnerable, and it repeatedly attacks, affecting work and life, causing great distress to adults. The main drugs used in BA are glucocorticoid,b2 receptor agonist, anticholinergic agent, theophylline, leukotriene receptor antagonist, mast cell membrane stabilizer and others. [4] Because the symptoms of acute attack of BA are serious, the medicine is required to have the characteristics of quick onset and good curative effect. Inhaled glucocorticoid can enhance local antiinflammatory effect of airway and reduce systemic effect, so it is the most effective drug for asthma. However, BA attacks repeatedly and is difficult to cure, and long-term use of hormones will lead to certain adverse reactions, [5] therefore, the current clinical treatment of BA is mostly glucocorticoid combined with other drugs. Among them, montelukast sodium is an oral selective leukotriene receptor antagonist, which plays an anti-inflammatory role by regulating the biological activity of leukotriene and relaxes bronchial smooth muscle. 
[6] In the clinical treatment of BA, the use of montelukast sodium combined with inhaled glucocorticoid can better fuse the mechanism of the 2 drugs and better inhibit the production of inflammation. Finally achieve the anti-inflammatory effect, to achieve the relief of symptoms of patients. Studies shows that montelukast sodium combined with fluticasone has a better clinical effect in adult BA. At present, there are many randomized controlled studies, [7][8][9][10] which show that montelukast sodium combined with fluticasone in the treatment of adult BA can effectively improve the clinical efficacy, control the symptoms of clinical acute attack, improve the lung function of patients, and have high clinical application value. However, there are differences in the research scheme and curative effect of each clinical trial, which leads to the uneven results, which to some extent affects the promotion of the therapy. Therefore, this study plan systematically evaluated the efficacy and safety of montelukast sodium combined with fluticasone in the treatment of adult BA. It provides a reliable reference for the clinical application of montelukast sodium combined with fluticasone in the treatment of adult BA. Protocol register This protocol of systematic review and meta-analysis has been drafted under the guidance of the preferred reporting items for systematic reviews and meta-analyses protocols (PRISMA-P). In addition, it has been registered on open science framework (OSF) on October 25, 2020. (Registration number: DOI 10.17605/OSF. IO/CKQFM). Ethics As the protocol does not require patient recruitment and personal information collection, it does not require approval from an ethics committee. 2.3. Eligibility criteria 2.3.1. Types of studies. We will collect all available randomized controlled trials (RCTs) on montelukast sodium combined with fluticasone in the treatment of adult BA, regardless of blinding, publication status, and region; however, the language will be limited to only Chinese and English. Research subjects. Adult patients with definite diagnosis of BA, [11] excluding pregnant or lactating women, patients complicated with malignant tumor, patients complicated with severe heart, brain, kidney, lung, liver diseases, and other diseases. Nationality, race, gender, and course of illness are not limited. 2.3.3. Interventions. The treatment group and the control group were given routine basic treatment, including bed rest, cough and phlegm elimination, oxygen therapy, and appropriate antiinfection treatment for patients with complicated infection. The control group was treated with fluticasone at the same time, and the treatment group was treated with montelukast sodium combined with fluticasone. Outcome indicators. (1) Primary outcome: the overall effective rate. Total effective rate = (cure number + effective number)/ total number * 100%. Cure: BA symptoms basically disappear, occasionally mild disease can be alleviated without medication, peak expiratory flow (PEF) diurnal fluctuation <20%, PEF or forced expiratory volume in the first second (FEV1) not less than 80% after treatment. Significant: symptoms are significantly reduced compared with that before treatment, PEF diurnal fluctuation <20%, PEF or FEV1 reached 60% to 79% of the predicted value after treatment, and drug treatment is still needed when the disease occurred. Improve: BA symptoms are alleviated to a certain extent. 
After treatment, PEF or FEV1 reach the predicted value of 45% to 55%, requiring bronchodilators or glucocorticoids. Invalid: no improvement in clinical symptoms and no change in PEF or FEV1 detection values, or even deterioration of the condition. (2) Secondary outcomes: pulmonary function [FEV1); forced vital capacity (FVC); (PEF); ratio of forced expiratory volume to forced vital capacity in the first second (FEV1/FVC)]; recurrence rate; incidence of adverse reactions; quality of life of patients. Exclusion criteria (1) Duplicate published papers; (2) Articles published as abstracts or with incomplete data and unable to obtain complete data after contacting the author; (3) Studies in which data are clearly wrong; (4) Studies in which interventions combined with other therapies, such as traditional Chinese medicine, acupuncture and moxibustion; (5) Studies in which interventions with other glucocorticoids; (6) Studies with no related outcome indicators. Search strategy "meng lu si te na"(Montelukast sodium), "Fu ti ka song" (Fluticasone), "zhi qi guan xiao chuan"(bronchial asthma), and English retrieved words such as "Montelukast sodium," "Fluticasone," "bronchial asthma," etc, were searched in English databases, including PubMed, the Cochrane Library, EMBASE, and Web of Science. The retrieval time was from the establishment of the database to October 2020, and all domestic and foreign literatures on montelukast sodium combined with fluticasone for the treatment of adult BA were collected. Take PubMed as an example, and the retrieval strategy is summarized in Table 1. Data screening and extraction Referring to the research selection method in the 5.0 edition of the Cochrane collaborative network system evaluator manual, according to the PRISMA flow chart, 2 researchers read the topics and abstracts independently. First exclude RCT, which clearly does not meet the inclusion criteria. Read the full text carefully for studies that may meet the inclusion criteria and determine the final RCT. Two researchers cross-check each other to include the results. In case of disagreement in the screening process, the 2 parties shall discuss and resolve, if not, listen to the opinions of third parties and reach an agreement. Excel2013 was also used to extract relevant information, including: Clinical studies (title, first author, year of publication, sample size, sex ratio, mean age, mean course, stage); Interventions (Type, dosage, frequency, course of treatment of conventional symptomatic support therapy in treatment group and control group; Dosage, frequency, course of treatment of fluticasone in treatment group and control group; Dose, frequency, course of treatment of montelukast sodium); Risk bias assessment elements in randomized controlled trials; Outcome indicators. The literature screening process is shown in Figure 1. Literature quality assessment By using Cochrane Handbook 5.1.0 evaluation criteria, the quality of the literature was evaluated from 5 aspects: inclusion bias, scheme bias, measurement bias, follow-up bias, and score bias. If these 5 indexes are consistent, the quality of the literature is high, otherwise the bias may be generated. On the basis of the performance of the literature included in the above evaluation items, the 2 researchers will give low-risk, unclear, or high-risk judgments and cross-check each after completion. If there are differences, discussions will be held. If there is no agreement between the 2, it will be discussed with third-party researchers. 2.8. 
Statistical analysis 2.8.1. Data analysis and processing. RevMan 5.3 software provided by the Cochrane Collaboration will be used for statistical analysis. The relative risk (RR) is taken as the effect statistic for dichotomous variables. For continuous variables, when the tools and units for measuring an indicator are the same, the weighted mean difference (WMD) is chosen; when the tools and units differ, the standardized mean difference (SMD) is chosen. All of the above are reported as effect values with 95% confidence intervals (95% CI). Heterogeneity test: the I² value is used to quantitatively evaluate the heterogeneity between studies; the variation between different studies in a systematic evaluation is called heterogeneity. If I² ≤ 50%, heterogeneity is considered acceptable and the fixed-effect model is adopted. If I² > 50%, heterogeneity is considered significant and its source will be explored by subgroup analysis or sensitivity analysis. If the source of heterogeneity cannot be identified, a random-effect model can be used for the analysis. If there is significant clinical heterogeneity between the 2 groups and subgroup analysis is not feasible, descriptive analysis will be used. 2.8.2. Dealing with missing data. If there are missing data in an article, the author will be contacted via email for additional information. If the author cannot be contacted, or the relevant data have been lost, descriptive analysis will be conducted instead of meta-analysis. 2.8.3. Subgroup analysis. Subgroup analysis will be carried out by age group (youth, middle age and old age) and by course of treatment. 2.8.4. Sensitivity analysis. In order to test the stability of the meta-analysis results for each indicator, a one-by-one elimination method will be adopted for sensitivity analysis. 2.8.5. Assessment of reporting biases. Funnel plots will be used to assess publication bias if no fewer than 10 studies are included for an outcome measure. Moreover, Egger and Begg tests will be used for the evaluation of potential publication bias. 2.8.6. Evidence quality evaluation. The Grading of Recommendations Assessment, Development, and Evaluation (GRADE) approach will be used to assess the quality of evidence. It covers 5 domains (risk of bias, consistency, directness, precision, and publication bias), and the quality of evidence will be rated as high, moderate, low, or very low.
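As a rough illustration of the pooling logic described above (inverse-variance fixed-effect pooling when I² ≤ 50%, a DerSimonian-Laird random-effects model otherwise), the following Python sketch computes a pooled risk ratio from per-study event counts. The three trials and their counts are entirely hypothetical and the real analysis is performed in RevMan 5.3; this only shows the underlying calculation.

import numpy as np

def pooled_risk_ratio(events_t, n_t, events_c, n_c):
    """Pool risk ratios across studies, switching from a fixed-effect to a
    DerSimonian-Laird random-effects model when I^2 exceeds 50%."""
    e_t, nt = np.asarray(events_t, float), np.asarray(n_t, float)
    e_c, nc = np.asarray(events_c, float), np.asarray(n_c, float)

    # Per-study log risk ratio and its large-sample variance.
    log_rr = np.log((e_t / nt) / (e_c / nc))
    var = 1 / e_t - 1 / nt + 1 / e_c - 1 / nc

    # Fixed-effect (inverse-variance) pooling, Cochran's Q and I^2.
    w = 1 / var
    fixed = np.sum(w * log_rr) / np.sum(w)
    q = np.sum(w * (log_rr - fixed) ** 2)
    df = len(log_rr) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0

    if i2 <= 50:  # acceptable heterogeneity -> fixed-effect model
        pooled, se, model = fixed, np.sqrt(1 / np.sum(w)), "fixed"
    else:         # substantial heterogeneity -> random-effects model
        tau2 = max(0.0, (q - df) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))
        w_re = 1 / (var + tau2)
        pooled = np.sum(w_re * log_rr) / np.sum(w_re)
        se, model = np.sqrt(1 / np.sum(w_re)), "random"

    ci = np.exp([pooled - 1.96 * se, pooled + 1.96 * se])
    return model, np.exp(pooled), ci, i2

# Hypothetical responder counts (treatment vs control) for three illustrative trials.
print(pooled_risk_ratio([40, 35, 42], [50, 45, 48], [30, 28, 33], [50, 45, 48]))

The same I² threshold drives the model choice here as in the protocol; in practice RevMan additionally reports Cochran's Q and the per-study weights alongside the forest plot.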
According to previous studies, the occurrence of the disease is not only related to a variety of cell-mediated inflammatory reactions, but also involved in neuroregulation and immune response. Eosinophils, mast cells, T lymphocytes, neutrophils, smooth muscle cells, airway epithelial cells, leukotriene, interleukin, and other components are involved in mediating the inflammatory response. [15][16][17] Glucocorticoids are the most effective first-line drugs for asthma. Its mechanism is to inhibit the metabolism of arachidonic acid, reduce the synthesis of leukotriene and prostaglandin, activate and improve the responsiveness of airway smooth muscle to b2 receptors, inhibit the chemotaxis and activation of eosinophils, and inhibit cytokine synthesis, reduce microvascular leakage. However, long-term or heavy use or intravenous drip can cause electrolyte disorders, interfere with the body's immunity and other side effects, such as hoarseness, candida oropharynx infection, osteoporosis, increased cortical acid and other adverse reactions. Inhalation can make the drug contact the airway in the largest area and improve the affinity of glucocorticoid receptor. At present, the commonly used clinical drugs are budesonide, momethasone, fluticasone-formoterol inhalants, and so on. [18][19][20] Leukotriene antagonists are Cys-LT competitive antagonists that reduce inflammation and Th2 response by blocking Cys-LT. Leukotriene receptor antagonists (LTRA) have been recommended for the treatment of persistent asthma, representing drugs such as montelukast sodium. [21] Clinical use of inhaled glucocorticoid fluticasone combined with montelukast sodium in the treatment of adults BA can improve lung function, improve the quality of life of patients, and reduce adverse reactions caused by long-term use of hormones, has achieved good clinical effect. The clinical efficacy of montelukast sodium combined with fluticasone in the treatment BA adults is reliable. However, the evidence from RCTs is inconsistent. With the increasing number of clinical trials, it is urgent to systematically evaluate montelukast sodium combined with fluticasone in the treatment of adult BA. In this study, we will summarize the latest evidence of the efficacy of montelukast sodium combined with fluticasone in adult BA. This work also provides useful evidence for determining the efficacy and safety of montelukast sodium combined with fluticasone in the treatment of adult BA patients, which is beneficial for both clinical practice and health-related decision makers. However, this systematic review has some limitations. There may be some clinical heterogeneity in the way of routine western medicine therapy and the degree of patient's condition. The course of disease is also different, and may have a certain impact on the results. Due to the limitation of language ability, we only search English and Chinese literature, and may ignore the research or report of other languages. Author contributions Data collection: Huiling luo and Hongmei Han.
2020-12-23T06:16:39.936Z
2020-12-24T00:00:00.000
{ "year": 2020, "sha1": "1868b62d7d2bf7f546701e70f2ed2a47f201a15a", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1097/md.0000000000023453", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "8c62e53844dd34081c44ccadccc89656a1729d7a", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
4993488
pes2o/s2orc
v3-fos-license
Adiponectin Supports Human Glioma Cells Survival against Temozolomide through Enhancement of Autophagic Response in Glioma Cells Objective: To investigate the role of adiponectin in human glioma cell lines exposed to temozolomide and the underlying molecular regulation mechanism. Methods: Human glioma cell lines U251 and U-87MG were cultured in Dulbecco's modified Eagle medium (DMEM) containing 4500 mg/L glucose. MTT was used to measure the cell growth ratio. Western blot was used to detect the protein levels of autophagy-related proteins (Beclin 1, LC3 I/II, p62) and phosphorylated AMPK (p-AMPK) in the glioma cell lines. After AICAR and Compound C were administered, the changes in p-AMPK and in the level of autophagy were examined by western blot. Results: When adiponectin stimulates AMPK phosphorylation and up-regulates the level of autophagy, the glioma cell lines become more resistant to temozolomide; this effect is facilitated by AICAR and weakened by Compound C. Conclusion: As an important adipokine, adiponectin can up-regulate glioma cell autophagy by activating the AMPK signaling pathway, which increases the resistance of glioma cells to temozolomide. Glioma is among the most common intracranial tumors. At present, the treatment of glioma is mainly surgical resection combined with radiotherapy, chemotherapy and other comprehensive treatments, but the overall therapeutic effectiveness is unsatisfactory, with a high recurrence rate and poor prognosis [1] [2]. On the premise of excising the tumor completely, improving the sensitivity to radiotherapy and chemotherapy has become the main treatment strategy. Recently adiponectin, an important member of the adipokines, has been reported to inhibit proliferation and promote apoptosis of breast cancer, prostate cancer, colorectal cancer and other tumor cells, and consequently to be a significant factor restraining tumor growth [3]- [7]. However, some studies indicated that, in the hypoxic state, adiponectin could increase the survival rate of rectal cancer cells by up-regulating the level of autophagy [8]. Liu Lei et al. [9] showed that autophagy could enhance tumor cells' resistance to chemotherapeutic drugs. Research on adiponectin and glioma is still limited, so whether adiponectin can increase the resistance of glioma cells to temozolomide by up-regulating autophagy needs to be further verified. This study used the human glioma cell lines U-87MG and U251 to examine the role of adiponectin in glioma cells exposed to temozolomide and the molecular regulation mechanism involved. Cell Growth Ratio Assay MTT was dissolved in PBS at a concentration of 5 mg/ml, filtered through a 0.22 μm filter to sterilize the solution and remove insoluble residues, and then stored in amber vials at 4 °C for up to a month. After 48 and 72 h of incubation, 20 μl of the MTT solution was added to each well of the 96-well plates and incubated for 4 h at 37 °C in a humidified atmosphere of 5% CO2. At the end of the incubation period, DMSO was added to each well to dissolve the formazan product. The absorbance was determined at 490 nm. The A490 was taken as an index of cell viability and mitochondrial activity. The net absorbance from plates of cells cultured with the control medium (not treated) was taken as 100% of the cell growth ratio and mitochondrial activity.
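To make the growth-ratio readout concrete, here is a minimal Python sketch of the calculation implied above: background-corrected A490 of treated wells expressed as a percentage of the untreated control, which is defined as 100%. The absorbance values and the blank correction are hypothetical and only illustrate the arithmetic.

import numpy as np

def growth_ratio_percent(a490_treated, a490_control, a490_blank=0.0):
    """Cell growth ratio (%) relative to untreated control:
    the net absorbance of untreated cells is defined as 100%."""
    treated = np.asarray(a490_treated, float) - a490_blank
    control = np.mean(np.asarray(a490_control, float)) - a490_blank
    return treated / control * 100.0

# Hypothetical A490 readings for six replicate wells per condition.
control_wells = [0.82, 0.85, 0.80, 0.84, 0.83, 0.81]
treated_wells = [0.55, 0.58, 0.52, 0.57, 0.54, 0.56]
ratios = growth_ratio_percent(treated_wells, control_wells, a490_blank=0.05)
sem = ratios.std(ddof=1) / len(ratios) ** 0.5
print(f"mean growth ratio: {ratios.mean():.1f}% +/- {sem:.1f} (SEM)")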
Western Blot The expression of adiponectin receptor 1, adiponectin receptor 2, LC3 I/II, Beclin 1, p62, AMPK and p-AMPK was determined by western blotting. Total protein content in each sample was determined using the bicinchoninic acid assay, and lanes containing equal amounts of protein were separated on 10% - 15% SDS-PAGE gels. After transfer to nitrocellulose membranes, blots were blocked with 5% nonfat milk in 0.2% Tween 20 in Tris-buffered saline (TBS) for 1 h and then incubated with primary antibody at 4 °C overnight. Blots were then washed five times for 10 min in washing buffer (0.2% Tween 20 in TBS), followed by incubation for 2 h at room temperature with a specific secondary antibody. Subsequently, the bound antibodies were detected using ECL Western blotting detection reagents (Millipore, USA) and the chemiluminescent signal was imaged on X-ray film. Normalization was based on the protein level of GAPDH and analyzed with ImageJ software. Statistical Analysis All statistical analysis was performed using SPSS 13.0 (SPSS Inc., USA). All data are presented as mean ± SEM. Data were analyzed by t-test when the normality and homogeneity of variance assumptions were satisfied. Significance was set at P < 0.05. The Expression of the Adiponectin Receptor in Glioma Cell Lines Adiponectin cannot exert its biological functions until it binds to an adiponectin receptor (ADIPOR) located on the cell membrane surface. Both ADIPOR1 and ADIPOR2 expression were detected by western blot (Figure 1) in the glioma cell lines U251 and U-87MG, which provides the basis for further study. Adiponectin Dose-Response and Time-Response Experiments 5 × 10^3 cells per well were seeded in 96-well plates in medium containing 1% FBS for 24 h. Cells were subsequently treated with increasing concentrations (0 μg/ml, 0.1 μg/ml, 0.5 μg/ml, 1 μg/ml, 3 μg/ml, 10 μg/ml) of adiponectin and incubated for a further 24 h. We assessed the cell growth ratio of the human glioma cell lines U251 and U87-MG by MTT assay. Adiponectin at 3 μg/ml reliably produced near-maximal stimulation and was used for subsequent mechanistic studies (P < 0.05, Figure 2(a)). After serum starvation for 24 h, U251 and U-87MG cells were treated with 3 μg/ml adiponectin for different time courses (1 h, 6 h, 12 h, 24 h, 48 h). We found that a 24 h exposure reliably produced near-maximal stimulation and was used for subsequent mechanistic studies (P < 0.05, Figure 2(b)). Effect of Adiponectin on the Expression of Autophagy Related Protein in Human Glioma Cells The protein levels of autophagy markers in U251 and U-87MG cells were determined by western blotting. After incubation with 3 μg/ml adiponectin for 24 h, compared with the control group, the expression of Beclin1 and the ratio of LC3 II/I were significantly increased and the expression of p62 was sharply reduced (Figure 3(a)). Gray value analysis with ImageJ showed that the differences in autophagy-related proteins (Beclin1, LC3, p62) between the 3 μg/ml group and the untreated group were statistically significant (P < 0.05, Figure 3(b)).
Effect of Adiponectin on the Expression of AMPK in Human Glioma Cells We detected the expression of AMPK and p-AMPK in glioma cells by Western blotting.U251 and U-87MG cells were incubated with 3 μg/ml adiponectin for a different time course.The expression of p-AMPK began to be increased at 12 h and reached the peak at 24 h (Figure 4(a)).Gray value analysis showed that compared to the control group, the expression of p-AMPK was significantly increased respectively after being incubated with adiponectin for 12 h, 24 h, 48 h.The difference had statistically significant (P < 0.05, Figure 4(b)). Effect of Activating or Inhibiting AMPK Signal Pathway on Autophagy of Glioma Cells Incubated with Adiponectin WB showed that AICAR can increase the AMPK phosphorylation and the expression of autophagy related protein Beclin1 of glioma cell after being incubated with adiponectin, and gray value analysis showed that the expression of p-AMPK and Beclin were significantly increased in Ad + AICAR group compared to the control group; CC can reduce the AMPK phosphorylation and the expression of autophagy related protein of glioma cell after being incubated with adiponectin.Gray value analysis also showed that the expression of p-AMPK and Beclin were significantly reduced in Ad + CC group compared to the control group.The difference had statistically significant (P < 0.05, Figure 5). Adiponectin Increases the Resistance of Glioma Cells to Temozolomide We added different concentrations of temozolomide into culture medium after seeding plated the glioma cells for 48 h and found that 100 μM, 1 mM temozolomide can significantly reduced the cell survival rate (P < 0.05, Figure 6).After adding 1 mM temozolomide for 24 h, we added 0 μg/ml, 1 μg/ml, 3 μg/ml, 10 μg/ml adiponectin respectively for another 24 h and tested the cell growth ratio by MTT.Adiponectin can enhance the resistance of glioma cells to temozolomide, and this biological effect of adiponectin was facilitated by AICAR and weakened by Compound C (P < 0.05, Figure 6(c), Figure 6(d)). Discussion Adiponectin, which is an important adipokine, is known to be a key molecule in the positive correlation between obesity and cancer, such as breast cancer, prostate cancer and colorectal cancer [3]- [7].Previous studies have shown that adiponectin is a tumor suppressor that can inhibit the occurrence of tumor [8].Our understanding of the relationship between glioma and obesity is uncertain yet.Furthermore, effects of adiponectin on glioma were poorly reported.In this research, we used U251 and U87-MG glioma cell lines to explore the role of adiponectin in human glioma cell lines against the temozolomide, in order to illustrate adiponectin can up-regulate glioma cells autophagy and increase the resistance to the chemotherapeutic drug temozolomide. 
Autophagy is a homeostatic and evolutionarily conserved process, which degrades cellular contents containing useless macromolecules such as long-lived proteins, faulty aggregates or misfolded proteins, as well as damaged or redundant organelles such as mitochondria, endoplasmic reticulum, or peroxidase, in order to maintain the the cell homeostasis [10]- [14].In mammalian cells, the process that Beclin1 isolates from Bcl-2 and combines with PI3K is a key step to initiate autophagy.After being sheared and modified by Atg7 and Atg3, LC3 I becoming LC3 II is the specific process of autophagy, so that the ratio of LC3 II/I is regarded as an important indicator of the formation of autophagosome.The directly combination of p62 and LC3 is involved in the autolysosome's development.If large number of autophagosome formed in cells without p62 consumption, it suggests that the formation of autolysosome was restrained and autophagy was inhibited [15].This reaserch found that adiponectin can up-regulate the expression of Beclin1 in glioma cells, increase the ratio of LC3 II/I and consume a large amount of p62, so it can be considered that adiponectin can promote the autophagy in glioma cells.In addition, adiponectin, adding into the cell culture medium, can stimulate AMPK in phosphatase and increase the expression of Beclin1 simultaneously in glioma cells.This step was facilitated by AICAR and weakened by Compound C. So it can be considered that the mechanism of adiponectin up-regulating the level of autophagy in glioma cells is related to the activation of AMPK signaling pathway, which consists with the previous conclusion that AMPK is an important target for the activation of autophagy. Autophagy plays a double-edged sword role in the occurrence and development of tumor and has different effects in the different stages of the tumor.Autophagy limits tumor formation by preventing the accumulation of damaged proteins and organelles; after tumor formated, autophagy promotes tumor cell survival.In the process of treatment with chemotherapy drugs, autophagy makes a positive sense for the cancer cells to survive and adapt to changes in the external environment.As a result, autophagy presents a cell protection function what we don't want to see-enhancing the tumor cells resistance to chemotherapeutic drugs [14] [16] [17].Compared to glioma cells incubated only with temozolomide for 48 h, the growth ratio of glioma cells, which were incubated with temozolomide for 24 h and added adiponectin for another 24 h, was significantly increased.What's more, AICAR and compound C interfering the above effects further.It is illustrated that adiponectin up-regulated the glioma cell autophagy by activating the AMPK signaling pathway which increased the resistance of glioma cells to temozolomide. Conclusion As an important adipokine, adiponectin can up-regulate the glioma cell autophagy by activating the AMPK signaling pathway, which increases the resistance of glioma cells to temozolomide.The biological effects of adiponectin are thought to be clinically important in the pathophysiology of tumor development and progression. Figure 1 . Figure 1.The expression of the adiponectin receptor in U251 and U-87MG. Figure 2 . Figure 2. 
Effect of adiponectin on U251 and U-87MG cell proliferation. (a) U251 and U-87MG cells were cultured with increasing concentrations of adiponectin for 24 h. (b) U251 and U-87MG cells were treated with 3 μg/ml adiponectin for different time courses. Cell growth ratio was assessed by MTT assay. Mean ± SEM, N = 6, * P < 0.05 versus untreated control. Figure 3. Changes of autophagy-related proteins after treatment with 3 μg/ml adiponectin for 24 h in U251 and U-87MG cells. (a) Western blot; (b) gray value analysis. The expression of Beclin1 and the ratio of LC3 II/I were significantly increased, and the expression of p62 was sharply reduced. * P < 0.05 versus untreated control. Figure 4. The time course of activation of AMPK determined by western blot in U251 and U-87MG cells after treatment with adiponectin. (a) Western blot; (b) gray value analysis. The ratio of p-AMPK to AMPK increased and peaked at 24 h after treatment with adiponectin. * P < 0.05 versus untreated control. Figure 6. Changes in the resistance of glioma cells to temozolomide after treatment with adiponectin. ((a), (b)) 100 μM and 1 mM temozolomide significantly reduced the cell growth ratio. ((c), (d)) Adiponectin can enhance the resistance of glioma cells to temozolomide, and this biological effect of adiponectin was facilitated by AICAR and weakened by Compound C. Mean ± SEM, N = 6, * P < 0.05.
2018-04-20T17:44:56.482Z
2016-03-24T00:00:00.000
{ "year": 2016, "sha1": "b9f75e8778bea897353f78ae62f3b6de2c3e548e", "oa_license": "CCBY", "oa_url": "https://www.scirp.org/journal/PaperDownload.aspx?paperID=64918", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "b9f75e8778bea897353f78ae62f3b6de2c3e548e", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Biology" ] }
261540422
pes2o/s2orc
v3-fos-license
Modeling of trabecular bone transition into plastic deformation stage under uniaxial compression . This article deals with the nonlinear behavior of trabecular bone tissue under uniaxial compression. The model of this behavior is a stress-strain curve with an ascending branch, a peak point, and a descending branch. The known stress-strain model predicts the behavior of trabecular bone tissue at the pre-peak and partially at the post-peak stage of deformation. The model does not take into account the transition of trabecular bone into the plastic stage of deformation and the appearance of residual deformations, which (depending on the scale) may be physiologically unacceptable. The aim of this work is to predict the transition point of trabecular bone into the plastic state. The article proposes and implements an approach based on the joint application of the stress-strain model and the differential energy criterion of brittle fracture. This study contributes to the development of new models, the use of which improves the possibilities of analyzing the mechanical behavior of trabecular bone tissue under mechanical impact, which is important for the practice of load rationing in traumatology and sports medicine. The small amount of initial data is a positive quality of the proposed approach to modeling the transition of trabecular bone into the plastic state. Given the small volume of studies using the proposed approach, it is necessary to continue research in this direction, despite the good agreement of the modeling results with the experimental data known from the literature. Introduction Trabecular bone is a spongy, porous material with a cellular structure.It is the basis of the epiphyses of all tubular bones (such as the femur and tibia), as well as vertebrae, pelvic bones, etc. [1].The mechanical properties of this bone tissue depend on their density, age, sex, geometry and anatomical location [2].When loaded, bone deforms and internal forces are generated in the trabeculae; if the forces and/or deformations are excessive, they cause local bone damage or destruction [3][4][5].Bone properties play an important role in the biomechanical responses of the musculoskeletal system.Abnormal loading and inelastic deformations can cause changes in bone architecture, affecting bone remodeling and leading to pathological bone conditions [6][7][8].Analysis of the strength and function of trabecular bone as a living tissue is a complex multifaceted problem [9][10][11], the contribution to which is important in scientific and practical terms [12,13]; this article deals only with the biomechanical aspects of this problem and is limited to the relationship between the load on bone and bone deformations caused by this load. The study of the mechanical properties of bone is important both for understanding the mechanism of injury and for developing optimal osteosynthesis structures.The mechanical condition of the periprosthetic bone has a direct impact on implant fixation [7].It is important to know the load limits for the safe functioning of the bone-fixation system, because plastic deformation of the trabecular bone may occur during implantation due to local physiologically unacceptable loading [8,9]. 
From the viewpoint of material mechanics, trabecular bone exhibits complex nonlinear behavior. A typical load-displacement curve for compression of trabecular bone specimens (Figure 1) is similar to the load-displacement curve for wood [14]. This means that we can use the same approaches to model the mechanical behavior of some wood and bone specimens at the macro level: testing this assumption is the goal of this paper. The curve in Figure 1 reflects several stages and states of trabecular bone during compression: initial compaction and formation of contact sites for load transfer through the trabecular system; a stage of almost linear deformation on the ascending branch of the curve; the yield point, which usually corresponds to 0.2% of residual (plastic) deformation; the peak point (corresponding to the ultimate load); the post-peak descending branch, on which one can approximate the point of transition of the bone tissue into an almost plastic state (the plateau phenomenon [3]); and compaction of the trabecular system, which is reflected locally by ascending segments of the post-peak branch in Figure 1 [7,8]. Numerous simulation studies have been performed by many researchers to model and predict the compression behavior of trabecular bone [9,10,11]. To enable a correct comparison of experimental data obtained in different laboratories at different times, experimental load and displacement data are converted into stress-strain curves (Figure 1) using the initial sample dimensions [3]. Despite numerous studies and advances in modeling the mechanical behavior of trabecular bone tissue, the question of the criteria for the transition to the plastic deformation stage remains poorly understood. One possible approach to solving this issue is considered in this paper. Methodology and results To analyze the stress-strain relationship, we will use the Blagojevich model [16,17], written in the form of two equations, (1) and (2). The model (1), (2) belongs to the class of the simplest, but rather universal, models [12]. To calculate the stress (σ) as a function of strain (ε), it is necessary to know the peak stress value and the two parameters, a and b, in equations (1) and (2). The parameters a and b depend on the stiffness of the sample: for stiff structures and materials, the parameter values are higher than for soft ones. The values of parameter a can be determined from the results of tests at the pre-peak stage, by analogy with [18]. Parameters a and b can also be determined from experimental data by the method of least squares [17]. The possibility of calibrating the model by fitting the parameters according to the version of [18] is shown in Figure 2, where the red line corresponds to the values a = b = 1 in equations (1) and (2), and the green lines correspond to the values 0.75 ≤ a ≤ 9.00 and 0 ≤ b ≤ 800; Figure 2 also shows the experimental curve from [15]. Figure 3 shows the above experimental curve (Figure 1) and the theoretical curve given by equations (1) and (2) at a = 7.0 and b = 7.1. Figure 3 shows that the model of equations (1) and (2) corresponds to the experimental data in the pre-peak stage of deformation, but only partially in the post-peak stage. This means that equations (1) and (2) do not model the plateau phenomenon [3], i.e., these equations do not model the transition of the sample into the plastic deformation stage. Consequently, the model in the form of equations (1) and (2) should be supplemented with a criterion for the transition to the plastic state.
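The conversion from load-displacement data to a stress-strain curve mentioned above, and the reading of an apparent modulus of elasticity from the steepest tangent of that curve, can be sketched in a few lines of Python. The specimen dimensions and digitized points below are hypothetical and serve only to show the arithmetic; they are not the experimental data of [15].

import numpy as np

# Hypothetical digitized load-displacement points for a compression test.
load_N = np.array([0, 30, 70, 115, 150, 170, 175, 160, 150])               # load, N
displacement_mm = np.array([0.0, 0.05, 0.10, 0.15, 0.20, 0.25, 0.30, 0.40, 0.50])
area_mm2 = 50.0    # initial cross-sectional area A0 (assumed)
length_mm = 10.0   # initial gauge length L0 (assumed)

# Engineering stress and strain from the initial sample dimensions.
stress_MPa = load_N / area_mm2
strain = displacement_mm / length_mm

# Largest tangent slope of the curve, often taken as the modulus of elasticity.
slopes = np.diff(stress_MPa) / np.diff(strain)
print(f"peak stress: {stress_MPa.max():.2f} MPa")
print(f"estimated modulus of elasticity: {slopes.max():.0f} MPa")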
Trabecular bone contains mineral and organic components [1], which is manifested by a combination of elastic and plastic properties. Brittle fracture is realized at small strains, but the process of plastic deformation continues after brittle fracture (with known limitations). Therefore, the brittle fracture criterion can be used as a criterion for the transition of trabecular bone to the plastic deformation stage, as shown by the example of wood in [19]. The justification for this criterion is given in [20]. The transition point to the plastic stage can be determined analytically or graphically. The graphical way of determining this point is explained in Figure 4; this point lies on the post-peak branch of the stress-strain curve (red circle). Point 1 in Figure 4 corresponds to the largest slope of the tangent, i.e., the highest value of the modulus of elasticity. In this case, E = 167 MPa, which exceeds the known value (136 MPa) from [15]; the discrepancy can be explained by the approximate nature of the model. Point 2 in Figure 4 is the transition point of the trabecular bone into the plastic stage of deformation; this stage is modeled by the horizontal segment of the straight line. The dotted line on the descending branch in the same figure exists only theoretically and is not realized in experiments [15] (Figure 5). Discussion In this paper, the energy criterion of brittle fracture is used as a criterion for the transition of trabecular bone under uniaxial compression to the plastic deformation stage. Plastic deformation leads to the appearance of residual deformations, which (depending on the scale) can be physiologically unacceptable [1]. Local plastic deformations of trabecular bone tissue appear during osteosynthesis [1,8,9]. Therefore, analysis of trabecular bone behavior under mechanical impact is important for the practice of load rationing in traumatology and sports medicine. Note that the energy criterion discussed above predicts the transition point of trabecular bone tissue into the plastic state on the post-peak branch of the stress-strain curve. Our experience in applying this criterion has shown that, in a number of cases, the transition point is near the peak point [19]. It should also be noted that the considered model and the criterion for the transition to the plastic state are approximate, since they do not take into account all the special features of the structure and functioning of trabecular bone tissue. In addition, it is necessary to take into account the conditions of the experiments. Nevertheless, the gradual accumulation and analysis of new experimental data and modeling results contribute to progress in this area. Conclusion The paper substantiates the relevance of studies related to predicting the transition point of trabecular bone tissue during uniaxial compression into the plastic state. It is shown that the known stress-strain model (1), (2) predicts the behavior of trabecular bone tissue at the pre-peak and partially at the post-peak stage of deformation (Fig. 3). However, the model does not take into account the transition of trabecular bone tissue to the plastic deformation stage. An approach based on the combined application of model (1), (2) and the differential energy criterion of brittle fracture was proposed and implemented [19,20]. An important feature of the developed approach is the small amount of initial data required.
Comparison with experiments known from the literature confirmed the adequacy of the approach and the consistency of the simulation results and experimental data for uniaxial compression of trabecular bone tissue (Fig. 4). This study contributes to the creation of a new tool, the use of which improves the ability to analyze the mechanical behavior of trabecular bone tissue under mechanical action, which is important for the practice of load rationing in traumatology and sports medicine. Given the small number of studies using the proposed approach to simulating the transition of trabecular bone into a plastic state, it is necessary to continue research in this direction, despite the good agreement of the simulation results with the experimental data known from the literature.
2023-09-06T15:17:16.179Z
2023-01-01T00:00:00.000
{ "year": 2023, "sha1": "0a0d62b8a863896e8ff1fcede9dd70be7a7bf914", "oa_license": "CCBY", "oa_url": "https://www.e3s-conferences.org/articles/e3sconf/pdf/2023/57/e3sconf_ebwff2023_02003.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "746a719b4134b7e94a59450496956adbb0992354", "s2fieldsofstudy": [ "Engineering", "Biology" ], "extfieldsofstudy": [] }
215730406
pes2o/s2orc
v3-fos-license
Distribution of arbuscular mycorrhizal fungi (AMF) in Terceira and São Miguel Islands (Azores) Abstract Background The data, presented here, come from samples collected during three research projects which aimed to assess the impact of land-use type on Arbuscular Mycorrhizal Fungi (AMF) diversity and community composition in pastures of Terceira Island (Azores, Macaronesia, Portugal) and also in the native forest of two Azorean Islands (Terceira and São Miguel; Azores, Macaronesia, Portugal). Both projects contributed to improving the knowledge of AMF community structure at both local and regional scales. New information Little is known on the AMF communities from Azores islands and this study reports the first survey in two Azorean Islands (Terceira and São Miguel). A total of 18,733 glomeromycotan spores were classified at the species level from 244 field soil samples collected in three different habitat types – native forests (dominated by Juniperus brevifolia and Picconia azorica), semi-natural and intensively-managed pastures. Thirty-seven distinct spore morphotypes, representing ten glomeromycotan families, were detected. Species of the family Acaulosporaceae dominated the samples, with 13 species (38% of the taxa), followed by Glomeraceae (6 spp.), Diversisporaceae (4 spp.), Archaeosporaceae (3 spp.), Claroideoglomeraceae (3 spp.), Gigasporaceae (3 spp.), Ambisporaceae and Paraglomeraceae, both with the same number of AMF species (2 spp.), Sacculosporaceae (1 sp.) and Entrophospora (family insertae sedis). Members of the family Acaulosporaceae occurred almost exclusively in the native forests especially associated with the Picconia azorica rhizosphere, while members of Gigasporaceae family showed a high tendency to occupy the semi-natural pastures and the native forests of Picconia azorica. Members of Glomeraceae family were broadly distributed by all types of habitat which confirm the high ecological plasticity of this AMF family to occupy the more diverse habitats. Introduction Arbuscular mycorrhizal fungi (AMF) are one of the most important groups of below-ground biota (Jeffries et al. 2003; . These obligate symbionts live in association with approximately 80% of vascular plants and have essential ecological roles, namely, they facilitate plant growth through enhancing uptake of several macro-and micro-nutrients of low mobility (e.g. P, Zn, Cu) in soil (Brundrett and Tedersoo 2018). Arbuscular mycorrhizas can also provide other ecological functions such as influencing the microbial and chemical environment of the mycorrhizosphere, stabilising soil aggregates (Rillig and Mummey 2006) and conferring plant tolerance to several abiotic (Göhre and Paszkowski 2006;Li et al. 2013;Chitarra et al. 2016) and biotic (Vos et al. 2012;van der Heijden et al. 2015) stresses. AMF are, therefore, beneficial for plant performance, playing a crucial role for the sustainability of natural and agricultural ecosystems ) and important ecosystem services (Chen et al. 2018). However, despite their ecological role, little is known about how their community structure varies in relation to habitat type in the Azores archipelago. The Azores archipelago has an extended area of grasslands (Martins 1993), including natural grasslands, semi-natural pastures and intensive pastures (Cardoso et al. 2009). It also has the unique native forest, Laurisilva, which has more endemic plants and animals than any other habitat in the region. 
In the last 500 years, as a consequence of human activity, much of this native forest has been replaced by man-made habitats and has been subjected to fragmentation (Borges et al. 2005). Thus, immediate action to restore and expand native forest is required to avoid the ongoing loss of endemic species (Terzopoulou et al. 2015). AMF play an important role in habitat restoration, by improving plant nutrition and performance under environmental stress by facilitating plant adaptation in both nursery and field conditions (Babu and Reddy 2011). Therefore, understanding the AMF diversity in the native forest will help to define strategies for management and restoration of such endangered forests. An important step in restoration strategies is the re-establishment of adapted native plant species (Ferrol et al. 2004). A good understanding of mycorrhizal associations in undisturbed localities could then be used to provide information about AMF inoculum production for use in the rehabilitation of degraded ecosystems. In this contribution, we list the species of Arbuscular Mycorrhizal Fungi (AMF) found in ecological studies, comparing anthropogenically disturbed pastures and forests of Terceira Island (Azores, Macaronesia, Portugal) and also in the native forests of São Miguel Island (Azores, Macaronesia, Portugal). General description Purpose: In this contribution, we list the AMF species found in pastures from different landuse types of Terceira Island to investigate the effect of disturbance on AMF community structure. Native forests from Terceira and São Miguel Island were also sampled to observe patterns of AMF species composition and distribution, in order to provide baseline information for later use in establishing strategies for conservation of Picconia azorica and Juniperus brevifolia, in particular and native Azorean forests, in general. Project description Study area description: All data used in this study came from surveys about AMF diversity and composition in different ecosystems (pasturelands and native forests) conducted in two Islands of the Azorean archipelago, Terceira and São Miguel (Melo et al. 2014, Melo et al. 2017 (Fig. 1). The sampling areas were cattle-grazed upland pastures of two different types from Terceira and four fragments of native forests from each Island (Table 1) (Fig. 2). The two pasture types include semi-natural pastures with low grazing intensity and frequency (managed for more than 50 years, with low stocking density, grazed only in summer and with a relatively high diversity of grasses and forbs) and intensively-managed pastures with high grazing intensity and frequency (managed for more than 30 years, with high stocking density, grazing during all year and characterised also by a depauperate vascular flora of five or fewer dominant species) (Melo et al. 2014). The semi-natural pastures, Pico Galhardo (TER_SP_PG) and Terra Brava (TER_SP_TB) (Fig. 1) are included in Terceira Natural Park and are dominated by the perennial grasses Holcus lanatus and Agrostis castellana, have a high floristic diversity (Dias 1996, Borges 1997, often including other grasses such as Anthoxanthum odoratum, Lolium multiflorum, Holcus rigidus and Poa trivialis and non-forage species including Lotus uliginosus, Rumex acetosella ssp. angiocarpus, Potentilla anglica, Hydrocotyle vulgaris, Plantago lanceolata and Lobelia urens (For more details see Melo et al. 2014). The intensively-managed pastures, Agualva 1 (TER_IP_R1) and Agualva 2 (TER_IP_R2) (Fig. 
1) resulted from the conversion of undisturbed native forest to wood production of non-native trees and then to permanent pastures. They are now surrounded by an exotic eucalyptus plantation. The vegetation is dominated by Holcus lanatus and Lolium perenne, but also has high populations of Trifolium repens, P. lanceolata, Cyperus esculentus, Mentha suaveolens, Cerastium fontanum and Rumex conglomeratus (Dias 1996, Borges 1997). In Terceira Island, the native forests included two fragments from the Natural Park - Pico Galhardo (TER_NF_PG) and Lagoinha (TER_NF_LA) (Fig. 1) - and two fragments dominated by P. azorica - Terra Brava (TER_NF_TB) and Serreta (TER_NF_SE) (Fig. 1). Terra Brava is located in the very wet Laurisilva at 650 m altitude (Fig. 1) and is characterised by a low diversity of plants, dominated by Morella faya and P. azorica and, occasionally, by L. azorica. These forests are located in the most thermophilic areas of the Azores and are almost extinct (Dias 1996). The highest canopy is dominated by a dense cover of P. undulatum and, rarely, by L. azorica. This forest is mixed with other invasive woody species, including Metrosideros excelsa, E. globulus, A. melanoxylon, Sphaeropteris cooperi, Fuchsia magellanica and Rubus inermis. The herbaceous stratum is dominated by Dryopteris azorica, Hedera helix var. azorica, Smilax aspera and Gomphocarpus fruticosus (Dias 1996). Sampling methods Sampling description: In semi-natural and intensively-managed pastures, the soil samples with associated roots were randomly collected with a shovel from the rooting zone of the dominant plant species, H. lanatus, to a depth of 0-20 cm. In native fragments of P. azorica, the distance between samples taken on each site was a minimum of 25 m and a maximum of 40 m, and the distance between sample sites was about 20 km in Terceira and 15 km in São Miguel. Each soil sample was geo-referenced and consisted of four subsamples collected from different points (approximately N, S, E and W) around the rooting zone of each P. azorica plant with a shovel to a depth of 0-20 cm or 0-30 cm, depending on the soil conditions and the depth of the rhizosphere system. The litter layer was removed during sampling and replaced afterwards. Subsequent samples were taken from the same marked plants following the cardinal points. In the case of native fragments of J. brevifolia, the distance between samples taken on each site was between 25 m and 40 m, and the distance between sample sites was about 5 km in Terceira and 24 km in São Miguel. The sample collection followed the same procedure as for P. azorica (Melo et al. 2017). For all habitat types, each soil sample consisted of approximately 2 kg of rhizosphere soil. In the lab, the soil samples were air-dried, sieved through a 2 mm mesh and stored at 4 ºC before analysis. Quality control: Frequently, spores directly extracted from the soil are low in number and contaminated by other organisms, which makes their identification difficult. Consequently, it is necessary to establish trap cultures to promote sporulation and provide specimens for detailed examination. Open pot-trap cultures (Gilmore 1968) were established from each soil sample collected at semi-natural and intensive pastures with one-week-old Zea mays seedlings (Melo et al. 2014). Soil samples collected from the native forests were used to establish two such cultures, one with one-week-old Z. mays seedlings and another with micropropagated J. brevifolia and P. azorica seedlings (Melo et al. 2017).
Establishment of single or multi-spore cultures of the different AM fungal morphotypes with Plantago lanceolata as host plant was attempted in pots of river sand. Spores with a healthy appearance (oily contents; without evidence of contamination by non-AMF) of each AM fungal morphotype were used as inoculum by placing them on a seedling root system under a dissecting microscope, immediately before transplanting into the pot (Melo et al. 2017. Specimens were given a voucher number, linked to their culture attempt number. Individual microscope slides were numbered serially so that photographic images could be traced back to their specimen of origin and details were recorded in a database to allow complete tracking of culturing history and linkage of related voucher specimens. The new cultures were placed in a climate-controlled plant growth chamber. When needed, cultures were watered with deionised water. Individuals of morphologically characterised spore types, extracted from field soil, trap cultures or single spore cultures, were used for DNA analysis. Molecular characterisation, including DNA extraction, PCR, cloning, RFLP, sequencing and phylogenetic analyses, is published in Melo et al. (Melo et al. 2017). Step description: Glomeromycotan spores were extracted from 50 g of air-dried soil from each sample (field soil, trap cultures and single or multi-spore cultures) by wet sieving and sucrose centrifugation (Walker 1992) and stored at 4ºC in autoclaved water, pending examination. Different spore types were initially separated in water under a stereomicroscope. Representatives of each morphotype were identified through a compound microscope in a 4:1 mixture of polyvinyl alcohol lacto-glycerol (PVLG) and Melzer's reagent, photographed and stored as semi-permanent slide preparations. Counts were made for the total number of spores of each morphotype under a dissecting microscope after classification into either known species or types that could not be placed in a current species, based on colour, size, surface ornamentation, hyphal attachment, reaction to Melzer's reagent and wall structure. Identification of spores was carried out by use of primary literature and experience from more than 40 years of taxonomic study of the Glomeromycota by C. Walker (e.g., Walker and Trappe 1981, Walker et al. 1984, Koske and Walker 1985, Walker et al. 1986, Walker and Diederichs 1989, Walker 1992 Description: The following data table includes all the records for which a taxonomic determination of the species was possible. The dataset submitted to GBIF (Melo et al. 2019) is structured as a sample event dataset, with two tables: event (as core) and occurrences. The data in this sampling event resource have been published as a Darwin Core Archive (DwCA), which is a standardised format for sharing biodiversity data as a set of one or more data tables. The core data table contains 226 records (eventID). One extension data table also exists with 665 occurrences. An extension record supplies extra information about a core record. The number of records in each extension data table is illustrated in the IPT link. This IPT link archives the data and thus serves as the data repository. The data and resource metadata are available for downloading in the downloads section.
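Since the published dataset is a Darwin Core Archive with an event core and an occurrence extension linked by eventID, a few lines of Python are enough to summarise records per AMF family and habitat, in the spirit of the counts reported earlier in this paper. The file names and the column names used here (eventID, family, scientificName, habitat) are assumptions for illustration and would need to be matched to the actual archive downloaded from the IPT link.

import pandas as pd

# A Darwin Core Archive is a set of linked tables: sampling events as the core
# and AMF occurrences as an extension keyed on eventID.
events = pd.read_csv("event.txt", sep="\t")            # one row per sampling event (soil sample)
occurrences = pd.read_csv("occurrence.txt", sep="\t")  # one row per AMF morphotype record

# Join occurrences to their sampling events and summarise records per family and habitat.
joined = occurrences.merge(events, on="eventID", how="left")
summary = (joined
           .groupby(["family", "habitat"])["scientificName"]
           .agg(records="size", species="nunique")
           .reset_index()
           .sort_values("records", ascending=False))
print(summary.head(10))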
2020-04-02T09:13:05.571Z
2020-04-01T00:00:00.000
{ "year": 2020, "sha1": "ccb11c25dbd90ec0f5c4777a0f257279875092f6", "oa_license": "CCBY", "oa_url": "https://bdj.pensoft.net/article/49759/download/pdf/", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "984022872078977efc88e1f1ebf2e51209fb9070", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
250039239
pes2o/s2orc
v3-fos-license
Renin Production by Juxtaglomerular Cell Tumors and Clear Cell Renal Cell Carcinoma and the Role of Angiotensin Signaling Inhibitors Objective: To profile juxtaglomerular cell tumors (JXG) and histologic mimics by analyzing renin expression; to identify non-JXG renin-producing tumors in The Cancer Genome Atlas (TCGA) data sets; and to define the prevalence of hypertension (HTN) and patient outcomes with angiotensin signaling inhibitor (ASI) use in tumors of interest. Patients and Methods: Thirteen JXGs and 10 glomus tumors (GTs), a histologic mimic, were evaluated for clinicopathologic features; TCGA data were analyzed to identify non-JXG renin-overexpressing tumors. An institutional registry was queried to determine the incidence of HTN, the use of ASIs in hypertensive patients, and the impact of ASIs on outcomes including progression-free survival (PFS) in a tumor type with high renin expression (clear cell renal cell carcinoma [CC-RCC] diagnosed between January 1, 2005, and December 31, 2012). Results: We found an association between renin production and HTN in JXG compared with GT. Analysis of TCGA data found that a subset of CC-RCCs overexpress renin relative to 29 other tumor types. Furthermore, analysis of our institutional registry revealed a high prevalence (64%) of HTN among 1203 patients treated with radical or partial nephrectomy for nonmetastatic CC-RCC. On multivariable Cox regression, patients with HTN treated with ASIs (34%) had improved PFS (hazard ratio, 0.76; 95% CI, 0.57 to 1.00; P=.05) compared with patients with HTN not treated with ASIs (30%). Conclusion: The identification of renin expression in a subset of CC-RCC may provide a biologic rationale for the high prevalence of HTN and improved PFS with ASI use in hypertensive patients with nonmetastatic CC-RCC. J uxtaglomerular cell tumor (JXG), a renal tumor that overexpresses renin, was described as a cause of hypertension (HTN) as early as 1967. 1 These patients typically present with secondary hyperaldosteronism, characterized by hypokalemia, HTN, and a normal aldosterone to renin ratio (in contrast to primary hyperaldosteronism). [2][3][4] Although exceedingly rare, these tumors provide valuable insight into mechanisms of HTN. [2][3][4] Juxtaglomerular cells, considered myoendocrine cells, are the cell of origin and contribute to blood pressure regulation through the renin-angiotensin-aldosterone axis. 2 Unique features of these cells at the ultrastructural level include rhomboidshaped protogranules and expression of renin. [2][3][4] Histologic mimics of JXG include pericytic tumors of the kidney, which are designated glomus tumor (GT), glomangioma, and glomangiomyoma on the basis of variable proportions of perivascular glomus cells, vascular elements, and smooth muscle. 5,6 These tumors lack renin production, the characteristic ultrastructural feature of JXG, and more than half of GTs exhibit rearrangements of NOTCH1-3 genes. 7 Whereas transcriptomic meta-analysis suggests that renin production by nonneoplastic tissue is limited to the juxtaglomerular cells in the kidney, few contemporary studies have sought to define the extent of "paraneoplastic" overexpression of renin in a pan-cancer setting. 8 Rare reports have described either renin expression associated with HTN in clear cell renal cell carcinoma (CC-RCC) or the detection of renin by immunohistochemistry (IHC) in perivascular/endothelial cells of CC-RCC, albeit before the advent of more contemporary diagnostic tools. 
[9][10][11] This is of particular interest given recent reports of improved cancer-specific outcomes with the use of angiotensin signaling inhibitors (ASIs) in patients with metastatic renal cell carcinoma (RCC). [12][13][14] Indeed, several prior studies have suggested that the renin-angiotensin system can significantly affect the tumor microenvironment through the regulation of angiogenesis, inflammation, immunomodulation, and tumor fibrosis. 15,16 Signaling through angiotensin II and angiotensin II type 1 receptor is thought to promote vascular endothelial growth factor-mediated angiogenesis and oncogenesis, whereas the angiotensin II/angiotensin II type 2 receptor axis is thought to have a contrasting effect. 15,16 However, these hypothesized functions of the renin-angiotensin system are based on canonical models of renin-mediated signaling. We studied a large series of JXGs and their histologic mimics with a focus on renin expression. In addition, we conducted a pan-cancer investigation of The Cancer Genome Atlas (TCGA) data sets to identify non-JXG tumor types with renin overexpression. The TCGA query identified CC-RCC as having the highest prevalence of renin overexpression. We therefore queried our institutional RCC registry to ascertain the prevalence of HTN, ASI use, and impact of ASI use on outcomes for patients with nonmetastatic CC-RCC. Patient Specimens This study was approved by the Institutional Review Board at Mayo Clinic in Rochester (MCR; 20-002237). JXG and GT of the kidney were identified from the pathology archives at MCR and Memorial Sloan Kettering Cancer Center, and this included previously published cases. 2,3,5 Three cases of CC-RCC profiled by TCGA (TCGA-BP-4758, TCGA-BP-5001, and TCGA-BP-4341) that showed high REN gene expression were retrieved from the surgical pathology archives at Memorial Sloan Kettering Cancer Center and evaluated for renin expression by IHC. Histopathology, Immunohistochemistry, and Electron Microscopy All available cases of JXG and GT were subjected to histopathologic review based on contemporary criteria. Immunohistochemistry was performed on whole slide sections (see Supplemental Methods, available online at http://www.mayoclinicproceedings.org, for additional details of antibody clones). Optimization for renin IHC was performed at MCR using the Leica Bond RX stainer. Slides were retrieved for 20 minutes using Epitope Retrieval 2 (EDTA; Leica) and incubated in Protein Block (Dako) for 5 minutes. The renin primary antibody was diluted to 1:7500 in Background Reducing Diluent (Dako) and incubated for 15 minutes. The detection system used was Polymer Refine Detection System (Leica). Immunostaining visualization was achieved by incubating slides for 10 minutes in 3,3′-diaminobenzidine. RNA Sequencing Library preparation and gene expression profiling by RNA sequencing were performed at MCR using formalin-fixed, paraffin-embedded tissue (6 JXGs, 2 GTs). Gene expression summaries (".count" files) in CC-RCC, papillary RCC (P-RCC), and chromophobe RCC (Ch-RCC) cohorts in TCGA were downloaded from the Genomic Data Commons data portal. These data, together with gene expression summaries of MCR samples obtained through the MAP-RSeq pipeline of JXG and GT, were processed using the edgeR package in R (R Foundation for Statistical Computing) with default parameters. 17 Log2-normalized expression values were obtained by the cpm function and plotted in R.
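The counts-per-million (CPM) and log2 normalization described above was carried out with edgeR's cpm function in R. Purely to illustrate the underlying arithmetic, a minimal Python sketch is given below; the count matrix, gene labels, and pseudocount are hypothetical, and edgeR's cpm additionally supports options (prior counts, normalized library sizes) not reproduced here.

```python
import numpy as np

# Hypothetical raw read counts: rows = genes, columns = samples (e.g., JXG and GT libraries).
counts = np.array([
    [1500,  20,   5],   # high in sample 1, low elsewhere
    [ 300, 280, 310],   # roughly constant "housekeeping" gene
    [  10,  12,   8],   # low-expression gene
], dtype=float)

library_sizes = counts.sum(axis=0)     # total reads per sample
cpm = counts / library_sizes * 1e6     # counts per million
log2_cpm = np.log2(cpm + 1.0)          # pseudocount avoids log2(0)

print(np.round(log2_cpm, 2))
```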
Data Extraction From TCGA Data Sets Publicly available molecular profiling data from relevant TCGA studies were accessed and visualized using cBioPortal. 18,19 Relative gene expression status for REN and ATP6AP2 was accessed for 30 primary tumor types (see Supplemental Methods for additional details). Patient Outcomes and Statistical Methods Because a subset of CC-RCC showed renin overexpression relative to 29 other tumor types within queried TCGA data sets, clinical outcomes of CC-RCC as they relate to HTN and ASI use were studied. The MCR nephrectomy registry was queried to identify 1203 adults treated with radical or partial nephrectomy for nonmetastatic CC-RCC between January 1, 2005, and December 31, 2012. The electronic medical record was accessed to determine the incidence of HTN before nephrectomy based on International Classification of Diseases, Ninth Revision (ICD-9) and Tenth Revision (ICD-10) codes, most of which were ICD-9 code 401.9 for "unspecified essential hypertension" or ICD-10 code I10 for "essential (primary) hypertension." The percentage of hypertensive patients who were treated with angiotensin-converting enzyme inhibitors (ACEIs) and angiotensin receptor blockers (ARBs) was determined on the basis of ordered medications. Patients were divided into 3 exposure groups: patients without HTN, patients with HTN but not treated with ACEIs or ARBs, and patients with HTN treated with ACEIs or ARBs. Comparisons of clinical and pathologic features between groups were evaluated using Wilcoxon rank sum and χ² tests. Patient outcomes included disease progression-free survival (PFS), cancer-specific survival, and chronic kidney disease (CKD) stage PFS after surgery as estimated by the Kaplan-Meier method. Associations with these outcomes were evaluated using competing risks multivariable Cox proportional hazards regression models. See Supplemental Methods for additional details. Juxtaglomerular Cell Tumors and Glomus Tumors: Clinicopathologic Features Thirteen JXGs and 8 GTs were reviewed (Table 1). All patients with JXG and available clinical information had documented HTN (9/9 patients). The median tumor size and age at nephrectomy were 1.9 cm and 25 years, respectively. In contrast, only 2 patients with GT had associated HTN (without hypokalemia). Although GT occurred in older patients (median age at nephrectomy, 55 years), most tumors had a similar size compared with JXG (median, 3.7 cm). None of these 21 patients had documented disease progression. Common IHC markers, such as smooth muscle actin, CD34, and collagen IV, did not distinguish these 2 entities. In addition, diagnostic markers of angiomyolipoma, a mesenchymal neoplasm composed of perivascular epithelioid cells (cathepsin K, melan A, and HMB45), were negative in all JXGs and GTs. Electron microscopy identified rhomboid protogranules in 1 JXG (Figure 1A and B). Renin IHC highlighted juxtaglomerular cells in nonneoplastic kidney (n=16; Figure 1C). No difference in renin expression was seen in renal biopsy specimens of 5 patients with primary hyperaldosteronism (n=5) compared with renin expression in juxtaglomerular cells in nonneoplastic kidney of normotensive patients (n=16) using IHC. All cases of JXG (n=8) showed diffuse and strong expression of renin by IHC and on gene expression studies (Figure 1D-F). In contrast, no expression of renin was identified in histologic mimics including renal and extrarenal GT (n=10) and angiomyolipoma (Figure 2; Table 2).
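The PFS analysis described in the Methods above used competing-risks multivariable Cox regression. As a rough illustration of the general modeling approach only, the sketch below fits an ordinary (non-competing-risks) Cox proportional hazards model with the lifelines Python package; the column names and toy values are hypothetical and do not reflect the registry data or the covariate set actually used.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical toy data: one row per patient.
df = pd.DataFrame({
    "pfs_months":  [12.0, 48.0, 30.0, 6.0, 60.0, 24.0, 36.0, 18.0],
    "progressed":  [1, 0, 1, 1, 0, 0, 0, 1],   # 1 = disease progression, 0 = censored
    "asi_treated": [1, 0, 0, 0, 1, 1, 1, 0],   # ACEI/ARB exposure
    "age":         [61, 55, 70, 64, 52, 58, 66, 59],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="pfs_months", event_col="progressed")
cph.print_summary()   # hazard ratios (exp(coef)) with 95% CIs for each covariate
```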
RNA Sequencing No high-confidence reads to support structural rearrangements were identified using RNA sequencing in JXG (n=6) and renal GT (n=2), including for NOTCH1-3 genes. Gene expression of NOTCH1/2 genes in JXG was not significantly higher compared with CC-RCC; however, NOTCH3 showed higher expression (data not shown), and the significance of this finding is unclear. Juxtaglomerular Cell Tumor REN Gene Expression vs TCGA Data Relative to 29 other tumor types, CC-RCC had the highest prevalence and degree of REN overexpression, whereas a subset of both CC-RCC and Ch-RCC showed high ATP6AP2 (REN receptor) expression relative to other tumor types (Figure 3A and B). However, JXG (n=6) showed significantly higher REN gene expression compared with CC-RCC, P-RCC, and Ch-RCC profiled by TCGA (log2 fold change of 11.5 compared with CC-RCC; Figure 1F). Of note, REN/ATP6AP2 expression in combined cases of CC-RCC, P-RCC, and Ch-RCC profiled by TCGA showed a mutually exclusive pattern of overexpression (Figure 3C; Table 2). Consistent with prior reports, renin was found to be localized in vascular endothelial cells rather than in tumor cells (Supplemental Figure, available online at http://www.mayoclinicproceedings.org; Figure 3D and E). 10,11 Prevalence of HTN and ASI Use in CC-RCC Finally, a query of our nephrectomy registry revealed a high prevalence of HTN in patients with CC-RCC before nephrectomy (771/1203 [64%]), with a median age at nephrectomy of 61 years (interquartile range, 52 to 70 years). Further analysis revealed that approximately half of these patients were treated with ASI (412/771 [53%]; Figure 3F; Table 3). Kaplan-Meier analyses demonstrated that group 3 patients had improved PFS compared with group 2 patients but not group 1 patients (Figure 4; Table 2). DISCUSSION On histopathologic examination, JXG and GT showed significant morphologic overlap, characterized by the presence of a perivascular proliferation of uniform polygonal to spindle cells, and distinguishing features included renin expression. JXG is likely to have a different cell of origin as it arises from juxtaglomerular cells. Our results found an association between renin production and HTN in JXG compared with histologic mimics such as GT. In our cohort of patients with CC-RCC, the prevalence of HTN was 64%, with a median age at nephrectomy of 61 years. In some studies, the corresponding prevalence of HTN in the 55- to 64-year age range is 53% for men and 52% for women. 20 Although age may be a potential confounding factor, our results suggest that the prevalence of HTN in patients with CC-RCC may be higher compared with the general population. In addition, we found that many CC-RCCs overexpress renin in vascular endothelial cells, providing a promising hypothesis to explain the elevated prevalence of HTN in patients with CC-RCC. In accordance with recent studies reporting that ASI use is associated with improved outcomes in hypertensive patients with metastatic RCC, the use of ACEIs and ARBs in our study was associated with improved PFS in hypertensive patients with nonmetastatic CC-RCC after radical or partial nephrectomy. [12][13][14] Further studies are warranted to corroborate this finding and to elucidate an underlying mechanism to determine whether angiotensin blockade may benefit patients with CC-RCC. Limitations of our study involve a lack of data correlating serum renin levels with HTN.
Collection of such data was not feasible because of the retrospective nature of the study and control of HTN with various therapeutic agents. In addition, our clinical outcomes used electronic medical record data to stratify patients by HTN status and ASI use, and prospective data collection is necessary to address the data limitations from this retrospective cohort. CONCLUSION Although, on histopathologic examination, JXG and GT showed significant morphologic overlap, JXG significantly overexpressed renin, allowing the 2 entities to be differentiated; JXG also overexpressed renin relative to CC-RCC, which in turn significantly overexpressed renin relative to other solid tumors. Accordingly, patients with CC-RCC also had a high prevalence of HTN. Among hypertensive CC-RCC patients, ASI use was associated with improved PFS. The identification of renin expression within the tumor microenvironment of CC-RCC provides a biologic rationale for the high prevalence of HTN in these patients and for improved PFS with ASI use in hypertensive patients with nonmetastatic CC-RCC. These findings highlight a need for further evaluating ASI as a potential therapeutic agent for such patients in both the metastatic and nonmetastatic settings. POTENTIAL COMPETING INTERESTS The authors report no competing interests. Hazard ratios, 95% CIs, and P values represent associations after accounting for the competing risk of death from other causes for the outcomes of disease progression and death from RCC and the competing risk of death from any cause for the outcome of CKD stage progression. d Hazard ratios, 95% CIs, and P values represent associations after adjusting for the disease progression score for the outcome of disease progression; the death from RCC score for the outcome of death from RCC; and age at surgery, sex, solitary kidney, body mass index, estimated glomerular filtration rate, diabetes, preoperative estimated glomerular filtration rate, and radical surgical approach for the outcome of CKD stage progression. Associations with CKD stage progression after further multivariable adjustment for preoperative proteinuria in the subset of patients with nonmissing data for this feature were similar (data not shown).
2022-06-26T15:07:15.537Z
2022-06-01T00:00:00.000
{ "year": 2022, "sha1": "aeb0aca50a7e6204203b8bd3b1d3f51a5286b1a0", "oa_license": "CCBYNCND", "oa_url": "http://www.mayoclinicproceedings.org/article/S0025619622002142/pdf", "oa_status": "HYBRID", "pdf_src": "ScienceParsePlus", "pdf_hash": "7d5e638643a4422ad8b0d30238972f37cc89f793", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
269892963
pes2o/s2orc
v3-fos-license
Increasing Rate of Fatal Streptococcus pyogenes Bacteriemia—A Challenge for Prompt Diagnosis and Appropriate Therapy in Real Praxis Streptococcus pyogenes, group A streptococci (GAS) bacteriaemia, is a life-threatening infection with high mortality, requiring fast diagnosis together with the use of appropriate antibiotic therapy as soon as possible. Our study analysed data from 93 patients with GAS bacteraemia at the General University Hospital in Prague between January 2006 and March 2024. In the years 2016–2019 there was an increase in GAS bacteraemia. Mortality in the period 2006–2019 was 21.9%; in the period 2020–2024, the mortality increased to 41.4%, p = 0.08. At the same time, in the post-2020 period, the time from hospital admission to death was reduced from 9.5 days to 3 days. A significant predictor of worse outcome in this period was high levels of procalcitonin, >35.1 µg/L (100% sensitivity and 82.35% specificity), and lactate, >5 mmol/L (90.91% sensitivity and 91.67% specificity). Myoglobin was a significant predictor in both compared periods, the AUC was 0.771, p = 0.044, and the AUC was an even 0.889, p ≤ 0.001, respectively. All isolates of S. pyogenes were susceptible to penicillin, and resistance to clindamycin was 20.3% from 2006–2019 and 10.3% in 2020–2024. Appropriate therapy was initiated in 89.1%. and 96.6%, respectively. We hypothesise that the increase in mortality after 2020 might be due to a decrease in the immune status of the population. Introduction Sepsis, defined as life-threatening organ dysfunction caused by a dysregulated host response to infection, is a major cause of mortality from any infectious disease worldwide [1].Epidemiological analyses increase general awareness and knowledge of infectious diseases, and molecular biological methods contribute to the identification of pathogenicity and virulence factors in causative agents, as well as to the elucidation of risk factors for the onset and development of infection in patients.Reducing mortality due to infections is a global public health priority [2].In 2005, the WHO reported a global estimate of 18.1 million cases of severe Streptococcus pyogenes disease, with 1.78 million new cases of severe disease and 517,000 deaths per year [3]. Streptococcus pyogenes, group A streptococci (GAS), is a gram-positive, facultatively aerobic bacterium.The spectrum of infections caused by S. pyogenes is wide, ranging from respiratory tract infections, septic arthritis, puerperal sepsis and necrotizing fasciitis to streptococcal toxic shock syndrome.All these forms of the disease can be complicated by bacteraemia [4].Streptococcal infections are highly contagious.Transmission most often happens from person to person, either by the droplet route or direct contact and, rarely, through contaminated food, leading to outbreaks of the disease.With few exceptions, streptococcal infections occur sporadically [5], although outbreaks of invasive infections caused by certain clones of S. pyogenes have been described, such as in Israel [6].Most streptococcal infections are mild or self-healing [7].However, in the last few years, there has been an increasing incidence of streptococcal infections in both children and adults worldwide, with an increase in invasive forms of infections [8][9][10][11][12][13]. 
The most common form of invasive GAS infection is bacteraemia (up to 75%).Localized infections without bacteraemia or necrotizing fasciitis occur less frequently-19% and 7%, respectively [9].Although bacteraemia is the most common form of invasive infection among streptococcal infections, it is not one of the most common infections in comparison with other causative agents; its incidence does not exceed 8% [14,15].However, among the causative agents causing bloodstream infections with the highest related mortality, S. pyogenes ranks fifth, behind Escherichia coli, Klebsiella pneumoniae, Staphylococcus aureus and Pseudomonas aeruginosa [14]. The pathogenesis of streptococcal infections is studied in detail, but the pathophysiology of mainly invasive forms is still unclear [16].The essential condition for successful pathogenesis is the ability to invade and/or modulate the immune response.Patients with severe invasive GAS infections have been shown to have significantly lower serum vascular endothelial growth factor (VEGF) concentrations compared to those with non-invasive forms of infection [17].S. pyogenes produces a variety of surface-bound, intracellularly produced and extracellularly produced factors, such as adhesins, pili, cytolysins, spreading factors and immune evasion factors that directly or indirectly influence the immune response.S. pyogenes also developed several strategies to evade the host immune response [18,19].With the development of new diagnostic techniques, new immunomodulating enzymes have been identified, although not all of them have a clear role in the development of infection.Some of them could be used as biotechnological tools, or as drugs in the fight against autoimmune diseases, or as vaccine candidates [20][21][22].Most enzymatic activities inhibit inflammatory processes, such as neutrophil chemotaxis mediated by IL-8 and C5a (via SpyCEP, ScpA), the inhibition of trapping and killing in neutrophil extracellular traps (via the DNAases SpnA and SdaI, as well as other streptococcal DNAses), the inhibition of killing by antimicrobial peptides (AMPs) and antimicrobial chemokines (both SpeB) and the killing of macrophages (via NADase, S5nA).Notable exceptions are SpeB's activity on gasdermins that induce pyroptosis and SpeB's activity on IL-1β and H-kininogen, which have potent proinflammatory effects [23].S. pyogenes secretes a variety of highly mitogenic exotoxins that stimulate large numbers of T cells and antigen-presenting cells.This results in a massive release of pro-inflammatory cytokines and can lead to systemic inflammation and multiorgan failure [24]. The gold standard treatment for GAS sepsis is a combination of cell wall synthesis inhibitors (i.e., beta-lactams or glycopeptides) and protein synthesis inhibitors (MLS antibiotics) [25].Although S. pyogenes is susceptible to penicillin in vitro [26], penicillin monotherapy treatment of GAS infections with toxin production has been associated with high morbidity and mortality due to the "Eagle effect" [27,28]. Worldwide increasing resistance of S. pyogenes to clindamycin might decrease its effect [29,30], with the surprising exception of Spain, where tetracycline, erythromycin and clindamycin resistance rates declined between 2007 and 2020 [31].Data from the Czech Republic also do not show an increase in resistance to erythromycin and clindamycin [32].The risk of resistance to beta-lactam antibiotics is very low due to S. pyogenes' incapacity to transfer genes horizontally.Nevertheless, strains of S. 
pyogenes (emm43.4/PBP2x-T553K)with increased insensitivity to beta-lactams have already been described in the USA [33]. The use of biomarkers as possible predictors of aetiology in septic patients is repeatedly discussed.Nevertheless, currently there is no single commercially available biomarker that would be 100% specific and sensitive to discriminate between gram-negative and gram-positive sepsis [34].This hinders clinical sepsis pathway implementation, potentially leading to an inappropriate choice of antibiotic therapy, which in the worst-case results in patient death [35]. Frequently used biomarkers, such as the total white blood cells, neutrophil count and C-reactive protein (CRP), lack the specificity to discriminate between SIRS and sepsis [34].In this sense, procalcitonin (PCT)-a prohormone of calcitonin-was shown to have the best accuracy to identify patients with invasive bacterial infections because inflammatory stimuli including severe infection leads to its upregulated production in different tissues [36].Even though elevated PCT serum concentrations are not exclusive to infections (they can also be elevated during paraneoplastic processes and in patients with solid tumours or with major trauma [37]), at this moment, PCT is considered among the best clinically available biomarkers to diagnose sepsis in routine praxis [38] and can be used as a guide to fulfil the principles of antimicrobial stewardship (AS).Using biomarkers to predict worse outcome in GAS sepsis has not been yet studied, to our knowledge. In our retrospective study, biomarkers, routinely available in every healthcare facility, were evaluated and compared in patients with GAS bacteraemia in the period up to 2019, before COVID-19, and beyond.GAS sepsis with positive blood culture is rare [39], and, therefore, our cohort was unique due to its large number of GAS patients.The aim was to identify biomarkers with the highest informative value predicting a worse outcome of the disease or the risk of death of the patient. Study Design and Setting, Inclusion Criteria The present retrospective study included all patients admitted to the General University Hospital in Prague with Streptococcus pyogenes bacteraemia, confirmed through blood cultures, during the period between January 2006 and March 2024. Only confirmed cases with complete data (PCT, CRP, neutrophil to leukocyte ratio (NLR), white blood cells count (WBC), blood cultures, albumin, lactate, creatinine, myoglobin, recommended antibiotic (ATB) treatment) were analysed.We obtained electronic medical health records for each patient from hospital information system. The aim was to identify biomarkers with the highest informative value predicting a worse outcome and to evaluate the appropriateness of initial therapy. All PCT values and other laboratory parameters were recorded within the first 24 h after the onset of sepsis as baseline data.Blood cultures were drawn at sepsis onset before the start of antimicrobial therapy and processed and analysed according to local laboratory standards.S. pyogenes was identified by latex agglutination.Antibiotic susceptibility was determined according to EUCAST rules.The investigators, considering all available clinical and microbiological data, identified the focus of infection retrospectively.For analysis, foci of infection were grouped into three categories (soft tissue, respiratory and other).Risk factors such as diabetes mellitus and chronic renal insufficiency were obtained from the patients' medical records. 
Appropriate treatment was defined as a combination of a beta-lactam antibiotic with an antibiotic that inhibits protein synthesis (macrolides, lincosamides, oxazolidinones); in the case of using only one appropriate antibiotic, the treatment was marked as semiappropriate.Initial treatment with an antibiotic other than the ones mentioned above was evaluated as inappropriate. Statistical Analysis Continuous values were tested to normal distribution by the Shapiro-Wilk test, with negative results, and are expressed as medians and interquartile range (IQR).The categorical variables are expressed as a number and percentage (%).The continuous variables were compared by Kruskal-Wallis test for more than two groups and by 2-sided Mann-Whitney test for comparison of two groups (included post-hoc analysis).In the post-hoc analysis, the p-values were corrected by Bonferroni correction for multiple pairwise comparison.We performed 4 pairwise (clinically meaningful) comparisons; thus, we used Bonferroni correction by 4. The categorical values were compared using the Fisher's exact test (for 2 × 2 table) or the chi-square test.The survival analysis was performed by Kaplan-Meier survival analysis.The Kaplan-Meier curves were compared by Logrank test.The ROC analysis and ROC comparisons (the hypothesis that the difference between the two ROC AUCs is 0) are utilized to emphasize the important differences between particular parameters.The p value lower than 0.05 was considered statistically significant.Statistical analyses were performed with MedCalc ® Statistical Software version 22.021 (MedCalc Software Ltd., Ostend, Belgium; https://www.medcalc.org;Accessed on 15 April 2024). Twelve (18.8%) patients received appropriate initial therapy in 2006-2019 and ten (34.5%) in 2020-2024, p = 0.12.Considering semi-appropriate therapies as appropriate, 57 (89.1%) patients met this criterion in the first follow-up period and 23 (79.3%) patients in the following period, p = 0.43.Even in the subpopulation of non-surviving patients, the inclusion of semi-appropriate therapies in the appropriate category did not have a significant impact on the evaluation of the cohort. Due to the significant change in the time from patient admission to death, Figure 2-the Kaplan-Meier survival curve, in the monitored periods-we decided to analyse individual parameters for the subgroups of patients in the periods 2006-2019 and 2020-March 2024.In the subgroup of non-surviving patients, there were 14 patients in the first period, of which 5 (35.7%) were men, and 12 patients in the following period, where the representation of men and women was equal, 6 and 6.The comparison of individual groups of patients in the post-hoc analysis showed that the individual groups are comparable in age, with no statistical significance between survivors and non-survivors in both monitored periods (Figure 3A).There is a significant difference in the level of procalcitonin in surviving and non-surviving patients in the period 2020-2024-the median PCT is 9.47 µg/L and 67.5 µg/L, respectively, p = 0.002.As well, the prognostic significance of lactate is significant, with the median in survivors being 1.75 mmol/L and 7.1 mmol/L in non-survivors in the period 2020-2024, p = 0.005 (Figure 3B,D).An easily available marker in routine practice, creatinine, is also significantly elevated in non-surviving patients in the period 2020-2024, median 103 µmol/L vs. 
272.5 µmol/L, p = 0.005 (Figure 3C).
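The survivor-versus-non-survivor comparisons above were run in MedCalc as pairwise Mann-Whitney tests with Bonferroni correction. As an illustration of that procedure only, here is a minimal Python sketch using SciPy; the marker values are made-up placeholders, and the factor of 4 mirrors the four planned pairwise comparisons described in the Statistical Analysis section.

```python
from scipy.stats import mannwhitneyu

# Hypothetical PCT values (µg/L) in survivors vs. non-survivors for one period.
pct_survivors     = [4.2, 9.5, 12.1, 7.8, 15.3, 6.0]
pct_non_survivors = [41.0, 67.5, 102.0, 55.4, 88.9]

stat, p_raw = mannwhitneyu(pct_survivors, pct_non_survivors, alternative="two-sided")

# Bonferroni correction for 4 planned pairwise comparisons, capped at 1.
p_corrected = min(p_raw * 4, 1.0)
print(f"U = {stat:.1f}, raw p = {p_raw:.4f}, Bonferroni-corrected p = {p_corrected:.4f}")
```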
Prognostic Performance of Biomarkers When comparing routinely available biomarkers using receiver operation characteristic (ROC) curve analysis, high levels of lactate (AUC 0.97, p < 0.001) myoglobin (AUC 0.889, p < 0.001), PCT (AUC 0.882, p < 0.001) and creatinine (AUC 0.855, p < 0.001) were useful in predicting worse outcome or risk of death in patients with GAS bacteraemia in the patient cohort 2020-2024 only (Figure 4A-D).From these biomarkers, only myoglobin was useful to predict worse outcome in patients hospitalised before 2020 (AUC 0.771, p = 0.044) (Figure 4B).CRP and NLR could not be used to predict worse outcome in patients in either of the tested periods (Figure 4E,F).The best cut-off levels, according to the Youden index, for the 2020-2024 cohort were 5 mmol/L, with a sensitivity of 90.91% and specificity of 91.67% for lactate, 1039 µmol/L with a sensitivity of 88.89% and specificity of 80.00% for myoglobin, 35.1 µg/L with a sensitivity of 100% and specificity of 82.35% for PCT and 140 µmol/L with a sensitivity of 91.67% and specificity of 76.47% for creatinine.For the 2006-2019 cohort, the best cut-off value for myoglobin was 1073 µmol/L with a sensitivity of 83.33% and specificity of 75.00%.Sensitivity, specificity, cut-off values and Youden index J for individual biomarkers in the two tested periods are summarised in Table 2.The incidence of diabetes mellitus was comparable in both groups (Table 1).Appropriate antibiotic therapy was initiated in three (21.4%)non-surviving patients in the period 2006-2019 and in five (41.7%) in the following period, p = 0.4. Discussion This study presents a unique cohort of patients with GAS bacteraemia in one tertiary hospital between 2006 and 2024.We observed an increasing trend in the occurrence of GAS bacteraemia between 2016 and 2019.A similar trend was also reported in other studies worldwide [8][9][10][11][12][13].Despite improving intensive care, source control and adequate antibiotic treatment, mortality in invasive GAS infections remains high, reaching up to 59% in the case of septic shock [40].Mortality from 2016-2019 was 17.9%, lower than in subsequent years.Since 2019, mortality has increased, reaching 41.4% between 2020 and 2024.At the same time, the time from admission to the hospital to death has been reduced from a median of 9.5 days to 3 days.This fact is very alarming and was also a challenge for a more detailed analysis of the cohort of patients with GAS bacteraemia admitted to the hospital up to and from 2019, inclusive. Blood culture (BC), the "gold standard" for the diagnosis of bloodstream infection, requires at least 48-72 h before the results of microorganism and antibiotic susceptibility are available.Due to the speed of development of symptoms of invasive GAS infection, traditional microbiological techniques for detection and identification of the infectious agent, based on cultivation methods, are slow and do not provide the necessary information in a timely manner [41].The use of molecular biological methods, especially PCR, significantly shortens the time needed to clarify the etiological agent [42].However, this type of diagnosis is not routinely available in all healthcare facilities in the Czech Republic and certainly not in the POCT regime. 
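As a concrete illustration of how cut-off values like those reported above are derived, the following Python sketch computes an ROC curve and reads off the Youden-index-optimal threshold (J = sensitivity + specificity - 1) with scikit-learn. The study itself used MedCalc; the outcome labels and lactate values below are hypothetical.

```python
import numpy as np
from sklearn.metrics import roc_curve, auc

# Hypothetical data: 1 = worse outcome / death, 0 = survived; scores are lactate (mmol/L).
outcome = np.array([0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1])
lactate = np.array([1.2, 2.0, 1.8, 3.1, 2.7, 4.9, 5.4, 7.1, 6.3, 9.8, 4.6])

fpr, tpr, thresholds = roc_curve(outcome, lactate)
youden_j = tpr - fpr                # equivalent to sensitivity + specificity - 1
best = np.argmax(youden_j)

print(f"AUC = {auc(fpr, tpr):.3f}")
print(f"Best cut-off = {thresholds[best]:.1f} mmol/L "
      f"(sensitivity {tpr[best]:.2%}, specificity {1 - fpr[best]:.2%})")
```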
Advanced life support and resuscitation together with prompt antibiotic treatment constitute a fundamental aspect of sepsis management.For adults with possible septic shock or a high likelihood for sepsis, administering antimicrobials immediately is recommended, ideally within 1 h of recognition of sepsis [43,44]. PCT and CRP are currently the most widely used inflammatory biomarkers in routine praxis in our country.PCT has shown a significant prognostic value even upon admission to the hospital, as lower serum levels have been associated with a higher probability of survival in patients with sepsis [45].Upon admission, PCT levels are strongly related to the severity of the inflammatory reaction.PCT impairs the endothelial barrier function, causing capillary leak and refractory hypotension with subsequent multiple organ failure during sepsis [46]. In our study from 2020, we proposed the use biomarkers, especially PCT, in the differential diagnosis of GAS sepsis to improve the initial choice of antibiotics [47].To our knowledge, we were the first to report that S. pyogenes can produce an inflammatory response similar to or higher than gram-negative (GNEG) bacteria [48].S. pyogenes' GNEGlike inflammatory response might be explained via recognition of the GAS pore-forming toxin streptolysin O by TLR4, leading to high expression of pro-inflammatory cytokines [49]. Common biomarkers, such as CRP, PCT, differential blood count, lactate or creatinine, are available everywhere in the Czech Republic, even in urgent mode.CRP has a high sensitivity for inflammation but little specificity for infection.CRP has a slower onset than PCT and peaks up to 24 h later.When interpreting CRP values, it is necessary to consider potential limitations (impairment of liver protein synthesis in severe hepatopathies and blocking of IL-6 during biological therapy-in both cases, CRP levels barely increase) [50]. CRP, as the most frequently determined marker, was elevated in both periods, median 235.1 mg/L and 252 mg/L; the difference was not significant: p = 0.97.Even when comparing the group of deceased patients in the observed analogues, the difference was not significant: 316.9 mg/L vs. 273 mg/L, p = 0.94.According to the Youden index, CRP determination could only be used as a screening test for worse outcome, where values higher than 326 mg/L have 100% specificity but only 41.7% sensitivity. Although there was no significant difference in PCT levels in the overall comparison of either of the two periods, 16.5 µg/L vs. 
37 µg/L, p = 0.17, the difference is still clinically relevant.However, when comparing surviving and non-surviving patients in the two tested periods, PCT values were significantly higher in non-surviving patients in the cohort 2020-2024.PCT has a high sensitivity and specificity for the detection of systemic infection.However, PCT levels increase not only at the instigation of PAMPs (pathogenassociated molecular pattens, e.g., liposaccharide in gram-negative), but also DAMPs (damage-associated molecular pattens, e.g., cell decay products).The advantage of PCT is its fast dynamics.PCT is used as a guide for antimicrobial therapy.Gram-negative infections are associated with higher levels of PCT than most gram-positive ones, except for pyogenic streptococci.PCT monitoring is not only diagnostic but also prognostic [51].Our data show that unlike CRP, PCT can be used as a prognostic marker; values higher than 35.1 µg/L show 100% sensitivity and 82.35% specificity.The Youden index for this criterion is 0.8235. The neutrophil-lymphocyte ratio can be used as a marker to differentiate between GAS sepsis and sepsis induced by other gram-positive bacteria [48].However, in the group of patients with GAS bacteraemia, there are no differences between the observed periods; the median NLR was 22.5 vs. 27, p = 0-79.Similarly, even among non-surviving patients, the difference in the median NLR was not significant: 26.9 vs. 28.9,p = 0.63. Lactate is a product of anaerobic glycolysis.It is used by gluconeogenesis as a source of energy.In systemic inflammation, the pathophysiology changes and lactate accumulates for various reasons (hypoxic-hypoperfusion type, hyperlactatemia in mitochondrial dysfunction, reduced pyruvate dehydrogenase complex activity, etc.).Sepsis-associated hyperlactatemia (SAHL) has been described [52].The lactate level is a valuable marker to assess the severity of the condition, and monitoring lactate levels serves as a prognosis of patient's condition [53].In our cohort, although there was no significant difference between the study periods, medians of 2.7 mmol/L and 5.0 mmol/L, p = 0.12, there was a significant difference among the cohorts of deceased patients: 3.75 mmol/L vs. 7.1 mmol/L, p = 0.025.Similar to PCT, lactate can be used as a prognostic biomarker, where lactate levels higher than 5 mmol/L show 90.91% sensitivity and specificity of 91.67%; the Youden index for this criterion is 0.8258.Similar to lactate, albumin levels also characterize the severity of the condition.Rapid changes in plasma albumin levels reflect the formation and the degree of capillary leakage in systemic inflammation.Therefore, a sharp drop in albumin levels can be used as a better estimate for the severity of the condition [54].In our cohort, all patients had hypoalbuminemia on the day of admission; the difference between the cohort 2006-2019 and 2020-2024 was 25 g/L vs. 23.5 g/L, p = 0.77.When comparing deceased patients in the follow-up periods, the median albumin was 18.5 g/L in 2006-2019 and 24 g/L in the following period, p = 0.07.Recent studies pointed to the use of heparin-binding protein (HBP) as a potent indicator of worse outcome in patients with sepsis [55].However, HBP is not routinely tested in the Czech Republic. 
The most common source of bacteraemia was skin and soft tissue infections, found in 48 (75%) and 18 (62.1%)patients, respectively.The biomarker that reflects this is myoglobin.Indeed, myoglobin was the only biomarker that could predict worse outcome in patients in both study periods.In the 2006-2019 period, the AUC was 0.771 and the Youden index for criterion >1073 µmol/L was 0.5833, sensitivity 83.33%, specificity 75.00%, p = 0.044.In the period 2020-2024, the AUC was 0.889 and the Youden index for criterion >1039 µmol/L was 0.6889, sensitivity 88.89%, specificity 80.00%, p ≤ 0.001.Myoglobin is a muscle protein, and its elevated levels in the blood are found within two hours of muscle damage.Myoglobin has rapid dynamics, which is crucial for early patient monitoring.Furthermore, it serves to monitor the risk for renal damage, and, based on myoglobin levels, we decide when it is necessary to start haemodialysis [56,57].On the other hand, creatinine phosphokinase has a slower release into the bloodstream, and there are no strict guidelines for monitoring of renal failure [57]. In contrast to the biomarkers mentioned above, creatinine is an indicator of kidney organ dysfunction.The kidneys are one of the first organs to respond to sepsis or septic shock.Sepsis-associated acute kidney injury (S-AKI) is a common complication in hospitalized and critically ill patients that increases the risk of developing chronic comorbidities and is associated with extremely high mortality [58].As in other forms of AKI, serum creatinine can be an insensitive indicator of kidney injury.S-AKI is usually defined as AKI in the presence of sepsis without other significant contributing factors explaining AKI or characterized by the simultaneous presence of both Sepsis-3 and Kidney Disease: Improving Global Outcomes (KDIGO) criteria [59].For patients in the ICU, sepsis is found in about 40% to 50% of patients with AKI [60].A prospective cohort study including 1177 patients with sepsis across 198 ICUs in 24 European countries reported a 51% incidence of AKI with an ICU mortality rate of 41% [61].In our cohort, the median baseline creatinine was 139.5 µmol/L in the period 2006-2019 and in the following period, 147 µmol/L, p = 0.48.Comparing deceased patients, the median creatinine was 203 µmol/L in the first period and 272.5 µmol/L in the second period, p = 0.2. Analysis of biomarkers in a subpopulation of non-surviving patients between the monitored periods using ROC curves and determination of AUC showed that the only prognostic marker for the 2006-2019 period was myoglobin.However, in the period 2020-2024, we found that four biomarkers could be used to predict worse outcome or risk of death in patients, namely lactate, myoglobin, PCT and creatinine.This finding offers a new avenue to improve GAS sepsis management in the early hours of a patient's admission to the hospital.The rate of development of a serious course of infection may also affect the suitability of initial antibiotic therapy. The drug of choice for invasive infections caused by S. 
pyogenes remains the combination of a beta-lactam antibiotic with a protein synthesis inhibitor.The most commonly used protein synthesis inhibitor in the Czech Republic is clindamycin.A protein synthesis inhibitor with activity during the stationary phase of bacterial growth has been shown to decrease the expression and production of group A streptococcal virulence factors and exotoxins [62].However, despite the unique anti-streptococcal properties of clindamycin shown in both in vitro and animal models [62], proof of its effectiveness in humans has been hampered by small sample sizes and low-quality clinical evidence.Observational studies that suggested benefits in patients with necrotising fasciitis [63], streptococcal toxic shock syndrome [64] and any invasive group A streptococcal infections [65] have not accounted for confounding effects by indication (i.e., the selective use of clindamycin [64].In our file, 12 (18.8%)patients received full appropriate initial therapy in 2006-2019 and 10 (34.5%) patients in 2020-2024, p = 0.12. Antibiotic monotherapy with an effect on S. pyogenes was evaluated as semi-appropriate in our study.Considering semi-appropriate therapies as appropriate, 57 (89.1%) patients met this criterion in the first follow-up period and 23 (79.3%) patients in the following period, p = 0.43.Given the significant reduction in the time from admission to death in the period 2020-2024, from a median of 9.5 days to 3 days, the question is whether appropriate therapy is entirely crucial.The rate of progression of the condition in these patients is striking.In one patient, a 64-year-old man, the course was so fulminant that he died early, within an hour of admission, before therapy could begin.The source of infection was an infected defect in the lower extremity.The baseline level of myoglobin was unmeasurable (>35,000 µmol/L), creatinine waS 495 µmol/L, procalcitonin was 102 µg/L and lactate was 11.2 mmol/L.The patient did not suffer from diabetes mellitus or chronic renal insufficiency and was not treated for malignancy as a risk factor for infection. The lower adherence of the therapy in the second monitored period with higher mortality is not significant, nor can the addition of antibiotics with an effect on protein synthesis explain the higher mortality, because these antibiotics are bacteriostatic, and their onset of action is slower than that of beta-lactams [66]. More recently, concerns have arisen that clindamycin may no longer be the adjunctive antitoxin antibiotic of choice due to rising clindamycin resistance [67].However, this trend is not uniform, and there is no increased resistance to clindamycin reported in the Czech Republic [32].In our cohort, resistance to macrolide antibiotics (erythromycin) and lincosamides (clindamycin) was 20.3% (13 strains) in the period 2006-2019 and, in 2020-2024, resistance was only 10.3% (3 strains).So, the question remains why the mortality of our patients has increased so significantly since 2020. 
The cause of the GAS resurgence observed in multiple countries has been widely debated [68,69].It has been suggested that it might have resulted from non-pharmaceutical interventions that were implemented to limit SARS-CoV-2 transmission, but it also decreased the circulation of GAS.Hence, the population might have a reduced immunity to GAS [70,71].Virulent GAS variants, established by the GAS M surface protein and encoded by the emm gene, have also been suggested to have influenced the high incidence of iGAS in 2022-2023 [72].Since 2010, more invasive forms of GAS infections (such as scarlet fever) have been reported globally.In the UK, the resurgence of scarlet fever has been linked to a sub-lineage M1 UK of the pandemic M1T1 clone, which has an increased expression of SpeA superantigen [73].In December 2022, there was an outbreak of GAS infection in London.Yet, the most prevalent M types were still emm12 and emm1.The severity of GAS infection was associated with the presence of spea and spej superantigen genes [74].In the Czech Republic, there was no change in emm types of S. pyogenes before and after the COVID-19 pandemic.emm1 and emm12 remain the most prevalent types in the Czech Republic (established via personal communication; data will be published).On the other hand, sequencing of GAS is not routinely performed, and, therefore, we cannot exclude the possibility that more virulent GAS is also present in the Czech Republic. Reduced population immunity might have contributed to exceptionally high circulation of GAS and a proportional increase in invasive GAS (iGAS) infections [75].S. pyogenes developed several strategies to evade the host immune response [18,19].With the development of new diagnostic techniques, new immunomodulating enzymes have been identified, although not all of them have a clear role in the development of infection. The possibilities of laboratory evidence of a reduced immune response to streptococcal infection are limited in the Czech Republic and completely unavailable in routine diagnostics.Given the high mortality rate caused by streptococcal infection, investigating the immune response in patients might need to be implemented in our hospital settings.One such marker is vascular endothelial grown factor (VEGF).VEGF play a role in the defence mechanism of GAS infection as it contributes to GAS clearance in endothelial cells.VEGF is an angiogenic factor involved in normal physiological functions, including bone formation, haematopoiesis, wound healing and development [76]. 
In endothelial cells, the invasive efficacy of GAS was shown to be 5-fold higher compared to epithelial cells, contributing to severe symptoms of GAS that were linked to the breakdown of blood vessels and dissemination of bacteria into the systemic circulation [77].In endothelial cells, the autophagy and lysosomal functions are limited, which leads to a failure of suppression of bacterial proliferation [78].VEGF was proposed as a key factor regulating the susceptibility of endothelial cells to GAS.Interestingly, the levels of VEGF are high at the sight of local infection, while, in patients exhibiting severe symptoms such as necrotizing fasciitis, sepsis and bacteraemia, the serum VEGF levels are low.In vitro, administration of VEGF increased the survival of GAS-infected mice.Furthermore, it was found that VEGF supresses GAS proliferation by enhancing lysosomal action and xenophagy in endothelial cells [17].Given these findings, VEGF serum levels could potentially be used as a marker of disease severity as well as in the treatment of bacteraemia in the form of exogenous injections. Due to the important role of VEGF in the angiogenesis of oncological diseases or eye diseases (especially macular degeneration of the retina) and the increase in use of VEGF inhibitors (e.g., Bevacizumab) in the treatment of these diseases, the question arises whether these patients, in addition to other undesirable drugs related to the administration of anti-VEGF (cardiotoxicity), will not be at increased risk of the onset and development of invasive GAS infections [79,80].To understand the relationship between VEGF and GAS infection, further studies are needed. One of the main limitations of our study is that it was retrospective and single-centre.Another limitation is the lack of data on M-protein typing of S. pyogenes (emm types), but due to the focus of the study on real practice, these data are not available because this is not routinely examined. On the other hand, the cohort of 93 patients is unique in its characteristics of isolates obtained only from blood cultures.Based on the data obtained and the study of the patient records, we designed a unified algorithm for the laboratory examination of a patient admitted with suspected sepsis. Algorithm of Laboratory Examination of Patient with Suspected Sepsis Within 1 h of the patient's admission, a blood count with a differential count is determined, and Intensive Care Infection Score (ICIS) score, neutrophil-lymphocyte ratio (NLR), C-reactive protein, procalcitonin, lactate, iontogram, creatinine, urea, alanine aminotransferase (ALT), aspartate aminotransferase (AST), bilirubin, albumin and myoglobin are also determined in the urgent regimen.According to the results of the laboratory examination, together with the evaluation of the clinical condition, the most appropriate treatment is chosen. 
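In practice, the laboratory algorithm above amounts to checking a panel of urgent-regimen results against the cut-offs reported in this study. The sketch below encodes the published 2020-2024 thresholds (PCT > 35.1 µg/L, lactate > 5 mmol/L, myoglobin > 1039 µmol/L, creatinine > 140 µmol/L) as a simple flagging helper; it is an illustration only, not a validated clinical decision rule, and the function and field names are our own.

```python
# Cut-offs for the 2020-2024 cohort as reported in Table 2 of this study.
CUTOFFS = {
    "pct_ug_per_l":         35.1,
    "lactate_mmol_per_l":    5.0,
    "myoglobin_umol_per_l": 1039.0,
    "creatinine_umol_per_l": 140.0,
}

def flag_high_risk(labs: dict) -> list:
    """Return the biomarkers whose admission value exceeds its published cut-off."""
    return [name for name, cutoff in CUTOFFS.items()
            if labs.get(name) is not None and labs[name] > cutoff]

# Hypothetical admission labs for one patient with suspected GAS sepsis.
example = {"pct_ug_per_l": 67.5, "lactate_mmol_per_l": 7.1,
           "myoglobin_umol_per_l": 850.0, "creatinine_umol_per_l": 272.5}
print(flag_high_risk(example))   # ['pct_ug_per_l', 'lactate_mmol_per_l', 'creatinine_umol_per_l']
```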
Conclusions In our retrospective study, we showed that there was a higher mortality rate in septic patients from 2020-2024. We demonstrated the importance in this period of procalcitonin and lactate as prognostic biomarkers of worse outcome or risk of patient death. Cut-off values of PCT > 35.1 µg/L and lactate > 5 mmol/L are indicative of a worse patient outcome. Furthermore, myoglobin was shown to predict worse outcome in patients in both tested periods. The result of our analysis is the design of a unified examination algorithm for patients with suspected sepsis in routine practice to initiate timely and appropriate antibiotic therapy. The combination of penicillin with a protein synthesis inhibitor remains the drug of choice in the Czech Republic. Based on these results, we are preparing a multicentre study to confirm the validity of our conclusions.
Figure 1. Incidence of S. pyogenes (GAS) bacteraemia in General University Hospital in Prague between January 2006 and March 2024. The red dotted line shows the increase in mortality rate represented by linear regression.
Figure 2. Kaplan-Meier survival curve of overall survival (in days) among patients with group A streptococci (GAS) bacteraemia hospitalised in General University Hospital, Prague, stratified by periods during which the patients were admitted to the hospital; 2006-2019 (in blue) and 2020-2024 (in green). The Kaplan-Meier curves were compared by Logrank test; p value < 0.05 was used as a cutoff.
Figure 4. Comparison of receiver operating curve (ROC) analysis for lactate (A), myoglobin (B), procalcitonin (PCT) (C), creatinine (D), C-reactive protein (CRP) (E) and neutrophil-lymphocyte ratio (NLR) (F) to predict the worse outcome or the risk of death for patients admitted to General University Hospital in Prague between two periods, 2006-2019 (in blue) and 2020-2024 (in green). The pairwise comparison was used to compare the ROC between the two periods; p value < 0.05 was considered as significant. The * represents individual patients. The diagonal (red) line represents the theoretical ROC curve of random prediction. This diagonal line can be used as a reference for improving the data-based ROC curve versus random prediction.
Table 1. Demographics, clinical characteristics and outcomes of patients in the follow-up periods 2006-2019 and 2020-March 2024.
Table 2. Accuracy and cut-off values of individual biomarkers for predicting worse outcome or death of patients admitted to General University Hospital, Prague, in the follow-up periods 2006-2019 and 2020-March 2024.
2024-05-19T15:38:09.931Z
2024-05-01T00:00:00.000
{ "year": 2024, "sha1": "bc9e7b4f86e639fe79a36721d79c6f2f8a161e33", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2076-2607/12/5/995/pdf?version=1715775039", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "4d6d30d98c206c50424b4d09b77f1027c7b54d3e", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
19072961
pes2o/s2orc
v3-fos-license
Pulsar TeV Halos Explain the TeV Excess Observed by Milagro Milagro observations have found bright, diffuse TeV emission concentrated along the galactic plane of the Milky Way. The intensity and spectrum of this emission is difficult to explain with current models where gamma-ray production is dominated by hadronic mechanisms, and has been named the"TeV excess". We show that TeV emission from pulsars naturally explains this excess. In particular, recent observations have detected"TeV halos"surrounding pulsars that are either nearby or particularly luminous. Here, we show that the full population of Milky Way pulsars will produce diffuse TeV emission concentrated along the Milky Way plane. The total gamma-ray flux from TeV halos is expected to exceed the hadronic gamma-ray flux at energies above ~500 GeV. Moreover, the spectrum and intensity of TeV halo emission naturally matches the TeV excess. If this scenario is common to all galaxies, it will decrease the contribution of star-forming galaxies to the IceCube neutrino flux. Finally, we show that upcoming HAWC observations will resolve a significant fraction of the TeV excess into individual TeV halos, conclusively confirming, or ruling out, this model. Observations by the Milagro telescope have detected bright, diffuse TeV γ-ray emission emanating from the Milky Way galactic plane [1]. Early analyses considered the region of interest (ROI) 40 • < < 100 • and |b| < 5 • , finding a diffuse γ-ray flux of (6.4 ± 1.4 ± 2.1) × 10 −11 cm −2 s −1 sr −1 at a median energy of 3.5 TeV. This exceeds the diffuse γ-ray flux predicted from local cosmic-ray measurements by nearly an order of magnitude [2,3], and has thus been dubbed the "TeV excess" [2]. Subsequent observations detected this emission at the even higher energy of 15 TeV, subdividing the ROI into several sub-regions along the galactic plane in a smaller latitude range |b| < 2 • [4]. This excess was not observed at lower energy in subsequent measurements by the ARGO-YBJ collaboration [5], a finding that is only consistent with the Milagro measurement if the spectrum of diffuse hadronic γ-ray emission at TeV energies becomes significantly harder than the α = -2.7 spectrum found at GeV energies. Two classes of models have been posited to explain the Milagro emission. The first utilizes standard cosmic-ray production models, which are dominated by primary cosmic-ray protons accelerated by supernovae. Fitting these models to the TeV excess requires modifying cosmic-ray propagation. Work by the Galprop and Milagro teams noted that, by relaxing constraints from local cosmic-ray measurements and instead normalizing the Milagro flux to the observed Energetic Gamma Ray Experiment Telescope (EGRET) excess [6], the majority of the TeV excess could be explained by hadronic diffuse emission [4]. However, three subsequent observations have challenged this interpretation. First, the Fermi-LAT showed that the EGRET excess was an instrumental artifact [7], removing the impetus for renormalizing the diffuse γ-ray spectrum. Second, AMS-02 measurements of the local cosmic-ray proton spectrum have strongly constrained any hardening of the local cosmic-ray proton spectrum [8]. Third, ARGO-YBJ observations did not find a significant excess at lower energies, which would necessitate a sharp (and unphysical) break in the hadronic γ-ray spectrum [5]. 
An alternative method of tuning cosmic-ray diffusion employed spatially variable cosmic-ray diffusion models to avoid constraints from the locally observed cosmic-ray density [9]. In these models, the energy index of the diffusion coefficient increases with galactocentric radius. This hardens the spectrum of the hadronic γ-rays near the Galactic center, while keeping the local cosmic-ray spectrum consistent with observations. In addition to explaining the TeV excess [10], it is argued that this model provides a better fit to the low-energy diffuse γ-ray emission observed by the Fermi-LAT [9]. However, standard models of cosmic-ray diffusion also explain the diffuse GeV γ-ray emission to within systematic errors [11]. Thus, while this model can explain the Milagro data, it is best thought of as a fit to the TeV excess, rather than a result that is strongly motivated by external theory or observation. The second class of models produces additional TeV emission from a population of individually sub-threshold point sources, which may have a harder TeV γ-ray spectrum than the diffuse emission [2]. This new component would exceed the intensity of the soft-spectrum diffuse γ-ray flux at high energies, while remaining subdominant at GeV energies. However, up until this point, no source class has been uncovered that can reasonably produce both the spectrum and intensity of Milagro observations. We show that TeV γ-ray emission from young and middle aged pulsars must produce TeV emission with the intensity and spectrum observed by Milagro. This work builds upon existing observations by Milagro, the High Altitude Water Cherenkov (HAWC) telescope [12], and the High Energy Stereoscopic System (H.E.S.S.) [13]. Each telescope has observed bright, spatially extended, emission coincident with energetic pulsars. We show that this emission is typical of all pulsars, and that the total emission from the population of individually sub-threshold pulsars produces a diffuse γ-ray emission matching the intensity of the TeV excess. Moreover, the very hard spectrum of TeV emission from pulsars makes Milagro observations compatible with constraints from ARGO-YBJ and the Fermi-LAT. We show that future observations by the HAWC telescope will resolve many of the sources responsible for the TeV excess, allowing us to confirm or rule out this model in the near future. Observations of TeV Halos -Recent observations by Milagro [14], HAWC [15], and HESS [16] have found a number of TeV γ-ray sources coincident with bright Australian Telescope National Facility (ATNF) pulsars [17]. These sources have several key features. First, they have a hard γ-ray spectrum (∼E −2.2 ) consistent with γ-ray emission generated by the inverse-Compton scattering of the same e + e − population that produces the synchrotron emission observed in x-ray pulsar wind nebulae (hereafter, PWN) [18,19]. Second, they are bright, with a γ-ray intensity which indicates that ∼10% of the pulsar spin-down power is converted into e + e − pairs [19]. Third, they are large, with a radial extent that increases as a function of the pulsar age and typically extends to > ∼ 10 pc for pulsars of ages > ∼ 100 kyr. This final observation is difficult to explain in the context of PWN, whose size can be accurately modeled by calculating an equilibrium distance between the energy density of the relativistic pulsar wind (which decreases as a function of the pulsar age), and the interstellar medium. 
Observed PWN have radial extents of ∼1 pc, filling a volume three orders of magnitude smaller than the TeV emission. The scale of the TeV emission requires a new physical model, and these sources have thus been termed "TeV halos" [20]. Because the rotational kinetic energy of the pulsar is the ultimate power-source of all PWN and TeV halo emission, the high luminosity of TeV halos place strong constraints on every phase of γ-ray generation. First, pulsars must efficiently convert a significant fraction of their total spin-down power into e + e − pairs. Second, high-energy ( > ∼ 10 TeV) e + e − electrons and positrons must lose a significant fraction of their total energy to synchrotron or inverse-Compton cooling before exiting the TeV halo. Third, inverse-Compton scattering must constitute a reasonable fraction of the total e + e − energy loss rate. In the case of the well-studied Geminga pulsar, the best fit models indicate that between 7-29% of the total pulsar spin-down energy is converted into e + e − pairs, that e + e − with energies > ∼ 10 TeV lose more than 85% of their total energy before leaving the TeV halo, and that approximately half of that cooling proceeds via inverse-Compton scattering [19]. At present, TeV halos are observed from only a handful of pulsars that are either highly energetic, or relatively nearby. However, preliminary evidence strongly suggests that TeV halos are a generic feature of all young and middle-aged pulsars. Momentarily constraining our analysis to pulsars older than 100 kyr, in order to remove any contamination from supernova remnants, we note that the ATNF catalog lists 57 pulsars with reliable distance estimates that overlap the HAWC field of view. Assuming that the TeV halo flux of each system scales linearly with the pulsar spin-down energy and is inversely proportional to the square of its distance, we can produce a ranked list of the expected TeV halo flux. Five of the seven brightest systems are currently detected by HAWC, while none of the dimmer systems have an observed TeV association [20]. This is compatible with the expected flux of each system, calculated by normalizing the TeV halo efficiency to Geminga. If TeV halos are, in fact, a generic feature of young and middle-aged pulsars, the ensemble of these dimmer systems is expected to provide a bright TeV γ-ray flux which cannot (at present) be separated into individual TeV halos. Models for the Hadronic Gamma Rays -To determine whether TeV halos can produce the diffuse γ-ray emission observed by Milagro, we use a background model for the diffuse γ-ray emission from standard astrophysical processes. Specifically, we utilize the ensemble of 128 models developed by the Fermi-LAT collaboration to explain the diffuse GeV γray flux [11]. These models employ the cosmic-ray propagation code Galprop, which physically models the production, propagation, and diffuse emission of cosmic-rays throughout the Milky Way [21,22]. The primary cosmic-rays injected in these models are dominated by protons accelerated by supernovae, and thus we refer to this emission as a "hadronic background", even though it includes some emission from primary and secondary leptons. As opposed to the models developed by [9,10], these models are not tuned to explain the TeV excess, making them a natural choice to investigate the TeV halo contribution to the Milagro data. 
Additionally, these models span a wide parameter space of reasonable diffusion parameters, diffusion halo heights, molecular gas temperature models, and supernova injection morphologies. However, it is worth noting that, because these models are tuned to fit the diffuse emission spectrum observed by the Fermi-LAT, and include no additional physical features at TeV energies, they produce fairly similar predictions for the diffuse γ-ray flux. While the output from all 128 Galprop models is publicly available, these simulations were terminated at a γ-ray energy of 1 TeV. We re-run each Galprop model, extending the maximum cosmic-ray proton energy to 10 PeV, and the maximum γ-ray energy to 100 TeV. The maximum energy in our model does not affect our results. As there is no evidence (outside of the TeV excess) for new cosmic-ray physics at TeV energies, this provides a straightforward extrapolation of our understanding of GeV cosmic-ray physics to TeV energies. For each Galprop model, we calculate the average hadronic γ-ray flux in both Milagro ROIs, finding that they significantly underproduce the observed signal.

Models for the TeV Halo Flux - Because Galprop produces a physical model for the diffuse γ-ray emission from cosmic-ray propagation, Galprop self-consistently calculates the Milky Way supernova rate in the Milagro ROI. This provides a normalization for the pulsar birth rate, which we use to calculate the TeV halo formation rate. We assume that all supernovae produce pulsars. This is a mild overestimate, but it affects the TeV halo intensity linearly and is degenerate with several assumptions in this study. While pulsars obtain a significant natal kick due to asymmetries in the supernova explosion [23], a typical kick of ∼400 km/s moves a pulsar only ∼40 pc over the 100 kyr period where a pulsar produces most of its TeV halo emission. We ignore this effect. The injected cosmic-ray proton luminosity in our Galprop models lies between 0.69-1.2 × 10⁴⁰ erg s⁻¹. Assuming that each supernova injects 10⁵¹ erg in kinetic energy and 10% of this energy is emitted in cosmic-ray protons, this corresponds to a supernova rate in the Milagro ROI of 0.0021-0.0037 yr⁻¹, and a total Milky Way rate of ∼0.015 yr⁻¹. This matches observations that find the Milky Way supernova rate to be 0.019 ± 0.011 yr⁻¹ [24].

[Figure 1 caption (fragment): ..., ARGO-YBJ [5] and Milagro [1]. The background (blue) corresponds to the predictions of 128 Galprop models of diffuse γ-ray emission [11], and the diffuse contribution from TeV halos (red) is described in the text. TeV halos naturally reproduce the TeV excess observed by Milagro, while remaining consistent with ARGO-YBJ observations. The dashed red region indicates additional uncertainty due to our ignorance of low-energy e⁺e⁻ from TeV halos.]

We utilize Monte Carlo methods to produce a steady-state pulsar population normalized to the supernova rate and morphology of each Galprop model. Each model is, in turn, normalized to the observed distributions of OB stars [25], pulsars [26-28], or supernova remnants [29]. We calculate the γ-ray luminosity of each TeV halo following [30]. Specifically, we pick an initial period at time t = 0 following a Gaussian with µ_P = 0.3 s and σ_P = 0.15 s, and an initial magnetic field following a Gaussian in log-space with log₁₀(µ_B/1 G) = 12.65 and σ_B = 0.55 [31]. We then pick a random pulsar age between 0 and 10 Myr, and spin the pulsar down with a characteristic timescale [32], which in the standard (orthogonal) magnetic-dipole approximation is τ₀ = 3c³I/(B²R⁶Ω₀²), with Ω₀ = 2π/P₀, where we assume I = 10⁴⁵ g cm² and R = 15 km.
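As a hedged illustration of the population-sampling step just described, the short Python sketch below draws birth periods from a Gaussian, magnetic fields log-normally, and ages uniformly over 0-10 Myr, using the distribution parameters quoted above. The function name and the handling of the unphysical negative-period tail are our own assumptions, not part of the original analysis.

import numpy as np

rng = np.random.default_rng(0)

def sample_pulsar_population(n):
    """Draw birth period [s], surface field [G] and age [yr] for n pulsars,
    using the distributions quoted in the text. Negative periods from the
    Gaussian tail are simply redrawn (an assumption; the text does not say
    how that tail is treated)."""
    p0 = rng.normal(0.3, 0.15, n)
    while np.any(p0 <= 0.0):
        bad = p0 <= 0.0
        p0[bad] = rng.normal(0.3, 0.15, bad.sum())
    b_field = 10.0 ** rng.normal(12.65, 0.55, n)   # log10(B / 1 G) ~ N(12.65, 0.55)
    age_yr = rng.uniform(0.0, 10.0e6, n)           # uniform over 0-10 Myr
    return p0, b_field, age_yr

p0, b_field, age_yr = sample_pulsar_population(100_000)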
The period of the pulsar evolves following P(t) = P₀(1 + t/τ₀)^(1/2). The spin-down energy of the pulsar is calculated following [32]; in the same magnetic-dipole approximation, Ė(t) = Ė₀(1 + t/τ₀)⁻², with Ė₀ = B²R⁶Ω₀⁴/(6c³). We assume that 10% of the spin-down power is transferred into e⁺e⁻ pairs above 1 GeV. This is consistent with our model of Geminga, which indicates that between 7-29% of its total spin-down power is transferred into e⁺e⁻ pairs [19]. We adopt an e⁺e⁻ injection spectrum following a power law with an exponential cutoff, allowing the parameters α and E_cut to vary to fit the Milagro data. We find best-fit values of α ∼ 1.7 and E_cut ∼ 100 TeV. These results are consistent with our model of Geminga, where we found best-fit values of 1.5 < α < 1.9 and 35 TeV < E_cut < 67 TeV, as well as models of the TeV halo contribution to the H.E.S.S. galactic center γ-rays, where we obtained best-fit values of α = 2.2 and E_cut = 100 TeV [30]. We stress that these spectra all provide similar fits to the data, and these results are affected by many uncertainties, e.g. the source-to-source variation in the e⁺e⁻ injection spectrum, the cooling efficiency of TeV halos, and the systematic uncertainties in γ-ray energy reconstruction between different telescopes.

These electrons are subsequently cooled through a combination of inverse-Compton scattering and synchrotron emission. TeV halos cannot significantly contribute to either the magnetic field energy density or the interstellar radiation field (ISRF) energy density over their ∼10 pc extent [19,20], and thus we adopt values typical of the Milky Way plane: specifically a magnetic field strength of B = 3 µG (0.22 eV cm⁻³), and an interstellar radiation field energy density of 1.56 eV cm⁻³. We subdivide the ISRF into a CMB energy density of 0.26 eV cm⁻³ with a typical photon energy of 2.3 × 10⁻⁴ eV, an infrared energy density of 0.6 eV cm⁻³ with a typical photon energy of 1.73 × 10⁻³ eV, an optical energy density of 0.6 eV cm⁻³ with a typical photon energy of 0.43 eV, and a UV energy density of 0.1 eV cm⁻³ with a typical photon energy of 1.73 eV [30]. Unlike the case of individual TeV halos, where e⁺e⁻ below ∼10 TeV can potentially escape before losing their energy [19], the diffuse e⁺e⁻ population from an ensemble of TeV halos is further cooled while propagating through the interstellar medium. Assuming a standard diffusion constant of D₀ = 5 × 10²⁸ cm² s⁻¹ at 1 GV and a Kolmogorov diffusion index δ = 0.33, e⁺e⁻ travel only 0.38 kpc (E/1 GeV)^(-0.33) before losing energy, implying that e⁺e⁻ with initial energy above ∼50 GeV fully cool before leaving the galactic plane. Thus, we assume that the e⁺e⁻ population is in steady state and fully cooled. This assumption fails only at very low energies, where the TeV halo contribution is highly subdominant to the hadronic background. Using this cooled electron spectrum, we calculate the inverse-Compton scattering γ-ray spectrum and intensity, taking into account the Klein-Nishina suppression of inverse-Compton scattering [30,33].

Finally, we note that our Monte Carlo model could (theoretically) produce a single very bright, or very nearby, TeV halo that would dominate the total emission. However, this scenario is ruled out by Milagro, which would have resolved such a source. Milagro barely resolved the extended TeV emission from Geminga [14], thus we conservatively exclude contributions from any individual TeV halo with a γ-ray flux exceeding that of Geminga (4.27 × 10⁻⁹ erg cm⁻² s⁻¹).
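As a rough numerical companion to the spin-down and cooling relations above, the following sketch evaluates τ₀ and Ė(t) in the same standard dipole approximation and the cooled propagation distance quoted in the text. The constants, function names and example inputs are ours; this is an illustrative calculation under those assumptions, not the paper's actual pipeline.

import numpy as np

C = 2.998e10         # speed of light [cm/s]
I_NS = 1.0e45        # neutron-star moment of inertia [g cm^2], as assumed above
R_NS = 1.5e6         # neutron-star radius (15 km) [cm]
SEC_PER_YR = 3.156e7

def spin_down(p0_s, b_gauss, age_yr):
    """Characteristic timescale tau0 [yr] and present spin-down power [erg/s]
    in the standard (orthogonal) magnetic-dipole approximation."""
    omega0 = 2.0 * np.pi / p0_s
    tau0_yr = 3.0 * C**3 * I_NS / (b_gauss**2 * R_NS**6 * omega0**2) / SEC_PER_YR
    edot0 = b_gauss**2 * R_NS**6 * omega0**4 / (6.0 * C**3)
    return tau0_yr, edot0 * (1.0 + age_yr / tau0_yr) ** -2

def cooling_length_kpc(e_gev):
    """Distance e+e- of energy e_gev [GeV] travel before losing their energy,
    following the scaling quoted in the text (0.38 kpc at 1 GeV)."""
    return 0.38 * e_gev ** -0.33

# Illustrative inputs: a 300 kyr old pulsar at the mean of the birth distributions.
tau0, edot = spin_down(p0_s=0.3, b_gauss=10.0 ** 12.65, age_yr=3.0e5)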
We note that our model indicates that only ∼1 source brighter than Geminga should exist in our region of interest, and thus eliminating this source is consistent with Poisson fluctuations.

The Gamma-Ray Emission from the Galactic Plane - In Figures 1 and 2 we show the key result of this paper. At energies exceeding ∼500 GeV, the diffuse γ-ray flux from leptonic TeV halos becomes significantly brighter than the diffuse γ-ray flux from hadronic processes. In particular, at the energies of 3.5 TeV and 15 TeV (probed by Milagro) the flux from TeV halos exceeds the standard γ-ray background by factors of ∼3 and ∼8, respectively. Additionally, the hard spectrum of TeV halos simultaneously fits both the bright Milagro TeV emission and the dimmer ∼400-1700 GeV diffuse γ-ray flux observed by ARGO-YBJ. This is intriguing, as an unphysical break in the TeV proton spectrum would be necessary to produce any such feature with hadronic γ-ray emission. For clarity, we do not show relevant (but less sensitive) results from Whipple [34], HEGRA [35], TIBET-II or TIBET-III [36], noting that our model is consistent with the upper limits of each study. PeV γ-ray constraints from CASA-MIA [37] and KASCADE [38] may be relevant if we did not include an exponentially suppressed e⁺e⁻ spectrum above 100 TeV. We note that this cutoff is both physically motivated by PWN acceleration models [32] and preferred by our fit to the Geminga data [19].

At GeV energies, our model fits the diffuse γ-ray flux observed by the Fermi-LAT. This is expected, as hadronic emission dominates the diffuse GeV flux, and we use Galprop models that have been tuned to Fermi data. To calculate the diffuse GeV γ-ray flux, we analyze 8.5 yr of Fermi data using standard analysis cuts. We calculate the flux from the Pass 8 diffuse emission model in a binned analysis of the region 40° < ℓ < 100° and |b| < 5°, allowing the normalization of all 3FGL sources and the intensity of the diffuse and isotropic components to vary in 0.1° angular bins and five energy bins per decade spanning the range 189 MeV-47.5 GeV. The statistical errors on the diffuse flux are small, and we instead show a 30% systematic error band corresponding to uncertainties in the effective area and energy reconstruction of the Fermi-LAT [11]. In the smaller ROI, we find that the limited latitude range makes this analysis difficult, and we instead renormalize the results from our larger ROI based on the relative diffuse emission intensity at 1 GeV in both models.

[Figure 3 caption: We normalize our results to 7 TeV [15], and assume that individual TeV halos convert their spin-down luminosity into 7 TeV γ-rays with an identical efficiency as Geminga. Vertical lines correspond to the TeV halo flux of Geminga, and the projected 10 yr HAWC sensitivity. Results are shown for the total γ-ray flux (F dN/dlog10(F), black, left y-axis), which indicates that a reasonable fraction of the total γ-ray intensity stems from the brightest TeV halos, as well as for the source count (dN/dlog10(F), blue, right y-axis), which indicates that 10 yr HAWC observations are expected to observe ∼50 TeV halos in the ROI. For illustrative purposes, in this plot we show the contribution from TeV halos with individual fluxes exceeding Geminga, predicting the existence of only ∼1 such system.]
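To make the Geminga-normalized per-source flux concrete, here is a small hedged sketch that scales the Geminga TeV-halo flux to other pulsars in proportion to spin-down power and inversely with distance squared, which is how individual halo fluxes are treated in this analysis. The Geminga spin-down power and distance used as the reference, the catalogue entries, and the function name are illustrative placeholders rather than ATNF values; only the quoted Geminga flux comes from the text.

GEMINGA_EDOT = 3.2e34        # spin-down power [erg/s] (illustrative reference value)
GEMINGA_DIST_PC = 250.0      # distance [pc] (illustrative reference value)
GEMINGA_FLUX = 4.27e-9       # TeV-halo flux [erg cm^-2 s^-1], quoted in the text

def expected_halo_flux(edot, dist_pc):
    """Expected TeV-halo flux assuming identical efficiency to Geminga,
    scaled by (Edot / Edot_Geminga) * (d_Geminga / d)^2."""
    return GEMINGA_FLUX * (edot / GEMINGA_EDOT) * (GEMINGA_DIST_PC / dist_pc) ** 2

# Hypothetical catalogue entries (name, Edot [erg/s], distance [pc]), ranked by expected flux.
catalogue = [
    ("PSR A (hypothetical)", 1.0e36, 1800.0),
    ("PSR B (hypothetical)", 5.0e34, 400.0),
    ("PSR C (hypothetical)", 2.0e33, 900.0),
]
ranked = sorted(
    ((name, expected_halo_flux(edot, d)) for name, edot, d in catalogue),
    key=lambda item: item[1], reverse=True,
)
for name, flux in ranked:
    print(f"{name}: {flux:.2e} erg cm^-2 s^-1")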
To fit the γ-ray data, we utilized a power-law electron injection spectrum α = 1.7 with E cut = 100 TeV, which is slightly harder than that required to fit HAWC observations of Geminga [19] or the diffuse galactic center γ-ray emission observed by HESS [30]. The necessity for a harder e + e − injection spectrum is entirely driven by Milagro observation of bright diffuse emission at 15 TeV in the smaller ROI, which is hard to fit with an e + e − spectrum that is exponentially suppressed at ∼50 TeV. We note, however, that the e + e − injection spectrum in our model is degenerate with both the efficiency of electron cooling in the Milky Way plane, as well as the strength of the interstellar magnetic field, which provides the only alternative cooling pathway for high-energy electrons. Additional observations of TeV halos with well-known spectra will be necessary to determine the range of reasonable values for the TeV halo electron injection spectrum. In addition to calculating the total diffuse intensity, our model can also determine the individual fluxes of TeV halos that contribute to the Milagro excess. In Figure 3 we show the differential contribution to both the total TeV halo number density and the total TeV halo flux as a function of the individual γ-ray flux of individual TeV halos. We show results for the larger 40 • < < 100 • , |b| < 5 • ROI. Because we care about the emission from individual objects, in this case we do not fully cool the e + e − spectrum, but instead show the differential flux at an energy of 7 TeV, corresponding to the quoted energy scale of TeV sources listed in the 2HWC catalog [15]. The fluxes of each individual TeV halo are calculated assuming that each source converts the same fraction of its total spin-down power into 7 TeV γ-ray emission, utilizing the observed flux of Geminga as a template [19,20]. We note three critical results. First, our model correctly predicts that O(1) TeV halo as bright as Geminga should exist in the Milagro ROI. In fact, three TeV sources brighter than Geminga are observed by HAWC in this region: 2HWC J2031+415, 2HWC J2019+367, and 2HWC J1908+063. All three sources are best fit by spatially extended γ-ray templates and overlap the positions of known ATNF pulsars. These observations indicate that they are TeV halo candidates [20]. We note that the latter two sources are coincident with very young pulsars where TeV γ-ray emission may also be produced by the supernova remnant. Second, we find that 10 yr HAWC observations are likely to definitively prove this correlation, finding ∼50 individual TeV halos in the Milagro ROI. Third, we find that a significant fraction of the total diffuse TeV halo intensity is produced by systems that individually exceed 1% of the Geminga flux. Our model thus provides a clear and testable hypothesis: a significant fraction of the TeV excess will be resolved into individual TeV halos by future HAWC observations. Conclusions-In this paper, we have assumed that the TeV emission observed from Geminga is typical of young and middle-aged pulsars in the Milky Way. This hypothesis is supported by the observation of O(10) TeV halos with characteristics similar to Geminga, and the lack of any observations which rule out TeV halos in systems where they are expected. In particular, we have assumed that all pulsars younger than 10 Myr convert ∼10% of their spin-down power to relativistic e + e − pairs, which subsequently cool via inverse-Compton scattering of the ambient interstellar radiation field. 
These pulsars naturally produce a population of individually sub-threshold TeV halos that power a bright diffuse TeV γ-ray flux. Intriguingly, the total contribution of TeV halos to the diffuse γ-ray flux exceeds the total contribution of hadronic cosmic rays from supernovae at energies exceeding ∼500 GeV. Moreover, the intensity and spectrum of this emission closely matches the observed TeV excess found in the Milagro data, removing the tension between the soft proton spectrum measured by local cosmic-ray experiments and the hard proton spectrum required by TeV γ-ray observations [10]. Since extragalactic γ-rays are highly attenuated at TeV energies, our results imply that the TeV γ-ray sky is expected to be dominated by leptonic processes. This result has important implications for the source of the very-high-energy neutrinos observed by IceCube [39,40]. Rapidly star-forming galaxies (SFG) are a leading source candidate for the IceCube neutrinos [41][42][43][44][45][46][47][48][49][50][51][52]. Since no very-high energy emission has been observed from SFGs, their veryhigh energy neutrino flux must be extrapolated from GeV γ-ray observations, an extrapolation which assumes a purely hadronic model. Our model indicates that a significant fraction of the TeV γ-ray flux in SFGs is produced by leptonic TeV halos, which do not produce neutrinos. Thus TeV halos necessarily decrease the predicted SFG neutrino flux. However, the TeV halo contribution at GeV energies is unknown, and multiple uncertainties exist, including: the low-energy spectrum and intensity of TeV halos, the "calorimetry" of cosmic-ray protons in rapidly star-forming galaxies, and the energy-scale scale that statistically dominates the γ-ray spectral determination from SFGs. However, we note that current models have found a best-fit SFG γ-ray spectral index of α = -2.3, which already implies that SFGs contribute less than ∼10% of the PeV neutrino flux [49,52]. Even small γray contributions from TeV halos will continue to make SFG interpretations of the IceCube neutrinos more untenable. Finally, we note that this model is imminently testable. In particular, our analysis indicates that most TeV γ-ray sources are TeV halos [20] -and we predict that 10 yr HAWC observations will observe O(50) TeV halos coincident with radio pulsars [20]. Furthermore, these observations will resolve a significant fraction of the TeV excess into individual TeV halos, clearly confirming, or ruling out, the TeV halo origin of the Milagro excess.
Reflections on end-of-life dialysis

ABSTRACT The world population is aging and diseases such as diabetes mellitus and systemic arterial hypertension are increasing the risk of patients developing chronic kidney disease, leading to an increase in the prevalence of patients on dialysis. The expansion of health services has made it possible to offer dialysis treatment to an increasing number of patients. At the same time, dialysis survival has increased considerably in the last two decades. Thus, patients on dialysis are becoming more numerous, older and with a greater number of comorbidities. Although dialysis maintains hydroelectrolytic and metabolic balance, in several patients this is not associated with an improvement in quality of life. Therefore, despite the high social and financial cost of dialysis, patient recovery may be only partial. In these conditions, it is necessary to evaluate the patient individually in relation to the dialysis treatment. This implies reflections on initiating, maintaining or discontinuing treatment. The multidisciplinary team involved in the care of these patients should be familiar with these aspects in order to approach the patient and his/her relatives in an ethical and humanitarian way. In this study, we discuss dialysis in the final phase of life and present a systematic way to address this dilemma.

In more advanced stages, usually when the prognosis is less than six months, the patient may enter a phase in which there is a need for care in back-up units or intensive home care, often followed by death and mourning. In the dialysis treatment onset phase, at least three variables must be considered: the patient; the family, the caregiver or the legal guardian; and the multiprofessional team. Each of these variables can present conflicts, whether personal, between family members, between caregivers and the legal guardian, or even among members of the multiprofessional team. These three vertices of the problem can exert influences on each other, making decision-making even more complex and difficult (Figure 1).

What to consider when dialysis treatment is prescribed

When a patient is about to start chronic dialysis treatment, some considerations must be made.
Perhaps the main and most important is the answer to a question that involves medical and ethical aspects: will dialysis increase the patient's time and quality of life or simply prolong the death process? The problem is not only limited to the beginning of dialysis. Often, discontinuing dialysis can be as difficult a decision as initiating treatment. So, a similar reasoning can be applied to the patient who is already undergoing dialysis, whose clinical evolution is not being adequate: is dialysis increasing the patient's time and quality of life or simply prolonging the death process? Therefore, it is necessary to analyze and balance the concepts of quality and quantity of life -which are subjective and can vary over time, especially when the degree of recovery provided by the treatment instituted is inadequate. Often, an acute event or other serious illness is necessary to not indicate or stop dialysis. assessment Of the risk Of death On dialysis Several instruments have been proposed and used to evaluate the risk of death in patients undergoing dialysis. One of the simplest, most effective and most used is based on the answer to the question: "Would you be surprised if that patient died in the next twelve months?" Two answers fit the question: 1) yes, I would; and 2) no, I would not be surprised. 2 The study by Moss et al., published in 2008, involved 147 patients on hemodialysis in three different units, showed that the risk of death was 3.5 times higher when the answer to the question was "no, I would not be surprised" (p = 0.01). 2 Cohen et al, in a study published in 2010, encompassing 512 patients undergoing hemodialysis at five clinics, also using the negative response to the question: "Would you be surprised if this patient died in the next six months? ", they showed that the answer" no, I would not be surprised "increased the risk of death by 2.7 times. 3 Therefore, the simple answer to this question can be a powerful tool to evaluate the risk of mortality in hemodialysis. However, the question poses a high degree of subjectivity, since the answer will depend on the observer's own experience. Other variables such as age 4 , serum albumin, 5 Charlson's comorbidity index, 6 and Karnofsky's performance scale 7 have also been used to assess mortality risk in dialysis patients. In the study by Moss et al., patients, for whom the answer to the question was "no, I would not be surprised", had significantly higher age and lower Charlson comorbidity index and serum albumin and Karnofsky's performance scale than the patients on whom the answer was: "yes, I would be surprised." 2 Cohen et al. demonstrated in their study that not only the surprise question, but also the reduction of serum albumin and increased age, in addition to the diagnoses of peripheral vascular disease and dementia, were associated with an increased risk of mortality. 3 These observations confirm the usefulness of the surprise question in assessing the mortality risk. However, depending on the observer's experience, false-positive and false-negative results will be common, causing variations in the sensitivity and specificity of this instrument to assess the risk of death on dialysis. In 2009, Couchoud et al. published a clinical score to predict the six-month prognosis in elderly patients over 75 years of age and chronic kidney disease, initiating dialysis. 
8 The index was composed of nine risk factors: body mass index (≥ 18.5 or < 18.5 kg/m 2 ), presence or absence of diabetes mellitus, congestive heart failure, peripheral vascular disease, arrhythmias, active neoplasia, severe behavioral disorder, total dependence for locomotion and the context of dialysis onset (planned or unplanned). The authors reported that, in the index validation population, patients with scores 0, 1, 2, 3 to 4, 5 to 6, 7 to 8 and ≥ 9 had mortality rates of 8, 10, 17, 21, 33, 50 and 70%, respectively. In patients with index ≥ 7, the dialysis suspension was the cause of death in 15% of the cases. 8 Nowadays, calculator websites are available to predict the mortality risk of hemodialysis patients, providing information such as age, serum albumin, with or without dementia or peripheral vascular disease, and the answer to the surprise question. The program estimates the expected survival for 6, 12 and 18 months (touchcalc.com/calculation/sq). In this review, more important than an index that effectively evaluates the risk of death, is an index that enables establishing a strategy regarding the preparation of the patient, family, caregiver, legal guardian and multiprofessional team in relation to future events, and indicate how to conduct each case within the established treatment planning. tReAtment optIons foR AdvAnced chRonIc kIdney dIseAse From stage 4 chronic kidney disease, glomerular filtration rate less than 30ml/min, the clinician or family physician in association with a nephrologist physician are authorized to begin planning for future renal replacement therapy. Two conditions are possible: 1) maintenance of the conservative treatment to the end of life, and 2) renal function replacement therapy through dialysis or renal transplantation. In any option, for patients at higher risk of death, it is necessary to begin the preparation of an advanced care plan, a palliative care plan and an end-of-life care plan that includes planning in the terminal care phase. the "gOOd death" cOncept The concept of good death is broad. Generally, death is considered a good death when it happens without pain, brief, in peace, without avoidable suffering for the patient, the family and the caregiver, in the company of the loved ones and in the place that the patient chose to die. Within this process, the local medical, cultural and ethical standards must be respected. [9][10][11][12][13] The main barriers to a good death are: inadequate control of pain and other symptoms; emotional stress for the patient and the family; lack of attention to family dynamics; lack of knowledge of the patient and the family regarding end-of-life care and lack of an advanced care plan. 14,15 It is therefore possible to conclude that a bad death is accompanied by unnecessary suffering, at odds with the wishes of the patient and the family, having a feeling that the norms of decency have been faced. In that sense, an advanced care plan helps reduce possibilities for the patient to have a bad death. advanced care plan The advanced care plan aims to establish a process of communication between the patient, the family, the health care team, and other important people regarding the patient's wishes for end-of-life care. The main goal is to enable patients to have control over their health care, preparing both, the patient and the family, for a good death. 
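Returning to the prognostic score of Couchoud et al. described at the start of this section, the sketch below simply maps an already-computed score to the six-month mortality rates reported for the validation cohort. The per-factor point weights belong to the original publication and are not reproduced here; the band boundaries and rates are the ones quoted above, while the function and variable names are ours.

MORTALITY_BANDS = [            # (min_score, max_score, reported six-month mortality)
    (0, 0, 0.08),
    (1, 1, 0.10),
    (2, 2, 0.17),
    (3, 4, 0.21),
    (5, 6, 0.33),
    (7, 8, 0.50),
    (9, None, 0.70),           # None = no upper bound
]

def six_month_mortality(score: int) -> float:
    """Return the reported six-month mortality for a given prognostic score."""
    if score < 0:
        raise ValueError("score must be non-negative")
    for low, high, rate in MORTALITY_BANDS:
        if score >= low and (high is None or score <= high):
            return rate
    raise ValueError("unreachable")

print(six_month_mortality(7))  # 0.5: the band in which dialysis withdrawal caused 15% of deaths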
15,16 The advanced care plan implementation should be initiated when the health care team answers that they would not be surprised if the patient died in the following 12 months. Before the advanced care plan is implemented, it is critical to evaluate the patient in relation to eventual cognitive impairment. In addition, conditions such as anxiety, depression and fear tend to lower the pain threshold, and cause some confusion as to when to start the advanced care plan. 15 The main care plan has as main attributes: 1. Expand the patient's and family's knowledge about terminal chronic kidney disease in relation to end-of-life aspects and care options; 2. Identify the patient's priorities for end-of-life care and develop a plan of action; 3. Identify the person who will take over and participate in medical decisions in case of patient incapacity; 4. Help the person in charge to understand his/her importance; 5. Prepare the patient and family for death; 6. Enable the patient to have control over his/her health care; and 7. Relieve the burden on loved ones by strengthening interpersonal relationships. palliative care plan The palliative care plan aims to improve the quality of life of the patient and family in the face of a fatal illness. 17,18 This is done by preventing and alleviating suffering resulting from early identification, evaluation and treatment of pain and as well as physical, psychological and spiritual problems. It is highly recommended that palliative care be extended to caregivers and remains active during mourning. 19 The palliative care plan does not exclude the presence of an active treatment. In the specific case of chronic end-stage renal disease, it should be made available for patients who have chosen conservative treatment, those who have decided to stop dialysis, and those who have decided to maintain dialysis treatment. 15 Palliative care can be offered in the hospital, clinics or backup hospitals, or at the patient's home. The main objectives of a well-structured palliative care plan are: 1. Relieve pain and other distressing symptoms; 2. To regard life and death as a normal and natural process; 3. Do not hasten or delay death; 4. Integrate psychological and spiritual aspects in patient care; 5. Guarantee support in the family process of coping with the illness and the period of mourning; 6. Provide a multiprofessional team to meet the needs of the patient and the family, including the mourning period; 7. To improve the quality of life, seeking to positively influence the course of the disease; and 8. Understand and better manage distressing clinical complications, either alone or in combination with other conventional or non-conventional treatments. The palliative care plan should be developed by a palliative care physician using trained providers to specifically assist patients who need this type of care. In the absence of these professionals, caregivers who are not specialized in the field may offer the service after receiving adequate training and instruction. end-Of-life care and terminal care plan The end-of-life care plan aims to provide patients with progressive, incurable diseases, such as terminal chronic kidney disease, care that will enable them to live as well as possible until death. 15 This action should be complemented with a plan for end-of-life care that aims to offer the patient, in the last few days or weeks of his life, comfort and symptom relief, enabling the family and patient to bid farewell. 
15 The primary objective of the end-of-life care plan and the terminal care plan is to prepare and offer a good death. 18 suppOrtive and spiritual care Supportive care is non-medical care aimed at helping patients cope with the diagnosis of chronic end-stage renal disease, so that they can express and understand their emotions. This measure enables the patient to be strengthened through the power of control and choice. 15,18 Often, especially in developing countries, it is necessary to include financial support for the patient and the family. Finally, spiritual care should be offered to meet the needs of the patient, helping them deepen their faith regardless of religious belief. 15,18 reflectiOns fOr the physician On the death prOcess It is critical for the multiprofessional team, and particularly for the attending physician, to understand that caring for those who are dying is an integral and important part of health care, 9 which should involve and respect the patient and all who are close to him. Undoubtedly, for the physician to offer the patient who is dying a good end-of-life care, it is necessary to have interpersonal skills, clinical knowledge, technical support, information based on scientific evidence, personal and professional values and experience. In this sense, changing the vision and culture of an organization is a great challenge, but often a condition for individual change. Therefore, healthcare professionals have a special responsibility to educate themselves in the processes of identification, management and discussion about the final phase of a fatal illness. 9 More comprehensive studies in the future will be needed to expand clinical, cultural and organizational knowledge, as well as to develop learning that will incorporate different practices that can minimize the suffering of those who are dying. The burnout syndrome refers to a condition of physical and mental exhaustion, with a depressive aspect to it, closely related to professional life, which mainly affects healthcare professionals, such as doctors, nurses, physiotherapists, social workers and nutritionists. 20 Daily coexistence with the suffering of others generates a kind of defense mechanism, and the professional tends to become less sensitive to physical and spiritual pain. However, there cannot be absolute insensibility, since it is not in accord with the primary function of medicine. cOnservative treatment in chrOnic kidney disease In stage 5 chronic kidney disease, a glomerular filtration rate of less than 15 ml/min, is indicative of the need to initiate renal replacement therapy, conservative treatment is defined as the set of actions and care offered to the patient that do not include dialysis or renal transplantation. The decision not to initiate replacement renal therapy may be made by the patient himself, when cognitive conditions enables such a decision, or by a family member or legal guardian previously vested with that authority. Conservative treatment is a holistic planning centered on the patient with stage 5 chronic kidney disease, which actions aim at: Conservative treatment of stage 5 chronic kidney disease should begin early, when it is intended to provide quality treatment to patients who have not benefited from dialysis or have not opted for it. 21 The team involved in providing these services should be multiprofessional, composed of: physician nephrologist, family doctor, nurse, social worker, psychologist, nutritionist and a religious and spiritual support service. 
All staff members must have training, expertise, and availability for care in the hospital, the back-up hospital, nursing homes, or the patient's home. Experience has shown that after the introduction of a structured plan for conservative treatment of chronic kidney disease, the number of hospital admissions, visits to emergency units and ICU admissions decreases; hospitalizations are being carried out more in back-up hospitals; the 30-day rehospitalization rate is lower; the number of deaths in intensive care units is lower; and consequently reduces treatment cost. 1,22 Finally, it should be noted that, in the case of patients with absolute indication for initiating dialysis, the median survival time in conservative treatment is approximately 6 to 7 months. 23 In this period, renal treatment should be continuously adjusted according to patient evolution, and the multiprofessional team should initiate the advanced care plan followed by the palliative care plan, as previously established. dialytic treatment Of chrOnic kidney disease Initiating or discontinuing dialysis treatment should be a shared decision involving the medical staff, the multiprofessional team, the patient, the family, the caregiver and, if appropriate, the legal guardian. Therefore, over time, it is necessary to establish a physician-patient relationship that enables shared decision-making. 24,25 For the patient and others involved, being adequately informed is fundamental. In this sense, every patient with stages 4, 5 and 5D chronic kidney disease should be informed about the diagnosis and treatment options and, particularly, for patients in stages 5 and 5D, a prognostic estimate should be offered according to the current clinical condition. 16,24,25 Creating an environment conducive to shared decision-making in association with a fully informed patient will enable an advanced care plan. It will always be possible to consider not initiating or discontinuing dialysis in the treatment of chronic kidney disease when: 1. The decision-making patient voluntarily refuses dialysis or requests that it be discontinued; 2. A patient who, although at a certain moment of evolution does not have full capacity to make decisions, has previously, orally, preferably written, refused to start dialysis or asked to discontinue it; 3. A patient who, without decision-making ability, has adequately indicated a legal guardian who refuses or requests that the dialysis be discontinued; and finally, 4. The patient with irreversible and profound neurological damage that is unconscious or does not show signs of sensitivity, intentional behavior and self-awareness and that of the environment. The decision not to initiate or discontinue dialysis can be made easier for patients who have a very poor prognosis or for whom dialysis cannot be offered safely. [24][25][26] This is the case for patients with inability to understand (advanced dementia, those pull the needles or the dialysis catheter); those with very unstable hemodynamic condition (severe hypotension); those in need for sedation to perform the dialysis procedure; with non-renal terminal disease (consider that some patients in this condition may benefit from choosing to undergo dialysis); and, finally, patients over 75 years of age with chronic kidney disease who have two or more of the following criteria: 1. Negative answer to the surprise question ("no, I would not be surprised if the patient died"); 2. Charlson comorbidity index ≥ 8; 3. 
Acute functional disability with Karnofsky index ≤ 40; and 4. Severe malnutrition with serum albumin < 2.5 g/dl. Therefore, the use of some additional instruments may help in deciding whether to offer dialysis to a particular patient. An assessment to estimate the presence and degree of depression, the degree of cognitive impairment, the degree of comorbidities (Charlson index), the degree of functional disability (Karnofsky's index), the frequency and severity of symptoms during the dialysis sessions, and a mortality predictor for the next six months can and should support this decision. Despite the use of these instruments, there will always be cases where there will be no consensus on what should be done. When this occurs, consideration should be given to providing a time-limited trial of dialysis for the patient who presents an uncertain prognosis or for whom a consensus decision has not been made. 24,25 This means that it is necessary to establish an action plan for conflict resolution when there is no agreement as to what decision should be made in connection with the dialysis. This plan should include the use of a uniform approach among those involved in the communication about the diagnosis, prognosis, treatment options and objectives.

Planning, initiating, and discontinuing dialysis

Every patient with stages 4 and 5 chronic kidney disease should have a prognostic evaluation and an estimate of quality of life with and without dialysis. Whatever the outcome of this evaluation, conservative treatment should be offered to the patient and family, regardless of whether they choose not to initiate dialysis or to discontinue it. In patients with evident clinical worsening despite dialysis, clinical follow-up will allow recognition of the imminent or immediate need for end-of-life care, regardless of whether or not clinical worsening occurs in the presence of a catastrophic acute event. Therefore, it is imperative to maintain a frequently updated record of supportive care, especially for patients with a life expectancy of less than one year, registering comorbidities, functional condition, evidence of malnutrition, cognitive status in cases of advanced age, and the answer to the surprise question. Outlining the advanced care plan, especially for patients who have chosen conservative treatment or those who are worsening despite dialysis, is essential in order to standardize posture and conduct among multiprofessional team members, the patient and the family, the caregiver, and the legal guardian. Discontinuing dialysis treatment is one aspect to be included in the advanced care plan and a decision to be made within a life-long care plan. This decision should always be implemented in a multidisciplinary environment, involving the patient, the family, the caregiver, the legal guardian, the nephrologist and the family doctor. Deciding not to start or to discontinue dialysis is ethical and clinically acceptable, as long as the process is supported by a shared decision. Conditions that may influence this decision, such as depression, physical pain, and potentially reversible social factors, should be evaluated and controlled. It is prudent and fundamental to emphasize that the decision not to initiate or to discontinue dialysis can only be implemented after careful evaluation to exclude diagnoses of depression or burnout syndrome in any of those involved.
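As a hedged illustration of the screening rule for patients over 75 years of age described earlier in this section (two or more of: a negative answer to the surprise question, Charlson index ≥ 8, Karnofsky index ≤ 40, serum albumin < 2.5 g/dl), the sketch below encodes that check. The field names are illustrative; this is a decision-support aid under the stated criteria, not a substitute for the shared decision-making process the text requires.

def meets_high_risk_criteria(age, surprised_if_dies, charlson, karnofsky, albumin_g_dl):
    """Return True when a patient over 75 meets two or more of the four criteria
    listed in the text for considering not initiating or discontinuing dialysis."""
    if age <= 75:
        return False
    criteria = [
        not surprised_if_dies,      # "no, I would not be surprised"
        charlson >= 8,
        karnofsky <= 40,
        albumin_g_dl < 2.5,
    ]
    return sum(criteria) >= 2

# Example: an 80-year-old with a negative surprise question and Charlson index of 9.
print(meets_high_risk_criteria(80, surprised_if_dies=False, charlson=9, karnofsky=70, albumin_g_dl=3.0))  # True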
After dialysis discontinuation, the patient, the family, the caregiver and the legal guardian must be guaranteed continuation of supportive care and/or palliative care. In end-of-life care, good communication, symptom relief, psychological and spiritual support tailored to the needs of the patient and family, and, where possible, care for the patient and family at the place of their choice are the actions needed to address the issue. In addition, it is important to offer a culturally appropriate grief service to the family, caregiver, and legal guardian after the denouement. Finally, shared decision, advanced care, palliative care, end-of-life care and terminal care plans must be updated at least annually, or more frequently if necessary. In these plans, the patient should always be properly informed that he or she has the right to refuse dialysis, even if the medical staff is not in agreement with the decision, while the medical staff must be aware that they also have the right to refuse to provide dialysis when the benefits do not justify the risks, even when the patient or family requests treatment.

Audit tools

Several particularities discussed in this review involve ethical and legal aspects, 27 therefore, it is appropriate to establish audit parameters that properly evaluate the results and protect the multiprofessional team. 16 Different indexes may be used to monitor the program's performance.

Final considerations

Many patients with chronic kidney disease may be kept on conservative treatment, without initiating dialysis, in their best interest. On the other hand, dialysis patients may also benefit from access to supportive care at the outpatient ward, home, back-up hospitals or respite care. However, in any case, for the patient approaching the end of life, offering palliative care becomes essential. The layman will always have the idea that end-of-life dialysis refers to situations involving, especially, elderly patients. This is not true. Regardless of age, in any individual with end-stage kidney disease who is progressively getting worse and/or in life-threatening clinical worsening, the aspect of end-of-life dialysis can and should be addressed. When dialysis is not initiated or is stopped, conservative and palliative care programs emerge as strategies for managing chronic dialysis dependence. When such care is not offered to patients with end-stage kidney disease, there is significant suffering, which generates psychosocial burnout for caregivers, the family, and the community. Nephrologists should be familiar with supportive and palliative treatment options, understanding them as part of their professional responsibility. Physicians have a duty to provide the patient and all decision-makers with sufficient information about treatment options. This means explaining all the treatment modalities available, with their benefits and harms, and the types and consequences of dialysis and alternatives, such as renal transplantation and non-dialytic conservative treatments. The discussion should also include the potential physical, psychosocial and socioeconomic consequences of each choice. In addition, the patient and family should have time to consider options and clarify doubts, especially before making critical decisions, such as initiating or discontinuing dialysis. At the same time, those involved should be aware that these decisions are open and can be reviewed at any time.
Therefore, initiating or discontinuing dialysis should not be considered irrevocable decisions; however, those involved in the initial decision should be advised that this may impose potential limitations on future treatment options. To establish a minimum threshold of benefits to be achieved by dialysis, below which the sacrifices of initiating or maintaining dialysis are disproportionate or even unacceptable within the sociocultural context, can aid in decision-making. Training in communication and making ethical decisions about offering end-of-life care can help the doctor. It must be borne in mind that futile treatment imposes financial cost and undermines efforts to provide health care to all who need it. Very young or very old patients, those with multiple comorbidities, patients who reach the medical care already in the terminal stage of the disease, individuals with low educational level and socially and culturally marginalized groups may present barriers to participate in the decision making process. Other limitations include impairment or cognitive immaturity and lack of information on the prognosis of treatment in specific groups of patients. Therefore, initiating or discontinuing dialysis involves decisions that go beyond the specific action of the medical act. Clinical decision guidelines, especially regarding discontinuation of dialysis, resuscitation orders, and limited time trials of dialysis should be developed to assist the physician and multiprofessional team in facing such situations, without going beyond the limits of responsibility and ethics. Nephrologists should refer the patient to a supportive service whenever they feel unable to make decisions or to provide for adequate support. For the nephrologist, this implies education and knowledge about shared decisions, advanced care planning, end-of-life counseling, and specific end-of-life medical care. At the same time, other dialysis unit professionals should also be trained to make clinical decisions in a shared manner, including other teams only indirectly involved in dialysis.
Distributed energy technologies, decentralizing systems and the future of African cities

Within global debates, decentralization with respect to renewable electricity is positioned as pivotal in overlapping calls for decarbonization, energy access and just transitions. The promise is that technologies that are smaller and distributed will deliver a range of benefits, including enabling infrastructural innovations and expanding sustainable access, notably by empowering local authorities. While shifting to low-carbon systems is imperative, in this paper we call for attention to the complexity, opportunities and risks of commonly celebrated experiments in distributed electricity technologies when applied to African contexts. We draw on the case of Uganda, unpacking a series of four innovative electricity projects currently under way. For each case, we look at the actors involved and the imagined relationship between the projects and the incumbent grid. From these cases, we argue that, in this African context characterized by contested urban governance and fragmented networks, careful attention to supporting urban-scale institutional and infrastructural development is necessary, although in many cases bypassed.

As global systems respond to the existential threat of climate change, the transition to renewable energy is positioned as a lever for development, decarbonization and urban futures for Africa. The potential urban impact arises from the promise and promotion of 'decentralization', which is often identified as an essential component of this energy transition. (1) The promise, apparent in much of the discourse on power system decentralization - academic, policy and popular - is that technologies that have materially smaller components overcome the myriad challenges associated with large network expansion. (2) Moreover, the material reconfiguration towards smaller, modular, distributed and hybrid systems and technologies is purported to enable - or even compel - a rescaling of the governance configurations, away from national and towards sub-national and sub-urban levels, with a wide range of administrative, political and economic benefits. (3) The rise of municipal authorities as critical actors in decarbonization and energy policy (4) As a result of this promise in the sub-Saharan African context, the rescaling of energy systems is positioned at the intersection of urban development and the sustainable energy transition. (5) A plethora of external and local actors operate in this space, driving agendas, producing policies and plans, financing projects and capacitating various modalities of implementation. For example, in recent years a wide range of mayoral programmes have been developed related to the just transition. However, as we show in this paper, celebratory narratives about the benefits of distributed technology, and the concomitant programmes for developing capacity and infrastructure, require considered engagement with Africa's empirical conditions and potential futures. (6) In this context, the assumption that distributed technology will strengthen urban governance and development in Africa needs further interrogation.
Interrogating this proposition with grounded insights, this paper draws specifically on the case of Uganda. The country has seen a significant rise in the deployment of distributed solar photovoltaic (PV) technologies, under the auspices of the energy transition. This has resulted in increasing energy access, as well as a range of other new opportunities for public infrastructure, such as street lighting and the electrification of transport. The empirical contribution of this paper hinges on a set of four distinct illustrative examples of distributed energy technologies. While these are specific to Uganda, similar initiatives are being implemented in other African countries. The first two examples discussed are mini-grids, one of which (the Kiwumu example) is extending the urban network of the capital city of Kampala; and the other (Lolwe Island) is creating a standalone network. The other two examples explored are embedded within the urban context of Kampala. The first of these examines e-mobility efforts driven by start-ups, and the second focuses on how the Kampala Capital City Authority (KCCA) is attempting to utilize new technologies to support its operations. These illustrations are not a comprehensive characterization of the country's electricity sector, nor do we aim to provide a normative assessment of the most appropriate interventions in Uganda or elsewhere. Instead, we focus on the relationship between distributed technologies and centralized networks; the actors involved in the projects; and the risks and potential that live in these projects.

Although these are important projects with good intentions, we show overall that there is a tendency with distributed technology to bypass local governments, and by extension to reduce their ability to contribute to the energy transition or to plan infrastructure effectively at the urban scale. In contrast to discourses that celebrate city leadership, urban and local governance systems in Uganda (and much of Africa) are not empowered to significantly shape the emerging infrastructure configurations, nor to harness their purported benefits. Instead, a network of foreign governments, private sector and national actors are driving these hyper-local projects. These projects have a mixed relationship with the grid, some supporting it while others may undermine it. Opting for an approach which acknowledges that Africa's urban energy transitions are deeply uncertain, we draw attention to the multiple, unfolding and complex potential of technologies in African contexts.

II. Methodology: a situated, multi-case study of technological distribution

This research formed part of a three-year project which focused on the relationship between on- and off-grid energy and water in Uganda and Sierra Leone.(7) The insights for this paper, while contextualized within the wider project, are specifically focused on electricity in Uganda, and reflect the focused efforts of the authors to understand the governance dimensions of distributed technological innovations. This approach is inspired by Science and Technology Studies (STS) with its attention to the ways that technologies are political and embody particular future imaginaries.(8) The empirical content for the paper draws heavily on a more descriptive and policy-focused working paper entitled 'The Cable is Coming: Distributed energy technologies, decentralizing systems, and the future of African cities', written by the authors.(9)
Similar to the working paper, the overarching methodology deployed for this research is a multi-case study, drawing on four unique instances of technological innovation in energy provision. The cases are not intended to be comparative in nature (and thus are not 'like for like'). Rather, each case is treated as a unique example, shedding light on the overall phenomenon of electricity technology in Uganda.

The paper is empirically driven, and the authors conducted field research in Uganda in January 2022. This included 15 interviews and two focus groups with experts and practitioners involved in the electricity sector. They included representatives of donor groups, mini-grid companies, non-governmental organizations (NGOs), start-up companies and the distribution utility, as well as city officials. For each of the case study projects, field trips were undertaken to the project sites to better understand the technology and engage with its materiality and uses. In the case of Lolwe, the authors attended the launch of the mini-grid on the island as participants, documenting the experience and analysing the event. During field visits to the four case study sites, end users were selectively engaged (for example, by speaking informally to people who took loans to purchase electrical appliances, motorcycle drivers using electric vehicles, and local community members who were waiting to be connected to recent electricity access projects).

In terms of data analysis, the interview transcripts, photos, field notes and supplemental material (for example, project reports, promotional material, monitoring and evaluation documents) were analysed within the conceptual framework and key questions specific to this paper. In this analysis, the paper departs from the approach of the working paper and incorporates more scholarly insights and contributions. The focus here was on identifying the impulses behind the innovations, the imagined energy futures that each technological experiment held within it, and finally the prefigurative implications of such approaches to solving energy challenges.

To place these cases in the context of Africa's sustainable energy transitions and debates, insights from the Ugandan case were triangulated with the authors' experience.(10) This triangulation is further bolstered with a selective review of policy and institutional work related to the energy sector, urban governance, sub-national finance and distributed infrastructures over the past 10 years. The authors recognize that, since the time of conducting this research (2022), the Ugandan context - and the wider African energy landscape - has evolved, and many of these projects and processes may look different today. However, the specific illustrations that anchor this paper reflect common expressions of the decentralization transition in the urban African context, thus retaining their conceptual and empirical utility.

III. Framing the energy transition in (urban) Africa

Across the African continent, electricity poverty is a persistent problem; close to 600 million people lack electricity and energy access remains a chronic challenge.(11) Starting from a low base of only 0.6 per cent of global renewable energy investment (US$ 434 billion in 2021), there are various estimates of the investment requirements needed to address infrastructure and access deficits. Among them, the International Energy Agency (IEA) estimates a need for US$ 190 billion annually from 2026 to 2030 for all climate finance, with two-thirds going to sustainable energy.(12)
Networked infrastructure is a critical enabler of electricity access for households, as well as for commercial and industrial users. However, investment in strengthening and expanding electricity networks has been limited, particularly for transmission and distribution infrastructure. As a result, transmission and distribution networks are often limited and fail to adequately support the economic development aims of governments. In African countries, limited distribution networks (many unable to keep up with urban expansion, and excluding peri-urban and informal settlements) prevent households and businesses from accessing expanded electricity generation, and even in areas that are network connected, the age and state of networks may present challenges such as blackouts and surges. Combined with the need to decarbonize generation, this is the 'problem' that modular, small-scale renewable energy options aim to solve. Descriptors such as 'sustainable', 'low-carbon', 'climate-compatible', 'resilient' and 'green' all and often interchangeably characterize this localization of global sustainable energy agendas in Africa.(13) Added to this, there is increasing interest in the 'justness' of this energy transition, with attention to equitable energy access, the ownership of industrial value chains and networks, global geopolitics of finance and development, job creation and local democratization.(14)

Literature on the global sustainable energy transition foregrounds cities around the world as critical to this multi-pronged effort.(15) Within this, the potentially concomitant rescaling of electricity infrastructure and its governance, ownership, operation and financing is characterized as 'decentralized', 'polycentric' or 'localized' governance.(16) Africa's accelerating urbanization and persistent energy need provide an enticing context onto which this potential is projected. However, since much of the empirical and speculative work to characterize and shape the energy transition emanates from the global North,(17) it often overlooks fundamental differences that require attention and consideration.(18)

To more accurately understand the relationship between distributed technologies (particularly with respect to small-scale renewable generation), decentralized governance and the role of cities in African contexts, it is important to understand existing urban governance arrangements in Africa.(19) Despite attempts at strengthening local government through decades of decentralization reforms,(20) local government remains institutionally and fiscally weakly capacitated in many if not most African countries. Exceptions do exist, for example South African metros. A wide range of diverse actors - from centralized authorities to multilateral lenders - are at play in African urban spaces, creating complex and hybrid governance arrangements.(21)
In terms of energy, city authorities in most African countries, with the notable exception of South African local governments, often have minimal capacity to shape urban systems, lacking as they do the mandate or finances to invest in infrastructure networks. Local governments in Africa most commonly also have no electricity-related mandates beyond the provision of street lighting, and often play a minimal role in national electricity systems. Power sectors on the continent mostly comprise centralized national utilities and agencies, international finance actors and private companies in global supply chains,(22) and national power generation has tended to take the form of very large projects, facilitated by these networks of actors. Enduring underdevelopment of and underinvestment in national energy systems, and the electricity systems that constitute part of these systems, across the majority of African countries are the result of various interconnected dynamics, including extractivist investment strategies by foreign public and private actors (for example, power generation to support mining for the export of unprocessed commodities), and blanket characterization of the continent as a whole as high-risk for investment. With pressure on national governments to nonetheless attract investment, drive national development projects and interact with global markets, development and financial actors and interests, urban networks can, as a result, suffer from perpetual underinvestment. Additionally, the centralization of governance and interface with external actors often means that there is limited capacity for local authorities to support just transitions.

On the ground, many African urban energy systems reflect what Jaglin(23) and others refer to as "electrical hybridity" (or "heterogenous configurations" in other texts).(24) These phrases capture the diverse material practices, networks and arrangements that animate this space. Practically, deficits in electricity access, affordability and reliability are supplemented by users with biomass and liquid fuels, acquired through formal and informal markets. There is also an increasing penetration of private generation (large- and small-scale) and energy intermediaries in some countries, as well as donor-driven distributed renewable energy access projects. Small-scale renewable energy options include very small equipment like clean cook stoves and solar lamps. They also cover solar home systems and micro- and mini-grids; the latter combining generation and distribution (and possibly storage) to serve a group of energy users. Some operate where networked infrastructure is not in place at all (for example the use of mini-grids in rural areas); others supplement intermittent, insufficient or unaffordable electricity.(25) Most of the funding for off-grid energy in sub-Saharan Africa is still flowing into the more basic solar options like lamps and very basic home systems.(26) These technologies contribute to energy hybridization, allowing overlapping and multiple pathways to energy access. These land in contested urban governance contexts, which shape their adoption and impacts.(27) With this context in mind, we take a closer look at Uganda's systems.
IV. Uganda's electricity system

Uganda provides an interesting context to explore issues of rescaling electricity and the relationship between decentralized governance and distributed technology. The East African country is landlocked, but it includes large lakes, or sections of lakes, such as Lake Victoria, and a number of inhabited islands within them. The country has had the same president, Yoweri Kaguta Tibuhaburwa Museveni, since 1986, and is classified as a 'Least Developed Country' (LDC), enabling it to access various forms of aid and concessional finance. According to the World Bank,(28) Uganda is urbanizing at a rate of just over 5.3 per cent per year. Its urban population is over 12 million, around 25 per cent of the total population. Kampala, the capital city, is upward of five times larger than the next largest city, Jinja. In 2011 the affairs of Kampala were brought under the direct supervision of the central government through the Ministry for Kampala Capital City and Metropolitan Affairs. The Kampala Capital City Authority (KCCA) was also established as a separate corporate entity in charge of the management and development of Kampala. It has no significant electricity mandate.

Uganda's Vision 2040 national plan asserts the critical role of electricity in the socioeconomic transformation of the country. Uganda is one of a minority of African countries to have liberalized and reformed its electricity sector in the 1990s.(29) The Uganda Electricity Board (UEB) was unbundled, creating separate generation, transmission and distribution industries: Uganda Electricity Generation Company Ltd (UEGC), Uganda Electricity Transmission Company Ltd (UETCL) and Uganda Electricity Distribution Company Ltd (UEDCL). Generation and distribution have been liberalized. The Ministry of Energy and Mineral Development (MEMD) sets the sector policy agenda, and all actors are regulated by the Electricity Regulatory Authority.

Between 2000 and 2020, the sector's generation capacity expanded from 400 megawatts (MW) serving 180,000 grid-connected customers, to 1237.49 MW serving more than 1.5 million customers.(30) Currently, total electricity demand on the grid, including residential, commercial and industrial use, peaks at 650 MW. There are several factors limiting Ugandan electricity demand, including relatively limited industrialization, affordability barriers and limited distribution networks. Distribution infrastructure is concentrated in and near the capital, with the urban network operated by Umeme, the largest of eight private distribution players, under a concession agreement. This concession is not being renewed, however, and generation, transmission and distribution are being reintegrated.

Until recently, extending the distribution network was the main mechanism for driving electricity access. Alongside biofuels, various distributed technologies are being developed to fill the gaps in the network, attending to areas that are yet to be reached, and supplementing grid access where it is unable to deliver a consistent supply. A household survey by the Uganda Bureau of Statistics indicates that 38 per cent of households are connected to off-grid electricity solutions.(31) This uptake has been supported through central master planning, which identifies some areas as suitable for grid extension and others where mini-grids will be deployed instead.(32) Despite many challenging dynamics, investment in off-grid renewables, including mini-grids, reached US$ 39.3 million between 2007 and 2019.(33)
By 2019, the number of mini-grids reached 34, with generation capacity totalling 56.8 MW.(34) It is against this backdrop that we look at a range of innovations in this space.

V. From mini-grids to e-mobility: four examples of distributed energy projects

This section explores several cases of distributed energy services in the context of Uganda's centrally planned national electricity system. The first two cases are mini-grids, aimed at extending the urban network (Kiwumu) or creating a standalone network in the absence of distribution network connectivity (Lolwe Island). The third looks at the creation by start-ups of distributed energy solutions in the city, and the fourth at how the city government itself is engaging with distributed energy solutions (KCCA). Together, these cases show the wide diversity of ways in which material reconfiguration is taking place, and the absence of an urban scale of governance for this reconfiguration.

a. Peri-urban connectivity: the Utilities 2.0 Twaake project at Kiwumu

Umeme faces a critical challenge in expanding the grid to peri-urban areas. Not only are there the challenges of covering grid extension costs and last-mile connections,(35) but also in facilitating electricity use and payment for this service. In new extension areas, people most often do not own electricity-consuming appliances (such as kettles or washing machines). This creates risks for the utility when extending to new areas, as the costs are high and the demand, at least in the short term, is low. Umeme's expansion risks cannot be mitigated with increased tariffs, as high capital costs mean that a cost-reflective tariff would be well beyond affordable for most peri-urban residents and businesses and would further disincentivize off-take. As one Umeme official noted, "we can ask the government to assist to invest in these areas, but people won't even be able to use this electricity without some preparations."

The Kiwumu mini-grid has attempted to solve these problems, mobilizing a range of actors to both develop the project and support the 'productive use' of this power (creating both demand and ability to pay). In Mukono District in Kiwumu, just beyond the municipal boundary of Kampala, the 40 kilowatts peak (kWp) solar PV mini-grid is a short walk from the peri-urban residential area. The town has some small businesses, including small shops and private pharmacies. At the time of this research, in January 2022, the project was seven months old, and was the sole electricity option for the 300 local households and 60 microenterprises. Its first 'anchor' customer was a maize mill with the ability to mill five tonnes of maize a day, which accounted for around 60 per cent of the electricity consumption.
The Kiwumu project, like most mini-grids, is driven by foreign nonprofit and for-profit actors. It is being implemented as part of the Twaake Pilot under the Utilities 2.0 project. The Twaake Pilot is an initiative of Power for All, a not-for-profit and NGO registered in the United States of America. This NGO focuses on distributed renewable energy solutions to the global energy access challenge and is funded by the Rockefeller Foundation. Project partners include the Africa Minigrid Developers Association; the Collaborative Labeling and Appliance Standards Program (CLASP); CrossBoundary Energy Limited, an investment company that invests in renewable energy projects in Africa; East African Power; EnerGrow (a green energy tech start-up); Equatorial Power; NXT Grid (a Dutch solar energy equipment supplier); the Rocky Mountain Institute; the University of Massachusetts Amherst; Duke University; and Makerere University. This wider pilot has two sites in Uganda.

The ambition of the project (still not realized at the time of writing) is to be the country's first successful mini-grid interconnection with the national grid, linking the network in Kampala to the surrounding districts. A wide range of measures - from tariffing design to mechanism alignment - aim to ensure that there will be no compatibility issues between the two systems. At the same time, EnerGrow - the 'productive use' partner - is tasked with stimulating electricity demand in the area by providing access to appliances, such as televisions or refrigerators, which could be used for income-generation. The start-up provides financing, awareness-building and training that focuses on customers who will use the appliances to improve their business operations - such as clinics, food shops, micro-industries and entertainment spaces. "We assess the business potential, source the product, manage transportation to the community, and track repayments . . . over time, the community will become more financially viable", noted an enthusiastic EnerGrow employee. These customers are necessary for ensuring the viability of the expansion. Given the low capacity to pay in the area, more than half the appliances purchased are bought on credit (notably increasing by a significant degree the overall cost of productive electricity use). Microfinance in Uganda is expensive, and EnerGrow reports interest rates around 35 per cent (reportedly only achievable because of donor support to the operations of the company). EnerGrow, and many other project partners, have received grants to support this pilot, on the assumption that the project will provide a viable model for future mini-grids.
As the current phase of the Kiwumu project draws to a close, the practicability of the pilot as a replicable model for demand stimulation and grid extension is being tested. In an interview with the Uganda Director of Power for All, she explained the complexity of the project and the incredible work that went into navigating a complex institutional environment, working through unclear energy and mini-grid regulations, convening disparate partners and mobilizing research partners to develop an evidence base for the oft-recited 'energy-for-development' narrative motivating much of the donor involvement. She explained that the future of the project is also uncertain. As she shared with us in the interview, the project team identified four options: (1) all the infrastructure is handed over to the utility, with compensation for future anticipated income, and the developer starts again in the next location; (2) the developer hands over the distribution network, applies for a small power producer licence and keeps generating and selling power; (3) the developer hands over generation and operates the distribution under a licence; or (4) the developer keeps generating and distributing power under two licences. Under the first option - and here the modularity and mobility of this solution come into play - the developer works with Umeme to physically move the small power plant to the next spot on the utility's long list of unserved areas. The fragmented institutional landscape is changing again with the president's pronouncement of power sector reforms. Consequently, Umeme's future is unclear and thus the implementation mechanisms for Uganda's electrification strategies are too.

At the time of this research, the future of the project was unclear. However, the Kiwumu mini-grid is a compelling example of a peri-urban project that works as a supplement and complement to distribution grid extension, preparing the local residential and business community, and convening a range of both global and local actors in the project. Local government authorities are engaged as project stakeholders and must provide land-use permissions, but they do not play a steering role. Neither the KCCA nor the Mukono District authorities play a significant role in the project. While a metropolitan vision for Kampala underpins these expansion projects, the absence of the local government in the configuration of relevant actors is notable and important. However, this case shows how, even where networked infrastructure is prioritized and strengthened in an energy access project, local governance may be functionally bypassed in its execution.
b. Isolated mini-grid: Lolwe Island

While the previous illustration showcased the option for grid interconnection, the Lolwe Island mini-grid is an example of the more commonly implemented 'isolated' grid for areas where the likelihood of grid connectivity is very low. Lolwe Island is situated in Lake Victoria, off the coast of Jinja, the second largest city in Uganda. Lolwe is inextricably connected to Jinja's urban economy and governed by cascading levels of local authority, few of which hold significant power in the development of the island. The island is one of nine that can support people within Namayingo District, and 15,000 people live on it. Sixteen islands on the lake together comprise three-quarters of the district's land. Until 2022, Namayingo's geography had kept it beyond the reach of the country's expanding electricity distribution networks.

Lolwe Island's future changed dramatically between 2021 and 2022 with the construction of the solar PV mini-grid - or "a fully integrated multi-utility, going beyond electricity towards holistic service delivery" (a quote from slides presented at the launch event). The Lolwe Island mini-grid is the result of a partnership between the foreign-owned Equatorial Power and the French multinational utility company Engie. The two companies have formed a joint venture - Engie Equatorial - to undertake this project, the first of its scale and design. At the launch of the mini-grid on a sunny January day, the CEO of Equatorial Power assured everyone present that there are more than 20 other sites that can look forward to similar electricity generation projects across Uganda and other parts of Africa. "I will work directly with the president and the Ministry of Energy to make this possible, there will be light on the islands", he announced to the crowd, before breaking into dance and song on the crowded stage. (Notably, he references his relationship with the president.) While the local government representatives were present at the launch, their speeches demonstrated little knowledge of the project (and some even rejected or questioned the project publicly, citing a catalogue of other pressing issues, such as the rising sea levels and the child-eating crocodiles in Lake Victoria).
At the time of the research, none of the 3,783 potential electricity users (3,026 households and 757 businesses) was connected to power. Instead, showcased at the launch was the medium- and low-voltage distribution network that connects the power plant (600 kWp solar PV and 600 kilowatt hours [kWh] battery storage) to a productive use off-taker. The core off-taker is an anchor consumer and an 'integrated productive hub'. Developed by the mini-grid company itself, this core consumer comprises an ice-making and fish-drying facility. Fishers pay a fee for the ice and the drying. In addition to the ice and fish, EnerGrow, also the productive use partner at the Kiwumu site, planned to support 200 entrepreneurs to develop local 'productive usages of energy', providing loans for small-scale appliances which could support the growth of local businesses. The stated intention of the project was to extend electricity access to all households and businesses; however, there was no clarity on how this could be funded. At the time of the research, and evident in many of the interviews and speeches, there remained considerable confusion in terms of how the last mile of residential connection would be covered for households and small businesses, the structuring of the tariff that people would pay, and the future of grid extension. While the much-anticipated network is live, the social and economic impacts are yet to be seen.

The Lolwe case, while specifically bringing electricity to an island off the coast of Jinja, provides unique insights into the sorts of energy futures imagined by global companies involved in the development of mini-grids. Unlike in the case of Kiwumu, where collaboration is with the private distribution utility, Umeme, to extend the network, Lolwe, with its project partners and electricity supply-side group of international investors and developers, imagines a future where communities are served by isolated systems, even in the absence of a funded and clear plan to ensure affordable access. In developing both the local network and the related economic centres, the mini-grid comes to be more than just an energy provider, but also drives a particular kind of economic development based on its business model and projected financial return.

c. E-mobility: urban service retrofit

Mini-grids are not the only place where the promises of the green transition in Africa are driving distributed electricity technologies. A key frontier of this transition is the movement of vehicles from diesel to electric batteries - also called e-mobility. In the context of Kampala, these calls for electrification of urban mobility centre around the (in)famous motorcycle taxi (boda boda) sector, mirroring efforts in other East African cities, such as Nairobi. Motorcycles dominate movement in Kampala, particularly for shorter trips around the city, and last-mile logistics. Several companies are attempting to shift boda bodas from diesel dependence towards rechargeable batteries. These companies argue that such a shift would not only improve the livelihoods of motorcycle taxi operators (lowering the operational cost of providing their services), but also contribute to a range of climate objectives.
There are two important and complementary Kampala-based companies that have made it their mission to transition the boda boda sector. Bodawerk is a Ugandan start-up that focuses on developing rechargeable lithium-ion 'smart' batteries. These batteries can be installed into the existing motorcycles used by the vast majority of Kampala taxi operators. A modified Bajaj Boxer - an Indian-designed, Chinese-manufactured, and now Ugandan-retrofitted motorcycle - enables riders to reduce both their emissions and operational costs. Bodawerk is complemented by another start-up, Zembo, with headquarters in France. Zembo imports electric motorcycles (frames and batteries) and establishes charging stations where riders can exchange batteries for a fee. Zembo's vision is to develop charging stations all over Kampala, using solar charging wherever possible, to supplement the energy provided by the utility. The digital mobility debates across Africa include ongoing discussion about the most viable 'model'; a plethora of variations have been developed to test different ownership, finance and management configurations. Both Bodawerk and Zembo have attracted the attention of international donor organizations using pitch decks that estimate the future value and impact of e-mobility innovations. Donors have funded much of the research and development of both organizations, hoping that the programmes will grow to attract larger funding from venture capital or development finance institutions.

Both organizations have also attempted to partner with the government: Zembo has sold four bikes to the KCCA, and Bodawerk has been allocated a share in one of Kampala's industrial parks. However, working with the state is not the primary aim. Instead, they have ambitious hopes of scaling their operations. For Zembo, scale would mean a city full of charging stations, with riders able to recharge (at an affordable rate). Despite its name, Bodawerk's scalable innovation is not limited to (or even focused on) the boda boda sector. They are focused on the development of batteries, which can be used not only in bikes, but for many other systems (such as home solar systems). As the director articulated, "getting batteries right is so important for getting people off fuel . . . for bikes, but also home generators. Ideally, we can also localize production and create jobs."

Both Zembo and Bodawerk are working to expand the ways in which the existing electricity grid is used, creating a new consumer base for the existing system. The electrification of mobility increases local electricity applications and demand within the dense networked areas of urban agglomeration. While aiming to attract green finance under the banner of decarbonization, their real value for end users lies in the (hopefully) lower operational costs (with electricity more affordable than fuel). With smart batteries capturing the data of riders - to be used for their own planning and risk management - there is further value for the companies involved in these 'start-ups'. These companies in Kampala, and many more across the continent, challenge linear notions and concepts of decentralization.
While they have little engagement with local government, these innovations do not seek to weaken the centralized grid or diminish its importance. Rather, these companies rework the grid, through materiality extensions, network supplementation and augmentation of demand. Without romanticizing e-mobility or ignoring the profit-driven competition between companies, the case showcases yet another way in which the electricity system (on supply and demand sides) is being transformed, and how such transformations may enrol informal economies in large technical systems in new and unexpected ways.

d. KCCA: moving city buildings off the grid

Focusing even more closely on the city, the last illustration we look at enrols city authorities in the development of embedded generation, largely for their own consumption. These projects, common in Africa, aim to 'green' city authorities, by making their own operations less reliant on the national grid (see, for example, the municipal building retrofit programme under the Covenant of Mayors in sub-Saharan Africa). In terms of electricity, the majority of Kampala is covered by the Umeme-operated distribution network. However, as the city has grown and densified, funding for upgrades to and maintenance of transmission and distribution investments has not been consistent. Many parts of the city experience regular power failures. Consequently, households and businesses that can afford it create backup systems to supplement their reliance on the grid. The same is true for the city authority.

As discussed earlier, Kampala is under the jurisdiction of the KCCA. The KCCA interfaces with the electricity system as a consumer. While some energy-intensive urban functions (such as water treatment) are not managed by the KCCA, the infrastructure which the KCCA does operate requires energy. For example, the KCCA office and government buildings, the street lighting, the traffic lights that mediate intersections and crossings, and the schools and clinics that fall under city jurisdiction, all require electricity services. As new projects come online (such as the proposed Bus Rapid Transit system), the authorities may find themselves requiring more energy to run and manage these services. The KCCA's functions are subject to the prevailing grid conditions. According to interviews with KCCA engineers, the expenditure on electricity is also an operational burden at more than US$ 50,000 per year, and delayed transfers from the national government have contributed to large outstanding debts to the utility company.
An expressed desire to lower the cost of electricity to the city, improve the consistency of services, and work with enthusiastic French funding partners has resulted in a series of interventions in the city, supported by a € 70 million loan from the Agence Française de Développement (French Development Agency). These include installing solar-powered streetlights; upgrading school buildings to support rooftop solar panels; and purchasing the four Zembo motorcycles to be used by the KCCA (discussed earlier in the e-mobility case). The hope is that these minor improvements will reduce the operational costs carried by the city, creating savings that go towards repaying the loan that funds the project. Interviews with the KCCA indicate some challenges. Not only has it been hard to get the programme moving, but the quality of the solar panels initially installed was sub-par, and they functioned only minimally. Pointing to the panels on the building, officials told us "these were meant to last many years, but just a few years in they are already not working well. Supplying us little." Officials bemoaned the lack of regulation related to solar products (a concern which not only impacts the KCCA, but all households and firms that decide to supplement grid connectivity with panels, lamps and other hyper-local technologies). Lack of regulation leads to faulty products and unnecessary costs for cash-strapped local governments or already strained households.

While the KCCA, and some of the political actors involved in urban governance, are indeed interested in the energy transitions under way, the role they play in transforming the city's electricity system is minimal. Even as a relatively powerful African urban authority, the KCCA is implementing small-scale projects, largely focused on changing its own electricity use. This begs the question: if the KCCA is not a key player shaping the energy transitions in Africa, what does this mean for other (less supported and resourced) city governments? What of the global North narrative and the development agency plans that place cities at the centre of interconnected climate and energy planning and mobilize various forms of technical assistance to further this external agenda? This final illustration takes us full circle, reminding us that city authorities, while central to the just energy transition in the global North, are currently not positioned to play a direct and energy-specific role in a meaningful way in many African cities. This does not rule out any role, but requires particular context-specific engagement and locally-driven, demand-led support for city governments, which is at odds with much in the current sustainable energy transition development modalities.

VI. Reflections and conclusion: conceptual and policy implications

In this paper we offered four vignettes examining differing distributed electricity projects in Uganda that were being implemented during 2022. These projects are still in the making, unfolding as we write. We looked at innovations within Kampala's dense urban fabric, its meandering peripheries, and in spaces noticeably disconnected from urban infrastructure and economy. These cases challenge us to engage critically with distributed electricity infrastructure in relation to the governance of urban spaces and the future of urban networks in African contexts, and more broadly.
From a governance perspective, the projects display distinct relationships between project leads, their funders (whether donors or commercial backers/employers), project teams, different government actors, utility staff and other international development, foreign government, non-governmental and commercial actors. The international sustainable energy transition community of actors in Uganda notably includes a host of foreign for-profit energy companies of different sizes that operate, access donor funding and mobilize public spending. What is notably evident across the cases is that the local government is often excluded or plays a marginal role in these governance configurations. This absence challenges many of the prevailing discourses related to the central role of African cities and local government in just transitions.

In terms of the material network, each project has a different relationship to existing networks of distribution, transmission and generation infrastructure. These vignettes clearly illustrate how distributed investments can extend the grid (Kiwumu), supplement the grid (KCCA and e-mobility projects) and create new grids (Lolwe). These cases suggest that a complex relationship is emerging between existing and new technology. There are many cases that cannot be easily grouped as on- or off-grid but reflect heterogeneity. Additionally, the extension, supplementation, supplanting and 'operating in place of' existing networks will not necessarily add up to a long-term viable configuration, especially when implemented in an ad hoc way. Thus, the incipient electricity infrastructural palimpsest may lock in particular path dependencies and exclude other trajectories without being able to predict where the transition will lead.

From a policy perspective, more attention is needed to chart alternative urban transition pathways and planning, combining growing evidence-based sense-making with explicit speculative reasoning. Programmes related to energy access, industrial development, climate mitigation and climate adaptation are growing in the portfolios of development finance institutions (DFIs) and multilaterals.(36) They remain, much like the global development agenda at large, informed by inappropriate framing and are commonly formatted to serve the epistemic, economic and geopolitical interests of global North actors, rather than attend directly to the needs and risks of African contexts.(37) The argument that distributed renewable energy technologies will produce a rescaling of electricity governance, with multiple associated benefits, does not match the reality of electricity governance, nor of urban governance, in Uganda and many other African countries. As we have shown, a proliferation of spatially and time-bound energy projects may have significant upsides, but - in the absence of strong local authority and clear energy mandates - relationships formed between foreign partners almost exclusively from the global North, national agencies and hyper-local communities are a testament to the risk of institutional bypassing.
Assessment of Modern Technology Influence in the Transport Industry to Reduce Carbon Dioxide Emissions

The modern ecological direction of development of all industries leads to the creation and introduction of new technologies, in particular in the field of transport. At present, the introduction of wireless technologies as well as information and automated systems is observed in transport systems. This drives the creation of automated passenger cars. The article describes the analysis of previously completed studies that are aimed at creating models of driving logic and driver assistance systems. The results of the analyzed research showed that the implementation of fully automated systems in traffic conditions, where the largest share of cars is driven by people, is inefficient. Therefore, researchers consider connected systems, which include V2X technology. It includes the possibility of interaction between vehicles, and their connection with the infrastructure, pedestrians and various devices. This eliminates the factor of "human error" when driving a vehicle. In this study, an assessment of the impact of the proportion of vehicles equipped with a V2V system, and of the maximum desired speed, on the average delay time was carried out using simulation modeling. As a result, the authors found that an increase in the first factor contributes to a decrease in the output indicator. The introduction of vehicles equipped with a V2V system reduces fuel consumption and carbon dioxide emissions.

Introduction

At present, the introduction of technologies that are aimed at improving the environmental situation is a priority for the development of all branches of industry in the Russian Federation. This is due to global processes that are associated with a reduction in harmful substance emissions into the atmosphere, in particular greenhouse gases. According to the forecasts of the International Energy Agency, the growth and maintenance of the existing state of harmful substance emissions into the atmosphere will lead to an increase in the global average temperature by more than 2°C by 2050, and consequently, to climate change and an increase in the number of natural disasters. At the same time, an unfavorable ecological situation also affects the health of the population and causes an increase in the number of congenital diseases. This leads to the development of modern sustainable cities that have an ecologically friendly, safe and comfortable environment for living [1]. When the urban area is planned, industrial zones are moved to the boundaries of the area, freight traffic is limited, the use of alternative energy is considered for the energy supply of objects, pedestrian zones that are free from cars are allocated, and electric transport and the technologies associated with it are developed. In a city where there are no industrial centers, road transport is one of the largest sources of environmental pollution. In the Russian Federation, over six years the amount of harmful substance emissions from cars increased by 14%. At the same time, during 2017 the increase was small and amounted to 2.5%. The total amounts to almost 14.5 million tons and varies depending on the substances contained in the exhaust gases of cars. For example, the amounts of carbon monoxide, nitrogen dioxide, sulfur dioxide, ammonia and carbon black increased by 9-16%, while methane decreased by half. The intensive growth of air pollution from road transport is due to an increase in the number of vehicles.
During the period under review, the number of vehicles increased by 13%, and the number of passenger cars by 15%. At the same time, only 13% of all considered vehicles comply with Euro-5 standards, and 54% are vehicles older than 10 years. The development of the automotive industry, which is aimed at improving the environmental friendliness of vehicles, allows reducing emissions of harmful substances into the atmosphere. In the world at present, transport using alternative fuels, in particular electric energy, gas and biofuel, is actively being developed [2,3]. According to the data of the International Energy Agency, the number of electric vehicles in the world exceeds 3 million, and by 2020 this number will be 13 million. In the Russian Federation, the development of these vehicles is considered to be 6-7 years behind world processes. This is due to the lack of a developed charging infrastructure and services, as well as of state support and incentives for citizens to buy environmentally friendly cars. However, the simultaneous development of driver assistance systems and vehicle interaction systems also has a significant impact on reducing emissions of harmful substances into the atmosphere by deciding on the optimal driving mode.

In the Russian Federation, at present, intensive growth of the vehicle fleet and the lack of advanced development of the road network cause an increase in vehicle delay time. Historical development of the territory with a street-road network that has a low traffic capacity, and a large number of traffic lights, are additional negative factors. In this case, the optimization of the traffic light cycle at intersections is considered as one of the methods for increasing the traffic capacity of the road network. At the same time, the capacity also depends on the delay time between the turning on of the green traffic light signal and the first car crossing the "stop line", as well as on the distance between vehicles. The main characteristics of driver behavior, such as reaction time, prediction and concentration of attention, affect the above indicators in the calculation of throughput. In this case, maintaining an equal distance between vehicles and the absence of the "human factor" allows increasing the crossing capacity [4]. Also, the introduction of vehicle interaction systems, both among themselves and with infrastructure, contributes to this. It covers almost all aspects of decision making by drivers and helps make safe and reliable decisions in the shortest possible time. Increased intersection capacity will reduce vehicle delay time as well as vehicle travel time. It also contributes to the reduction of total emissions of harmful substances into the atmosphere. Thus, the purpose of this article is to reduce emissions of harmful substances into the atmosphere with the exhaust gases of vehicles while increasing the capacity of the road network by introducing vehicles equipped with systems of interaction between themselves and with the infrastructure.

Methodology

Modern cars are equipped with the vehicle-to-everything system, which includes several components, namely, systems of interaction between vehicles, with electronic devices, with pedestrians and with infrastructure. The implementation of these systems is aimed at helping the driver to make a decision, which leads to the replacement of the existing human driving logic with autonomous vehicles.
Initial models of decision-making processes when driving vehicles were considered taking into account acceleration behavior and using technologies such as adaptive cruise control [5,6]. Then the automated road system (a set of designated lanes along which autonomous vehicles move), where the possibility of controlling the flow and overloads was considered, was developed and modeled. The introduction of adaptive cruise control in this system and the structure of its modeling have been studied in the paper [7]. As a result of the analysis of the obtained data, the conclusion was drawn that the workload of the automated road network is a function of the strategy for the movement of vehicles and the decisions made by the flow control center. Therefore, the use of macroscopic modeling and adaptive cruise control with a neural network controller makes it possible to avoid congestion of the road network. Thus, these studies and the concept of an automated road system led to the creation of driver assistance systems.

Early versions of driver assistance systems used only on-board sensors and had the ability to adjust the vehicle speed based on data from the vehicle in front. However, during the development of the automotive industry, the introduction of wireless technologies has led to the improvement of existing systems. At present, vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) interconnection technologies are being developed. This allows the creation of an interactive adaptive cruise control system, which is able to automatically adjust the speed of the vehicle based on data on the behavior of all vehicles in the stream (i.e., data from both the vehicle ahead and the following cars). For the first time, the logic of such data management systems was proposed by Bart van Arem [8]. To make a decision, they used both the speed of the vehicle relative to the vehicle in front, and the deceleration, the acceleration of the flow and the spacing. Later, the paper [9] proposed logic obtained on the basis of a model for predicting the behavior of a stream, where each car uses information from the vehicle in front. The modeling of these systems was studied by Zhao and Sun using the VISSIM software package [10]. In the course of this research, the influence of the group size and the share of vehicles with adaptive cruise control and interacting adaptive cruise control systems on throughput was analyzed. As a result, the analysis of the study showed that an increase in the share of cars equipped with an interacting adaptive cruise control system causes an increase in throughput. However, according to the data presented in [11], the share of these vehicles is small and the speed of implementation is low. When these vehicles are in operation, the interacting cruise control assumes the presence and transmission of the required parameters of each vehicle moving in the stream. Therefore, at present, connected systems that can read and select motion signals embedded in their controllers, based on a special V2X connection, are the most common. This leads to the possibility of effective use of vehicles with a plug-in system in a traffic flow where cars are driven by people [12]. The ability to reduce speed fluctuations and increase safety and fuel economy of cars by creating driving conditions with a minimum number of braking and acceleration events in a stream of vehicles driven by people is an advantage of these systems.
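To make the difference between sensor-only adaptive cruise control and a V2V-informed (cooperative) controller more concrete, the sketch below implements a simple constant-time-gap car-following law in Python. It is only an illustration of the general idea, not the specific logic of [8] or [9]; the gain values, the time gap and the averaging of several downstream speeds are assumptions introduced for this example.

```python
# Minimal sketch of a constant-time-gap car-following law, with and without
# V2V information.  Gains, gaps and the averaging of downstream speeds are
# illustrative assumptions, not the controllers from the cited papers.

STANDSTILL_GAP = 2.0   # m, desired spacing when stopped
TIME_GAP = 1.5         # s, desired time gap to the vehicle ahead
K_GAP = 0.23           # 1/s^2, gain on the spacing error
K_SPEED = 0.07         # 1/s, gain on the speed difference

def acc_acceleration(gap, v_self, v_lead):
    """Sensor-only ACC: reacts only to the vehicle directly ahead."""
    desired_gap = STANDSTILL_GAP + TIME_GAP * v_self
    return K_GAP * (gap - desired_gap) + K_SPEED * (v_lead - v_self)

def cacc_acceleration(gap, v_self, v_downstream_speeds):
    """V2V-informed controller: reacts to the average speed of several
    downstream vehicles, so it 'sees' a slowdown before the direct leader
    has fully braked."""
    v_downstream = sum(v_downstream_speeds) / len(v_downstream_speeds)
    desired_gap = STANDSTILL_GAP + TIME_GAP * v_self
    return K_GAP * (gap - desired_gap) + K_SPEED * (v_downstream - v_self)

if __name__ == "__main__":
    # Ego vehicle at 15 m/s, 20 m behind a leader still doing 14 m/s,
    # while vehicles further downstream have already slowed to 12 and 10 m/s.
    print("ACC:  %+.2f m/s^2" % acc_acceleration(20.0, 15.0, 14.0))
    print("CACC: %+.2f m/s^2" % cacc_acceleration(20.0, 15.0, [14.0, 12.0, 10.0]))
```

With the same gains, the connected controller begins braking earlier because it already reflects the slowdown two vehicles ahead; this earlier, smoother response is the mechanism behind the reduced speed fluctuations discussed above.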
The study of the influence of the behavior of people-driven vehicles on the performance of a connected vehicle with a V2X system in actual operating conditions is presented in the paper [13]. In the course of this study, an experiment was conducted to objectively evaluate data on a covered site and on public roads. As a result, a connected cruise controller, which was able to receive information from vehicles that were not within the line of sight, was developed and experimentally tested. It was able to obtain information even when the geometry of the road was difficult. At the same time, smoother braking and acceleration than with an automated system were observed during operation of the connected vehicle system while driving. Thus, the authors confirmed the assumption that plug-in cruise control can have a positive impact on throughput, even in the absence of the formation of a traffic stream of automated cars. This prevents the formation of traffic jams. The effect of mixed traffic, which consisted of conventional vehicles and vehicles equipped with adaptive cruise control, on its stability was considered by Bose and Ioannou [14]. They found that the introduction of vehicles with an adaptive cruise control system leads to an increase in driving stability, a reduction in emissions of harmful substances and an increase in fuel efficiency [15,16]. However, the influence of V2X connected systems in vehicles on the throughput of multi-lane roadways and, consequently, on the change in emissions of harmful substances in the case under consideration is not sufficiently studied at present.

Methodology of research

Evaluation of the impact of connected vehicles with V2V technology on the capacity of a multi-lane carriageway was performed using simulation modeling in the PTV Vissim 8.0 software package. Modeling of traffic flows was carried out on the basis of data obtained in the morning, evening and daytime on weekdays, as well as in the daytime on weekends. Melnikaite street in Tyumen was the object of observation. This is a main street, which has 6 lanes for the movement of vehicles in total in cross section for both directions. In some parts of the street, oncoming flows are separated by a dividing strip 5 m in width. The width of the carriageway is 10.5 meters, and the length of the street exceeds 8 km. Also, sections of highways that carry traffic into the modeling area were modeled; these included 6 major intersections. The collection of parameters (composition of the traffic flow, traffic intensity in directions, indicators of traffic light regulation, transport demand and intensity of pedestrian traffic) was made for each object of observation. The collection of materials was carried out by means of video recording, in particular: continuous video filming in the period from 7:00 to 20:00 at the entrances of traffic flows to the simulation area; video shooting of traffic flows on the object under study from two points simultaneously; and assessment of transport demand before the intersection with an interval of 2 cycles of traffic light regulation. As a result, simulation modeling made it possible to carry out calculations of the road traffic parameters at the objects in question, taking into account the share of vehicles being introduced with V2V technology.
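The throughput indicator used in the results below, average delay time, can be read as the mean difference between the simulated travel time of each vehicle and its ideal travel time at the desired speed. The following helper is a hypothetical post-processing sketch of that definition written for illustration; it is not part of the PTV Vissim toolchain, and the sample travel times are invented.

```python
# Hypothetical post-processing sketch: average vehicle delay as the mean
# difference between the simulated travel time and the ideal travel time
# at the desired speed.  The sample records are invented for illustration.

def average_delay(travel_times_s, route_length_m, desired_speed_kmh):
    """Mean delay (s) over a set of simulated per-vehicle travel times."""
    ideal_time_s = route_length_m / (desired_speed_kmh / 3.6)
    delays = [max(0.0, t - ideal_time_s) for t in travel_times_s]
    return sum(delays) / len(delays)

def delay_reduction(delay_base, delay_v2v):
    """Relative reduction of the average delay, as a fraction."""
    return (delay_base - delay_v2v) / delay_base

if __name__ == "__main__":
    travel_times_base = [612.0, 655.0, 598.0, 640.0]   # s, no V2V vehicles
    travel_times_v2v = [570.0, 588.0, 561.0, 581.0]    # s, high V2V share
    d0 = average_delay(travel_times_base, route_length_m=8000, desired_speed_kmh=60)
    d1 = average_delay(travel_times_v2v, route_length_m=8000, desired_speed_kmh=60)
    print("Average delay without V2V: %.1f s" % d0)
    print("Average delay with V2V:    %.1f s" % d1)
    print("Reduction: %.1f %%" % (100 * delay_reduction(d0, d1)))
```

With these invented inputs the script reports a reduction of about 35 per cent, which is of the same order as the values obtained from the simulation described below.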
Vehicle data were modeled by setting the driving style, which included the average standstill distance between vehicles, the look-ahead distance, the look-back distance limit, acceptable deceleration, the safety distance reduction factor, cooperative lane changing, maximum deceleration for general braking, distance, the minimum speed in a straight line in heavy traffic, the time between direction changes, the traffic-light operating modes and the priority rules at intersections. The data obtained allowed us to calculate the reduction in vehicle delay time and in carbon dioxide emissions into the atmosphere when vehicles with V2V technology are introduced. Results and discussion To create the car-following behavior model typical for vehicles equipped with a V2V system, the parameters presented in Table 1 were additionally set in the program. Table 1. Additional initial parameters for the creation of vehicles equipped with V2V technology. Parameter: realizable distance between two vehicles at a stop, m. The impact of the share of vehicles equipped with a V2V system on the capacity of a section of the road network was assessed for different values of the average speed of the traffic flow. The maximum desired speed was therefore varied from 40 to 70 km/h; these limits correspond to the speed limits for vehicles in populated areas under the traffic rules in force in the Russian Federation. The average delay time of vehicles was used as the throughput indicator and was estimated as the share of vehicles equipped with a V2V system increased from 0 to 100%, at maximum desired speeds of 40, 50, 60 and 70 km/h. The data obtained allowed us to estimate the effect of the share of these vehicles and of the maximum desired speed on the average delay time of the flow. The influence of the share of vehicles equipped with a V2V system in the flow on the average delay time at a maximum desired speed of 60 km/h is presented in Figure 1. In this case, an increase in the share of vehicles equipped with a V2V system in the stream leads to a decrease in the average delay time on the multi-lane carriageway by 34.4%. At the same time, when the share of the studied vehicles in the stream is below 33%, no effect of their number on the average delay time is observed; this holds at the maximum desired speed of 60 km/h. For speeds of 40, 50 and 70 km/h, a decrease in the average delay time is observed even when the share is less than 33%. The effect of the maximum desired speed on the average delay time, when the share of vehicles equipped with a V2V system is 66%, is shown in Figure 2. As the maximum desired vehicle speed increases, a maximum in the average delay time is observed; it occurs at a maximum desired speed of 60 km/h, and deviation from this speed leads to a decrease in the average delay time by 15.1%. The combined effect of the maximum desired speed and the share of vehicles equipped with a V2V system on the average delay time is presented in Figure 3. An increase in the share of vehicles equipped with the V2V system leads to a decrease in the average delay time of 34.0-39.7% when the maximum desired speed is between 40 and 70 km/h.
An increase in the desired speed reduces the weight of the influence of the share of vehicles equipped with a V2V system: when the maximum desired speed is 40 to 50 km/h, an increase in this share decreases the output indicator by 36.4-39.7%, whereas at 70 km/h the decrease is 34.0%. Reducing the delay time of vehicles will in turn reduce carbon dioxide emissions during idling, owing to the increase in the average speed of the traffic flow. The data accepted for the calculation are presented in Table 2. Table 2. Baseline data for calculating the reduction of harmful substance emissions from exhaust gases: fuel consumption of cars when idle — 0.9 l/h; share of decrease in vehicle delay time when implementing a V2V system — 0.34-0.40; number of vehicles in the simulated area — 14,392 units/h. According to these source data, the introduction of vehicles equipped with a V2V system reduces total fuel consumption by 4,403 l/h (3,346 kg/h) and, consequently, carbon dioxide emissions by 10,489 kg/h (a short worked sketch of this estimate is given after the conclusions below). This demonstrates the environmental benefit of introducing modern technologies in transport, in particular V2V systems. Conclusions The introduction of modern technologies in transport, in particular V2V systems, leads to a decrease in vehicle delay times and an increase in traffic speed. This helps to reduce fuel consumption and, consequently, emissions of harmful substances and greenhouse gases, in particular carbon dioxide.
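The emission estimate above can be reproduced with a short calculation. The sketch below is illustrative only: the idle fuel consumption, delay-reduction share and vehicle count are taken from Table 2, whereas the fuel density and the CO2 emitted per kilogram of fuel are values assumed here, since they are not stated in the text.

# Illustrative estimate of fuel and CO2 savings from reduced idling.
# Inputs from Table 2; fuel density and CO2 factor are assumed values.

IDLE_FUEL_L_PER_H = 0.9        # fuel consumption of a car at idle, l/h (Table 2)
DELAY_REDUCTION_SHARE = 0.34   # lower bound of the delay-time reduction (Table 2)
VEHICLES_PER_H = 14_392        # vehicles in the simulated area, units/h (Table 2)

FUEL_DENSITY_KG_PER_L = 0.76   # assumed petrol density, kg/l
CO2_PER_KG_FUEL = 3.14         # assumed kg of CO2 emitted per kg of fuel burned

fuel_saved_l_per_h = IDLE_FUEL_L_PER_H * DELAY_REDUCTION_SHARE * VEHICLES_PER_H
fuel_saved_kg_per_h = fuel_saved_l_per_h * FUEL_DENSITY_KG_PER_L
co2_saved_kg_per_h = fuel_saved_kg_per_h * CO2_PER_KG_FUEL

print(f"Fuel saved: {fuel_saved_l_per_h:.0f} l/h ({fuel_saved_kg_per_h:.0f} kg/h)")
print(f"CO2 emissions avoided: {co2_saved_kg_per_h:.0f} kg/h")

With these inputs the chain of factors reproduces figures close to those quoted above (about 4,400 l/h of fuel and roughly 10,500 kg/h of CO2), which suggests the paper's calculation follows the same sequence of multiplications.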
The influence of interaction between orthogonal magnetic fibers on the capture of Fe-based fine particles by each fiber In this work, an existing method of capturing Fe-based fine particles by magnetic fibers is improved, and a weaving method for the fiber filter material is further determined. Different combinations of magnetic fields can form around the magnetic fibers, which changes the interaction between orthogonal magnetic fibers when a uniform magnetic field is applied along the X-, Y-, and Z-axes. Therefore, the process of particle capture by the orthogonal magnetic fibers under three configurations was compared using the computational fluid dynamics-discrete phase model (CFD-DPM) and a special user-defined function (UDF) of the magnetic force. The results show that the interaction between orthogonal magnetic fibers can either inhibit or promote the capture of Fe-based fine particles by adjacent magnetic fibers. In industrial production, the magnetic filter material is best woven with magnetic and traditional fibers used alternately. When a uniform magnetic field is applied along the X-axis, this weaving method gives the orthogonal magnetic fibers the best capturing performance. Moreover, the magnetic characteristics, flow characteristics, and combination sequence of the magnetic fields should be considered. This study provides researchers with new insights for the development of novel high-efficiency fibrous filters to reduce particulate pollutant emissions. Introduction The particulate pollutants generated by industrial production processes, especially by the iron and steel industries (e.g. sintering, ironmaking, and steelmaking), must be removed from the exhaust before discharge to prevent the released particles from polluting the environment. In the iron and steel industries, the crushing of iron ore raw materials and steam condensation under high-temperature conditions emit particles that contain numerous metal components. Prolonged exposure of workers to metal dust and smoke leads to repeated respiratory system irritation, impaired lung function, and chronic obstructive pulmonary disease (COPD). 1 It even harms the central nervous system. 2 Considering both economy and collection performance, fiber filtration is currently the most common technology for removing particles from flue gas, compared with electrostatic, 3 cyclone, 4 and wet dust removal. 5 The fiber filtration efficiency can be improved by changing the fiber cross-sectional shape, 6 fiber diameter, 7 fiber orientation distribution, 8 and fiber arrangement 2,9 ; by loading nanofibers on the surface of the fiber layer 10,11 ; or by changing the structure of the filter media. [12][13][14][15] However, traditional fibers have a "breakthrough" in the PM 2.5 range, 16 especially for particles 0.5-1.0 μm in size, which are not easily filtered using conventional filters. 2,8 This makes compliance with increasingly stringent environmental regulations for pollutant emissions from flue gas a challenge. Fortunately, in the iron and steel industries, the Fe content of metal particles produced during the production process far exceeds that of other elements, 17,18 and Fe exists in the form of Fe 2 O 3 and Fe 3 O 4 . 19 These Fe-based particles are easily magnetized in a magnetic field and removed by magnetic forces.
The study found that, for fine particles smaller than 2.0 μm, the filtration efficiency of magnetic filter material is 20% higher than that of traditional filter material. 20 Compared with electret fibers, which have already been produced and put into industrial application, magnetic fibers are still at the stage of theoretical research. However, magnetic fibers now have a complete production process and will play an essential role in emission control in the iron and steel industries in the future. 20 In addition, high-gradient magnetic separation (HGMS) technology can use an external magnetic field to saturate the magnetic media and enhance the magnetic field strength around them. 21,22 Combining the two technologies can help the magnetization of magnetic filters reach the saturation state. However, the fibers in magnetic filters interact strongly through the magnetic field. When Fe-based particles are captured by magnetic fibers, they are inevitably affected by the magnetic field of adjacent magnetic fibers. This paper studies the influence of the interaction between orthogonal magnetic fibers on the capture of Fe-based fine particles by the adjacent fibers. The trajectories of captured particles and the filtration efficiency of each fiber under the interaction between orthogonal magnetic fibers were calculated and compared with those without interaction to determine the degree of influence. The process of PM 2.5 capture by traditional fibers (gas-solid two-phase flow) has been simulated by different methods: the computational fluid dynamics-discrete phase model (CFD-DPM) method, 2,23 the Monte Carlo method, 24 the Lattice Boltzmann method (LBM), 9 and the computational fluid dynamics-discrete element method (CFD-DEM), 13,14 or the use of OpenFOAM to simulate the particle capture process. 25 When the research object changes from traditional to magnetic fibers, simulations of the magnetic filtration process must consider the fiber interactions. The particle trajectories and filtration efficiency were calculated using the commercially available software ANSYS-Fluent with a discrete phase model (DPM) and a special user-defined function (UDF) of the magnetic force. Because the orthogonal magnetic fibers are perpendicular to each other, the directions of the external magnetic field that enable the magnetic fibers to reach the magnetic saturation state include the longitudinal, transversal, and axial directions. 26 Different combinations of magnetic fields form around the two magnetic fibers when a uniform magnetic field is applied along the X- (H x ), Y- (H y ), and Z-axes (H z ). Therefore, particle capture by orthogonal magnetic fibers in the three configurations was modeled, and the corresponding force expressions were derived and implemented as different UDF programs. This study aims to reveal the influence of the interaction between orthogonal magnetic fibers and to determine the weaving method for magnetic filter media in industrial production. It attempts to provide a theoretical foundation for the development of high-performance fibrous filters. Flow equation In the Stokes flow regime, the pressure drop is caused by viscous forces as air flows past the fiber. The flow field through a given domain is determined by the equations of conservation of mass and momentum, 11 in which p is the gas pressure (Pa); µ is the dynamic viscosity of the flow field (Pa s); and v x , v y , v z are the component velocities in the x, y, and z directions, respectively (m/s).
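For incompressible creeping (Stokes) flow, and in terms of the variables just defined, the conservation equations take the standard form below; this is a reconstruction consistent with those definitions rather than a verbatim reproduction of the cited formulation:

\[
\nabla \cdot \mathbf{v} \;=\; \frac{\partial v_x}{\partial x} + \frac{\partial v_y}{\partial y} + \frac{\partial v_z}{\partial z} \;=\; 0,
\qquad
\nabla p \;=\; \mu \, \nabla^{2}\mathbf{v},
\]

where the inertial terms are neglected because the fiber-scale Reynolds number is small.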
Particle motion balance equation The particle motion is governed by Newton's second law, 2 balancing the Stokes drag, gravity and buoyancy, the Brownian force, the magnetic force and the remaining minor forces, where m p is the particle mass (kg); v p is the velocity vector of the particle (m/s); v is the velocity vector of the flow field (m/s); µ is the dynamic viscosity of the flow field (Pa s); d p is the diameter of the particle (m); g is the acceleration of gravity (m/s 2 ); ρ p is the density of the particle (kg/m 3 ); ρ is the density of the flow field (kg/m 3 ); and G i is a random number chosen from a normal distribution with zero mean and unit variance. C c , Kn p , and S 0 are the Cunningham correction factor, the particle Knudsen number (Kn p = 2λ/d p ), and the corresponding spectral intensity of the noise, 27 respectively, where λ is the mean free path of gas molecules (m); k B = 1.38 × 10 −23 J/K is the Boltzmann constant; and T is the temperature of the gas (K). F other includes negligible forces (N), for example the pressure gradient force, Basset force, Magnus force, and Saffman lift force. 28 F M is the magnetic force added in the magnetic field through a UDF (N). The radial and tangential magnetic forces were calculated following the formulation of reference 29, in which µ 0 = 1.256 × 10 −6 N/A 2 is the magnetic permeability of vacuum; χ p is the magnetic susceptibility; H is the external magnetic field intensity (T); M is the saturation magnetization of the magnetic fibers (A/m); r and θ are the polar diameter and polar angle in polar coordinates, respectively; a is the radius of the magnetic fibers (m); and b is the radius of the particles (m). In polar coordinates, the radial and tangential forces acting on the Fe-based particles are F mr and F mθ , respectively. Moreover, the magnetic force of the magnetic fiber on the Fe-based fine particles is perpendicular to the magnetic fiber; the magnetic fiber does not exert a magnetic force on the particles along the axial direction. Therefore, the magnetic force along the axis of the magnetic fiber was not considered in the numerical simulation. Through a transformation of the coordinate system, taking No.1 magnetic fiber as an example in Cartesian coordinates, the corresponding magnetic force of No.1 magnetic fiber acting on the Fe-based fine particles was obtained for a uniform magnetic field applied along the X- (H x ), Y- (H y ), and Z-axes (H z ), respectively. Calculation model and boundary conditions During the simulation, the inlet velocity, air density, particle density, and dynamic viscosity were set to 0.1 m/s, 1.225 kg/m 3 , 2500 kg/m 3 , and 1.83245 × 10 −5 Pa s, respectively. Figure 1 presents the calculation model and boundary conditions. The airflow was assumed to enter the filter domain through a velocity-inlet and leave through a pressure-outlet boundary condition; symmetry boundary conditions were imposed on the surrounding faces of the computational domain, and no-slip boundary conditions were applied on the fiber surfaces. 8,23,30 The length (l), height (h), and width (w) were set to 250, 80, and 80 μm, respectively. The diameter of the magnetic fibers was 20 μm, and the distance between the two magnetic fibers was 10 μm. Moreover, "Situation 1" represents an external uniform magnetic field along the X-axis (H x ); "Situation 2" represents an external uniform magnetic field along the Y-axis (H y ); and "Situation 3" represents an external uniform magnetic field along the Z-axis (H z ). A characterization test of the Fe-based particles emitted by the iron and steel industries was performed, as shown in Figure 2.
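As a quick check of the drag-side quantities defined above, the sketch below evaluates the particle Knudsen number and Cunningham correction for the particle sizes used in the simulations. The slip-correction constants (1.257, 0.4, 1.1) and the mean free path of air at ambient conditions are commonly used values and are assumptions introduced here, not figures quoted in the paper.

import math

LAMBDA_AIR = 68e-9  # mean free path of air at ~ambient conditions, m (assumed)

def knudsen(d_p: float, lam: float = LAMBDA_AIR) -> float:
    """Particle Knudsen number, Kn_p = 2*lambda / d_p."""
    return 2.0 * lam / d_p

def cunningham(d_p: float, lam: float = LAMBDA_AIR) -> float:
    """Cunningham slip correction, C_c = 1 + Kn_p*(1.257 + 0.4*exp(-1.1/Kn_p))."""
    kn = knudsen(d_p, lam)
    return 1.0 + kn * (1.257 + 0.4 * math.exp(-1.1 / kn))

if __name__ == "__main__":
    for d_um in (0.5, 1.0, 1.5, 2.0, 2.5):  # particle diameters used in the simulations, um
        d_p = d_um * 1e-6
        print(f"d_p = {d_um:3.1f} um: Kn_p = {knudsen(d_p):.3f}, C_c = {cunningham(d_p):.3f}")

For the 0.5-2.5 μm range considered here the correction stays close to unity, which is consistent with treating these particles with a conventional drag law plus a slip correction rather than a free-molecular model.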
The SEM images revealed that the Fe-based fine particles were approximately spherical and uniformly distributed (Figure 2(a)). The Nano Measurer software results showed that the diameters of the Fe-based fine particles ranged from 0.52 to 6.32 μm, with an average diameter of 2.56 μm (Figure 2(b)). In the simulation calculations, the particle diameter range was therefore chosen to be 0.5-2.5 μm. Moreover, the hysteresis loops of converter ash and refined ash from the iron and steel industries show that, when the external magnetic field reached 0.5 T, the saturation magnetizations of the converter and refined ash were 22.5 and 3.32 emu/g, and the magnetic susceptibilities were 0.1363 and 0.01989, respectively (Figure 2(c)). Therefore, a magnetic susceptibility of 0.025 for the Fe-based particles in the magnetic field lies within a reasonable range. Magnetic field distribution The magnetic pole form of the magnetized magnetic fibers is shown in Table 1. When the external magnetic field passes through the orthogonal magnetic fibers, the two magnetic fibers are magnetized. In micromagnetism, based on a macrospin model, the electron spins of hundreds of tiny elementary magnets can spontaneously align within a small area to form a spontaneous magnetization region, called a magnetic domain. When a uniform magnetic field is applied, the internal magnetic domains align neatly in the same direction; the magnetic fibers are thus magnetized and magnetically enhanced. From a macroscopic perspective, the formation of south and north poles on the magnetic fiber was assumed. However, their orientation depends not only on the external magnetic field but also on its superposition with the magnetocrystalline and shape anisotropy. To simplify the study, the effects of magnetocrystalline and shape anisotropy are ignored. Moreover, the combination of the magnetic fields around the two perpendicular magnetic fibers, calculated with the Comsol software, is presented in Table 2. When the external uniform magnetic field is along the X-axis (H x ), the magnetic field generated by No.1 magnetic fiber is perpendicular to No.1 magnetic fiber, and the magnetic field generated by No.2 magnetic fiber is perpendicular to No.2 magnetic fiber. When the external uniform magnetic field is along the Y-axis (H y ), the magnetic field generated by No.1 magnetic fiber is parallel to No.1 magnetic fiber, and the magnetic field generated by No.2 magnetic fiber is perpendicular to No.2 magnetic fiber. When the external uniform magnetic field is along the Z-axis (H z ), the magnetic field generated by No.1 magnetic fiber is perpendicular to No.1 magnetic fiber, and the magnetic field generated by No.2 magnetic fiber is parallel to No.2 magnetic fiber. Mesh independence verification In the simulation, the fibrous structure used a hexahedral mesh, refined on the fiber surfaces. A mesh dependence verification with six sets of mesh numbers was carried out to eliminate the influence of the mesh number on the accuracy of the numerical simulation. The corresponding results of the mesh density analysis are shown in Table 3. With increasing mesh number, the pressure drop of the fibrous structure first increased and then leveled off. When the mesh number of the fibrous structure rose from 1.397 to 2.713 million, the pressure drop changed by 3.627%.
Relative to the 2.713-million-cell mesh, the pressure drops obtained with the other meshes, Mesh1, Mesh2, Mesh3, and Mesh5, differed by 24.84%, 18.34%, 9.801%, and 1.635%, respectively. Considering both the mesh numbers and the relative errors, the mesh number of the fibrous structure was set at 1.397 million. Figure 3 shows the comparison of the pressure drop between the numerical simulation and the empirical formulas. The pressure drop was linearly proportional to the inlet velocity. The difference in the pressure drop increased with increasing inlet velocity, consistent with the Darcy-equation results calculated using four empirical formulas. [31][32][33][34] The pressure drop in the numerical simulation lay between the Darcy-equation values calculated using the Kuwabara and Happel empirical formulas. When the inlet velocity was 0.1 m/s, the error between the numerical simulation and the Darcy equation calculated with the Kuwabara empirical formula was 8.5%. Filtration efficiency verification The filtration efficiency of a filter medium can be obtained in terms of its thickness (L), solid volume fraction (α), fiber diameter (d f ), and total single-fiber efficiency (SFE). 2,8,35 The total SFE, E ∑ , is the sum of the SFEs due to Brownian diffusion (E D ), interception (E R ), 11 and inertial impaction (E I ). 9 The filtration efficiencies obtained from the numerical simulation and the empirical formulas are compared in Figure 4. The filtration efficiency increased as the number of inlet particles increased. When the number of inlet particles was 500, the numerical simulation of the filtration efficiency was consistent with the empirical formulas. Figure 4 also shows that the filtration efficiency increased with particle diameter when d p > 1.0 μm. Despite some deviations, the errors between the numerical simulation and the empirical formulas were within 10%. Moreover, a quantitative comparison was made to verify the UDF of the magnetic force, and the result was within the acceptable range. 36 At the same time, the locations of the magnetic attraction zones and the magnetic repulsion zones on the magnetic fiber in the simulation results were the same as those previously reported. 21,22 Results and discussion The interaction between orthogonal magnetic fibers under Situation 1 Figure 5 shows the interaction between orthogonal magnetic fibers under Situation 1. No.2 magnetic fiber inhibited the capture of Fe-based fine particles by No.1 magnetic fiber, and the reduction in filtration efficiency was approximately 12%-20%. No.1 magnetic fiber also inhibited the capture of Fe-based fine particles by No.2 magnetic fiber, and the reduction in filtration efficiency was approximately 0%-10%. These results show that the inhibitory effect of No.2 magnetic fiber is greater than that of No.1 magnetic fiber. According to the trajectory diagram of the Fe-based fine particles, there are two magnetic attraction zones and two magnetic repulsion zones around No.1 magnetic fiber; the magnetic attraction zones are located on the windward and leeward sides of the magnetic fiber. Because the magnetic fields around the two magnetic fibers are the same, the force fields they generate are also the same.
The rightward magnetic attraction force generated by No.2 magnetic fiber enhanced the magnetic force acting on the Fe-based fine particles on the windward side of No.1 magnetic fiber, which disordered the particle trajectories before capture by the magnetic fiber. However, the rightward magnetic attraction force generated by No.2 magnetic fiber also accelerated the flow of the Fe-based fine particles, which shortened the time during which they could be captured by No.1 magnetic fiber. Therefore, No.2 magnetic fiber has an inhibitory effect on No.1 magnetic fiber. Meanwhile, No.2 magnetic fiber has a stronger inhibitory effect than No.1 magnetic fiber under the same magnetic field. The reason is that, after particles were captured by No.1 magnetic fiber, the number of particles decreased, so the number of particles reaching No.2 magnetic fiber that had been affected by No.1 magnetic fiber was reduced. The number of particles affected by No.2 magnetic fiber was larger than that affected by No.1 magnetic fiber, so the inhibition was more prominent. The interaction between orthogonal magnetic fibers under Situation 2 Figure 6 shows the interaction between orthogonal magnetic fibers under Situation 2. When the saturation magnetization of the magnetic fibers was 79,500 A/m, No.2 magnetic fiber inhibited the capture of Fe-based fine particles by No.1 magnetic fiber, and the reduction in filtration efficiency was approximately 27%-36% (Figure 6(a)). When the saturation magnetization was 7950 A/m, the reduction was approximately 0%-34% (Figure 6(a)). The degree of inhibition was therefore affected by the saturation magnetization of the magnetic fiber. As shown in Figure 7(a), the inhibition of particle capture may be attributed to the leftward magnetic repulsion force generated by No.2 magnetic fiber, which weakened the capture. When the saturation magnetization was 79,500 A/m, No.1 magnetic fiber inhibited the capture of Fe-based fine particles by No.2 magnetic fiber in interval I but promoted capture in interval II (Figure 6(b)). As shown in Figure 7(b), the inhibition of particle capture is attributed to the leftward magnetic attraction force generated by No.1 magnetic fiber, which enhanced the magnetic repulsion force generated by No.2 magnetic fiber on the windward side. In addition, the magnetic attraction force acting on the Fe-based particles was small as the particles passed through the attraction zones of No.2 magnetic fiber; the small particles followed the flow field well, which increased the difficulty of capture by No.2 magnetic fiber. The promotion occurs because the magnetic force acting on the particles increases: the leftward magnetic attraction force generated by No.1 magnetic fiber could offset a portion of the flow drag force acting on the Fe-based fine particles as they passed through the attraction zones of No.2 magnetic fiber. The longer duration of passage through the attraction zones of No.2 magnetic fiber reduced the difficulty of capturing particles. At a saturation magnetization of 7950 A/m, the interaction between orthogonal magnetic fibers was not apparent (Figure 6(c)). Figure 7 shows the trajectory diagram of Fe-based fine particles captured by the magnetic fibers when a uniform magnetic field was applied along the Y-axis. Figure 7(a) shows the magnetic attraction zone around No.1 magnetic fiber. The interaction between orthogonal magnetic fibers shifted the trajectories of the Fe-based fine particles captured by No.1 magnetic fiber and reduced the deposition area.
Moreover, the trajectories of the larger particles on the windward side of No.1 magnetic fiber were also more disordered. The magnetic field around No.2 magnetic fiber greatly influenced the magnetic field of No.1 magnetic fiber. The reason is that the magnetic field generated by No.2 magnetic fiber lies in the same plane as the external uniform magnetic field; the two magnetic fields can be superimposed vectorially in a plane perpendicular to the magnetic fiber axis, strengthening the surrounding magnetic field. Figure 7(b) shows the two magnetic attraction zones and two magnetic repulsion zones around No.2 magnetic fiber. Regardless of the particle diameter, the interaction between orthogonal magnetic fibers did not change the trajectories of the Fe-based fine particles captured by No.2 magnetic fiber or the location of the deposition points. Therefore, the shape of the "cavity" (i.e. the particle-free area formed around the magnetic fiber by the repulsive force as the particles pass the magnetic fiber) did not change. The magnetic field generated around No.1 magnetic fiber only weakly influenced No.2 magnetic fiber, because the magnetic field generated by No.1 magnetic fiber and the external uniform magnetic field cannot be superimposed vectorially in a plane perpendicular to the magnetic fiber axis. The interaction between orthogonal magnetic fibers under Situation 3 Figure 8 shows the interaction between orthogonal magnetic fibers under Situation 3. When the saturation magnetization of the magnetic fibers was 79,500 A/m, No.2 magnetic fiber inhibited the capture of Fe-based fine particles by No.1 magnetic fiber (Figure 8(a)). The filtration efficiency of the Fe-based fine particles captured by No.1 magnetic fiber was 0 when d p ⩾ 1.5 μm. As shown in Figure 9(a), the rightward magnetic attraction force generated by No.2 magnetic fiber changed the shape of the "cavity" and the location of the deposition points. At the same time, the rightward magnetic attraction force accelerated the flow of the Fe-based fine particles and reduced the time during which they passed through the attraction zone of No.1 magnetic fiber. Therefore, the filtration efficiency of the Fe-based fine particles captured by No.1 magnetic fiber was reduced. When the saturation magnetization was 7950 A/m, No.2 magnetic fiber had the same effect on No.1 magnetic fiber despite the smaller saturation magnetization of the magnetic fiber (Figure 8(b)). The reason is that the attraction force in the attraction zone of No.1 magnetic fiber is perpendicular to the direction of the flow field, and the deposition area of No.1 magnetic fiber is small, making it susceptible to the influence of the flow field; moreover, the rightward magnetic attraction force generated by No.2 magnetic fiber accelerated the flow of the Fe-based fine particles. When the saturation magnetization of the magnetic fibers was 79,500 A/m, No.1 magnetic fiber promoted the capture of Fe-based fine particles by No.2 magnetic fiber, and the increase in filtration efficiency was approximately 3.5%-34% (Figure 8(c)). When the saturation magnetization was 7950 A/m, the increase was approximately 0%-25% (Figure 8(c)). The magnitude of the increase was affected by the saturation magnetization of the magnetic fiber. As shown in Figure 9(b), the rightward magnetic repulsion force generated by No.1 magnetic fiber enhanced the capture of the Fe-based fine particles by No.2 magnetic fiber on the windward side.
Moreover, the decrease in the number of particles captured by No.1 magnetic fiber increased the probability of capture by No.2 magnetic fiber. Figure 9 shows the trajectory diagram of Fe-based fine particles captured by the magnetic fibers when a uniform magnetic field was applied along the Z-axis. Figure 9(a) shows the two magnetic attraction zones and two magnetic repulsion zones around No.1 magnetic fiber. The interaction between orthogonal magnetic fibers changed the "cavity" and the location of the deposition points. The change in the "cavity" became more evident with increasing particle diameter, preventing the deposition of particles on the fiber surface. Figure 9(b) shows the magnetic attraction zone around No.2 magnetic fiber. The interaction between orthogonal magnetic fibers changed the trajectories of the Fe-based fine particles captured by No.2 magnetic fiber and increased the deposition area. The strength of the magnetic field generated by No.2 magnetic fiber was smaller than that of No.1 magnetic fiber. Nevertheless, the magnetic field around No.2 magnetic fiber greatly influenced the magnetic field of No.1 magnetic fiber, contrary to the conclusion drawn from Figure 7. This is because the degree of influence is affected not only by the strength of the magnetic field but also by the flow field. The filtration efficiency of orthogonal fibers Figure 10 shows an efficiency comparison between the magnetic fiber-traditional fiber and traditional fiber-traditional fiber configurations. The alternating use of magnetic and traditional fibers could improve the ability of the orthogonal fibers to capture Fe-based fine particles. When H = 0.5 T and M = 7500 A/m, with the magnetic fiber (H x ) and the traditional fiber used alternately, the filtration efficiency improved by 10% and 9.0%, respectively. However, the process of the magnetic fiber (H z ) capturing Fe-based fine particles was greatly affected by the flow field, and the magnetic field intensity of the fiber under H y was weak. 29 The alternating use of magnetic fibers (H y or H z ) and traditional fibers can improve the capture efficiency of Fe-based fine particles, but this requires a high saturation magnetization. When H = 0.5 T and M = 75,000 A/m, with the magnetic fiber (H y ) and the traditional fiber used alternately, the filtration efficiency improved by 8.5% and 8.4%; when H = 0.5 T and M = 75,000 A/m, with the magnetic fiber (H z ) and the traditional fiber used alternately, the filtration efficiency improved by 14% and 10%. Moreover, the magnetic fiber (H z ) combined with the traditional fiber could significantly improve the filtration efficiency in the range of 0.5-1.5 μm. Conclusions This study investigated the influence of the interaction between orthogonal magnetic fibers on the capture of Fe-based fine particles by the adjacent fibers. When a uniform magnetic field was applied along the X-axis, the two perpendicular magnetic fibers inhibited each other's capture of Fe-based fine particles, and the inhibitory effect of No.2 magnetic fiber was greater than that of No.1 magnetic fiber. When a uniform magnetic field was applied along the Y-axis, No.2 magnetic fiber inhibited the capture of Fe-based fine particles by No.1 magnetic fiber, whereas No.1 magnetic fiber could have different effects on the capture of Fe-based fine particles by No.2 magnetic fiber depending on the particle diameter and the saturation magnetization.
When a uniform magnetic field was applied along the Z-axis, No.2 magnetic fiber inhibited the capture of Fe-based fine particles by No.1 magnetic fiber, whereas No.1 magnetic fiber promoted the capture by No.2 magnetic fiber. Regardless of the combination sequence and combination type of the magnetic fields, the magnetic field around No.2 magnetic fiber greatly influenced the magnetic field of No.1 magnetic fiber. Therefore, the magnetic filter material is best woven with magnetic fibers and traditional fibers used alternately, to reduce the effects of the interaction between magnetic fibers on the capture of Fe-based fine particles. Declaration of conflicting interests The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Isolation and characterization of Pyricularia oryzae isolated from lowland rice in Sarawak, Malaysian Borneo Aims: Rice blast disease caused by Pyricularia oryzae is one of the major biotic diseases of rice in Sarawak, Malaysian Borneo. This study aims to isolate and characterize the rice blast fungus obtained from infected leaves collected from four different divisions in Sarawak, viz, Miri, Serian, Sri Aman, and Kuching. Methodology and results: Twelve isolates were successfully obtained and pre-identified as P. oryzae by the morphological characteristics of their spores, followed by verification through internal transcribed spacer (ITS) sequencing. The isolates were evaluated for morphological characteristics, growth rate and sporulation rate on two types of media, filtered oatmeal agar (FOMA) and potato dextrose agar (PDA). Morphological characterization showed that the colony surface of the different isolates varied from smooth and fluffy to rough and flattened mycelia; some showed concentric rings, and some had aerial mycelia. The growth rate and sporulation rate of each isolate varied with the type of medium used. Most of the isolates grew faster on PDA than on FOMA but produced a higher number of spores on FOMA than on PDA. Conclusion, significance and impact of study: This preliminary study showed that there were variations, based on morphological and physiological characterization, among the different isolates collected in Sarawak, Malaysian Borneo. This study is the first step towards understanding variation in the population of P. oryzae from Sarawak. INTRODUCTION Rice (Oryza sativa L.) is a crucial food crop that is widely grown to feed half of the world's population. More than 90% of rice is cultivated and consumed in Asia (Khush, 2005; Talbot and Wilson, 2009; Global Rice Science Partnership, 2013). In Malaysia, rice production caters for approximately 65% of the population's demand. As a result, Malaysia still depends on imported rice to meet the total demand. Rice production in Malaysia needs to be increased to reach self-sufficiency and to meet the demand of the rapidly growing population (Abdul Rahim et al., 2017). Increasing rice production is continually challenged by rice diseases, one of which is rice blast. Rice blast is recognized as one of the major biotic stresses and can lead to significant yield losses of 10% to 30% each year globally (Skamnioti and Gurr, 2009; Zhou, 2016). In Malaysia, rice yield loss due to rice blast can reach up to 50% (Gianessi, 2014; Elixon et al., 2017). Rice blast disease is caused by the filamentous ascomycete fungus Magnaporthe oryzae (T.T. Hebert) M. E. Barr (anamorph P. oryzae Sacc.) (Silva et al., 2009; Talbot and Wilson, 2009). This fungus can infect all stages of rice development and different parts of the rice plant: leaves, stems, nodes and panicles (Talbot and Wilson, 2009). The lesion is typically diamond shaped with a grayish center and brown margin. Under favorable conditions, the lesions can enlarge rapidly and tend to coalesce, leading to plant death (Wang et al., 2014). Breeding blast-resistant varieties is a promising method of rice blast management (Ashkani et al., 2015). However, the resistance might eventually be overcome by P. oryzae due to its genetic diversity and ability to recombine (Scheuermann et al., 2012).
For example, the rice blast resistant cultivar MR219 (Hussain et al., 2012) in Peninsular Malaysia had its resistance broken down, which led to a rice blast disease outbreak (Abed-Ashtiani et al., 2016). It is a constant challenge for breeders in Peninsular Malaysia to breed new resistant varieties. The same phenomenon is expected to be observed in Sarawak (Malaysian Borneo), a new 'rice bowl' state intended to increase rice production in Malaysia. Surveys from 2009 until 2012 in Sarawak showed that more than 50% of the surveyed rice fields had moderate to high disease severity (Lai and Eng, 2011; Lai and Eng, 2013; Lai, 2016). Knowledge of the genetic variation of P. oryzae could aid in managing rice blast disease. High genetic variation in a population allows more genetic recombination; consequently, the breakdown of disease resistance will be rapid (Scheuermann et al., 2012). In Peninsular Malaysia, there are already four reports on the variation of P. oryzae (Abdul Rahim et al., 2013; Mat Muni and Nadarajah, 2014; Hasan et al., 2016; Abed-Ashtiani et al., 2016). Unfortunately, there is as yet no study on P. oryzae from Sarawak, covering neither its genetic variation nor its pathogenicity. This paper provides a preliminary study on the variation of the rice blast fungus isolated from selected rice fields in Sarawak based on morphological characteristics. Sample collection The rice blast infected leaf samples were collected from different rice fields (smallholders) in four divisions of Sarawak: Miri, Kuching, Serian and Sri Aman, during the planting seasons of 2012 until 2016. Sampling points were decided based on the size of the rice fields; in a one-hectare rice field, five sampling points were designated covering the field. If different rice landraces were planted by smallholders in one field, infected leaf samples were collected separately from each rice landrace. Fungal isolation A spore drop method modified from that described by Choi et al. (1999) was used. Each rice blast lesion on an infected leaf was cut in half, with each half of the lesion retaining a section of healthy tissue on one end. The specimens were surface sterilized with 1% commercial bleach (Clorox ® ) containing 5.25% sodium hypochlorite for 1 min and rinsed 3 times with sterilized distilled water (each lesion was treated separately). Each piece of a lesion was then attached onto the upper lid of a Petri dish containing water agar [WA; 2% agar (agar stick) w/v] with the adaxial side facing the medium. The plates were incubated in a humidity box at room temperature and observed daily for single spore colonies of P. oryzae under a light microscope (ECLIPSE E100LED MV R). Each single spore colony was then picked and transferred onto oatmeal agar (OMA; 15 g of instant oatmeal and 7.5 g agar stick/500 mL). The plates were incubated under dark conditions for 5 days and light conditions for the subsequent days. Alternatively, if spores were observed on a leaf segment but not on WA, the spores were dislodged with 250 µL of sterilized distilled water and spread on a new plate of WA, which was then incubated under light conditions at room temperature. Each single spore colony of P. oryzae was picked and cultured as described above. Molecular identification The universal primer pair Internal Transcribed Spacer (ITS)-1 (5'-TCCGTAGGTGAACCTGCGG-3') and -4 (5'-TCCTCCGCTTATTGATATGC-3') was used for colony PCR (White et al., 1990).
The PCR solution comprised distilled water (ddH2O), 10x PCR Buffer with Mg 2+ (EasyTaq®), 25 mM MgCl2, 10 mM dNTPs, 10 µM ITS-1 and ITS-4, Taq DNA Polymerase (EasyTaq ® ), and a pinch of young fungal mycelium (culture age ranged from 5 to 10 days). PCR amplification was performed using a T100™ Thermal Cycler (Bio-Rad Laboratories, USA) with the following profile: initial denaturation at 94 °C for 2 min; followed by 35 cycles of denaturation at 94 °C for 30 sec, annealing at 50 °C for 30 sec, and extension at 72 °C for 1 min; and a final extension at 72 °C for 5 min. The PCR products were visualized on 1% agarose gel, purified using the QIAquick® Gel Extraction Kit and sent to Apical Scientific Sdn. Bhd. for sequencing. The sequences were searched (BLASTn) against the sequences in GenBank (NCBI). Species verification was determined through the percentage identity and expectation value (E-value). Morphological characterization The morphological characterization method described by Mohammadpourlima et al. (2017) was adapted and modified. Two types of media were used for morphological characterization, viz, filtered oatmeal agar (FOMA; 15 g instant oatmeal and 7.5 g agar stick/500 mL) and potato dextrose agar (PDA; brand MERCK). FOMA was prepared in a similar manner to OMA except that the oatmeal flakes and clumps were filtered out (for ease of scoring). Both media were then sterilized in an autoclave at 120 psi for 15 min. For all isolates, 10-day-old cultures were used for subculture. An inoculum plug (5 mm Ø) was cut from the edge of actively growing mycelia and transferred (mycelium-side down) onto the center of a Petri dish containing 20 mL of medium. For each medium, FOMA and PDA, there were ten replicates per isolate. The Petri dishes containing the fungal plugs were then incubated in the dark for 5 days, followed by light conditions for the subsequent days, at room temperature. The morphological characteristics of the colonies (form, elevation, margin, color and surface) were described (Microbiology, 2014). Colony growth was measured daily until the 10th day and the growth rate was calculated. The sporulation rate was then recorded for each isolate based on three randomly selected plates. Statistical analyses were carried out using SPSS software. ANOVA (analysis of variance) followed by Tukey's HSD post hoc test was used to compare the differences in growth rate and sporulation rate between isolates in each medium at the p < 0.05 significance level. The Mann-Whitney test was used to compare the growth rate and sporulation rate of each isolate on the different media at the p < 0.05 significance level. This analysis excluded isolates POMI2, POSA3 and POS1, which did not produce spores on either medium. Isolation and identification In the rice fields of this study, the rice blast symptom (Figure 1a) observed on leaves was recognized as the typical blast disease symptom, diamond shaped with a grayish center and brown margin (Wang et al., 2014). In total, 12 isolates of P. oryzae were successfully isolated from the four divisions (Table 1): one isolate from Miri, three each from Serian and Sri Aman, and five from Kuching. Preliminary identification of the isolates was done based on the morphology of the spores before they were verified through a molecular based method. The spores (Figure 1b) obtained were pear-shaped with a narrowed apex and broad base, hyaline in color, with two septa and three cells.
The spores were borne along the conidiophore (Figure 1c), with the base of each spore attached at the tip of a branch of the conidiophore. On average, one conidiophore can hold more than 10 spores (n=3). The characteristics of the observed spores for the 12 isolates obtained (Figure 2) were in agreement with descriptions from previous studies (Ou, 1987; TeBeest et al., 2007), and these characteristics allowed the pre-identification of the different isolates as P. oryzae. ITS amplification was successful for eight isolates (Table 2). The ITS amplicon size for six isolates was approximately 500 bp, while that for isolates POK3 and POK4 was approximately 300 bp. The BLASTn search verified the eight isolates as P. oryzae based on the percentage identity and expectation value (E-value). The results (Table 2) showed 99-100% identity of the query sequences to the target sequence of M. oryzae in the database, with E-values less than or equal to zero. Morphological characterization The morphological characteristics of all isolates are summarized in Table 3. The isolates were grouped based on morphological similarities (form, elevation, margin, and surface), excluding the criterion of color. There were 7 and 6 groups for FOMA and PDA, respectively. The colony surface of the different isolates varied from smooth and fluffy to rough and flattened mycelia; some showed concentric rings, and some had aerial mycelia. All isolates showed raised elevation (viewed from the side) on both media except for three isolates (POMI2, POS1 and POSA3), which had raised elevation on one medium but flat elevation on the other. Regardless of the medium used, all isolates had a circular form and an entire margin. The front of the colony was grey or light grey for most isolates on both media except for four isolates, while the reverse of the colony was black for most isolates on both media except for three isolates (Table 3). The differences in morphology of the isolates in this study are similar to those reported by Srivastava et al. (2014) and Asfaha et al. (2015). In short, morphological variations were observed among the 12 isolates from the four different divisions. Such variations may suggest that the isolates are genetically different from each other, and it is also possible that the 12 isolates differ in pathogenicity. This assumption is based on the variation in dark pigmentation observed on the colony surfaces of the 12 isolates, as it has been reported that dark pigmentation correlates with pathogenicity (Lujan et al., 2016; Oh et al., 2017). The dark pigmentation of P. oryzae is crucial for penetration into the host and for pathogenicity (Woloshuk et al., 1980; Chida and Sisler, 1987; Wheeler and Greenblatt, 1988). The correlation of pathogenicity with the pigmentation intensity of P. oryzae could be an interesting area for further research. The morphological variations of isolates from different locations were random: no particular variation was specific to one location, that is, isolates were grouped by morphology irrespective of their origin. For instance, the morphology of isolate POMI2 from Miri (northeastern Sarawak) was similar to that of an isolate from Serian division (southwestern Sarawak) on OMA. On PDA, the morphology of the Miri isolate was similar to that of three isolates originating from three different divisions in southwestern Sarawak, respectively. This finding agrees with that of Srivastava et al. (2014).
However, this does not mean that isolates from the same location are genetically unrelated, because a positive correlation between molecular data and the geographical origin of different isolates of P. oryzae has been reported in Peninsular Malaysia (Abed-ashtiani et al., 2016). Figure 1: (a) The arrow points out the rice blast symptom, diamond shaped with a grayish center and brown margin; (b) conidia (spores); (c) conidiophore. Growth rate and sporulation rate Growth rates and sporulation rates are tabulated in Table 4. The growth rate between isolates in each medium varied significantly. Isolate POS1 had a significantly faster growth rate than the other isolates on FOMA, with a mean value of 0.38 cm/day. On PDA, however, POS2 grew significantly faster than the other 11 isolates. Both isolates POS1 and POS2 were approximately 1½ days faster in growth than isolate POS3, which was one of the slowest growing isolates on OMA and the slowest on PDA. Comparison of the growth rate of each isolate between the two media showed that seven isolates grew more rapidly on PDA, and one isolate on FOMA. The results of this study are in agreement with those of Vanaraj et al. (2013). Mean sporulation rates revealed significant differences in sporulation between isolates on each medium. Isolate POS2 produced the highest number of spores on FOMA, whereas isolate POK6 produced the highest number of spores on PDA. Isolate POS3 produced the lowest number of spores on both media. In general, most isolates appeared able to produce a higher number of spores on FOMA than on PDA. Unfortunately, there was an insufficient number of replicates per isolate for each medium to give a reliable statistical analysis. There was a change in the ranking of isolates based on growth rate as well as sporulation rate on the two different media, suggesting that the physiological performance of an isolate may be affected by the medium used. It was also observed that there were very weak and moderate positive correlations between the pathogen growth rate and sporulation rate, r = 0.03 (FOMA) and r = 0.65 (PDA), respectively. The morphological and physiological variations observed in this study may be associated with the nutrients present in each medium. FOMA was made from oat (Avena sp.), a close relative of rice, and it has been reported that oat is a host of P. grisea (Marangoni et al., 2013), a close relative of P. oryzae. The fact that isolates of P. oryzae exhibited different morphology, growth rate and sporulation rate on FOMA from those on PDA might suggest that oat contains comparable host material needed by P. oryzae for growth and sporulation. The effect of the host material in the media can be seen in the induction of sporulation (Su et al., 2012), where all sporulating isolates produced at least twice as many spores on FOMA as on PDA; two of the isolates even produced four times as many spores on FOMA (without statistical evidence). The effect of media on the sporulation of P. oryzae will be studied in the future. CONCLUSION In conclusion, 12 isolates were successfully isolated from infected leaves through the spore drop isolation method. This preliminary study showed that there were variations, based on morphological characters, among the different isolates collected in Sarawak, Malaysian Borneo. There were no morphological characters unique to a specific location. The growth rate and sporulation rate of the different isolates varied on the different media.
This study is the first step towards understanding variation in the population of P. oryzae from Sarawak. It would be interesting to isolate more P. oryzae from different locations to obtain a better representation of the population. Further studies on genetic variation and pathogenicity would also be valuable.
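The growth-rate and sporulation-rate comparisons described in the statistical analysis subsection above (one-way ANOVA with Tukey's HSD between isolates within a medium, and Mann-Whitney between media for each isolate) can be reproduced along the following lines. The sketch uses SciPy and statsmodels rather than SPSS, and the input file, data layout and column names are hypothetical.

# Hypothetical re-implementation of the statistical comparisons described above
# (SPSS was used in the study); file name, column names and layout are assumed.
import pandas as pd
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# One row per replicate plate, with columns: isolate, medium, growth_rate
growth = pd.read_csv("growth_rates.csv")  # hypothetical input file

# One-way ANOVA between isolates within each medium, Tukey HSD post hoc at alpha = 0.05
for medium, sub in growth.groupby("medium"):
    groups = [g["growth_rate"].values for _, g in sub.groupby("isolate")]
    f_stat, p_val = stats.f_oneway(*groups)
    print(f"{medium}: ANOVA F = {f_stat:.2f}, p = {p_val:.4f}")
    if p_val < 0.05:
        print(pairwise_tukeyhsd(sub["growth_rate"], sub["isolate"], alpha=0.05))

# Mann-Whitney U test comparing the two media for each isolate
for isolate, sub in growth.groupby("isolate"):
    foma = sub.loc[sub["medium"] == "FOMA", "growth_rate"]
    pda = sub.loc[sub["medium"] == "PDA", "growth_rate"]
    u_stat, p_val = stats.mannwhitneyu(foma, pda, alternative="two-sided")
    print(f"{isolate}: Mann-Whitney U = {u_stat:.1f}, p = {p_val:.4f}")

The same structure applies to the sporulation-rate comparison by replacing the response column.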
Brucellar reproductive system injury: A retrospective study of 22 cases and review of the literature Objective We aimed to describe the clinical characteristics and prognosis of 22 patients with Brucella-induced reproductive system injury. Methods We assessed 22 patients with reproductive system injury between 2010 and 2018 at The First Affiliated Hospital of Xinjiang Medical University. Results The disease is predominant in men. Male patients had orchitis, erectile dysfunction, prostatitis, and urethral stricture, while female patients had vaginitis and cervicitis. Some patients had laboratory abnormalities and liver injury. Patients received combination therapy of rifampicin and doxycycline; doxycycline combined with levofloxacin or moxifloxacin was administered to patients with rifampicin intolerance. All patients received antibiotic therapy for at least 6 weeks. One patient was lost to follow-up, one patient relapsed because of osteoarthropathy, and one patient had dysuria resulting from chronic prostatitis. The clinical symptoms resolved in the other patients, and the overall patient prognosis was good. Conclusion Clinicians should pay attention to brucellosis-induced reproductive system damage. The two-drug regimen of rifampicin + doxycycline is recommended for these patients. Doxycycline combined with levofloxacin or moxifloxacin should be used in patients with brucellosis-induced reproductive system damage who have rifampicin intolerance. The treatment course should be at least 6 weeks. Introduction Brucellosis is an infectious disease caused directly or indirectly by Gram-negative bacteria of the genus Brucella, and this disease has different degrees of severity in humans and livestock. 1 It is a systemic infectious disease that can affect any organ or system in the body and show different clinical symptoms. Brucella invades the reproductive system mostly in men, mainly causing orchitis, epididymitis, hydrocele, and prostatitis, with orchitis and epididymitis being the most common. Reproductive system injury caused by brucellosis is rare, and it has been reported in only a few cases. [2][3][4][5] This study aimed to describe 22 cases of reproductive system injury caused by Brucella and to present a relevant literature review. Study population There were 801 patients with brucellosis admitted to the First Affiliated Hospital of Xinjiang Medical University from July 2008 to July 2018. Among them, 22 patients had complications involving reproductive system injury. The diagnosis of brucellosis was based on the Guidelines for Diagnosis and Treatment of Brucellosis published by the Ministry of Health of the People's Republic of China. 6 There were 21 patients in the acute stage (course <6 months) and one patient in the chronic stage (course >6 months). Reproductive system complications were mainly diagnosed through clinical signs and symptoms; in men, these include orchitis, which mainly manifests as testicular redness, and in women, hypogastrium pain. The diagnosis was confirmed by imaging. Patients meeting the criteria for both brucellosis and a reproductive system injury were diagnosed with brucellar reproductive system injury.
The exclusion criteria were as follows: (1) patients with a history of immune system disorders or tumors; (2) patients with an immunity disorder; (3) patients with a history of serious diseases or dysfunction of other systems, such as the respiratory, cardiovascular, renal, hepatic, or nervous systems; and (4) patients who had used immunomodulators, immunosuppressive drugs, or corticosteroids for a long time or within the preceding 3 months. The review board at our hospital provided an exemption for this study, and all patient data were de-identified. Patients provided verbal consent for the use of their data. Clinical assessment and definitions We used a self-designed brucellosis patient case questionnaire to retrospectively analyze the data from the 22 patients with reproductive system injury. Data collected from the enrolled patients included demographic characteristics, clinical features, laboratory tests, imaging findings, treatment, and prognosis, and quality control of the investigation was also performed. Descriptive epidemiological methods were used for the retrospective analysis, and summary statistics are presented as the number (%) and the mean ± standard deviation (SD). Patient characteristics Among the 22 patients enrolled in this study, 21 were male and one was female. The average age (±SD) of the patients was 41.91 ± 10.52 years (range, 22-56 years). Most of the patients (n = 19, 86.4%) were farmers who kept cattle and sheep, and one (4.5%) patient denied a history of contact with cattle or sheep. In two (9.1%) patients, the source of exposure was unknown. Detailed epidemiological characteristics are shown in Table 1. Table 2 shows the clinical characteristics of the 22 patients, including common signs and symptoms. The most common symptoms were fever, sweating, anorexia, and weight loss. In men with orchitis, testicular epididymitis, prostatitis, or urethral orifice stricture, the main manifestations were testicular swelling and pain and dysuria. One woman had vaginitis and cervicitis; the main manifestations in women were increased leucorrhea, irregular menstruation, and hypogastrium pain. Laboratory tests In 21 patients, the Rose Bengal Plate Test (RBPT) results were positive, and the standard tube agglutination test results were positive (≥1:100), among which the highest antibody titer was 1:800. Nineteen patients underwent blood culture examination; among them, Brucella melitensis was cultured in four patients. There were 18 (81.8%) patients with an elevated erythrocyte sedimentation rate (ESR) and 16 (72.7%) patients with increased C-reactive protein (CRP); CRP, ESR, and γ-glutamyl transpeptidase levels were the most notable abnormalities. Table 3 shows the results of the laboratory examinations. Imaging examination All patients underwent abdominal ultrasonography; among them, five (22.7%) patients had hepatomegaly, while the remaining patients had a normal sized liver. Six patients underwent lymph node ultrasonography, and four (66.7%) showed lymph node enlargement, mainly in the neck, groin, and axillary nodes. Treatment and prognosis The average length of hospital stay was 9.68 ± 4.20 days (range, 2-22 days). Patients received rifampicin and doxycycline, or doxycycline combined with levofloxacin or moxifloxacin in cases of rifampicin intolerance. One patient underwent spermatic cord block, and one patient underwent right testicular mass resection. All patients received antibiotic therapy for at least 6 weeks.
One patient was lost to follow-up, and two patients had a poor prognosis: one had recurrence because of osteoarthropathy, and one had urinary dysfunction resulting from chronic prostatitis. The clinical symptoms in the other patients resolved, and the overall prognosis was good.

Discussion
Brucellosis is a zoonotic allergic disease caused by Brucella and is a natural focal infectious disease. 7 Human brucellosis can cause multiple types of organ damage after infection. Reproductive system involvement is one of the most common local manifestations of human brucellosis. 8 The most common reproductive complication of brucellosis is testicular epididymitis, which accounts for 2% to 20% of brucellosis patients. 9 Through this retrospective analysis, we found that the incidence of reproductive complications in brucellosis was 2.7% (22/801), which is consistent with the data in the literature (1.4% to 25%). 10,11 However, the incidence rate is lower than that reported in the Chinese literature, 4,5 which may be related to the characteristics of the population that was investigated, the diagnostic criteria that were used, the diagnostic procedures that were used, and the type of research that was performed (such as prospective or retrospective analysis). Humans are generally susceptible to the disease, which mainly occurs in occupational exposure groups, such as herdsmen, veterinarians, laboratory workers, and slaughterhouse workers. In these data, although 19 patients had a clear history of contact with cattle and sheep, this information is often neglected in the process of epidemiological data collection, which may lead to misdiagnosis and missed diagnosis. A published study 3 showed that even in endemic areas, it is difficult for doctors to diagnose brucellosis when the patient only shows orchitis. Through this study, we found that most of the patients came from farming and pastoral areas. The number of male patients was higher than that of female patients (there was only one female patient). Age was predominantly between 40 and 56 years. Cases of brucellosis are more common from May to August each year, and the peak incidence occurs in July, which is consistent with the peak of non-reproductive system injury. 12 Clinical manifestations and complications include fever, which is most typical, and sweating, fatigue, joint pain, anorexia, and weight loss, which are also common in reproductive system complications. Headache, cough, and muscle soreness also account for a certain proportion. In this study, the incidence of orchitis was 40.9%, which was higher than the 31.8% reported by Erdem et al. 13 The incidence of right orchitis was higher than that of left orchitis. The incidence of unilateral and bilateral testicular and epididymal involvement was 36.4%, which was lower than the 58.0% reported by Erdem et al. 13 The incidence of orchitis with hydrocele was 22.7%, and the incidence of right orchitis combined with hydrocele was higher than that of left orchitis. This study also found that brucellosis affects the male reproductive system beyond orchitis: erectile dysfunction accounted for 68.2%. Prostatitis and urethral stricture can also be involved, and they are mainly manifested as an enlarged prostate, tenderness, inflammatory or purulent secretions that can be seen after compression, and difficulty in urination.
Female patients mainly had vaginitis and cervicitis, which manifest as increased leucorrhea, irregular menstruation, and hypogastrium pain. These are important clinical features for identifying reproductive complications of brucellosis. CRP and ESR were the laboratory values most commonly increased, and some degree of liver injury was seen, mainly manifesting as an elevated γ-glutamyl transpeptidase level (81.8%). Among the 22 patients, five patients had splenomegaly, six patients underwent lymph node ultrasonography, and four patients had lymphadenopathy. The main manifestations were enlargement of the cervical, inguinal, and axillary lymph nodes. All 22 patients underwent chest X-ray examination; there was one case of bronchitis and one case of slight inflammation in the lower lingular segment of the left upper lobe. In this study, we found that testicular B-mode ultrasonography mainly showed unilateral or bilateral testicular enlargement, soft tissue edema, and thickening of the hydrocele and spermatic cord. The testicular parenchyma echo was uneven, and some patients had bilateral epididymal enlargement. All patients received antibiotic therapy for at least 6 weeks. Antibiotic treatment included doxycycline combined with rifampicin, and rifampicin-intolerant patients were treated with doxycycline combined with levofloxacin or moxifloxacin. One patient underwent spermatic cord occlusion, and one patient underwent right testicular mass resection. Among the 22 patients, one patient was lost to follow-up. Among the remaining 21 patients, eight patients were followed for >5 years, 12 were followed for 2 to 3 years, and one was followed for 10 months. Two patients had a poor prognosis: one had recurrence because of osteoarthropathy, and one had urinary dysfunction resulting from chronic prostatitis. The clinical symptoms in the other patients resolved, and their overall prognosis was good.

Conclusion
The clinical manifestations of brucellosis with reproductive system complications were varied, and the incidence was low. Clinicians should consider brucellosis in patients in endemic areas with fever of unknown origin, testicular enlargement, prostatitis, urethral stricture, or cervicitis. Blood biochemistry and other related examinations should be assessed so that a timely diagnosis, early treatment, and rational drug use can be achieved, thereby avoiding misdiagnosis or a missed diagnosis. The aim is to improve the cure rate of brucellosis-induced reproductive system injury and to reduce the disability rate.

Consent
This study received verbal consent from the patients.

Declaration of conflicting interest
The authors declare that there is no conflict of interest. The review board at our hospital provided an exemption for this study. Data and details used in this study have been de-identified such that the identity of the patients may not be ascertained in any way.

Funding
This study was supported by Key Research and Development Projects from the Xinjiang Uygur Autonomous Region (No. 2016B03047-1).

Supplemental Material
Supplemental material for this article is available online.
2020-06-20T13:06:39.283Z
2020-06-01T00:00:00.000
{ "year": 2020, "sha1": "cfd049c7ad91ada192cb4a1570042b06d0177fe6", "oa_license": "CCBYNC", "oa_url": "https://journals.sagepub.com/doi/pdf/10.1177/0300060520924548", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "f7baf0b9b22b38a154da04ec15219c6f1b55fdd1", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
18316010
pes2o/s2orc
v3-fos-license
Heterogeneity of transcription factor binding specificity models within and across cell lines Complex gene expression patterns are mediated by the binding of transcription factors (TFs) to specific genomic loci. The in vivo occupancy of a TF is, in large part, determined by the TF's DNA binding interaction partners, motivating genomic context-based models of TF occupancy. However, approaches thus far have assumed a uniform TF binding model to explain genome-wide cell-type–specific binding sites. Therefore, the cell type heterogeneity of TF occupancy models, as well as the extent to which binding rules underlying a TF's occupancy are shared across cell types, has not been investigated. Here, we develop an ensemble-based approach (TRISECT) to identify the heterogeneous binding rules for cell-type–specific TF occupancy and analyze the inter-cell-type sharing of such rules. Comprehensive analysis of 23 TFs, each with ChIP-seq data in four to 12 different cell types, shows that by explicitly capturing the heterogeneity of binding rules, TRISECT accurately identifies in vivo TF occupancy. Importantly, many of the binding rules derived from individual cell types are shared across cell types and reveal distinct yet functionally coherent putative target genes in different cell types. Closer inspection of the predicted cell-type–specific interaction partners provides insights into the context-specific functional landscape of a TF. Together, our novel ensemble-based approach reveals, for the first time, a widespread heterogeneity of binding rules, comprising the interaction partners within a cell type, many of which nevertheless transcend cell types. Notably, the putative targets of shared binding rules in different cell types, while distinct, exhibit significant functional coherence. Transcriptional regulation is mediated by the binding of transcription factors (TFs) to specific DNA elements in the genome (Jacob and Monod 1961;Busby and Ebright 1994). While the in vitro binding specificity of many human TFs has been determined, it is well recognized that the in vitro binding specificity of a TF is not sufficient to explain its condition-specific in vivo binding (Zinzen et al. 2009;Yáñez-Cuna et al. 2012). This realization has spurred investigations of additional determinants of in vivo binding, such as heterogeneity of a TF's binding motif , broader sequence context and interposition dependence (Mathelier and Wasserman 2013), homotypic clusters of binding sites (Dror et al. 2015), cooperative binding of the TF with its partners (Wang et al. 2006;Liu et al. 2016), condition-specific chromatin context (Wang et al. 2006;Heintzman et al. 2009;Gheldof et al. 2010;Kumar and Bucher 2016), and local DNA properties (Dror et al. 2015;Kumar and Bucher 2016). While, overall, both local genomic and epigenomic features are deemed important in determining in vivo occupancy of a TF, recent reports suggest that in vivo binding of a TF can be accurately predicted based solely on the genomic signatures near the binding site without relying on the epigenomic context (Arvey et al. 2012;Dror et al. 2015); this is consistent with additional recent reports, showing that the epigenome itself is encoded by the genomic context (Benveniste et al. 2014;Whitaker et al. 2015). Prior models of in vivo TF binding have shown that, counterintuitively, the genomic context of a binding site effectively encodes the condition-specific in vivo binding specificity (Arvey et al. 2012;Mathelier and Wasserman 2013). 
This can be explained by the substantial plasticity of a TF's interaction with other TFs and the modular nature of TF binding co-operativity (Frietze and Farnham 2011). The availability of specific combinations of interacting TFs can then guide in vivo binding to specific loci where the binding sites of the interacting TFs are present in close proximity to each other, along with the availability of corresponding TFs . Previous sequence-based modeling of in vivo TF binding was performed in a cell-type-specific fashion (Arvey et al. 2012;Mathelier and Wasserman 2013). These cell-type-specific models exhibit substantial inter-cell-type heterogeneity, which is expected, given the variation in the availability of the potentially interacting TFs. In particular, Arvey et al. (2012) explicitly modeled potential interactions of the primary TFs with multiple additional cofactors, while general sequence properties were used as features by Mathelier and Wasserman (2013). These previous approaches, however, build a single model for a cell type, thus implicitly assuming a homogeneous cell-type-specific TF binding model. As such, previous models have not investigated intra-cell-type model heterogeneity. Intra-cell-type TF binding heterogeneity is expected for the same reasons as inter-cell-type heterogeneity. Moreover, in many instances, a binding specificity model trained in one cell type can predict a subset of in vivo binding in another cell type (Arvey et al. 2012), suggesting that binding models, or parts thereof, are shared across cell types. The motivation of the current study is to evaluate the heterogeneity of sequence-based, cell-type-specific, in vivo TF binding models and the extent to which binding rules (submodels) are shared across cell types. We have developed an ensemble model-based approach (TRISECT) to reveal both cellspecific and cell-independent rules for the in vivo TF binding. Application of TRISECT to 23 TFs, each with genomewide in vivo binding data in four to 12 cell types strongly suggests that the celltype-specific binding rule for a TF consists of multiple submodels, a subset of which are shared across cell types, and points to shared functional underpinnings. This refinement to our understanding of the genomic context of in vivo binding specificity can facilitate future investigations of transcriptional regulation and its genetic determinants. TRISECT-Ensemble model of TF binding and the clustering of submodels across cell types An illustration of the TRISECT analysis pipeline is presented by Figure 1A, and a brief description of the pipeline is provided below (for additional details see Methods). Overview As the first step, we developed an ensemble model (EMT) to discriminate a TF's in vivo bound genomic loci (foreground) from nonbound sites (background), balancing model complexity (number of submodels in the ensemble) against the cross-validation classification accuracy. Given a set of genome-wide loci, bound by a specific TF, we first identified sets of foreground and background (control) sequences. The foreground set consisted of 100-bp sequences centered at the ChIP-seq peak. For stringent background sequences, as done previously (Arvey et al. 2012), we used 100-bp regions ∼200 bp away from the peak location. We considered a variety of feature sets for discrimination (see below). The EMT model was trained using the Adaboost method where each submodel is a decision tree (Fig. 1B) built from a bootstrap sample (Friedman et al. 2000;Friedman 2002Friedman , 2008. 
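The Methods later state that this ensemble is fit with the Adaboost framework of the R gbm package. As a rough illustration only (not the authors' pipeline), the sketch below shows how such an ensemble of decision-tree submodels could be trained; the data frame train_df, with a 0/1 column bound and one column per sequence feature, is a hypothetical placeholder, and the parameter values simply mirror those reported later in the Methods.

```r
library(gbm)

# Hypothetical input: one row per 100-bp sequence, a 0/1 response 'bound'
# (foreground vs. background), and one column per feature (k-mer frequency
# or motif score). A small simulated stand-in is used here.
set.seed(7)
train_df <- data.frame(bound = rbinom(2000, 1, 0.5),
                       matrix(rnorm(2000 * 20), nrow = 2000))

emt <- gbm(bound ~ .,
           data              = train_df,
           distribution      = "adaboost",   # exponential (Adaboost-style) loss
           n.trees           = 200,          # number of tree submodels in the ensemble
           interaction.depth = 15,           # maximum depth of variable interactions
           n.minobsinnode    = 30,           # minimum observations per terminal node
           shrinkage         = 0.05,         # learning rate
           bag.fraction      = 0.5)          # each tree is built from a random subsample

# Ensemble-level relative influence of each feature (candidate motif/k-mer).
head(summary(emt, plotit = FALSE))
```

Each fitted tree in the ensemble plays the role of one submodel, and the per-feature relative influence returned by summary() is the kind of importance vector used downstream to compare and cluster submodels.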
Next, given a TF's EMT models for all cell types, each cell-type-specific submodel was represented by a point in a d-dimensional space, with d corresponding to the number of relevant features. We constructed clusters of the data points for a TF (representing the submodels across all cell types), using k-nearest neighbors algorithm (k-NN). The submodels within a cluster represent binding rules that are similar within or across the cell types.

[Figure 1 legend. (A) Illustration of the TRISECT analysis pipeline. Colors indicate different binding rules or submodels and rows (a-c) represent different cell types. Green, pink, and yellow colors indicate cell-type-specific submodels. Each ensemble model (EMT) is represented by a bucket of submodels (top right). Stars and diamonds with the same color denote corresponding submodels and data points after transformation into reduced feature space, respectively. Each submodel is represented by a decision tree. The submodels across cell types are clustered. Cyan is common between cell types a and b, light brown is common between cell types b and c, and purple is common across all three cell types. (B) An example submodel taken from the Interaction model for CEBPB-GM12878. Each node in the tree is labeled with the TRANSFAC id, corresponding gene name, and the threshold at which the feature is split. Two binding rules are highlighted indicating TF binding and no TF binding. (C,D) Same color is used to denote the models using the same features. (C) Comparison of accuracy between all pairs of feature sets. Nodes are labeled with feature type and mean accuracy. Edges are labeled with ">" (greater) or "<" (less) sign and two-sided Wilcoxon P-value. (D) Accuracy (ROC-AUC) distribution of EMT for K-mer/K-merRC/Interaction (1 k) and those of kmer-SVM models.]

EMT feature sets
We considered three feature sets for the 100-bp foreground and background sequences. The first feature set, K-mer, was composed of 6-mer frequencies within each 100-bp sequence (total, 4096 features). The second set, K-merRC, consisted of unified 6-mers and their reverse complement frequencies (total, 2080 features). The third feature set included the binding scores for 981 vertebrate TF motifs from the TRANSFAC 2011 database. We defined the models built from the third feature set as the Interaction model, as the features represent potential TFs that might contribute to the binding of the reference TF (the TF for which EMT was built). For Interaction models, we used four thresholds for motif match in the PWMSCAN tool, where a threshold denotes the background match frequency: one hit in every 1 kb, 2 kb, 5 kb, and 10 kb.

EMT training
We applied TRISECT to 23 TFs, each with ChIP-seq data in four to 12 cell types (a total of 135 TF-cell pair EMTs; Supplemental Table S1). A TF was included in this study if (1) the TF has narrow-peak data for at least four cell lines with at least 4000 bound sites in each cell line, and (2) the TF has an established position weight matrix (PWM) in the TRANSFAC 2011 database. For other information about each TF including family names, see Supplemental Figure S1 for TF web-logos and Supplemental Table S2. EMTs were trained using 75% of the full data set and tested on the remaining 25%. Model details such as the number of submodels, model size, etc., are provided by Supplemental Table S3.
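As a minimal sketch of the K-mer and K-merRC feature sets described above (illustrative base R, not code from TRISECT itself), the snippet below counts 6-mers in one sequence and then merges each 6-mer with its reverse complement into the 2080 unified groups; all function and variable names are assumptions for the example.

```r
bases     <- c("A", "C", "G", "T")
all_6mers <- apply(expand.grid(rep(list(bases), 6), stringsAsFactors = FALSE),
                   1, paste0, collapse = "")          # 4^6 = 4096 possible 6-mers

# Reverse complement of a DNA string.
revcomp <- function(s) {
  paste(rev(strsplit(chartr("ACGT", "TGCA", s), "")[[1]]), collapse = "")
}

# K-mer features: 6-mer frequencies within one sequence (length-4096 vector).
kmer_counts <- function(seq, k = 6) {
  seq   <- toupper(seq)
  n     <- nchar(seq) - k + 1
  kmers <- substring(seq, 1:n, k:(n + k - 1))
  table(factor(kmers, levels = all_6mers))
}

# K-merRC features: unify each 6-mer with its reverse complement
# (the 4096 k-mers collapse into 2080 groups, since 64 are palindromic).
kmer_rc_counts <- function(seq, k = 6) {
  counts <- kmer_counts(seq, k)
  rc     <- vapply(all_6mers, revcomp, character(1))
  group  <- ifelse(all_6mers <= rc, all_6mers, rc)    # canonical group label
  tapply(as.numeric(counts), group, sum)
}

example_seq <- paste(sample(bases, 100, replace = TRUE), collapse = "")
head(sort(kmer_rc_counts(example_seq), decreasing = TRUE))
```

Unifying a k-mer with its reverse complement reflects the fact that TF binding occurs on double-stranded DNA, which is exactly the reasoning given for K-merRC outperforming K-mer in the text that follows.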
Each EMT includes multiple decision trees, and each path from root to leaf in an estimated decision tree submodel captures one binding rule that asserts how a combination of motifs and their binding affinities contribute to the reference TF's binding. As an illustrative example, Figure 1B shows an arbitrarily selected submodel of CEBPB in the GM12878 cell line. Two of the binding rules are "presence of IRF8 with score >2.08 and presence of NFATC4 with score <2.3"-when these rules are met, the reference TF, CEBPB, is likely to bound. Whereas "presence of IRF8 with score >2.08 and presence of NFATC4 with score >2.3" hinders CEBPB binding. Supplemental Note 1 and Supplemental Figure S2 include further interpretation of a sample submodel (decision tree), a summary of how the reference TF's motifs are distributed among the submodels, and a discussion of model robustness for various parameter choices. EMT performance Model accuracy was quantified using area under the receiver operating curve (ROC-AUC) on the 25% test set ( Fig. 1C; Supplemental Fig. S2C). We compared the model performances, using a Wilcoxon test across 135 TF-cell type pairs for the six sets of EMTs (K-mer, K-merRC, and Interaction at four thresholds (i.e., Interaction (1 k), Interaction (2 k), Interaction (5 k), Interaction (10 k)) ( Fig. 1C). We found that K-merRC significantly outperforms the K-mer model (two-sided Wilcoxon P-value 5.3 × 10 −20 ). This is consistent with the fact that TF binding occurs on double-stranded DNA and as such does not have directionality (except in relation with other interacting TFs). Therefore, unifying each k-mer with its reverse complement is more representative of the biological determinants of TF binding. Following this line of reasoning, PWMs can provide an even better abstraction of DNA binding specificity and as expected, the PWM-based models outperform the k-mer-based models (two-sided P-value 4.58 × 10 −6 ), when comparing K-merRC to Interaction (1 k). Therefore, for submodel clustering and other downstream analyses we selected Interaction (1 k)-based EMT (heretofore referred to as Interaction model). Comparison with previous model Next, we compared the EMT model (using K-merRC and Interaction) with a previously published model based on support vector machine (kmer-SVM) (Arvey et al. 2012). In kmer-SVM, the investigators considered both k-mers and their reverse complements of size 8 with minimum matches of size 6. Applying the kmer-SVM pipeline to our data set, the resulting ROC-AUCs for all the TF-cell pairs are listed in Supplemental Table S4. Figure 1D suggests that the Interaction model performs favorably relative to kmer-SVM. TRISECT reveals intra-cell-type heterogeneity and inter-cell-type sharing of binding rules across cell types Given the favorable performance of EMT, and its architectural differences to kmer-SVM, we next assessed whether EMT was better able to exploit the heterogeneous binding rules across the genome, as dictated by different combinations of co-occurring and coregulated (i.e., potentially interacting) TFs. Conceptually, a "binding rule" refers to the specific combination of motifs (along with their importance) aiding in the binding of a reference TF. While a general binding rule may be difficult to state concisely, it can be operationally defined in terms of a collective ensemble of cell-typespecific binding rules. Each decision tree (a submodel) operationally defines a binding rule in terms of presence of specific motifs above/below a certain binding score. 
Furthermore, in general, the relative importance of features decreases with increasing depth of the node in the decision tree, with the first few levels contributing a substantial portion of the decision. Although a decision tree represents a statistical model for TF binding, by applying strict thresholds for motif scores and considering only the top few layers, in principal, a concise "binding rule" can be derived, albeit with some loss of information. For a specific TF and cell type combination, we captured the binding rules by a set of submodels (decision trees). Then to investigate commonality and uniqueness of binding rules for a TF across cell types, we pooled all submodels from all cell-specific EMTs, represented each submodel by feature importance, and clustered all submodels using the k-NN clustering algorithm. Next, we constructed a cluster-membership matrix mapping the number of submodels originating from different cell types within each cluster. As an example, Figure 2, A and B, shows the cluster-membership matrix for the TF ATF3 for cluster sizes 16 and 20. The matrices show both cell-type-specific ( Fig. 2A, cluster 6) and ubiquitous (Fig. 2B, cluster 20) clusters. Examining the cluster mapping for all TFs (Supplemental Fig. S3), a wide range of patterns emerge. For certain TFs, many clusters tend to map to a single cell type, suggesting cell-type-specific binding modalities of these TFs (EP300, JUN), while other TFs have ubiquitously applicable binding rules, such as YY1 and TBP, suggesting cell-type-independent binding rules and, presumably, function. Importantly, many clusters consist of submodels from multiple, but not all, cell types. We ensured that inter-cell-type sharing of binding rules is not simply due to the shared binding loci across cell types (Supplemental Note 2; Supplemental Fig. S4). Subsequent analyses are based on k = 16; the reason for this choice is discussed in Supplemental Note 3. Previous research (Worsley Hunt and Wasserman 2014) showed that so-called "zinger" motifs are enriched in ChIP-seq regions of several unrelated TFs. We conducted additional analysis to ensure that our clustering results are not affected by the zinger motifs (Supplemental Note 4; Supplemental Fig. S5). Moreover, it is possible that EMT can falsely yield multiple submodels, even in the absence of heterogeneity, and those submodels can be falsely clustered. By looking at the clustering tendency of the submodels, we examined the heterogeneity across submodels and found that it is possible to separate the submodels into distinct clusters (Supplemental Note 5; Supplemental Fig. S6B,C). Next, we assessed the functional underpinning of shared binding rules across cell types (for details, see Methods). Specifically, we assessed whether two coclustered loci from different cell types (i.e., those obeying similar binding rules) are functionally associated relative to loci from the same cell type but belonging to different clusters, indicating that they are obeying different binding rules. We measured a cluster-specific score for each binding sequence and assigned each binding site in each cell type to one or more clusters. As per convention, we assigned each binding site to the nearest gene as a potential transcriptional target; 88% of the target genes were within 50 kb from the binding site (median distance, 4.5 kb) (Supplemental Fig. S6G). To assess functional coherence of clusters, we defined two metrics: expression coherence and pathway coherence. 
Expression and pathway coherence are measured as the fraction of gene-pairs in a cluster (regardless of cell type) that are respectively coexpressed, or belong to the same pathway. We assessed the significance of coherence using a two-sided Fisher's exact test. As shown in Figure 2C, ∼40% (expression coherence) and ∼18% (pathway coherence) of the multi-cell-type clusters show significantly higher coherence (P-value <0.05) than the background (expectation is 5%), and 5.5% of the clusters show both significant expression and pathway coherence (called dual coherence). Applying a more stringent P-value threshold (<0.001), these coherent percentages are 35% (expression), 10% (pathway), and 4% (dual). Moreover, the expression and pathway coherence are highly correlated across clusters (Spearman correlation = 0.56, P-value = 0.02). As a negative control, we conducted the same set of tests for random clusters with the same size as the real clusters. In both cases, the coherence was no greater than the null expectation (Fig. 2C). Taken together, these analyses support the existence of heterogeneous sets of TF binding rules governing the in vivo binding and suggest that a subset of rules are shared across cell types with functional implications.

[Figure 2 legend. (A,B) Cluster-membership matrices for ATF3. Rows represent clusters and columns represent cell types. Each element in the matrix denotes the number of submodels in the cluster from each cell type. Some clusters consist of submodels from multiple cells (cluster 20 in B), while some others consist of submodels from a single cell type (cluster 6 in A). (C) Functional and expression coherence of submodel clusters: fraction of multi-cell-type clusters found to be coherent using k-NN. The y-axis is the coherence percentage. Among the conditions (x-axis), mapped.targets denotes when genes are assigned to clusters based on the TRISECT pipeline, random.targets indicates the clusters consisting of random genes among all targets, and random.genes indicates the clusters consisting of random genes. Here, expression coherence was defined using an expression threshold of log2 CPM ≥ 1; i.e., a gene is considered expressed when the log2 CPM ≥ 1. The horizontal line (blue color) denotes the coherence level of 5% of the total multi-cell-type clusters.]

The role of interaction partners in a TF's binding occupancy across cell types
By using 981 PWMs for a comprehensive set of vertebrate TFs as the basis for features, EMT implicitly incorporates the contributions of interaction partners in predicting in vivo binding of the reference TF. To quantify the contribution of putative interacting motifs, we repeated the EMT training and testing using only the PWMs corresponding to the reference TF. Individual TFs are represented by multiple motifs in the literature (ranging from one to eight, with a median of three) (Supplemental Table S2), many of which differ substantially from each other, suggesting potential functional implications (Bulyk et al. 2002; Hannenhalli 2008); e.g., 75% of the intra-TF PWM-pairs have <85% PWM similarity, in contrast to 99% of inter-TF PWM-pairs (Linhart et al. 2008). We refer to these motifs as the reference motifs, and, in contrast to the Interaction model, the EMT model utilizing only the reference motifs is referred to as the NonInteraction model. Supplemental Figure S7 shows the prediction accuracies for the Interaction and the NonInteraction models; the diagonal elements represent the cross-validation accuracies within a cell type, while the off-
diagonal elements represent the accuracy when EMT is trained on one cell type (row) and tested on another (column). Comparison of the within-cell-type cross-validation accuracy for the Interaction and NonInteraction models (Fig. 3A;Supplemental Fig. S7) shows that the Interaction models have higher predictive accuracy than NonInteraction models, which is consistent with the expectation that in vivo binding of a TF relies on interactions among several TFs. Next, we conjectured that in the Interaction model allowing for greater numbers of partners enables learning of more complex binding rules, leading to increased binding prediction accuracy. We therefore assessed the effect of the length of the region flanking the binding site on prediction accuracy (see Methods). We note that beyond 100 bp, due to narrowing the gap between the foreground and the background region, the discrimination accuracy is expected to decrease. Despite this, in several cases ( Fig. 3B; Supplemental Fig. S8), the increase in ROC-AUC beyond 100 bp suggests that a larger context may be necessary in these cases to capture the binding rules. Nevertheless, we chose a sequence context of 100 bp to make our model comparable to the previously published kmer-SVM (Arvey et al. 2012). For a given TF, we also quantified the variability of the model accuracy in different cell types (see Methods). We define cross-cell-type prediction accuracy as the performance of a model trained on one cell type and tested on another cell type. For these performance accuracy of models, we expect greater variability for the models relying on cell-type-specific interaction partners than the models only relying on reference motifs. Our analysis supports this expectation (Fig. 3C). However, the small variability in crosscell-type prediction accuracy when using the NonInteraction model is likely due to the heterogeneity of the TF binding motif. We quantified the inter-motif divergence for each TF as either the number of annotated motifs, or the motif-divergence (defined over all motifs-pairs; see Methods). We found that the performance variability of NonInteraction models is positively correlated with both measures of motif divergence (Spearman correlation = 0.63, 0.67; two-sided P-value = 1.2 × 10 −3 , 6.3 × 10 −4 , respectively). In Supplemental Figure S7, the offdiagonal elements for the Interaction model show higher cross-cell-type performance relative to the same elements for the NonInteraction model. This higher performance suggests that the binding "rules" are shared between cell types. We ensured that the high cross-cell-type performance is not simply due to overlaps in the genomic loci used to train and test the model between cell types; i.e., the genomic loci on which the model was trained in one cell type does not substantially overlap with the loci tested in another cell type. Overall, across TFs and cell type pairs, the fractional overlap in genomic loci ranges from 0% to 10%, with a mean and median of ∼4% (Fig. 3D). This suggests that it is the binding rule, independent of specific sequence instances, that is shared across cell types. Furthermore, we found that when using the Interaction model, the cross-cell-type accuracy is symmetric. In other words, a high (low) accuracy in cell type Y using EMT trained on cell type X implies a high (low) accuracy in cell type X using the model learned from cell type Y. 
To demonstrate this symmetry, we normalized the off-diagonal elements of cross-cell performance matrices by the reference AUC by dividing each row by the corresponding diagonal ROC-AUC. As shown in Figure 4A, the lower and upper diagonal ranks are highly correlated (Spearman correlation of the upper and lower triangles of the resulting matrices is 0.68, two-sided P-value 9.5 × 10^-53), supporting our claim that the interaction-dependent (therefore genomic-context dependent) binding rules are shared across cell types. In stark contrast, there is a lack of symmetry in cross-cell prediction accuracy when the NonInteraction model is used (Spearman correlation = 0.04, two-sided P-value 0.4). In summary, our analyses suggest that the cell-type-specific TF interactions play a critical role in determining the cell-type-specific in vivo binding, and EMT reveals some of the interactions underlying the cell-type-specific binding of a reference TF.

[Figure legend (loci-overlap analysis). Overlapped_true denotes correctly classified and overlapped_false incorrectly classified sequences having at least 50% overlap between training sequences in one cell type and test sequences in another cell type. Nonoverlapped_true denotes correctly classified sequences that do not overlap with any sequence in the training set; nonoverlapped_false, incorrectly classified ones.]

TRISECT reveals putative cofactors providing insights into cell-specific biological roles of a TF
Our results so far suggest that cell-type-specific cofactors of a TF are a major driver of cross-cell-type in vivo binding variability. To gain further insights into the functional implications of cell-type-specific cofactors, for each reference TF we identified its cell-type-specific cofactors using the feature importance of the corresponding motif as estimated by the model. To minimize redundancy, we excluded motifs with substantially high co-occurrence frequency with at least one of the reference motifs (see Methods). To further minimize false positives, we assessed the enrichment of motif occurrence within the cell-specific ChIP-seq peaks of the reference TF relative to background and retained only those putative cofactor motifs that were significantly enriched (odds ratio > 1.2 and two-sided P-value <0.05; see Methods). The rationale for choosing 1.2 as the odds ratio threshold is discussed in Supplemental Note 6. Several lines of evidence support TRISECT-identified cell-type-specific TF cofactors, referred to as putative cofactors. First, we showed that there exists an enrichment of protein-protein interactions (PPIs) among a reference TF and its corresponding cofactors compared with the PPIs among all motifs (Supplemental Table S7a). Additionally, the putative cofactors are enriched for either heterodimerizing TFs or for the TF family that the reference TF belongs to for ∼70% of all TF-cell pair cases (see Methods) (Supplemental Table S7b,c). The enrichment of the same family as that of the reference TF is consistent with the fact that TFs form dimers with other TFs preferably from the same family (Amoutzias et al. 2008; Dror et al. 2015). We also performed protein domain enrichment analysis (Supplemental Table S8) using the DAVID tool (Huang et al. 2009a,b) and found that >80% of enriched domains are involved in homo- or hetero-dimerization, consistent with the findings from Supplemental Table S7. Second, we expect higher expression of putative cofactors in the cell types where they are identified as cofactors by our analysis.
For each cofactor (excluding ubiquitous cofactors), we determined the log-fold difference in expression between the cell types where it is identified as a cofactor relative to cell types where it is not (see Methods). The distributions of log fold changes of the cofactors are compared with a control set of fold ratios, as presented in Figure 5A. For most TFs, the cofactors show significantly higher expres-sion in the relevant cell types. This is not true only in five cases: ATF3, USF1, CTCF, NRF1, and GABPA. Among these five cases, CTCF is a known cell-type-independent TF; GABPA and NRF1 exhibit higher cell-type independence than other TFs as shown via an independence test. Third, we assessed whether the relationship between a reference TF and its cofactor is symmetric. For this assessment, we limit the analysis to 23 TFs, as for the current study we have models and associated cofactors only for these TFs. Specifically, we assessed whether a reference motif from one TF appears as a cofactor in the TFs whose reference motifs are also reported as cofactors in the first TF. For all X-Y TF pairs where one TF is deemed cofactor of the other and both TFs have available ChIP-seq data in the same cell line, we found that the correlation between the enrichment score of motif X in the binding sequences of TF-Y and vice versa is 0.41 (two-sided P-value = 5.19 × 10 −14 ). This suggests a degree of codependence among TFs for their DNA binding. Finally, for each TF's cell-type-specific cofactors, we performed biological process (BP) GO term enrichment analysis using the GOrilla tool (Eden et al. 2009) relative to all 981 motifs. We found significant differences in the assigned BP of a TF's cofactors among cell types. Remarkably, the BP can vary across cell types while still being functionally related to the reference TF. As an example, Figure 5B shows the enriched BP (false-discovery rate ≤ 10%) for ATF3 in four cell types. ATF3 is a stress-inducible TF involved in homeostasis regulating cell-cycle, apoptosis, cell adhesion, and signaling (Allen-Jennings et al. 2001;Tanaka et al. 2011). We found that ATF3 cofactors are enriched for cell cycle and proliferation functions in three out of four cell lines. In the stem cell line, the identified cofactors are involved in liver regeneration and inflammatory response, consistent with previous studies showing a direct link between ATF3 induction to liver injury and regeneration in mice (Chen et al. 1996;Su et al. 2002). Furthermore, enrichment of NOTCH and apoptotic signaling among cofactors in the HepG2 cell line is consistent with ATF3's role in glucose homeostasis and other primary liver functions (Allen-Jennings et al. 2001). Surprisingly, we find enrichment of cognition, learning, and memory among the TF cofactors in the leukemia cell line. Since leukemia is a cancerous cell line, nonnative gene expression is not unexpected (Lotem et al. 2004(Lotem et al. , 2005. While ATF3 is not known to play a direct role in neuronal function, a functionally and structurally related protein CREBBP has a welldocumented role in neuronal activity and long-term memory formation in the brain (Mayr and Montminy 2001). This raises the possibility that either ATF3 has an unknown role in cognition or the same set of cofactors are involved in memory formation in conjunction with other TFs. For other TFs, the enriched GO-terms are listed in Supplemental Table S9 (enrichment scores ranges from 1.22-93.75 with a median of 7.44, false-discovery rate cutoff of 10%). 
The corresponding discussion based on a review of the literature is provided in Supplemental Note 7; Supplemental Note 8 includes functions of example cofactors in various cells. This can serve as a resource for further investigation into the cell-type-specific binding and function of a broad array of TFs. We noted substantial variability in the number of detected cofactors across cell types for a TF. Interestingly, a literature survey suggests that for the cell types for which the reference TF has a specific known function, the number of cofactors in that cell type is comparatively higher. For example, REST has well-known neuronal functions, and its binding sites in neurons exhibit lack of cognate RE1 motifs (Rockowitz et al. 2014), suggesting cofactor dependence. Consistently, SK-N-SH (brain cancer cell line) has the highest cofactor cardinality for REST. Similarly, JUN plays a specific role in hematopoietic differentiation, and we found that GM12878 (normal blood cell line) has the largest number of cofactors (Liebermann et al. 1998). We reasoned that a TF with greater cell-type-specific roles would exhibit greater variability in cofactor cardinality. For each TF, we measured the variability of its cofactor cardinality across cell types. As shown in Figure 6A, interestingly, TFs with ubiquitous and invariant roles such as TBP and CTCF have the least variable cofactor cardinality. Based on the trend shown in Figure 6A, we use the variability of cofactor cardinality as a proxy for the TF's cell type specificity. As an additional support, this proxy also correlates with the sparsity measure of cluster-membership matrix. Specifically, for each TF we computed the sparsity of its cluster-membership matrix (presented in Fig. 2A Figure 6B shows that sparsity is positively correlated with the variability of cofactor cardinality (Spearman correlation = 0.66, two-sided P-value = 9.2 × 10 −4 using k-NN). We also assessed whether differences in prediction accuracy achieved by the Interaction model and the NonInteraction model for a particular TF-cell type pair may reflect the TF's cofactor dependence. We compared cofactor cardinality to the normalized distance between Interaction and NonInteraction model performance (AUC shift). As shown in Figure 6C, the AUC shift is positively correlated with cofactor cardinality (Spearman correlation = 0.65, two-sided P-value = 2.7 × 10 −17 ). Previous studies have found that the DNA sequence specificity of a TF can be influenced by its interactions with cofactors (Siggers et al. 2011;Slattery et al. 2011). Interestingly, a close inspection of the feature importance estimated by the NonInteraction EMT model shows that for different cell types the composition of isks, see Methods), while TBP does not. Notably, such diverse usage is observed using NonInteraction models, suggesting a cell-typespecific motif preference. In Figure 6D other), yet they show very different usage. Even though both PWMs have very similar distributions of scores over the same genomic regions, in most cases M00925 yields a slightly higher score than M00926, and once M00925 is selected by a model, M00926 is deemed as redundant and not considered as important further. Hence, they show dissimilar importance. However, in our downstream analysis for assessing contribution of cell-specific usage, none of them are selected as having cell-specific influence and thus have no impact on the analysis. 
We further investigated the potential contribution of celltype-specific cofactors in modulating the cell-type-specific motif usage for the reference TF. In this regard, we identified pairs of reference motifs (m X & m Y ) having the most differential usage in cell types X and Y, respectively. For each such pair, we selected a set of candidate cofactors (f X & f Y ) that could potentially aid the TF for cell-type-specific binding; we call f X & f Y "influencing cofactors" of m X and m Y , respectively. Next comparing the log fold change (logFC) of f X & f Y in cell type X versus Y (Fig. 6G) shows that the influencing cofactors have higher expression in relevant cell types. Moreover, the influencing cofactors are more proximal to the influenced motif in the relevant cell type (for details, see Methods) ( Fig. 6H). Taken together, cell-type-specific cofactors revealed by TRISECT are consistent with their cell-type-specific expression and function, which may be critical in modulating a TF's celltype-specific biological function. Discussion In this study, we have presented a novel ensemble-based framework-TRISECT-to investigate intra-cell-type heterogeneity and inter-cell-type commonality of in vivo TF binding rules. To the best of our knowledge, this is the first study to comprehensively demonstrate that in vivo binding specificity rules are composed of multiple components, or submodels, many of which are shared across multiple cell types. Importantly, nonorthologous targets of binding sites across cell types governed by a shared binding submodel exhibit a greater functional and expression coherence than targets of binding sites in the same cell type that are governed by different binding rules. For each TF, TRISECT identified cell-typespecific cofactors that are supported by gene expression data and literature studies supporting their cell-type-specific function. We chose Adaboost as our ensemble model due to its architectural advantages with respect to our ultimate goal of analyzing common and distinct binding rules, or submodels, across ensembles learned for each cell type. Boosting ensemble methods, including Adaboost, are designed to learn optimal tree submodels for successive reweighted bootstrap samples. This is in contrast to other ensemble methods, including the popular Random Forest (RF) approach, which seeks to increase variability of submodels by estimating weak submodels from unweighted bootstrap samples. Since our primary goal is to reveal model heterogeneity, we chose to cluster submodels generated by Adaboost rather than Random Forest's weak learners. In terms of prediction accuracy, EMT compared favorably to the previously reported sequence-based discriminative model (kmer-SVM) (Arvey et al. 2012). Apart from the modeling approach, our study differs from that of Arvey et al. (2012) in several other aspects. The previous study compared the cell-type-specific models for only two cell types (GM12878 and K562), while we have investigated in depth the cell type specificity of TRISECT across four to 12 cell types for each TF. While the previous work pri-marily discusses cell type specificity and ubiquity of their models, by clustering the cell-type-specific submodels, our work investigates the extent of shared binding rules; cell type specificity and ubiquity are extreme cases thereof. In addition to the cell-typespecific variability in proximal cofactors, we investigated in much greater depth the cross-cell-type variability in the preferred motif for the reference TF. 
Together, these novel aspects of our study add to the knowledge of sequence information that specify a TF's in vivo binding in various cell types. Another recent study (Dror et al. 2015) aimed at deciphering the determinants of in vivo occupancy of a TF showed that TF binding specificity is influenced by nearby homotypic sites (for the reference TF), the local nucleotide composition, and certain DNA physical properties. Moreover, the preferred in vivo binding in homotypic clusters was related to a preferred nucleotide composition, e.g., GC-rich for zinc finger TFs and AT-rich for homeodomain reference TFs, in the binding site flanking region. These previous findings are consistent with the fact that the cofactors identified by TRISECT are enriched for the same family of TFs as the reference TF and thus have similar preferences for nucleotide composition to the reference TF. In the previous work (Dror et al. 2015), the accuracy in discriminating bound versus unbound sequences after controlling for the presence of a putative site for the reference TF was modest (ROC-AUC ∼ 0.6). In contrast, we have shown that the motifs for the reference TF alone can discriminate bound sites from unbound control sites with ROC-AUC ∼ 0.78, suggesting that the reference TF is the most informative determinant of in vivo binding, which is indeed expected and was also observed by Pique-Regi et al. (2011). The additional power of discrimination comes either from the presence of cofactor motifs, as suggested before Arvey et al. 2012), or from nucleotide composition and other DNA physical properties (Dror et al. 2015). Interestingly, DNA flexibility measured by propeller twist (el Hassan and Calladine 1996) is highly dependent on GC content (Hancock et al. 2013), which in turn is related to motif composition, as we have noted. Overall, the three properties of nucleotide composition, DNA physical properties, and motif composition are interrelated. The specific advantage of an ensemble model based on motif composition is that apart from achieving favorable accuracy, it is functionally more interpretable and can provide insight into a TF's cell-type-specific functions. Context-dependent function of a cis regulatory region requires binding of a specific combination of TFs. This modularity contributes to morphological evolution through changes in cis elements controlling transcription while avoiding the pleiotropic effects of a TF gene's expression change (Prud'homme et al. 2007). Shared submodels of TF binding rules across cell types, as revealed by TRISECT, may suggest shared history of cell types. The ability of a TF to bind to diverse reference motifs and, in conjunction, interact with diverse combinations of cofactors serves to enhance its functional repertoire across contexts (Meijsing et al. 2009;Arvey et al. 2012). Our analyses reveal a cell-type-specific preference for the reference motif as well as the cell-type-specific interaction partners of a TF. We found the expression of cell-type-specific interaction partners to be higher in the cell types where they are expected to interact with the TF, and their function is consistent with the context based on the literature. Thus, our study provides further support for a TF's celltype-specific functions and, more importantly, enables further investigation into the mechanisms underlying a TF's diverse cell-specific functions. Data processing We downloaded the ChIP-seq peaks for 23 TFs from ENCODE (Supplemental Table S1; The ENCODE Project Consortium 2013). 
For each TF we selected only those cell lines for which narrow-peak data were available. We chose the more stringent of the two criteria-top 5000 most significant peaks or FDR q-values <0.2-to select the binding sites. The criteria are reasoned by the availability of enough data to build a model and the backward compatibility of the previous method (Arvey et al. 2012). Notably, not all ENCODE data sets provide q-values, and in that situation, we generated the list of q-values from the given P-values using the qvalue package in R (http://github.com/jdstorey/qvalue). Relative to the center of ChIP-seq peaks, the DNA regions of length 100 bp were identified as the foreground. As negative control, we sampled flanking regions of 100 bp from 200 bp away from the positive sequences. Again, the choice for the size and location of foreground and background can be rationalized by the backward compatibility. In fact, choosing control sequences from near the foreground makes the modeling problem harder than when they are chosen from arbitrary locations in the genome. Moreover, control sequences overlapping with any peak were excluded. Due to the proximity of the negative examples, both foreground and background are expected to have similar GC composition (Arvey et al. 2012) and chromatin accessibility. However, we explicitly controlled for the GC composition using a sequence set balancing technique when comparing the foreground and the background (Whitaker et al. 2015). In the sequence set balancing, the GC percentage is divided into N bins (e.g., we choose N = 100). Then for both the foreground (F) and background (B) sets, the number of sequences falling into each bin are enumerated: sequences are selected randomly from the foreground and background sets. This way each set of sequences will have similar distribution of GC composition. After sequence set balancing, we discarded any cell line resulting in fewer than 4000 sites. In our list of TFs, EP300 is nonsequence specific. Even so, EP300 is localized to the chromatin by interacting with other motifs. Like Arvey et al. (2012) we include EP300 specifically to reveal those putative interactions. In addition to the 100-bp foreground and background, we also extracted another six sets of foreground and background of size 120, 150, 180 200, 250, and 300 bp. We keep increasing the size of foreground to check how much additional information was added to the model by the increased sequence size. Note that for all sequence sizes the middle point of the background does not vary; so as the sequence size is increased, the gap between foreground and background decreases. Learning EMT (Ensemble model of TF binding) We considered three types of feature set for the sequence specificity model: (1) K-mers, frequencies of 4096 6-mers in the 100-bp sequence; (2) K-merRC, frequencies of 2080 k-mer (k = 6) groups equating a k-mer and its reverse complement; and (3) Interaction (Lk), we obtained all 981 vertebrate positional frequency matrices (PFMs) from TRANSFAC 2011 as features. Each PFM was converted into PWMs, which is a log-likelihood matrix, by (1) adding a pseudocount of 0.2 for "C" and "G", and 0.3 for "A" and "T" in line with human genome composition, (2) normalizing the frequencies to get probabilities for each base, (3) dividing each base probability by the background probabilities (0.2 for "C" and "G", and 0.3 for "A" and "T"), and (4) taking the log of the probability ratio. The resulting PWMs were then used to get the motif matches using PWMSCAN . 
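Before the threshold discussion that follows, here is a minimal sketch of the four-step PFM-to-PWM conversion just described, assuming a hypothetical 4 x L count matrix pfm with rows ordered A, C, G, T; it is an illustration of the stated steps, not the TRANSFAC or PWMSCAN code itself.

```r
# Convert a positional frequency matrix (counts) into a log-likelihood PWM,
# following the four steps described above. 'pfm' is a hypothetical 4 x L
# count matrix with rows in the order A, C, G, T.
pfm_to_pwm <- function(pfm) {
  pseudo <- c(A = 0.3, C = 0.2, G = 0.2, T = 0.3)   # step 1: pseudocounts matching genome composition
  bg     <- c(A = 0.3, C = 0.2, G = 0.2, T = 0.3)   # background base probabilities
  counts <- pfm + pseudo                            # pseudocount added to every column
  probs  <- sweep(counts, 2, colSums(counts), "/")  # step 2: normalize to probabilities per base
  ratio  <- probs / bg                              # step 3: divide by the background probabilities
  log(ratio)                                        # step 4: log of the probability ratio
}

# Toy 3-column count matrix:
pfm <- matrix(c(8, 1, 0,
                1, 0, 9,
                0, 9, 1,
                1, 0, 0),
              nrow = 4, byrow = TRUE,
              dimnames = list(c("A", "C", "G", "T"), NULL))
round(pfm_to_pwm(pfm), 2)
```

A sequence window is then scored by summing the PWM entries of its observed bases, which is the quantity the match thresholds described next are applied to.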
Here, Lk refers to the PWM hit threshold (hit expected every L kb on average in the genome); we used L = 1, 2, 5, or 10. In particular, we use log(1/Lk) as the threshold value to call a PWM "match." For instance, at L = 1, the expected frequency of matches is once every 1 kb, corresponding to a 20% chance of a match in a 100-bp region or its reverse complement. Previous research showed that clusters of homotypic "weak" binding sites are prevalent in regulatory regions (Gotea et al. 2010), and such a presence of multiple weak binding sites, called a homotypic cluster of binding sites, is preferred to single strong binding site (He et al. 2012). To mimic this binding affinity, from the output of PWMSCAN, we decided to use the sum of PWM-score (−log(match score)) for all matches as the feature value. However, we also collected the "maximum score" and "average score" of binding for each of the training sequences and measured their correlation with our feature value. The high correlations (0.8 and 0.87 respectively) suggest a minimal effect on downstream analysis and overall conclusions. Finally, we used the log sum of PWM-score to compensate for the skewed distribution of the number of binding sites for individual TFs. We found that the model performance was better for the 1-k than the 2-k thresholds, and at much higher stringency, the model performance significantly deteriorates due to the sparsity of the matches (Supplemental Fig. S2C). Furthermore, we determined the feature importance of the motifs for each model of TF-cell pair at those four thresholds. For each TF-cell pair, we calculated the correlation of the feature importance based on the 1-k threshold with those based on other thresholds, i.e., three correlation values. Thus in total, we calculated 405 correlation measures for 135 TF-cell pairs. We found that 90% of those correlations are significant, ranging from 0.21-0.81 with a median of 0.52 indicating nonsignificant effects of the thresholds on the models. Considering the relative performance of the Interaction (1 k) model, in the subsequent analysis we use them as the representative Interaction model and refer to it as such. We chose Adaptive boosting (Friedman 2002(Friedman , 2008) as our composite model where each submodel within the ensemble is a decision tree, and each decision tree is constructed based on a bootstrap sample. We used the Adaboost framework implemented in R gbm package (https://cran.r-project.org/package=gbm). In the framework, Huber loss function is selected to reduce overfitting. We estimated the classification accuracy of the model based on the 25% held-out data set, while 75% of the data were used to build the cell-specific models. In Supplemental Note 1, we summarize the interpretation of a model and parameter choices. Model conversion, Duda-Hart test, and Hopkins statistics Each submodel is represented by a point in a d-dimensional space. Each dimension denotes a feature, and the value along the dimension indicates the importance of the feature for the submodel. Therefore, each model (consisting of multiple submodels) can be represented as a set of points in a d-dimensional space, where d ≤ number of features (981). For a model, the feature importance was measured using the prediction performance improvement for out-of-bag sample predictions. We modified the R implementation of gbm package (https://cran.r-project.org/package=gbm) for feature importance to accommodate the calculation for single tree or the submodel in question. 
In other words, we determined the contribution of a single tree (submodel) in prediction performance improvement using the same out-of-bag samples. We disregarded the features that do not contribute to any submodel. We conducted a Duda-Hart test to show whether the submodels belong to one or multiple clusters. We measured the Duda-Hart or dh-ratio (ratio of within-cluster sum of squares and overall sum of squares) for all cluster pairs, based on either the cell-type-specific set of submodels or the pooled set of submodels across all cell types for a TF, using the fpc package in R (https://cran.r-project.org/package=fpc). While calculating the dh-ratio, k-NN was used for clustering. Since the final output of k-NN depends on the initial random set of centers, the dh-ratio calculation was repeated 1000 times to ascertain robustness. We noted that all test results were significant (P-value <0.001). Hopkins statistics (H) were measured to check the clustering tendency of the submodels. To measure the Hopkins statistic (H), the submodels are again represented as a set of points. H is defined by $H = \frac{\sum_{j=1}^{m} U_j}{\sum_{j=1}^{m} U_j + \sum_{j=1}^{m} W_j}$, where $W_j$ is the nearest-neighbor distance of the jth of m randomly chosen points (submodels), which demarcate the sampling window, and $U_j$ is the minimum distance of the submodels from the jth of m random points in the sampling window. To define the sampling window, we either took the 25-75 percentile of the feature values or the range from δ to (max value - δ) along each dimension, where δ denotes the standard deviation of the feature value (Zeng and Dubes 1985a,b; Dubes and Zeng 1987). To estimate the P-value, we repeated the above procedure 1000 times and measured the H-value. The P-values range from <0.001 to 0.026.

Clustering submodels
For a TF, we obtained the submodels from all cell types and then clustered all submodels using k-NN, where each submodel is an instance and the features of the instances are the individual feature importances obtained in the context of the respective cell-specific model. Before feeding into the k-NN, we removed all the features whose cumulative importance over all submodels is zero. To check robustness, the submodels were also clustered using an XY-fused version of a self-organizing map (Melssen et al. 2006) from the kohonen R package (Wehrens and Buydens 2007). To make it comparable to k-NN, submodels were clustered without preexisting submodel cell labels; i.e., we assumed 100% weight for the X map.

Assignment of sequences and target genes to the clusters
A cluster of submodels can be viewed as a new ensemble. Therefore, for each cluster, we built a gbm object by treating the cluster as an ensemble and used it the same way an original Interaction model would score a sequence. Thus, we scored each binding site sequence against each cluster, and a sequence is assigned to a cluster when it is scored above a threshold (of one) by the cluster. The choice of the threshold was based on the rationale that the intercept (bias of the gbm model [https://cran.r-project.org/package=gbm]) of cell-specific models is about one, and for a high-confidence positive sequence, the model score should be greater than the intercept. Each bound sequence (from all cell lines) is mapped to a set of clusters. For each bound sequence, the nearest gene on the genome is considered to be its putative target, as per convention (Zhu et al. 2010). Hence, each cluster corresponds to a set of target genes coming from different cells.
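To make the submodel-clustering step above concrete, the sketch below represents each submodel by its feature-importance vector and tabulates a cluster-membership matrix (clusters by cell types). It is only an illustration under stated assumptions: the importance matrix and cell labels are simulated stand-ins, and base-R kmeans is used here in place of the k-NN-style clustering described in the text.

```r
set.seed(1)

# Hypothetical stand-ins: 'imp' has one row per submodel and one column per
# feature (importance of that feature in that submodel); 'cell_of_submodel'
# records the cell type whose EMT contributed each submodel.
imp <- matrix(rexp(200 * 50), nrow = 200, ncol = 50)
cell_of_submodel <- sample(c("GM12878", "K562", "HepG2", "H1-hESC"),
                           200, replace = TRUE)

# Drop features with zero cumulative importance across all submodels.
imp <- imp[, colSums(imp) > 0, drop = FALSE]

# Cluster the pooled submodels (kmeans as a simple stand-in for the
# k-NN-based clustering used by TRISECT).
k  <- 16
cl <- kmeans(imp, centers = k, nstart = 25)$cluster

# Cluster-membership matrix: rows = clusters, columns = cell types,
# entries = number of submodels from each cell type falling in each cluster.
table(cluster = cl, cell = cell_of_submodel)
```

Rows of the resulting table dominated by a single cell type correspond to cell-type-specific binding rules, while rows spread across several cell types correspond to shared rules, mirroring the patterns shown in Figure 2, A and B.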
Measuring pathway and expression coherence using Fisher's exact test To measure the functional coherence, we determined the target gene array of size M-by-N for M clusters and N cell types. The M-by-N array thus includes a set of genes corresponding to each cluster in a particular cell type. We compared gene-pairs from the same row across columns (same cluster, different cells) to a background of gene-pairs along columns from different rows (same cell, different cluster). Then we apply the Fisher's exact test in a cluster-centric fashion by comparing the fraction of coclustered gene-pairs in the foreground compared with the background. The measure is named as expression coherence: whether target gene-pairs from same cluster but different cell lines are more coexpressed than those from different clusters but same cell line. A gene-pair is considered coexpressed if both of the genes are turned on (RNA-seq log 2 CPM > 1) in their respective cells; CPM stands for counts per million. CPM, instead of the standard FPKM measure to quantify gene expression, suffices for our purpose as we only compare a gene's expression across samples and not with other genes in the same sample. We showed a similar trend of expression coherence with a different expression threshold (log 2 CPM ≥ 5) (Supplemental Fig. S6E,F). Pathway coherence is also assessed in similar fashion: whether the target genes from different cell lines that are assigned to the same cluster are more functionally related (i.e., in the same pathway) than the target genes coming from the same cell but from different clusters. Pathway data were downloaded from KEGG pathway database (www.genome.jp/kegg). Robustness of EMT and submodel clustering While building EMT using the gbm R package, we used the default parameter settings except maximum depth of variable interaction (interaction.depth), minimum number of observations in the trees terminal nodes (n.minobsinnode), and learning rate (shrinkage). Our parameter choices are the following: interaction.depth, 15; n.minobsinnode, 30; and shrinkage, 0.05. To check model and pipeline robustness, we built models with different values of these three parameters and compared the performance and model size (number of learned submodels). We found that performance and model size becomes stable after an interaction depth of 15 (Supplemental Fig. S2D,E), performance and model size do not vary much with the change of n.minobsinnode from 25 to 45 (Supplemental Fig. S2G,H), and performance does not change with shrinkage from 0.1 to 0.5 (Supplemental Fig. S2I). However, model size varies with the shrinkage parameter setting because with a lower learning rate, it takes longer to reach an optimum and it results in an increase in the model size (Supplemental Fig. S2J). Therefore, for different shrinkage parameters, we measured the clustering consistency. To this end, we took the models built with shrinkage = 0.05 as the reference models, and we compared the clustering pattern of reference models with the set of models built using different shrinkage values. More specifically, we determined whether a pair of sequences that falls into the same cluster for the reference model also falls in the same cluster for a different shrinkage value. We found that on average 96% of the sequence pairs fall in the same clusters regardless of shrinkage (Supplemental Fig. S2K). Model variability and motif divergence Model variability is defined by its normalized predictability across cell lines. 
For each model, n ROC-AUC values are obtained using the held-out data sets of the n cell lines. Cross-ROC-AUC values are normalized by the self-ROC-AUC value. Mathematically, var_model_i = Var_{j≠i, j∈cells}(rocauc_j / rocauc_i), i.e., the spread of the cross-ROC-AUC values normalized by the self-ROC-AUC. Motif divergence is defined by the following equation: motif.div._pwms = Σ_{i,j∈pwms} dist_{i,j} / (IC_i + IC_j). Here, dist_{i,j} = 1/similarity_{i,j}, and IC_i is the information content of the ith motif. Similarity between two PWMs is calculated following the normalized version of the sum of column correlations (Pietrokovski 1996).

Identification of cofactors

EMT provides the importance of all features in discriminating the foreground from the background. We retained all features with nonzero importance. From the initial set, we removed any motif that has 60% PWM similarity (consensus overlap) with any of the reference motifs for at least 50% of the binding site locations. Next, we calculated an enrichment score (i.e., odds ratio) of the motif in the foreground binding sites relative to control sites. We retained the motifs with more than 1.2-fold enrichment and a two-sided P-value <0.05. The resulting motifs were considered cofactors. For further analysis, we considered cell-specific cofactors by removing motifs that are common across cells. In particular, we excluded all cofactors that are common between any two cell lines. The functional cell-specificity measure for a TF is determined using the variability in the number (cardinality) of such unique cofactors.

Enrichment of PPI, same-family TFs, and heterodimerizing TFs

We obtained PPI data from STRING v10 (Szklarczyk et al. 2011). Using the TRANSFAC 2011 database, we determined the mapping from motifs to Ensembl protein IDs and the number of motif pairs having PPI. Using a hypergeometric test, we calculated the enrichment of PPI between a reference TF and each set of cell-specific cofactors. The test summary indicated that 81% of the TF-cell cases have higher PPI enrichment among the interactions involving reference TFs and their cofactors (Supplemental Table S7a). We compiled each PWM's family and the list of heterodimerizing PWMs from the TRANSFAC 2011 database. To identify heterodimerizing TFs, we looked for the presence of the keyword "heterodimer" and the absence of "no" or "not" in the description of the motif. Supplemental Table S6 shows the heterodimerizing PWMs. Detailed manual inspection of a random subsample suggests that this automated criterion may result in ∼5% false positives. We also noted that occasional use of the term "dimer" instead of "heterodimer" may lead to ∼20% false negatives. For the hypergeometric test of family enrichment, we compared how many cofactors belong to the family of the reference motifs relative to the 981 motifs. Heterodimer enrichment was tested similarly. The enrichment scores (odds ratios) and P-values are reported in Supplemental Table S7, b and c. The table shows that 70% of the model cofactors are enriched for either heterodimerizing TFs or TFs coming from the same family.

Gene expression and differential gene expression

For gene expression, we used RNA-seq data downloaded from ENCODE (Supplemental Table S5). For each cell line, we obtained between two and four RNA-seq samples, depending on availability, and obtained the number of reads aligned to each gene. We corrected for batch effects using the sva R package (Leek et al. 2012).
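The two quantities defined at the start of this subsection can be sketched in R as follows; note that this reflects our reading of the formulas (variance as the "variability" operator, a plain sum over motif pairs for the divergence), and `rocauc`, `sim`, and `ic` are hypothetical inputs.

```r
# Minimal sketch (not the authors' code) of model variability and motif divergence.
# All inputs are hypothetical placeholders.

# rocauc: numeric vector, ROC-AUC of model i evaluated on each cell line's held-out data
model_variability <- function(rocauc, i) {
  normalized <- rocauc[-i] / rocauc[i]   # cross-ROC-AUCs normalized by the self-ROC-AUC
  var(normalized)                        # variance assumed as the "variability" operator
}

# sim: symmetric matrix of pairwise PWM similarities; ic: information content per motif
motif_divergence <- function(sim, ic) {
  d <- 1 / sim                           # dist_{i,j} = 1 / similarity_{i,j}
  pairs <- which(upper.tri(d), arr.ind = TRUE)
  sum(d[pairs] / (ic[pairs[, 1]] + ic[pairs[, 2]]))
}
```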
To estimate the differential expression between two sets of cell lines (those in which a TF is deemed a cofactor, and those where it is not), we used the linear model implemented in the limma package in R (Ritchie et al. 2015). For each cofactor, we determined all possible relevant and nonrelevant cell pairs and took the log fold change (logFC) of the expression in those cells. To determine the control gene expression, we considered the same sets of cell pairs but took the logFC of an arbitrary gene instead of the cofactor. In both cases, we considered only significant differential expressions (logFC values with P-value <0.05) provided by the limma package (Ritchie et al. 2015). Cell-specific PWM for the reference TF We obtained relative feature importance of the reference motifs from the NonInteraction models and compared them with random expectation. To calculate the random expectation, 1000 NonInteraction models are learned based on randomly sampled 4000 sites from all binding sites across cell lines. From 1000 models, 1000 relative feature importances were calculated. Each set of relative importance was assumed a point in p-dimensional space where p is the number of reference motifs. We considered the relative importance vectors as data points from multivariate normal distribution and for each vector we calculated the Mahalanobis distances from the centroid, which follows a χ 2 distribution (Slotani 1964). The degrees of freedom (d) for the χ 2 distribution were determined using maximum likelihood estimate, and a P-value was generated from a χ 2 distribution function of d degrees of freedom. Influencing cofactors, proximity to the influenced motif, and expression in the most used cell type We identified the influencing cofactor set in the cell type where one motif is used much more frequently than the others. More specifically, for a TF, we identified pairs of motifs and cell types where there is a maximal differential in cell type usage of the two motifs (i.e., one of the motifs has the highest usage in one cell type and the lowest usage in another, and vice versa). For such pairs of cell types X, Y and corresponding reference motifs m X & m Y , we determined the candidate motif-specific cofactors f X & f Y as follows. We first separated the sequences from cell types X and Y where m X and m Y matches are found, respectively. Next, we assessed each putative cofactor's motif enrichment in each sequence set relative to the other sequence set. If the putative cofactor is enriched in X relative to Y, we consider it as a putative influencing cofactor for m x and likewise for m y . All other cofactors (f c ) are considered noninfluencing and serve as a negative control. We measured the fold change (logFC) of all influencing and noninfluencing cofactors in X versus Y using the limma package (Ritchie et al. 2015). To demonstrate the genomic proximity between influenced motif and influencing cofactors, we chose the nearest distance between them among potentially multiple motif matches. Feature count and gene expression in ubiquitous vs. cell-specific submodels We designated a cluster as cell-type-specific if all member submodels (at least five) came from the same cell type. We then estimated skewness (https://cran.r-project.org/package=e1071) for each multi-cell-type based on the numbers of submodels contributed to the cluster by various cell types. If the skewness was <25%, we designated the cluster as ubiquitous. 
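The labeling of clusters as cell-type-specific or ubiquitous can be sketched in R as follows (not the authors' code); `cluster_composition` is a hypothetical list of per-cluster submodel counts by cell type, and reading the "<25%" skewness cutoff as a value of 0.25 is our assumption.

```r
# Minimal sketch of the cluster labeling rule described above.
# counts: for one cluster, the number of submodels contributed by each cell type.
library(e1071)   # provides skewness()

label_cluster <- function(counts, min_size = 5, skew_cutoff = 0.25) {
  if (sum(counts > 0) == 1 && sum(counts) >= min_size) return("cell-type-specific")
  if (skewness(counts) < skew_cutoff) return("ubiquitous")   # low skew = even contribution
  "other"
}

# labels <- sapply(cluster_composition, label_cluster)
```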
For each cluster, we counted the number of relevant features (i.e., those with nonzero importance). Among the relevant features, we retained only those that were deemed putative cofactors for at least one of the cell-specific models in our earlier analysis. The retained cofactors are designated ubiquitous or cell-type-specific based on the label of the cluster they belong to. Any features common to the two sets are removed. For each feature, we collected the expression values across the cell types in question and measured the skewness of gene expression.

Software availability

Sample data and code are available for download from the Supplemental Material and from the following GitHub repository: https://github.com/mhfzsharmin/trisectr
2016-11-01T19:18:48.349Z
2015-10-09T00:00:00.000
{ "year": 2016, "sha1": "d7dc45d0158275cb17ae5fae9ff1ee93fc22eebb", "oa_license": "CCBYNC", "oa_url": "http://genome.cshlp.org/content/26/8/1110.full.pdf", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "9bcf57baa79150263ec713ab373e624837a50364", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
119717397
pes2o/s2orc
v3-fos-license
Lucas' theorem: its generalizations, extensions and applications (1878--2014) In 1878 \'E. Lucas proved a remarkable result which provides a simple way to compute the binomial coefficient ${n\choose m}$ modulo a prime $p$ in terms of the binomial coefficients of the base-$p$ digits of $n$ and $m$: {\it If $p$ is a prime, $n=n_0+n_1p+\cdots +n_sp^s$ and $m=m_0+m_1p+\cdots +m_sp^s$ are the $p$-adic expansions of nonnegative integers $n$ and $m$, then \begin{equation*} {n\choose m}\equiv \prod_{i=0}^{s}{n_i\choose m_i}\pmod{p}. \end{equation*}} The above congruence, the so-called {\it Lucas' theorem} (or {\it Theorem of Lucas}), plays an important role in Number Theory and Combinatorics. In this article, consisting of six sections, we provide a historical survey of Lucas type congruences, generalizations of Lucas' theorem modulo prime powers, Lucas like theorems for some generalized binomial coefficients, and some their applications. In Section 1 we present the fundamental congruences modulo a prime including the famous Lucas' theorem. In Section 2 we mention several known proofs and some consequences of Lucas' theorem. In Section 3 we present a number of extensions and variations of Lucas' theorem modulo prime powers. In Section 4 we consider the notions of the Lucas property and the double Lucas property, where we also present numerous integer sequences satisfying one of these properties or a certain Lucas type congruence. In Section 5 we collect several known Lucas type congruences for some generalized binomial coefficients. In particular, this concerns the Fibonomial coefficients, the Lucas $u$-nomial coefficients, the Gaussian $q$-nomial coefficients and their generalizations. Finally, some applications of Lucas' theorem in Number Theory and Combinatorics are given in Section 6. ROMEO MEŠTROVIĆ ABSTRACT. In 1878É. Lucas proved a remarkable result which provides a simple way to compute the binomial coefficient n m modulo a prime p in terms of the binomial coefficients of the base-p digits of n and m: If p is a prime, n = n 0 + n 1 p + · · · + n s p s and m = m 0 +m 1 p+· · ·+m s p s are the p-adic expansions of nonnegative integers n and m, then n m ≡ s i=0 n i m i (mod p). The above congruence, the so-called Lucas' theorem (or Theorem of Lucas), plays an important role in Number Theory and Combinatorics. In this article, consisting of six sections, we provide a historical survey of Lucas type congruences, generalizations of Lucas' theorem modulo prime powers, Lucas like theorems for some generalized binomial coefficients, and some their applications. In Section 1 we present the fundamental congruences modulo a prime including the famous Lucas' theorem. In Section 2 we mention several known proofs and some consequences of Lucas' theorem. In Section 3 we present a number of extensions and variations of Lucas' theorem modulo prime powers. In Section 4 we consider the notions of the Lucas property and the double Lucas property, where we also present numerous integer sequences satisfying one of these properties or a certain Lucas type congruence. In Section 5 we collect several known Lucas type congruences for some generalized binomial coefficients. In particular, this concerns the Fibonomial coefficients, the Lucas u-nomial coefficients, the Gaussian q-nomial coefficients and their generalizations. Finally, some applications of Lucas' theorem in Number Theory and Combinatorics are given in Section 6. INTRODUCTION Prime numbers have been studied since the earliest days of mathematics. 
Congruences modulo primes have been widely investigated since the time of Fermat. There are numerous useful and often remarkable congruences and divisibility results for binomial coefficients; see [36,Ch. XI] for older results and [52] for a modern perspective. Let p be a prime. Then by Fermat little theorem, for each integer a not divisible by p a p−1 ≡ 1 (mod p). In attempting to discover some analogous expression which should be divisible by n 2 , whenever n is a prime, but not divisible if n is a composite number, in 1819 Charles Babbage [9] is led to the congruence for all primes p ≥ 3. In 1862 J. Wolstenholme [142] proved that the above congruence holds modulo p 3 for any prime p ≥ 5. The study of arithmetic properties of binomial coefficients has a rich history. As noticed in [52], many great mathematicians of the nineteenth century considered problems involving binomial coefficients modulo a prime power (for instance Babbage [9], Cauchy, Cayley, Gauss [45], Hensel, Hermite [57], Kummer [80], Legendre, Lucas [86] and [87], and Stickelberger). They discovered a variety of elegant and surprising theorems which are often easy to prove. For more information on these classical results, their extensions, and new results about this subject, see books of Dickson [36, Chapter IX] and Guy [53], while a more modern treatment of the subject is given by A. Granville [52]. Suppose that a prime p and pair of integers n ≥ m ≥ 0 are given. A beautiful theorem of E. Kummer of 1852 ([80, pp. 115-116]; also see [36, p. 270]) states that the exact power of the prime p which divides n m is given by the number of "carries" when m and n − m are added in base p arithmetic. This is a fundamental result in the study of divisibility properties of binomial coefficients. If n = n 0 +n 1 p+· · ·+n s p s and m = m 0 +m 1 p+· · ·+m s p s are the p-adic expansions of nonnegative integers n and m (so that 0 ≤ m i , n i ≤ p − 1 for each i), then by Lucas's theorem established byÉdouard Lucas in 1878 [86] (also see [36, p. 271] and [52]), The same result is without proof also presented by Lucas in 1878, in Section XXI of his massive journal paper [87, pp. 229-230]. This remarkable result by Lucas provides a simple way to compute the binomial coefficient n m modulo a prime p in terms of the binomial coefficients of the base-p digits of n and m. The above congruence, the socalled Lucas' theorem (or Theorem of Lucas) is a very important congruence in Combinatorial Number Theory and Combinatorics. In particular, this concerns the divisibility of binomial coefficients by primes. In this article, consisting of six sections, we provide a historical survey of Lucas type congruences, generalizations of Lucas' theorem modulo prime powers and Lucas like theorems for some classes of generalized binomial coefficients. Furthermore, we present some known applications of Lucas' theorem and certain of its variations in Number Theory and Combinatorics. This article is organized as follows. In Section 2 we mention several known algebraic and combinatorial proofs of Lucas' theorem. We also give some consequences and variations of Lucas' theorem. In Section 3 we present a number of extensions and variations of Lucas' theorem modulo prime powers. In Section 4 we consider the notions of the Lucas property and the double Lucas property. In this section we also present numerous integer sequences satisfying one of these properties or a certain similar Lucas type congruence. 
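As a small numerical illustration of the two classical results just stated, the following R sketch (ours, not from the survey) verifies Lucas' theorem and Kummer's carry-counting theorem for small parameters.

```r
# Illustrative R check of Lucas' theorem and Kummer's theorem for small n, m, p.
digits_base_p <- function(n, p) {          # p-adic digits, least significant first
  d <- integer(0)
  while (n > 0) { d <- c(d, n %% p); n <- n %/% p }
  if (length(d) == 0) d <- 0
  d
}

lucas_binom_mod_p <- function(n, m, p) {   # product of digit-wise binomials mod p
  dn <- digits_base_p(n, p); dm <- digits_base_p(m, p)
  length(dm) <- length(dn); dm[is.na(dm)] <- 0
  prod(choose(dn, dm)) %% p
}

kummer_valuation <- function(n, m, p) {    # carries when adding m and n - m in base p
  carries <- 0; carry <- 0; r <- n - m
  while (m > 0 || r > 0 || carry > 0) {
    s <- (m %% p) + (r %% p) + carry
    carry <- if (s >= p) 1 else 0
    carries <- carries + carry
    m <- m %/% p; r <- r %/% p
  }
  carries
}

n <- 30; m <- 12; p <- 7
lucas_binom_mod_p(n, m, p) == choose(n, m) %% p   # Lucas' theorem: TRUE
v <- 0; x <- choose(n, m)
while (x %% p == 0) { v <- v + 1; x <- x / p }
kummer_valuation(n, m, p) == v                    # Kummer's theorem: TRUE
```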
In particular, these properties are closely related to the divisibility properties of certain binomial coefficients, matrices, different binomial sums, Apéry numbers, Delannoy numbers, Stirling numbers of the first and second kind etc. In Section 5 we collect several known Lucas type congruences for some generalized binomial coefficients. In particular, this concerns the Fibonomial coefficients, the Lucas u-nomial coefficients, the Gaussian q-nomial coefficients and some their generalizations. Finally, applications of Lucas' theorem are given in Section 6 of this survey article. Some of these applications are closely related to the determination of number of entries of Pascal's triangle with a prescribed divisibility property. We also present some known primality criteria whose proofs are based on Lucas' theorem. Furthermore, we give certain known results concerning the characterizations of the algebraicity of some classes of formal power series in terms of the notion of the p-Lucas property. LUCAS' THEOREM AND ITS VARIATIONS 2.1. Lucas' theorem. As noticed above, if n = n 0 + n 1 p + · · · + n s p s and m = m 0 + m 1 p + · · · + m s p s are the p-adic expansions of integers n and m such that 0 ≤ m i , n i ≤ p − 1 for each i = 0, 1, . . . , s, then a beautiful Lucas's theorem ([86]; also see [52] ( [86] and [36, p. 271]) states that (with the usual convention that 0 0 = 1, and l r = 0 if l < r). The congruence (1) was established by Lucas by considering patterns in Pascal's triangle. Furthermore, (1) is equivalent to the following Lucas' earlier generalization [86, p. 52] of an 1869 result of H. Anton [7, pp. 303-306] (also see [36, p. 271]): where n div p denotes the integer quotient of n by a prime p, and n mod p its remainder. The congruence (2) is in fact the equivalent form of Lucas' theorem which is often stated in the follwing way: where p is a prime, n, m, r and s are nonnegative integers such that 0 ≤ r, s ≤ p − 1. If a prime p divides n m then (1) follows easily from Kummer's theorem. However, if p l is the exact power of p dividing n m , then we might ask for the value of 1 p l n m (mod p). The related result was discovered by H. Anton in 1869 [7] (see also [52], [75, pp. 3-4] and [121]) who proved that if p l is the exact power of p dividing n m , (l is by Kummer's theorem, the number of "carries" when m and n − m are added in base p arithmetic), then where n = n 0 + n 1 p + · · · + n s p s , m = m 0 + m 1 p + · · · + m s p s , and r = n − m = r 0 + r 1 p + · · · + r s p s with 0 ≤ m i , n i , r i ≤ p − 1 for each i = 0, 1, . . . , s. Remark 1. Numerous authors have asked whether there is an analogous congruence modulo p l to (4), for arbitrary l ≥ 1. In 1995 A. Granville [52,Theorem 1] gave a positive answer to this question (see the congruence (33)) in Subsection 3.2). The several proofs offered for Lucas' theorem are primarily of to typesalgebraic and combinatorial. The well known algebraic proof of Lucas' theorem due to N.J. Fine [39] in 1947 is based on the binomial theorem for expansion of (1 + x) n . This proof runs as follows. Since by Kummer's theorem, the binomial coefficient p k is divisible by a prime p for every k = 1, 2, . . . , p − 1, by the binomial expansion it follows that Continuing by induction, we have that for every nonnegative integer i Write n and m in base p, so that n = s i=1 n i and m = s i=1 m i for some nonnegative integers s, n 0 , . . . , n s , m 0 , . . . , m s with 0 ≤ n i , m i ≤ p − 1 for all i = 0, 1, . . . , s. 
Then By comparing the coefficients of X m on the left hand side and on the right hand side of the above congruence immediately yields Lucas' theorem given by (1). As an application of a counting technique due to M. Hausner in 1983 [55], in the same paper [55, Example 4] the author established another combinatorial proof of (3). Another proof of the congruence (3) based on a simple combinatorial lemma is presented in 2005 by P.G. Anderson, A.T. Benjamin and J.A. Rouse in [6, p. 268] (see also [13]). Another two proofs of Lucas' theorem, based on techniques from Elementary Number Theory were obtained in 2010 by S.-C. Liu and J.C.-C. Yeh [83] and in 2012 by A. Laugier and M.P. Saikia [82]. The congruence (3) immediately yields since the same products of binomial coefficients are formed on the right side of Lucas's theorem in both cases, other than an extra 0 0 = 1. A direct proof of the congruence (5), based on a polynomial method, is given in [133, Solution of Problem A-5, p. 173] as follows. It is well known that p i ≡ 0(mod p) for each i = 1, 2, . . . , p − 1 (see (11)) or equivalently that in the ring Since coefficients of like powers must be congruent modulo p in the equality Further, notice that the Lucas' congruence (3) easily follows by induction on the sum r + s ≥ 0 using the base induction r + s = 0 with r = s = 0 satisfying via the congruence (5), and the Pascal formulas: Remark 2. The Lucas' congruence (3) also can be interpreted as a result about cellular automata (cf. Granville [52,Section 5]). Namely, Lucas' theorem can be interpreted as a two-dimensional p-automaton (for a formal definition see [3]). Some consequences and extensions of Lucas' theorem. Here, as always in the sequel, p will denote any prime. As noticed in 2011 by A. Nowicki [103, the congruences 7.3.1-7.33], if n = n 0 + n 1 p + · · · + n s p s is the p-adic expansion of a positive integer n, then for each k = 0, 1, . . . , s holds, and consequently, where ⌊x⌋ is the greatest integer less than or equal to x. Remark 3. The congruence (7) is proposed by L.E. Clarke [26] in 1956 as a problem which is solved in 1957 by P.A. Piza [108]. Moreover, if 0 ≤ r < p f and 0 ≤ m < p f , then the Lucas' congruence (3) immediately yields (see [103, the congruence 7.3.6]) Lucas' theorem immediately yields the following well known congruence: where p is a prime and k is an integer such that 1 ≤ k ≤ p − 1. Furthermore, if p is a prime and f a positive integer, then by Lucas' theorem for any f ≥ 1 and 1 ≤ k ≤ p f − 1 we have (see, e.g., [13,Theorem 24]) Further, if p is a prime and n, m and k are positive integers with m ≤ n, then the congruence (5) where m div p is the integer quotient of m by p and m mod p is the remainder of m by division by p. (similarly, for n instead of m). It follows that if n = n 0 + n 1 p + · · · + n s p s and m = m 0 + m 1 p + · · · + m s p s , where 0 ≤ m i , n i ≤ p − 1 for each i = 0, 1, . . . s, then Following Granville [52,Section 6], for an integer polynomial f (X) of degree d, define the numbers m n f with m, n ∈ Z by the generating function and let m n f = 0 if n < 0 or n > md (note that m n f = m n when f (X) = X + 1). Clearly, by Fermat little theorem, f (X) p ≡ f (X p )(mod p), and using this in 1995 A. 
Granville [52, Section 6, the congruence (24)] proved the following generalization of the congruence (4): If p is a prime, m, n nonnegative integers such that m = pl + m 0 , n = pt + n 0 , l, t, m 0 , n 0 ∈ N and 0 ≤ m 0 , n 0 ≤ p − 1, then Notice that when f (X) = X + 1 then the congruence (16) becomes which is in fact the Lucas's congruence (3). By using a congruence based on Burnside's theorem, in 2005, T.J. Evans [38,Theorem 3] proved the following extension of Lucas' theorem involving Euler's totient function ϕ: If n ≥ 1, m, M, m 0 , r, R and r 0 are nonnegative integers such that m = Mn + m 0 , r = Rn + r 0 , with 0 ≤ m 0 , r 0 < n, then (17) where the summation runs among all positive divisors d of n. Remark 4. It was proved in [38, Corollary 3] that Lucas' theorem easily follows from the congruence (17). 3. LUCAS TYPE CONGRUENCES FOR PRIME POWERS 3.1. Wolstenholme type congruences. Notice that for any prime p the congruence (5) with n = 2 and m = 1 becomes whence by the identity 2p p = 2 2p−1 p−1 it follows that for any prime p As noticed in 1, in 1819 Charles Babbage [9] (also see [52,Introduction] or [36, page 271]) showed that the congruence (18) holds modulo p 2 , that is, for a prime p ≥ 3 holds The congruence (19) was generalized in 1862 by Joseph Wolstenholme [142] as it is presented in the next section. Namely, Wolstenholme's theorem asserts that (20) 2p for all primes p ≥ 5. For a survey of Wolstenholme's theorem see [93] and for its extensions see [146] and [100]. . Further, the congruence (22) is refined in 1952 by E. Jacobsthal [19] (also see [52]) as follows: if p ≥ 5 is a prime, n and m are positive integers with m ≤ n, then where t is the power of p dividing p 3 nm(n − m) (this exponent t can only be increased if p divides B p−3 , the (p − 3)rd Bernoulli number). Remark 8. In the literature, the congruence (23) is often called Jacobsthal-Kazandzidis congruence (see e.g., [27, Section 11.6, p. 380]). In 2008 C. Helou and G. Terjanian [56, the congruence (1) of Corollary on page 490] refined the Jacobsthal's result as follows (also see [27, Section 11.6, Corollary 11.6.22, p. 381] for a stronger form)): If p ≥ 5 is a prime, n and m are positive integers with m ≤ n, then where t is the power of p dividing p 3 m(n − m) n m . By a problem N4 of Short list of 48th IMO 2006 [35], for every integer k ≥ 2, 2 3k divides the number Variations of Lucas' theorem modulo prime powers. In 1991 D.F. Bailey [11,Theorem 4] proved that if p is a prime, n and r are nonnegative integers and s a positive integer less than p, then In the same paper [11,Theorem 5], the author extended the previous congruence as follows: if p ≥ 5 is a prime, 0 ≤ m ≤ n, 0 ≤ r < p and 1 ≤ s < p, then Remark 9. Notice that Bailey's proof of the congruence (27) (proof of Theorem 5 in [10]) is deduced applying the Ljunggren's congruence (22) (Theorem 4 in [10]) and a counting technique of M. Hausner from [55]. (28) Remark 10. If we put a = a s−1 p s−1 + · · · + a 1 p + a 0 , then the congruence (28) can be written as (29) np s mp s + a ≡ (m + 1) n m + 1 where a is a positive integer less than p s which is not divisible by p. . Using a multiple application of Lucas' theorem, in 2012 the author of this article [98, Theorem 1.1] proved the following similar congruence to (29): where p is a prime, n, m, s and a are nonnegative integers such that n ≥ m, s ≥ 1, 1 ≤ a ≤ p s − 1, and a is not divisible by p. 
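The Babbage and Wolstenholme congruences quoted above are easy to check numerically for small primes; here is a short illustrative R snippet (ours, not part of the survey).

```r
# Illustrative check of Babbage's and Wolstenholme's congruences:
# C(2p, p) == 2 (mod p^2) for primes p >= 3, and (mod p^3) for primes p >= 5.
for (p in c(5, 7, 11, 13)) {
  c2pp <- choose(2 * p, p)                 # exact in double precision for these primes
  cat(sprintf("p = %g: C(2p,p) mod p^2 = %g, mod p^3 = %g\n",
              p, c2pp %% p^2, c2pp %% p^3))   # both residues equal 2
}
```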
Kummer's theorem given in Section 1, is useful in situations where the binomial coefficient is divisible by a prime power. However, if the binomial coefficient is not congruent to zero modulo a prime, then the question remains for a way to simplify the expression. In 1995 A. Granville [52, Theorem 1] generalized Anton's congruence (4) modulo prime powers as follows. For a given integer k define (k!) p to be the product of all integers less than or equal to k, which are not divisible by p. Suppose that prime power p f and positive integers n and m are given with r := n − m ≥ 0. Write n = n 0 + n 1 p + · · · + n s p s in base p, and let N j be the least positive residue of ⌊n/p j ⌋(mod p f ) for each j ≥ 0 (so that N j = n j + n j+1 p + · · · + n j+f −1 p f −1 ); also make the corresponding definitions for m j , M j , r j , R j . Let e j be the number of indices i ≥ j for which n i < m i (that is, the number of "carries" when adding m and r in base p, on or beyond the jth digit). Then Here, as usually in the sequel, we will consider the congruence relation modulo a prime power p l extended to the ring of rational numbers with denominators not divisible by p. For such fractions we put m/n ≡ r/s (mod p l ) if and only if ms ≡ nr (mod p l ), and the residue class of m/n is the residue class of mn ′ where n ′ is the inverse of n modulo p l . A result which gives readily an extension of Lucas' theorem in the form of the congruence to prime power moduli is given in 1992 by A. Granville [51, Proposition 2] as follows: For each positive integer j, define n j to be the least nonnegative residue of an integer n modulo p j . If p is a prime that does not divide n m , then for any positive integer f . In particular, if n m is not divisible by p and m ≡ n(mod p f ), then by (34) (also see [103, the congruence 7.1.16]) (35) n m ≡ ⌊n/p⌋ ⌊m/p⌋ (mod p f ). As observed in 1998 by D. Berend and J.E. Harmse [15, p. 34, congruence (2.2)], if a prime p does not divide n m and n = n 0 + n 1 p + · · · + n s p s , m = m 0 + m 1 p + · · · + m s p s are the p-adic expansions of n and m, then iterating the congruence (34), we find that The congruence (36) , which is there formulated as follows: If n = n 0 + n 1 p + · · · + n s p s , m = m 0 + m 1 p + · · · + m s p s are the p-adic expansions of n and m, and l < s, then If a = a 0 +a 1 p+· · ·+a k−1 p k−1 +a k p k and b = b 0 +b 1 p+· · ·+b k−1 p k−1 +b k p k are the p-adic expansions of a and b such that b k > a k , then we define Remark 12. For help in understanding the above result concerning the congruence (37), we offer the following example [85, p. 88]: In 2005 A.D. Loveless [85, p. 88] noticed that the above result concerning the congruence (37) can be used to simplify general classes of congruences modulo prime powers involving binomial coefficients. In particular, Loveless [85, p. 88, Theorem 5.1.3]) proved that if p is a prime, s and n are positive integers with n ≤ p s , then A similar result was earlier directly proved in 1980 by P.W. Haggard and J.O. Kiltinen [54,p. 398,Theorem]. This result asserts that if p is a prime, l and f are positive integers with f ≥ l − 1 and 0 ≤ n ≤ p f , then Using the congruence (37) (31) and (32) for any modulus p k with p ≥ 5 and k ≥ 1. They proved [30,Theorem 3] that if p is any prime, k, n, m, a, b and s are positive integers such that 0 < a, b < p s , then Remark 13. 
Notice that under the same assumption preceding the congruence (40), and if np k+s +a mp k+s ≡ 0(mod p), then the congruence (40) can be obtained by iterating s times the Granville's congruence (34). Notice also that the condition np k+s +a mp k+s ≡ 0(mod p) is by Lucas' theorem equivalent to the following two conditions: n m ≡ 0(mod p) and a b ≡ 0(mod p). Further, by repeated application of the congruence (40), and using Ljunggren's congruence (22), we find that under the same assumptions preceding the congruence (40) [30, Corollary 1] for any prime p > 3, In particular, the congruence (41) with s = 1 and k − 1 ≥ 0 instead of k implies that for each prime p ≥ 5 and for all integers k ≥ 1, n ≥ 0, a and b with 0 ≤ a, b < p Furthermore, the congruence (42) with ⌊k/2⌋ instead of ⌊(k − 1)/3⌋ is satisfied for p = 2, and the congruence (42) with ⌊(k − 1)/2⌋ instead of ⌊(k − 1)/3⌋ is also satisfied for p = 3. Remark 14. As noticed above, a proof of the congruence (41) given by Davis and Webb is based on their earlier result from [29] given by the congruence (41). However, this result together with related proof is slightly more complicated. In 2012 the author of this article [97,Theorem] gave a simple induction proof of the congruence (42) which uses only the usual properties of binomial coefficients. Adapting Fine's method [39], in 1988 R.A. Macleod [88, Theorem 2] proved the following variation of Lucas' theorem: Let p be a prime, let r be a positive integer, and let Then for every nonnegative integer where the summation ranges over all k + 1-tuples (N 0 , N 1 , . . . , N k ) such that Quite recently, in 2014 E. Rowland and R. Yassawi [115, Section 5, Theorem 5.3] established a new generalization of Lucas' theorem to prime powers as follows: Let p be a prime, let f be a positive integer and let D = {0, 1, . . . , p f − p f −1 }. If n = n 0 + n 1 p + · · · + n s p s and m = m 0 + m 1 p + · · · + m s p s are the p-adic expansions of nonnegative integers n and m, then n m ≡ where i = l h=0 i h p h and j = l h=0 j h p h . Remark 15. Note that i = l h=0 i h p h and j = l h=0 j h p h are representations of integers i and j in base p with an enlarged digit set D rather than the standard digit set {0, 1, . . . , p − 1}. Remark 16. E. Rowland and R. Yassawi [115,Section 5] showed that a broad range of multidimensional sequences possess "Lucas products" modulo a prime p. Furthermore, in 2009 K. Samol and D. van Straten [117,Proposition 4.1] established the Lucas type congruence for a sequence whose terms are constant terms of P (x) n for certain Laurent polynomials P (x). Characterizations of Wolstenholme primes. or equivalently, The two known such primes are 16843 and 2124679, and R.J. McIntosh and E.L. Roettger reported in [91] that these primes are only two Wolstenholme primes less than 10 9 . However, McIntosh in [90] conjectured that there are infinitely many Wolstenholme primes (for more information see [94] where B k (k = 0, 1, 2, . . .) are Bernoulli numbers defined by the generating The congruence (46) shows that a prime p is a Wolstenholme prime if and only if p divides the numerator of B p−3 , the (p − 3)rd Bernoulli number. As an application of the congruences (42) with k = 4 and Jacobsthal's congruence (23), we can obtain the following characterization of Wolstenholme primes given in 2012 by the author of this article [97,Proposition]: The following statements about a prime p ≥ 5 are equivalent. 
(i) p is a Wolstenholme prime; (ii) for all nonnegative integers n and m the congruence holds; (iii) for all nonnegative integers n, m, n 0 and m 0 such that n 0 and m 0 are less than p, McIntosh [89] proposed the following definition: Definition. The integer sequence (a n ) n≥0 has the Lucas property if a 0 = 1, and for every prime p, every n ≥ 0, and every j ∈ {0, 1, . . . , p − 1} the congruence (49) a pn+j ≡ a n a j (mod p) THE LUCAS PROPERTY AND THE p-LUCAS holds. Remark 17. (cf. [1, p. 152, Remark 6.1]). Taking n = j = 0 in the congruence (49) gives a 0 ≡ a 2 0 (mod p). This yields that either a 0 ≡ 0(mod p) or a 0 ≡ 1(mod p). In the first case, taking n = 0 and j ∈ {0, 1, . . . , p − 1} gives a j ≡ 0(mod p); hence a pn+j ≡ a n a j ≡ 0(mod p) for all n's and j's. This means that a n is a zero sequence modulo p. What precedes implies that such a sequence either satisfies a n = 0 for all n ≥ 0 or a 0 = 1. An analogous definition of double Lucas property is given also by McIntosh [89] as follows: Definition. The function L : N × N → Z has the double Lucas property if L(n, m) = 0 for all n < m, and for every prime p, every n, m ≥ 0, and every r, s with 0 ≤ r, s ≤ p − 1 the congruence (50) L(np + r, mp + s) ≡ L(n, m)L(r, s) (mod p) For a prime p and a positive integer k, in 1994 M. Razpet [111] considered the p k × p k matrix A(k, p) = [a i,j (k, p)] 0≤j≤p k −1 0≤i≤p k −1 , whose the entry a i,j (k, p) is defined as the remainder of the division of i j by p. In particular, for k = 1 we write A(p) = A(1, p) = [a i,j (p)] 0≤j≤p−1 0≤i≤p−1 . M. Razpet [111] noticed that for every k ≥ 1 and every prime p, the matrix A(k, p) is the k-fold tensor (or Kronecker) product of the matrix A(p) by itself in the field Z p , that is, A(k, p) = A(p) ⊗ A(p) · · · ⊗ A(p) k = A(p) ⊗k . Note that matrix indices start at index pair (0, 0). This is an algebraic and "square" representation of the oft-noted self-similarity structure of Pascal's triangle (see, e.g., [58] and [141]). Furthermore, as noticed in [111, p. 378], by Lucas' theorem we have Remark 19. In [109] M. Prunescu pointed out that Pascal's triangle modulo p k is not a limit of tensor powers of matrices if k ≥ 2. However, Pascal's triangle modulo p k are p-automatic, and consequently can be produced by matrix substitution and are projections of double sequences produced by two-dimensional morphisms (see [4]). In where n and a are nonnegative integers. Then f n,0 = n + 1, f n,1 = 2 n and f n,2 = 2n n . The sequences (f n,a ) n≥0 for a = 3, 4, 5, 6 are Sloane's sequences A000172 (Franel numbers), A005260, A005261, A069865 in [124], respectively. Calkin [21,Lemma 4] proved that for every positive integer a, the sequence (f n,a ) n≥0 has the Lucas property. This means that if p is a prime and if n = n 0 + n 1 p + · · · + n s p s is the p-adic expansion of n, then (56) f n,a ≡ s i=0 f n i ,a (mod p). Calkin [21, p. 21] also noticed that for any a ∈ {1, 2, . . .} the sequence (h n,a ) n≥0 defined as also has the Lucas property. For a positive integer n the central trinomial coefficient T n is the largest coefficient in the expansion (1 + x + x 2 ) n (Sloane's sequence A002426 in [124]). It is easy to express T n in terms of trinomial coefficients as where we use the convention that if any multinomial coefficient has a negative number on the bottom then the coefficient is zero. In 2006 E. Deutsch and B.E. Sagan [33] proved that the sequence (T n ) n≥0 has the Lucas property. 
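Calkin's result that the sums f_{n,a} = Σ_k C(n,k)^a have the Lucas property can be spot-checked numerically; the following illustrative R snippet (ours) verifies the defining congruence f_{pn+j} ≡ f_n f_j (mod p) for p = 5 and a = 3 (the Franel numbers), keeping n small so that exact integer arithmetic in doubles suffices.

```r
# Illustrative check of the Lucas property f_{pn+j} == f_n * f_j (mod p)
# for f_{n,a} = sum_k choose(n,k)^a (a = 3 gives the Franel numbers).
f <- function(n, a) sum(choose(n, 0:n)^a)

p <- 5; a <- 3
ok <- TRUE
for (n in 0:2) for (j in 0:(p - 1)) {
  lhs <- f(p * n + j, a) %% p
  rhs <- (f(n, a) * f(j, a)) %% p
  ok <- ok && (lhs == rhs)
}
ok   # TRUE: the Lucas property holds for these small cases
```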
Namely, by [33,Theorem 4.7] if p is a prime and n = n 0 + n 1 p + · · · + n s p s is a positive integer with 0 ≤ n i ≤ p − 1 for all i = 0, 1, . . . , s, then Furthermore, E. Deutsch and B.E. Sagan [33,Theorem 4.4] proved the following result for central binomial coefficients 2n n (Sloane's sequence A000984 in [124]): Let p be a prime and let n = n 0 + n 1 p + · · · + n s p s be a positive integer with 0 ≤ n i ≤ p − 1 for all i = 0, 1, . . . , s. For every j ∈ {0, 1, . . . , p − 1} let δ p,j (n) be the number of elements of the set {n 0 , n 1 , . . . , n s } equal to j. Then (58) 2n n ≡ As an application, the authors proved [25, Corollary 2.1] that for every prime p ≥ 3 and every integer n = n 0 + n 1 p + · · · + n s p s with 0 ≤ n i ≤ (p − 1)/2 for each i = 0, 1, . . . , s, we have Similarly, if the sums w(n) are defined as Remark 20. We point out that the Lucas property holds for a general family of sequences considered in 2006 by T.D. Noe [102]. For all nonnegative integers i and j let w(i, j|a, b, c) denote the number of all paths in the plane from (0, 0) to (i, j) with steps (1, 0), (0, 1), (1, 1), and with positive integer weights a, b, c, respectively. The explicit formula for w(i, j|a, b, c) was obtained by several authors by using combinatorial arguments (see, e.g., [43]): Actually, k in the above sum runs from max{i, j} to i + j. In the case a = b = c = 1, we have even the Delannoy numbers which count the usual, unweighted lattice paths from the point Remark 21. Razpet [112] notice that the congruence (62) is particularly true for the Delannoy numbers D(i, j) := w(i, j|1, 1, 1) as proven in another way in 1990 by M. Razpet [110] and by M. Sved and R.J. Clarke [132] (see also [33] and [37]). Further Lucas type congruences. For nonnegative integers n and k Stirling numbers of the second kind n k (Sloane's sequence A008277 in [124]) are recursively defined as: Notice also that under the hypothesis that r + n − m + 1 < s + p, the congruence (65) reduces to (66) np + r mp + s ≡ n − m + r s n m (mod p). For nonnegative integers n and k Stirling numbers of the first kind n k (Sloane's sequence A008275 in [124]) are defined by the recurrence relation The absolute value of n k (Sloane's sequence A094638 in [124]) denotes, as usual, the number of permutations of n elements which contain exactly k permutation cycles. In 1993 R. Peele, A.J. Radcliffe and H.S. Wilf [107, Proposition 2.1] proved the following analogue of Lucas' theorem for the numbers n k : Let p be a prime and let n and k be integers with 1 ≤ k ≤ n. Let n ′ = ⌊n/p⌋ and n 0 = n − n ′ p. Further, define integers i and j as follows: For a nonnegative integer k let J k (z) be the -it Bessel function of the first kind. Put Furthermore, define the polynomial u i (k; x) by means of Certain Lucas type congruences for w i (x) = u i (0; x) and the integers w i = w i (0) with i = 0, 1, 2 . . . , were derived by L. Carlitz [22] in 1955, and an interesting application was presented ((w n ) n≥0 is Sloane's sequence A000275). Let p be a prime and let n, r, l and a be positive integers. Following Z.-W. Sun and D. Wan [130], the normalized cyclotomic ψ-coefficient is defined as J. Boulanger and J.-L. Chabert [18] have extended Lucas' theorem to Linear Algebra and Even Topology. Their result can be briefly exposed as follows. Let V be a discrete valuation domain with finite residue field. 
Denote by K the quotient field of V , by v the corresponding valuation of K, by m the maximal ideal of V , and by q the cardinality of the residue field V /m. We denote by K, V and m the completions of K, V and m, respectively, with respect to the m-adic topology and we still denote by v the extension of v to K. Consider the ring Int(V ) of integer-valued polynomials on V , that is, A basis C n (X) of the V -module Int(V ) can be constructed as follows [20, Chapter II, §2 ]. We choose a generator t of m and a set U = {u 0 = 0, u 1 , . . . , u q−1 } of representatives of V modulo m. It is known that each element x of V has a unique t-adic expansion x j t j with x j ∈ U for each j ∈ N. We now construct a sequence (u n ) n≥0 of elements of V which will replace the sequence of nonnegative integers. Taking q as the basis of the numeration, that is, writing every positive integer n in the form n = k i=0 n i q i with 0 ≤ n i < q for each i = 0, 1, . . . , k, we extend the sequence (u j ) 0≤j<k in the following way: u n = u n 0 + u n 1 t + u n 2 t 2 + · · · + u n k t k . We then replace the binomial polynomials is the t-adic expansion of an element x of V , then Remark 24. Notice also that in 1993 N. Zaheer [144] generalized Lucas' theorem to vector-valued abstract polynomials in vector spaces. It is well known that for all n = 0, 1, 2, . . . LUCAS TYPE THEOREMS FOR SOME GENERALIZED BINOMIAL In fact, α and β are roots of the characteristic equation x 2 − Ax + B = 0. Note that for A = 1, B = −1 the terms of the sequence (u n ) n≥0 defined by (77) are the well-known Fibonacci numbers F n defined recursively as F 0 = 0, F 1 = 1 and Fibonacci numbers are in fact the Lucas sequence (u n ) n≥0 given by (77) with u 0 = 0 and u 1 = 1. Similarly, the Lucas numbers L n are defined by L 0 = 2, L 1 = 1 and L n+1 = L n + L n−1 for n ≥ 1. Fibonacci numbers F n and Lucas numbers L n are given as Sloane's sequences A000045 and A000032 in [124], respectively. Let a := (a n ) n≥0 be a sequence of real or complex numbers such that a n = 0 for all n ≥ 1. The a-nomial coefficients (or the generalized binomial coefficients) (associated to the sequence a) are defined by n k a = a n a n−1 · · · a 1 (a k a k−1 · · · a 1 )(a n−k a n−k−1 · · · a 1 ) for n ≥ 2 and 1 ≤ k ≤ n−1, and n 0 a = n n a = 1 for n ≥ 0. In general, even if the all terms of a sequence a = (a n ) n≥0 are integers, n k a may not be integers. In 1913 R.D. Carmichael [24, page 40] proved that if the sequence a := (a n ) n≥1 of positive integers is defined recursively as a 1 = a 2 = 1, and a n+1 = ca n + da n−1 for n = 2, 3, 4, . . . , where c and d are integers, then the all a-nomial coefficients are integers. For a more general result see Remark 28. If u := (u n ) n≥0 is the Lucas sequence defined by (77), and if A = ±1 or B = 1, then u 1 , u 2 , . . . are nonzero (see, e.g., [69]), and so are v 1 = u 2 /u 1 , v 2 = u 4 /u 2 , . . ., where v := (v n ) n≥0 is the companion sequence of the sequence (u n ) n≥0 given by (78). In the case when A 2 = B = 1, then as noticed in [69] u n = 0 if and only if 3 | n. If v n = 0, then u 2n = u n v n = 0; hence 3 | n and u n = 0, which is impossible since v 2 n − ∆u 2 n = 4B n (cf. [68]). Thus v 0 , v 1 , v 2 , . . . are all nonzero. If f A = ±1 or B = 1 the Lucas u-nomial coefficient n k u with 1 ≤ k ≤ n is the generalized binomial coefficient associated to the Lucas sequence u := (u n ) n≥0 defined by (77), that is, for n ≥ 2 and 1 ≤ k ≤ n−1, and n 0 u = n n u = 1 for all n ≥ 0. 
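As a concrete illustration of the a-nomial coefficients for the Fibonacci sequence (the Fibonomial coefficients), the following R sketch (ours, illustrative only) computes small Fibonomials directly from the product formula; all values come out as integers, in line with Carmichael's result quoted above.

```r
# Illustrative sketch: Fibonomial coefficients computed from the product formula.
fib <- function(n) {                       # F_0 = 0, F_1 = 1, F_2 = 1, ...
  f <- c(0, 1)
  while (length(f) < n + 1) f <- c(f, sum(tail(f, 2)))
  f[n + 1]
}

fibonomial <- function(n, k) {
  if (k == 0 || k == n) return(1)
  prod(sapply((n - k + 1):n, fib)) / prod(sapply(1:k, fib))
}

outer(0:8, 0:8, Vectorize(function(n, k) if (k > n) 0 else fibonomial(n, k)))
# All entries are integers; e.g. fibonomial(6, 3) = F6*F5*F4 / (F1*F2*F3) = 8*5*3/(1*1*2) = 60.
```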
In the sam way we define the v-nomial generalized binomial coefficient n k v , where v := (v n ) n≥0 is the companion sequence of the Lucas sequence (u n ) n≥0 defined by (78). It is easy to see that if 0 ≤ m ≤ n, then and whence easily follows by induction that if q is any positive integer, then n m q are also integers for all n and m. Remark 26. An analogy to the Lucas u-nomial coefficients n k u was obtained in 1995 by W.A. Kimball and W.A. Webb [77] and in 1998 by B. Wilson [140] in some special cases, and in 2001 by H. Hu and Z.-W. Sun [69] for the general case (see Subsection 5.2). Under the above notations, in 1994 D.L. Wells [139,Theorem 2] proved that In 1988 M. Sved [131] establihed that the geometry of the binomial arrays of Pascal's triangle modulo p gives a simple interpretation of Lucas' theorem. Moreover, as noticed in [131, p. 58], this interpretation can be extended to arrays of other combinatorial functions; in particular, Lucas' theorem can be generalized to the Gaussian q-nomial coefficients as follows. Let p be a prime, q > 1 a positive integer not divisible by p, and let a = 1 be the minimal exponent for which q a ≡ 1 (mod p); then by Fermat little theorem it follows that a | (p − 1). Further, if n = Na + n 0 , m = Ma + m 0 with 0 ≤ n 0 < a and 0 ≤ m 0 < a, then [131, p. 60] Remark 27. In the same area of research A. Bès [16] generalized Lucas' theorem. This accomplishment obviously serves to improve the security of cryptographic applications modulo prime powers [16]. Definition. For a positive integer d, the rank of apparition r = r(d) with respect to the integer sequence (a n ) n≥0 is the least index n for which d divides a n , that is, r(d) = min{n ∈ N : d | a n } (if d does not divide any a n , then r(d) = ∞). Remark 28. Let a = (a n ) n≥0 be an integer sequence. In order to guarantee that the all a-nomial coefficients n k a = 0 are integers, it is usually required that the sequence a = (a n ) n≥0 be regularly divisible, that is, p i | a j if and only if r(p i ) | j for all i ≥ 1, j ≥ 1, and all primes p. Here r(p i ) denotes the rank of apparition og p i as defined above. The principal class of sequences which are known to be regularly divisible are the Lucas sequences given by (77) for which gcd(A, B) = 1 (see [63]). In 2000 J.M. Holte [61, Theorem 1] proved the following result: Let p be a prime and let m and n be nonnegative integers. Let r be the rank of apparition of p with respect to the Lucas sequence u = (u n ), let τ be the period of (u n ) modulo p, and let t = τ /r (t is necessarily a positive integer). Furthermore, for i, j ≥ 0 and for 0 ≤ k, l < r, let A i,j (k, l) denote the solution of the modulo p recurrence relation and let H i,j (k, l) = u rij r+1 A i,j (k, l). Set n 0 = n(mod r), m 0 = m(mod r), n ′ = n + r, m ′ = m + r, n ′′ = n ′ (mod t), and m ′′ = m ′ (mod t). Then (81) m + n n u ≡ m ′ + n ′ n ′ H m ′′ ,n ′′ (m 0 , n 0 ) (mod p). Using the above result, with the same notations as above, Holte [61, Theorem 3] also proved the following result: Let (u n ) be the Lucas sequence defined by (77), let p be a prime such that B is not divisible by p. Set λ = max{0, m ′′ + n ′′ − (p − 1)}, n * = n(mod t) and m * = m(mod t). Then (82) m + n n Thus, except when s = p − 1 and m ′′ + n ′′ ≥ p, then Holte [61,Section 7] noticed that by means of a bit of translation, the congruence (82) may be transformed into the following result obtained in 1992 by D. 
Wells [137] (also see [138]): Let N = n + m, and correspondingly, N 0 = N(mod r), N ′ = ⌊N/r⌋, and N ′′ = N ′ (mod s). Let N ′ = l j=0 N j p j and m ′ = l j=0 m j p j be the p-adic expansions of N ′ and m ′ . If p is a prime such that B is not divisible by p, then under the same definitions of B and t as above, for N ′′ ≥ m ′′ , and for N ′′ < m ′′ , [69,Theorem] proved the following result for the Lucas u-nomial coefficients: Let u = (u n ) n≥0 be a Lucas sequence defined by (77). Suppose that gcd(A, B) = 1, and A = ±1 or B = ±1. Then u k = 0 for every k ≥ 1. Let q be a positive integer, let m and n be nonnegative integers, and let R(q) = {0, 1, . . . , q − 1}. If s, t ∈ R(q) then where w q is the largest divisor of u q relatively prime to u 1 , . . . , u q−1 . If q or m(n + t) + n(s + 1) is even, then Remark 30. ([69, Remark 1]) When A = 2 and B = 1, we have u k = k for each nonnegative integer k, and if in addition we assume that q = p is a prime, then w p = p, and hence the congruence (86) becomes which is in fact, Lucas' theorem. In 2002 H. Hu [68, p. 291, Theorem] proved the following result: Let q be a positive integer, and let m and n be even nonnegative integers with n ≤ m. Let s and t be nonnegative integers such that t ≤ s < q, and let v * q be the largest divisor of v q relatively prime to v 0 , . . . , v q−1 . Then (88) m/2 n/2 Lucas type congruences modulo p 2 and p 3 (p is a prime > 3) for Lucas u-nomial coefficients and Fibonomial coefficients are established in [76], [77] and [120]. Namely, in 1993 W.A. Kimball and W.A. Webb [76] (also see [120, p. 1029]) proved the following two results: Let p be an odd prime and let m and n be nonnegative integers. Suppose that τ is the period of the Fibonacci sequence (F n ) n≥0 modulo p, r is the rank of apparition of p (that is, r is the least index k for which p divides F k ), and t = τ /r is an integer. In [134] it is shown that t ∈ {1, 2, 4}. The number ε is defined as follows: ε = 1 if τ = r; ε = −1 if τ = 2r; and ε 2 ≡ −1(mod p 2 ) if τ = 4r; in this case p ≡ 1(mod 4). Then and In 1995 Kimball and Webb [77, Theorems 1 and 3] proved the following results: Let (u n ) n≥0 and (v n ) n≥0 be the sequences defined by (77) and (78), respectively, where A and B are nonzero integers such that gcd(A, B) = 1. Let p be an odd prime, let τ be the period of the sequence (u n ) n≥0 modulo p, and let r be the rank of apparition of p. Then for all nonnegative integers m and n such that n ≤ m there holds (91) mr and As a consequence of the congruence (91), it is proved in [77, Corollary 2] that Moreover, the congruence (92) Kimball and Webb [77,Theorem 5] also proved the following congruences for the Gaussian q-nomial coefficients: where p is a prime, q is any p-integral rational number such that q 2 − q is not divisible by p, and r is the rank of apparition of p. In 1998 B. Wilson [140] proved the following result: Let p be a prime such that p = 2, 5, and let r be the rank of apparition of p with respect to the Fibonacci sequence (F n ) n≥0 . Then for any nonnegative integers m, n, s and l such that 0 ≤ s, l < r and In 2007 L.-L. Shi [120] proved another congruence modulo p 2 (where p > 3 is a prime) for the Lucas u-nomial coefficients. Namely, in [120,Theorem 2] it is proved the following result: Let (u n ) n≥0 be the Lucas sequence defined by (77), where A and B are nonzero integers such that gcd(A, B) = 1, and A = ±1 or B = 1. Let p > 3 be a prime not dividing B. 
If r is the rank of apparition of p with respect to (u n ) n≥0 , then for any nonnegative integers m, n, s and t such that 0 ≤ s, l < r, we have (98) (98) can be replaced by (v r /2) (m−n)nr m n . In 1995 Kimball and Webb [78,Theorem] and in 2007 L.-L. Shi [120] considered the generalized Lucas u-nomial coefficients and the generalized Fibonomial coefficients defined as follows. If (u n ) n≥0 is the Lucas sequence defined by (77) such that A = ±1 or B = 1, and let (F n ) n≥0 be the Fibonacci sequence. For any positive integer j we set where (u ij /u j ) i≥0 is also a Lucas sequence. In 1995 Kimball and Webb [78,Theorem] extended the congruence (90) by showing that if the rank r of apparition of p is p + 1 or p − 1, then for any prime p > 3 and any m ≥ n ≥ 0, In 2007 Shi [120] proved the congruence modulo p 3 (where p > 3 is a prime) for the generalized Lucas u-nomial coefficients. Namely, in [120,Theorem 1] it is proved the following result: Let A and B be nonzero integers such that gcd(A, B) = 1, and A = ±1 or B = 1. Let p > 3 be a prime not dividing B. If the rank r of apparition of p is p + 1 or p − 1 (and hence p denotes the Legendre symobol, then for any nonnegative integers m and n we have Remark 31. In the case A = −B = 1 the congruence (100) yields the congruence (99) of Kimball and Webb [78]. In 1965 G. Olive [104] (also see [105, Lemma 2.1]) proved the following result: Suppose that d is a positive integer and a, b, h, l are integers such that 0 ≤ b, l ≤ d − 1. Then where Φ d (q) is the dth cyclotomic polynomial. Remark 32. As noticed in [119,Chapter 5,p. 506], the congruence (101) perhaps was known to Gauss and it is rediscovered in 1982 by J. Désarménien [32] and V. Strehl [128] whose proof uses combinatorial arguments. Remark 33. Another different q-analogue of the congruence (101) was established in 1967 by R.D. Fray [42]. [37] established the congruences of several combinatorial numbers, including Delannoy numbers and a class of Apéry-like numbers, the numbers of noncrossing connected graphs (Sloane's sequence A007297), the numbers of total edges of all noncrossing connected graphs on n vertices (Sloane's sequence A045741), etc. 6. SOME APPLICATIONS OF LUCAS' THEOREM Even today, Lucas' theorem is being studied widely, and has both extended and generalized, particularly in the area of divisibility of binomial coefficients. Numerous results on divisibility of binomial and multinomial coefficients by primes and prime powers and related historical notes are given in 1980 by D. Singmaster [122]. Furthermore, Lucas' theorem has numerous applications in Number Theory, Combinatorics, Cryptography and Probability. We also point out that this theorem has become ubiquitous in the Theory of cellular automata. 6.1. Lucas' theorem and the Pascal's triangle. Let a k (n) be the number of integers 0 ≤ m ≤ n such that n m ≡ 0(mod k), that is, a k (n) is the number of nonzero entries on row n of Pascal's triangle modulo k. Let |n| w be the number of occurrences of the word w in n s n s−1 · · · n 0 , where n = s i=0 n i k i is the base-k representation of n. In 1899 J.W.L. Glaisher [48, §14] initiated the study of counting entries on row n of Pascal's triangle modulo k by using Lucas' theorem to determine a 2 (n) = 2 |n| 1 . The proof is simple (cf. [114, p. 1]): In order that n m be odd, each term n i m i in the product must be 1, so if n i = 0 then m i = 0 and if n i = 1 then m i can be either 0 or 1. It was the first result on a thorny path of solution of this difficult problem. 
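Glaisher's count of the odd entries, a_2(n) = 2^{|n|_1}, is easy to verify by brute force for small n; the following illustrative R check (ours) confirms it for rows 0 through 40.

```r
# Illustrative check: the number of odd entries in row n of Pascal's triangle
# equals 2^(number of 1s in the binary expansion of n).
odd_entries <- function(n) sum(choose(n, 0:n) %% 2 == 1)
count_ones  <- function(n) sum(as.integer(intToBits(n)))

all(sapply(0:40, function(n) odd_entries(n) == 2^count_ones(n)))   # TRUE
```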
However, this topic was forgotten for almost a half-century. In 1947 N.J. Fine [39] generalized Glaisher's result to an arbitrary prime. Namely, the formula (102) immediately follows from the fact that by Lucas' theorem, the binomial coefficient n m with m = s i=0 m i p i is not divisible by a prime p if and only if 0 ≤ m i ≤ n i for all i = 0, 1, . . . , s. Remark. 35. If p = 2, then the formula (102) presents the number of odd entries on row n = s i=0 n i 2 i of Pascal's triangle. Notice that the parity of binomial coefficients has played an important role in a paper from 1984 of J.P. Jones and Y.V. Matijasevič [73] in connection with Hilbert's tenth problem, Gödel's undecidability proposition and computational complexity. They base their Lemma on the Lucas' theorem given by the congruence (1) with p = 2 (cf. [74, Lemmas 3.9 and 3.10]). As noticed in [114], one may generalize Glaisher's result in a different direction, namely to ask for the number a k,r of integers 0 ≤ m ≤ n such that n m ≡ r(mod k). In 2011 E. Rowland [114, Section 2, Theorem 1] generalized Fine's result to prime powers, obtaining a formula for the sum a p α (n) = p α −1,r r=1 (n). Notice that in 1978 E. Hexel and H. Sachs [58, §5] determined a formula for a p,r i (n) in terms of (p − 1)th roots of unity, where r is a primitive root modulo p. For some related results see also [5], [28], [44], [51] and [114,Theorem 2]). The previous considerations can be genearlized as follows. Let p be a prime. For nonnegative integers n and k consider the set where p k n j denotes that p k | n j and n j ≡ 0(mod p k+1 ). In particular, A (p) n,0 is a set of nonzero entries on row n of Pascal's triangle modulo k. Therefore, under the previous notation, for a prime p we have a p (n) = |A (p) n,0 | (|S| denotes the cardinality of a set S), Notice that |A (p) n,0 | can be evaluated by Fine's formula (102). In 1967 L. Carlitz [23] solved a difficult problem for evaluation of |A (p) n,1 |. In 1971 F.T. Howard [64], discovered the formula for |A (2) n,k | for arbitrary k. In 1973 F.T. Howard [65] found a solution for |A (p) n,2 |. Further related results are given in [52], and in 1997 by J.G. Huard, B.K. Spearman and K.S. Williams [70]. Let n be a nonnegative integer. The nth row of Pascal's triangle consists of the following n+1 binomial coefficients: n 0 , n 1 , n 2 , . . . , n n . We denote by N n (t, m) the number of those binomial coefficients which are congruent to t modulo m, where t and m ≥ 1 are integers such that 0 ≤ t ≤ m − 1. Let p be a prime, and let n be a positive integer with the p-adic expansion n = k i=0 n i p i . We denote the number of r's occuring among n 0 , n 1 , . . . , n k by l r (r = 0, 1, . . . , p − 1). Set ω = e 2πi/(p−1) and let g denote a primitive root modulo p. Denote by ind g t the index of the integer t ≡ 0(mod p) with respect to g; that is, ind g t is the unique integer j such that t ≡ g j (mod p). In 1978 E. Hexel and H. Sachs [58,Theorem 3] have shown that for t = 1, 2, . . . , p − 1, Let p be a prime, and let k be a positive integer. Let A(k, p) be the matrix with entries i j p is the remainder of the division of i j by p). By using the Lucas property of the matrix A(k, p) given by (54), in 1994 M. Razpet [111, p. 378] proved that the number of all zero entries of the matrix A(k, p) is equal to p 2n − p+1 2 k , and hence, the number of all nonzero entries of the matrix A(k, p) is equal to p+1 2 k . Let p be a prime, and let n be a positive integer. 
For an integer r such that 0 ≤ r ≤ p − 1, let b r (n) be the number of binomial coefficients i j with 0 ≤ j ≤ i < n such that i j ≡ r(mod p). In 1957 J.B. Roberts [113] established systems of simultaneous linear difference equations with constant coefficients whose solutions would yield the quantities b r (n) explicitly. Namely, if 0 ≤ c ≤ p − 1, 1 ≤ t ≤ p k , k > 0, and ifq is the reciprocal of q ∈ {1, 3, . . . , p − 1} modulo p (i.e., qq ≡ 1( mod p)), then by [113,Theorem 1], Furthermore, if b(n) = p−1 r=1 b r (n) and n = k i=0 n i p i with 0 ≤ n i ≤ p−1 for all i = 0, 1, . . . , k, then by [113,Corollary 4], (106) b(n) = 1 2 k i=0 n i ((n i + 1) · · · (n k + 1)) 1 2 p(p + 1) i . By using Lucas' theorem, in 1992 R. Garfield and H.S. Wilf [44, Theorem] proved the following result: Let p be a prime, let a be a primitive root modulo p, and let n be a nonnegative integer with the p-adic expansion n = s i=0 n i p i . Denote by l j (n) the number of j's occuring among n 0 , n 1 , . . . , n s (j = 0, 1, . . . , p − 1). Further, for each i ∈ {0, 1, . . . , p − 2} let r i (n) be the number of integers k with 0 ≤ k ≤ n, for which n k ≡ a i (mod p), and let R n (X) = p−2 i=0 r i (n)X i be their generating function. Then In 1990 R. Bollinger and C. Burchard [17] considered the extended pascal's triangles which arise, by analogy with the ordinary Pascal's triangle as the (left-justified) arrays of the coefficients in the expansion (1 + x + x 2 + · · · + x k−1 ) n . That is, the array T k has in row n, column m, the number C k (n, m) defined for k, n, m ≥ 0 by the expansion (1 + x + x 2 + · · · + x k−1 ) n = and hence, C 2 (n, m) = n m . Accordingly, T 2 is the Pascal's triangle. R. Bollinger and C. Burchard [17, Theorem 1] applied Lucas' theorem to the Pascal's triangle, proving that if p is a prime, and if n = n 0 + n 1 p + · · · + n s p s and m = m 0 + m 1 p + · · · + m s p l are the p-adic expansions of n and m, then (108) C k (n, m) ≡ r 0 ,...,rs where the sum is taken over all (s + 1)-tuples (r 0 , r 1 , . . . , r s ) such that i) m = r 0 + r 1 p + · · · + r s p s and ii) 0 ≤ r i ≤ (k − 1)n i for each i = 0, 1, . . . , s; if m is not representable in this form, then certainly C k (n, m) ≡ 0 (mod p). By using Lucas' theorem, in 2009 the author of this article proved the following result [92,Theorem]. If d, q > 1 are integers such that (110) nd md ≡ n m (mod q) for every pair of integers n ≥ m ≥ 0, then d and q are powers of the same prime p. Remark 36. Observe that the above result may be considered as a partial converse theorem of the congruence (5) [9] (also see [52,Section 4]). Lucas' theorem is also applied in a recent author's note [101,Theorem 1] in order to prove the following result: If n > 1 and q > 1 are integers such that for every integer k ∈ {0, 1, . . . , n − 1}, then q is a prime and n is a power of q. Definition (see, e.g., [2]). Let p be a prime. We say that the sequence of rational numbers (a n ) n≥0 (a n ) n≥0 has the p-Lucas property (or that the sequence (a n ) n≥0 is p-Lucas) if the denominators of all the a n 's are not divisible by p, and if for all n ≥ 0 and for all j ∈ {0, 1, . . . , p − 1} it holds (112) a pn+j ≡ a n a j (mod p). Clearly, the sequence of rational numbers (a n ) n≥0 has the p-Lucas property if and only if (113) a n ≡ s i=0 a n i (mod p), for every positive integer n with the p-adic expansion n = n 0 + n 1 p + · · · + n s p s such that 0 ≤ n i ≤ p − 1 for all i = 0, 1, . . . , s. 
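The p-Lucas property just defined can be tested by brute force on classical examples. The following illustrative Python sketch (helper names are my own) checks congruence (112) for the central binomial coefficients C(2n, n), the sequence that reappears below as the standard example of a p-Lucas sequence.

```python
from math import comb

def has_p_lucas_property(a, p: int, n_max: int) -> bool:
    """Brute-force check of a(p*n + j) == a(n) * a(j) (mod p) for n <= n_max, 0 <= j < p."""
    return all(
        a(p * n + j) % p == a(n) * a(j) % p
        for n in range(n_max + 1)
        for j in range(p)
    )

central_binomial = lambda n: comb(2 * n, n)

for p in (3, 5, 7, 11, 13):
    assert has_p_lucas_property(central_binomial, p, n_max=50)
```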
Furthermore, the integer sequence (a n ) n≥0 has the Lucas property if and only if (a n ) n≥0 has the p-Lucas property for every prime p. In what follows, we will consider sequences (a n ) n≥0 having the p-Lucas property for infinitely many primes p. As noticed in [2, Remarks 1], such a sequence is either 0 or it satisfies a 0 = 1. For a positive integer t consider the formal power series ∞ n=0 2n n t X n . It is known that the above formal power series is transendental over Q(X) when t ≥ 2. This is due in 1980 to Stanley [125], and independently in 1987 to Flajolet [40] and in 1989 to C.F. Woodcock and H. Sharif [143]. While Stanley and Flajolet used analytic methods and studied the asymptotics of the coefficients of this series, Woodcock and Sharif gave a purely algebraic proof. Their basic idea is to reduce this series modulo a prime p, and to use the p-Lucas property for central binomial coefficients: if n = s i=0 n i is the base p expansion of a positive integer n, then ( [89]; cf. (58) of Subsection 4.1) (114) 2n n ≡ s i=0 2n i n i (mod p). Namely, a proof of Woodcock and Sharif [143] is based on the following congruence which follows from Lucas' theorem: In 1998 J.-P. Allouche, D. Gouyou-Beauchamps and G. Skordev [2] generalized the method of Woodcock and Sharif to characterize all formal power series that have the p-Lucas property for "many" primes p, and that are furthermore algebraic over Q(X). Namely, they proved the following result [2, Theorem 1]: Let s be an integer ≥ 2. Define s ′ = s if s is even, and s ′ = 2s if s is odd. Let F (X) = ∞ n=0 a n X n be a nonzero formal power series with coefficients in Q. Then the following conditions are equivalent: (i) The sequence (a n ) n≥0 has the p-Lucas property for all large primes p such that p ≡ 1(mod s), and the formal power series F (X) is algebraic over Q(X). (ii) There exists a polynomial P (X) in Q[X] of degree at most s ′ , with P (0) = 1, such that F (X) = (P (X)) −1/s ′ . If s is odd, and if the number s ′ is replaced by s in the statement (ii), we still have (ii) implies (i), but the converse is not necessarily true. Furthermore, when the number s is equal to 2, in 1999 Allouche [1, Theorem 6.4] proved the following result (cf. [2, Theorem 2]): Let (a n ) n≥0 be a nonzero sequence of rational numbers. Then the following assertions are equivalent. (i) The sequence (a n ) n≥0 has the p-Lucas property for all large primes p, and the series F (X) = ∞ n=0 a n X n is algebraic over Q(X). (ii) For all large primes p the sequence (a n ) n≥0 has the p-Lucas property, and the degree d p of the series ∞ n=0 (a n (mod p))X n (that is necessarily algebraic over F p (X) from the p-Lucas property) is bounded independently of p. (iii) There exists a polynomial P (X) in Q[X] of degree at most 2, with P (0) = 1, such that F (X) = ∞ n=0 a n X n = (P (X)) −1/2 . Remark 37. In 2013É. Delaygue [31,Subsection 1.2] considered the notion of p-Lucas property to a Z p -valued family A = (A(n)) n∈N d , where p is a prime, Z p is the ring of p-adic integers and d is a positive integer. We say that A satisfies the p-Lucas property if and only if, for all v ∈ {0, 1, . . . , p − 1} d and all n ∈ N d , we have Delaygue [31,Theorem 3] established an effective criterion for a sequence of factorial ratios to satisfy the p-Lucas property for almost all primes p.
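As a small numerical companion to the families covered by the results above, the next sketch checks the same Lucas-type congruence for the Apéry numbers A(n) = sum_k C(n, k)^2 C(n+k, k)^2, a standard Apéry-like sequence known to satisfy A(pn + j) ≡ A(n)A(j) (mod p) by a result of Gessel; the code is illustrative Python and is not taken from the cited works.

```python
from math import comb
from functools import lru_cache

@lru_cache(maxsize=None)
def apery(n: int) -> int:
    """Apery numbers A(n) = sum_k C(n, k)**2 * C(n + k, k)**2."""
    return sum(comb(n, k) ** 2 * comb(n + k, k) ** 2 for k in range(n + 1))

def satisfies_lucas_congruence(a, p: int, n_max: int) -> bool:
    """Check a(p*n + j) == a(n) * a(j) (mod p) for all n <= n_max and 0 <= j < p."""
    return all(
        a(p * n + j) % p == a(n) * a(j) % p
        for n in range(n_max + 1)
        for j in range(p)
    )

print([apery(n) for n in range(5)])      # [1, 5, 73, 1445, 33001]
for p in (2, 3, 5, 7):
    assert satisfies_lucas_congruence(apery, p, n_max=20)
```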
The Exponential Attractors for the g-Navier-Stokes Equations We consider the exponential attractors for the two-dimensional g-Navier-Stokes equations in bounded domain Ω. We establish the existence of the exponential attractor in L2(Ω). Introduction In this paper, we study the behavior of solutions of the g-Navier-Stokes equations in spatial dimension 2. These equations are a variation of the standard Navier-Stokes equations, and they assume the form, where g g x 1 , x 2 is a suitable smooth real-valued function defined on Ω and Ω is a suitable bounded domain in R 2 .Notice that if g x 1 , x 2 1, then 1.1 reduce to the standard Navier-Stokes equations. In Roh 1 the author established the global regularity of solutions of the g-Navier-Stokes equations.One can refer to 2 for details.For the boundary conditions, we will consider the periodic boundary conditions, while same results can be got for the Dirichlet boundary conditions on the smooth bounded domain.Before we present the derivation of the g-Navier-Stokes equations, it is convenient to recall some relevant aspects of the classical theory of the Navier-Stokes equations.For many years, the Navier-Stokes equations were investigated by many authors and the existence of the attractors for 2D Navier-Stokes equations was first proved by Ladyzhenskaya 3 and independently by Foias ¸and Temam 4 .The finite-dimensional property of the global attractor for general dissipative equations was first proved by Mallet-Paret 5 .For the analysis on the Navier-Stokes equations, one can refer to 6 . In the past decades, many papers in the literature show that the long-time behavior of dissipative systems can be understood through the concept of attractors, see 7-14 .In addition, in 15 the authors introduced the so-called exponential attractors, which is an interesting intermediate object between the usual global attractors and an inertial manifold and satisfies some nice properties like those of inertial manifolds e.g., finite fractal dimension, exponential attracting, stable with respect to some perturbations .Indeed it now seems clear that the interesting object to investigate is the exponential attractor, rather than the usual global attractor which is recovered as a byproduct .See 16, 17 , and so forth.The exponential attractor is a compact and positively invariant set having finite fractal dimension which contains the global attractor and attracts every trajectory at an exponential rate.It is also known that the exponential attractor enjoys stronger robustness than the global attractor.When the semigroup of a dynamical system depends continuously on a parameter, the global attractor is in general only upper-semicontinuous.In turn, under some reasonable assumptions, if an exponential attractor exists, it can depend continuously on the parameter.Such a continuous dependence was recently studied by Efendiev and Yagi 18 .When the underlying space is a Hilbert space, it is known by the same reference 15 quoted above that the squeezing property of semigroup implies existence of exponential attractors and provides a sharp estimate of attractor dimensions.When the underlying space is a Banach space, it is known by Efendiev et al. 19 that the compact smoothing property of semigroup implies existence of exponential attractors Theorem 2.3 .Another construction of exponential attractors in Banach spaces was proposed by Dung and Nicolaenko in 20 .We also refer to 17, 21-25 for more details. 
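The exponential attraction required in item 3, which in its standard form reads dist(S(t)B, A) <= Q(||B||_E) exp(-alpha*t) for every bounded B and all t >= 0, can be visualized on a toy finite-dimensional system. The sketch below is plain Python and is not the g-Navier-Stokes system: it uses the scalar ODE x' = x - x^3, whose global attractor is the interval [-1, 1], purely as an illustration of a bounded set of initial data being attracted at an exponential rate.

```python
import numpy as np

def flow(x0, t, dt=1e-3):
    """Evolve x' = x - x**3 from initial data x0 up to time t (explicit Euler)."""
    x = np.array(x0, dtype=float)
    for _ in range(int(round(t / dt))):
        x = x + dt * (x - x ** 3)
    return x

def dist_to_attractor(x):
    """Pointwise distance to the global attractor A = [-1, 1] of x' = x - x**3."""
    return np.maximum(np.abs(x) - 1.0, 0.0)

B = np.linspace(-3.0, 3.0, 121)                      # a bounded set of initial data
for t in range(1, 7):
    d = dist_to_attractor(flow(B, t)).max()          # Hausdorff distance dist(S(t)B, A)
    print(f"t = {t}:  dist(S(t)B, A) = {d:.3e}")
# Successive distances shrink roughly by the factor exp(-2), the linearized decay
# rate at the boundary points x = 1 and x = -1 of the attractor.
```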
In the paper, compared with the result obtained in 26 , taking advantage of a recent result due to Efendiev et al. 19 Theorem 2.3 , we construct the exponential attractor.This paper is organized as follows.In Section 2, we first recall some basic results, and then, give an important technique tool 19 , that is, Theorem 2.3.In Section 3, we study the existence of compact exponential attractor for the two-dimensional g-Navier-Stokes equations in the periodic boundary conditions Ω. Preliminary Results Let Ω 0, 1 × 0, 1 and we assume that the function g x g x 1 , x 2 satisfies the following properties: per Ω 2 there exist constants m 0 m 0 g and M 0 M 0 g such that, for all x ∈ Ω, 0 < m 0 ≤ g x ≤ M 0 .Note that the constant function g ≡ 1 satisfies these conditions. We denote by L 2 Ω, g the space with the scalar product and the norm given by as well as H 1 Ω, g with the norm where ∂u/∂x i D i u. Then for the functional setting of the problems 1.1 , we use the following functional spaces: where H g is endowed with the scalar product and the norm in L 2 Ω, g , and V g is the spaces with the scalar product and the norm given by Also, we define the orthogonal projection P g as and we have that Q ⊆ H ⊥ g , where Then, we define the g-Laplacian operator to have the linear operator For the linear operator A g , the following hold see Roh 1 : 1 A g is a positive, self-adjoint operator with compact inverse, where the domain of 2 There exist countable eigenvalues of A g satisfying where λ g 4π 2 m/M and λ 1 is the smallest eigenvalue of A g .In addition, there exists the corresponding collection of eigenfunctions {e 1 , e 2 , e 3 , . ..} which forms an orthonormal basis for H g . Next, we denote the bilinear operator B g u, v P g u • ∇ v and the trilinear form where u, v, w lie in appropriate subspaces of L 2 Ω, g .Then, the form b g satisfies We denote a linear operator R on V g by 12 and have R as a continuous linear operator from V g into H g such that 2.13 We now rewrite 1.1 as abstract evolution equations, du dt νA g u B g u νRu P g f, u 0 u 0 . 2.14 Let us first recall some basic matters on the dynamical system.Let E be a Banach space and let K be a subset of E, K being a metric space equipped with the distance induced from the norm of E. Let S t , 0 ≤ t < ∞ be a family of mappings from K into itself having the following properties: i S 0 I the identity mapping ; ii S t S s S t s , 0 ≤ t, s < ∞ the semigroup property ; iii the mapping G : 0, ∞ × K → K, t, u 0 → S t u 0 , is continuous.Such a family is called a continuous nonlinear semigroup acting on K.The image of S • u 0 drawn in K is called the trajectory starting from K. The whole of such trajectories is the dynamical system S t , K, E , where K and E are called the phase-space and the universal space, respectively. A subset A of the phase-space K is the global attractor of S t , K, E if the following conditions are satisfied: i A is a compact subset of E; ii A is an invariant set, that is, S t A A for every 0 < t < ∞; iii A attracts every bounded subset of K, namely, for any bounded subset B ⊂ K, it holds that lim t → ∞ dist S t B, A 0, where dist A, B sup x∈A inf y∈B x − y E denotes the Hausdorff pseudodistance between two sets A and B. We recall the definition of an exponential attractor see, e.g., 15, 17 . 2 it is positively invariant, S t A ⊆ A, for all t ≥ 0, 3 it attracts exponentially the bounded subsets of E in the following sense: where the positive constant α, the monotonic function Q are independent of B. 
Remark 2.2.We note that the existence of an exponential attractor A for the semigroup S t automatically implies the existence of the global attractor A and the embedding A ⊂ A. We note however that, in contrast to the global attractor, an exponential attractor is not uniquely defined. To construct an exponential attractor, we make use of the following result due to Efendiev et al. 19 . Theorem 2.3.Let X, Y be two Banach spaces such that Y is compactly embedded in X.Let Z be a bounded closed subset of X. Assume that a semigroup S t t>0 on X satisfies the following conditions: there exists a time t * > 0, constants L 1 , L 2 > 0, and exponents γ 1 , γ 2 > 0 such that S t * maps Z into itself and 2.16 hold for any u 0 , v 0 ∈ Z and s, t ∈ 0, t * .Then the dynamical system S t t>0 , Z admits an exponential attractor. Hereafter c will denote a generic scale invariant positive constant, which is independent of the physical parameters in the equation and may be different from line to line and even in the same line. Exponential Attractor of g-Navier-Stokes Equations This section deals with the existence of the exponential attractor for the two-dimensional g-Navier-Stokes equations with periodic boundary condition. In Roh 1 , the authors have shown that the semigroup S t : H g → H g t ≥ 0 associated with the systems 2.14 possesses a global attractor in H g and V g .The main objective of this section is to prove that the system 2.14 has exponential attractors in H g . To this end, we first state some of the following results of existence and uniqueness of solutions of 2.14 .Theorem 3.1.Let f ∈ V g be given.Then for every u 0 ∈ H g there exists a unique solution u u t on 0, ∞ of 2.14 .Moreover, one has Proof.The Proof of Theorem 3.1 is similar to Roh 1 and Kwaket al. 26 and Temam 12 . In a similar manner as in 13, 14 , we can establish the following a priori estimate for for 2.14 .Lemma 3.2.Let B be a bounded subset of H g .The semigroup {S t } : H g V g → H g V g associated with 2.14 possesses absorbing sets which absorb all bounded sets of H g .Moreover B 0 and B 1 absorb all bounded sets of H g and V g in the norms of H g and V g , respectively. Let S t t≥0 be the semigroup associated with 2.14 .Since Ω is bounded, V g is compactly embedded in H g .Then we consider H g , V g as X, Y in Theorem 2.3, respectively.The crucial point is the choice of the bounded subset Z.Let where B denote the closure of B in H g and τ is the time when B 1 absorbs itself.We claim that A has all properties required for Z.In fact, it is easy to see that A is positively invariant under the semiflow S t .In order to see that A has the other required properties, we begin with constructing uniform a priori estimates in time t for the solution u to 2.14 .Now we consider difference of two solutions of 2.14 starting from B 0 . Proposition 3.3.Let the assumptions of Theorem 2.3 hold.Then, there exists a time t * > 0, constants L 1 > 0, and exponents γ 1 , γ 2 > 0 such that S t * maps A into itself and holds for any u 0 , v 0 ∈ A and s, t ∈ 0, t * . Let u u 1 − u 2 which satisfies Multiplying 3.6 by u, we have Since b g satisfies the following inequality see Temam 12 : 3.15 Next we multiply 2.14 by u, and we have du dt , u g νA g u, u g B u, u , u g f, u g − Ru, u g . 3.18 Multiplying 2.14 by A g u, we have 1 2 ≤ B g u, u , A g u g f, A g u g Ru, A g u g . 
3.19 Expanding and using Young's inequality, together with b g satisfying inequalities 12 , there exists a constant 3.24 To estimate b g , we recall some inequalities 12 : for every u, v ∈ D A g , 3.25 Expanding and using Young's inequality, together with 3.25 , we have b g u, u 2 , A g u g ≤ | u| Now, we give our main theorem which relies on the Propositions 3.3 and 3.4 to construct an exponential attractor.Theorem 3.5.There exists a subset A of H g such that S t maps A into itself and the dynamical system S t t>0 , A admits an exponential attractor. Based on the abve results Propositions 3.3 and 3.4 and applying Theorem 2.3, we can deduce Theorem 3.5. the solution u with u 0 u 0 satisfies du/dt ∈ L 2 0, T; H g , it holds that |u s − u t | g ≤ From 3.18 , 3.20 and Lemma 3.2 we can show that there exists a constant M > 0 which satisfies du/dt L 2 0,T ;H g ≤ M and depends on T but not on u 0 .Putting 3.15 and 3.21 together, therefore 3.5 turns out to be valid with exponents γ 1 1/2 and γ 2 1.Let the assumptions of Theorem 2.3 hold.Then, there exists a time t * > 0 and constants L 2 > 0 such that S t * maps A into itself andS t * u 0 − S t * v 0 g ≤ L 2 |u 0 − v 0 | g 3.22hold for any u 0 , v 0 ∈ A and s, t ∈ 0, t * .
Network Traffic Shaping for Enhancing Privacy in IoT Systems Motivated by privacy issues caused by inference attacks on user activities in the packet sizes and timing information of Internet of Things (IoT) network traffic, we establish a rigorous event-level differential privacy (DP) model on infinite packet streams. We propose a memoryless traffic shaping mechanism satisfying a first-come-first-served queuing discipline that outputs traffic dependent on the input using a DP mechanism. We show that in special cases the proposed mechanism recovers existing shapers which standardize the output independently from the input. To find the optimal shapers for given levels of privacy and transmission efficiency, we formulate the constrained problem of minimizing the expected delay per packet and propose using the expected queue size across time as a proxy. We further show that the constrained minimization is a convex program. We demonstrate the effect of shapers on both synthetic data and packet traces from actual IoT devices. The experimental results reveal inherent privacy-overhead tradeoffs: more shaping overhead provides better privacy protection. Under the same privacy level, there naturally exists a tradeoff between dummy traffic and delay. When dealing with heavier or less bursty input traffic, all shapers become more overhead-efficient. We also show that increased traffic from a larger number of IoT devices makes guaranteeing event-level privacy easier. The DP shaper offers tunable privacy that is invariant with the change in the input traffic distribution and has an advantage in handling burstiness over traffic-independent shapers. This approach well accommodates heterogeneous network conditions and enables users to adapt to their privacy/overhead demands. I. INTRODUCTION P RIVACY is a crucial factor inhibiting the proliferation of IoT devices and systems.Privacy concerns are aggravated in applications such as smart home and smart healthcare where sensing data containing personal information is continuously generated and often transmitted wirelessly onto the cloud.The sheer volume of this data poses huge challenges for privacy protection and (network) resource management [1]. Privacy attacks for user information can happen to many forms of IoT data and motivate different kinds of counter-Manuscript received March xx, 2020; revised xxxx xx, 2021.This work was supported by the United States National Science Foundation under Grant numbers SaTC-1617849, CCF-1453432 and CRISP-1541069, and by DARPA and US Navy under contract N66001-15-C-4070.Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation, DARPA, or the US Navy.S. Xiong, A. D. Sarwate and N. B. Mandayam are with the Department of Electrical and Computer Engineering, Rutgers, The State University of New Jersey, Piscataway, NJ 08854 USA e-mail: sijie.xiong@rutgers.edu,anand.sarwate@rutgers.edu,narayan@winlab.rutgers.edu.measures [2].For example, raw sensor data contains unique patterns which can be extracted by untrusted application servers for user activity inference.To counter inference attacks on raw sensor data, existing methods include but are not limited to obfuscation (e.g., Raval et al. [3] and Malekzadeh et al. [4]) and quantization (e.g., Xiong et al. 
[5]).They aim to guarantee rigorous privacy and minimize data utility loss.In addition, encryption techniques protect the raw sensor data from network observers.However, even with encrypted data, the communication system itself may leak privacy.Network traffic of IoT devices contains identifiable information like packet headers, sizes and timing that are highly correlated with the underlying user activities.By means of eavesdropping on encrypted IoT network traffic to obtain such information, many recent works demonstrate successful traffic analysis attacks [6] to recover private user information. This paper addresses traffic analysis attacks on packet sizes and timing information of encrypted IoT network traffic.Examples include Apthorpe et al. [7] who use a clustering method on encrypted packet traces of smart home IoT devices to easily identify their operating states, with each state triggered by a specific user activity; Das et al. [8] show that encrypted traffic from wearable (e.g., a fitness tracker) Bluetooth Low Energy (BLE) signal allows a BLE sniffer to identify user activities (e.g., whether the person is at rest/working/walking/running) from packet-size and inter-arrival time distributions; Buttyan and Holczer [9] apply the Discrete Fourier Transform on traffic rates from a wireless body area sensor network to reveal the types of medical sensors mounted on the patient, hence the patient's health conditions. To mask private user information encoded in packet sizes and timing, network traffic shaping is a standard technique to change the original packet sizes and timing by means of delaying, padding, fragmentation and inserting dummy packets.There have been extensive studies and designs of network traffic shaping in many contexts such as anonymous networks, traditional Internet, as well as IoT systems.However, designing a privacy-preserving traffic shaping mechanism M for IoT systems requires special tailoring.Here, we highlight the following unique characteristics of IoT networks [10] and their implications on privacy and resource requirements for M, • Continuous sensing.IoT devices continuously transmit sensing data in streams of packets, M should provide the same privacy guarantee at any time during transmission.• Changing environment.The network of IoT devices and their traffic features can change rapidly, M should guarantee privacy that is invariant with the changes. • Limited resources.Many IoT networks are lowbandwidth and certain applications are delay-sensitive.With resource constraints, M must be efficient and incur minimal overhead in terms of delay and dummy traffic (padding and dummy packets) for privacy protection.• Heterogeneity.Due to heterogeneous network conditions and user demands for privacy, M should offer tunable privacy and optimize for the privacy-overhead tradeoff. A. Research Questions In the face of traffic analysis attacks in existing IoT devices and networks that utilize packet sizes and timing information, what kind of privacy can we hope to achieve?How should we design new protocols, standards and M that not only enable privacy protection against such attacks, but also fulfill the preceding desiderata for IoT networks?How do different types of IoT network traffic impact the privacy-overhead tradeoff of M? 
In this paper, we address these questions by setting up a packet stream model for IoT network traffic, developing a formal and tunable privacy model for protecting packet sizes and timing information.We then design a novel shaping mechanism under the established traffic and privacy models and formulate a problem to optimize its shaping overhead.In the end, we will demonstrate the performance of the shaper on different types of IoT traffic by comprehensive experiments. B. Related Work Existing network traffic shaping mechanisms proposed in the contexts of anonymous networks, traditional Internet and IoT systems mainly differ in: i) privacy measure, ii) privacy guarantee (whether it is invariant with traffic changes, valid and tunable at all times during communication and whether the mechanism protects both packet sizes and timing information), and iii) overhead (what resources the mechanism utilizes for shaping and whether it's optimized for overhead efficiency).We center our discussion about related work in these aspects. 1) Anonymous Networks: Traffic shaping in anonymous networks [20] focus on hiding who is communicating with whom.An adversary can observe the correspondences between the size and timing of messages (packets) going into and out of an intermediate network node (e.g., a router), and make inferences about source-destination pairs in the network.Anonymity is measured by entropy (i.e., the uncertainty of the observing adversary about the sender/recipient of the messages) and achieved by the design of mixes. Anonymity mixes include pool mixes (PM) such as the original Chaum [11] mix which collects a fixed number of messages, pads them to a uniform length and outputs them in a batch.It hides both the sizes and timing of arrivals hence the correspondences between input and output messages.However, excessive padding and batching renders pool mixes inefficient in terms of byte and delay overhead. To reduce latency, continuous mixes (CM) are proposed to delay individual messages by random amounts of time.For example, in the Stop-and-Go mix (SG-mix) [12], random delays added to the messages follow an exponential distribution.SGmix was later shown to provide maximum sender anonymity measured by entropy among continuous mixes [21], assuming Poisson arrivals.On top of delaying individual messages, link padding (LP) techniques such as independent [13] and dependent [14] link padding can further reduce latency and enhance anonymity by allowing insertion of dummy packets to the output traffic to match predefined transmission schedules.However, continuous mixes and link padding techniques are designed for timing-based traffic analysis attacks.By themselves, they protect only packet timing information and assume that packets are already padded to the same size. In conclusion, mixes and link padding shape the input traffic to match either fixed or predefined transmission schedules.By changing the schedules, they can offer tunable anonymity measured by entropy.Nonetheless, the act of padding all packets to the same size to avoid compromising anonymity makes these mechanisms inefficient in terms of byte overhead. 
2) Traditional Internet: Traffic analysis attacks in traditional Internet, such as webpage identification in HTTPS connection [22], spoken language detection [23] and conversation transcript reconstruction [24] in encrypted VoIP calls, focus more on inferring private information within a single flow.This differs from identifying source-destination pairs in anonymous networks, and the subject of traffic privacy in this context often adopts different privacy measures. Wright et al. [15] and Iacovazzi and Baiocchi [16] use accuracy of flow classification as their privacy measure.The former proposes a countermeasure that hides only packet size information by shaping the source packet-size distribution to look like a target distribution, while minimizing byte overhead.The latter extends the same idea but their design considers additional masking for packet timing information.They also propose a partial masking algorithm where the tradeoff between privacy guarantee and masking cost is controlled by masking how much fraction of the entire traffic flow. Mutual information (MI) between the original and shaped traffic is often used as a privacy measure for shaping mechanisms as well.For example, an ON-OFF shaping policy is developed by Feghhi et al. [25] to guarantee perfect privacy (0 MI) for packet timing information.Mathur and Trappe [26] study the fundamental tradeoff in shaping mechanisms between delay, padding and level of unlinkability (measured by MI).They show that combining delay and padding can offer much higher privacy protection than either approach alone. 3) IoT Networks: Like traditional Internet, inference attacks on encrypted IoT traffic aim at recovering private information (e.g., user activities and health conditions, IoT device types and operating states, etc.) within a single packet stream.However, the aforementioned characteristics of IoT networks inspire the re-design of efficient traffic shaping mechanisms and the quest for more suitable privacy measures. Apthorpe et al. [17] propose a stochastic traffic padding (STP) scheme to hide time periods of user activities in a smart home setting with reasonable padding overhead.They measure privacy by the accuracy of identifying user activity periods, and trade off padding overhead with privacy by controlling the percentage of padding periods.Alshehri et al. [18] consider IoT device identification attacks based on typical packet-size sequences and propose padding packets with bytes of uniform random noise to guarantee approximate (, )-differential privacy (DP) [27].Our earlier work [19] proposes a packet padding obfuscation mechanism satisfying pure -DP [28] and optimizes its padding overhead.The mechanism prevents a last-mile eavesdropper from inferring about IoT device types and operating states based on packetsize distributions of encrypted network traffic.The novelty of this paper in the design of network traffic shaping scheme compared to the previous ones is twofold: i) it is the first shaper with pure -DP guarantees using packet fragmentation and queueing operations besides padding, dummy packets insertion and delaying and ii) it optimizes for overhead efficiency while providing tunable worst-case privacy guarantees for packet sizes as well as timing information which extends our previous work [19]. C. 
Advantages of Differential Privacy Differential privacy [29] has emerged over the last decade as a compelling framework for measuring the worst-case privacy risk in various applications.We believe that designing traffic shaping mechanisms with DP guarantee satisfies the various privacy requirements desired by IoT networks. Concretely, the framework of DP under continuous observation [30] allows us to develop a new privacy model on traffic shaping mechanisms that ensure the same worst-case privacy level at any time during transmission.On the contrary, the information-theoretic (entropy and MI) and accuracy-based privacy measures are inherently average measures at the packet stream level, and lack information about worst-case privacy loss [31] at every time instance. Information-theoretic and accuracy-based privacy measures also rely heavily on the prior distribution of the original traffic.Under these, shaping mechanisms designed and optimized for a particular source flow (e.g., SG-mix for Poisson arrivals) may perform poorly in terms of privacy guarantees [32] on unstable and unpredictable real traffic.The privacy guarantees may also fall apart when attackers are equipped with other side information to update their prior beliefs.Conversely, DP does not depend on the prior distribution and has been shown to be resistant to arbitrary side information [33].A shaping mechanism, if designed with DP, will guarantee the same level of privacy that is invariant with the changes in the original traffic or the attacker's side information. Lastly, DP offers directly tunable privacy parameters that can accommodate the heterogeneity in network conditions and user demands for privacy.Many of the other shaping mechanisms, however, require the choice of target traffic distributions to indirectly control their privacy guarantees. D. Challenges The listed advantages motivate us to design shaping mechanisms for encrypted IoT network traffic under the framework of DP.Table I summarizes the differences between our proposed method and existing traffic shaping mechanisms.In order to counter traffic analysis attacks on packet sizes as well as timing information and to design overhead-efficient DP shapers, we face the following challenges, • Formal privacy model on encrypted IoT traffic.We need to set up an appropriate IoT network traffic model, and more importantly develop a formal privacy model on shaping mechanisms with DP guarantees for protecting both packet sizes and timing information.• Easy-to-find shaper that is privacy-preserving and overhead-efficient.We need to design a shaping mechanism that not only satisfies the formal privacy model but also consumes minimum overhead due to limited resources in IoT networks.Moreover, we want to find such privacy-preserving and overhead-efficient shaper easily. E. 
Contributions Our work makes the following contributions to the subject of traffic privacy and countermeasure design in IoT networks, • We set up an abstract discrete IoT packet stream model inspired by a smart home setting in Section II.However, the methodology developed here after is generally applicable to other IoT applications.• We develop an event-level DP model on infinite packet streams in Section III.The model defines tunable privacy guarantees for both packet sizes and timing information.• In Section IV-A, we design a shaping mechanism under the event-level DP model.The mechanism satisfies a firstcome-first-served (FCFS) queueing discipline.To reduce the shaping overhead needed for privacy protection, we allow the use of delay, fragmentation and dummy traffic to shape the outgoing packet stream leaving a local area IoT network (e.g., a smart home).In special cases, the mechanism recovers previous methods in the literature.• We formulate the problem of finding event-level DP and overhead-efficient shaper in Section V-A as a constrained optimization: we minimize the expected queue size across time imposed by the shaper (as a proxy for the expected delay per packet) under given privacy and transmission efficiency levels.We further show that the optimal shaper can be easily found by convex programming.• In Section VII, we conduct comprehensive evaluations on the privacy-overhead tradeoffs of different shapers on both synthetic data and packet traces from actual IoT devices.For IoT packet traces, we further show how shapers optimized with the assumption of independent and identically distributed (i.i.d.) network traffic perform differently on actual (probably bursty) traffic.We believe that this work establishes a novel framework for building resource-efficient network traffic shaping systems with strong, formal and tunable privacy guarantee.The privacy-overhead tradeoffs carry meaningful indications for the design of future privacy-aware IoT systems. II. SYSTEM MODEL A. Overview A prototypical realization of the IoT system [34] consists of a gateway system that manages networked IoT devices.For example, Fig. 1 shows a typical realization of the smart home IoT network.It consists of a WiFi access point (also acting as a packet switch/home gateway) that allows multiple heterogeneous monitoring devices to transmit information onto the application server in the form of data packets. Each device has three modes of operation: sensing, update, and silence.In the sensing mode, a device can sense one or several types of user activities (or events, e.g., a Nest camera can detect the motion of a user or whether the user is checking the camera feed) and subsequently transmit eventindicating traffic (e.g., a large packet or a short burst with distinctive size) to its application server.In the update mode, a device routinely sends packets containing status updates such as energy consumption levels and firmware versions.Lastly, devices send nothing in the silence mode. We illustrate in Fig. 
2 a sample of aggregate packet stream during a 1-min time window as an input to the packet switch.Colored spikes indicate packet arrivals at different times.The smaller ones are status updates.The larger ones correspond to the event-triggered packets, for example, the 270B packet at 7s indicates that some user motion is detected by the camera and the 1117B packet at 30s marks the sleep onset of the user.The packet/burst sizes triggered by a particular type of event at various times are much alike and those triggered by different types of events are quite distinguishable [35].Subahi and Theodorakopoulos [36] survey and provide an extensive list of correspondences between user interactions with IoT mobile apps and the generated packet sizes/sequences.Denote + = {1, 2, . . ., } as the set of all observable events in an IoT network and {0}∪ + .When event ∈ + (e.g., a person going to sleep) happens, it triggers a device (e.g., a Sense Sleep monitor) to send an event packet of size > 0. Here, we model the event-indicating traffic as a single packet with distinctive size in order to abstract away from the details of event traffic 1 .We also consider the aforementioned status updates as traffic generated by a special type of event to ignore the distinction.Let = 0 represent a null event which stands for the silence mode, i.e., the absence of any events or updates.Denote A + = { 1 , 2 , . . ., } as the set of all possible event packet sizes in bytes.We assume without loss of generality (w.l.o.g.) that 0 < 1 < 2 < • • • < .Let A { 0 } ∪ A + where 0 = 0 represents packets of size zero in the case of null events, and [ 0 , 1 , . . ., ] . B. Event Packet Stream We model the event packet stream arriving at the packet switch as a discrete-time 2 sequence { } of packets with variable sizes 1 , 2 , . . .drawn from the set A. Time slots are indexed by the subscripts = 1, 2, . . .and we assume one packet per slot (including zero packet).We use = ( 1 , 2 , . . ., ) to denote the -prefix of the event packet stream from the first to the -th time slot.Let T {1, 2, . . ., } and denote I { ∈ T : > 0} as the set of arrival times of event packets ∈ A + during time period to diffentiate from T . We think of the packet sizes as i.i.d. with distribution () on A. Specifically, ( ) ∈ (0, 1) denotes the probability of event ∈ + (or the null event = 0) occurring in each time slot.It also measures the rate at which ∈ occurs over the infinite horizon, so that, 1 For example, if the event-triggered network traffic is a short burst, we can represent it by its burst size .Additionally, if the packet/burst sizes triggered by a particular type of event change slightly at different occurring times, we can use the average packet/burst size to symbolize the event traffic. 2 Time can be made discrete by specifying a time resolution.The transmission/processing times of variable-size packets differ, which can potentially be exploited by an eavesdropper.Here, we assume that this minute kind of timing information is destroyed by the inherent variability of network delay. 𝝀 [ 0 , 1 , . . 
., ] .We can denote the arrival rate of all event packets ∈I at the switch as, In addition, we measure the input byte rate (expected number of bytes per time slot) to the packet switch as, It is useful to distinguish event packet streams based on their arrival and input byte rates.By an "elephant" ("mouse") flow hereafter we mean an event packet stream with a high (resp.low) arrival rate Λ and input byte rate .We then describe heavy traffic (e.g., aggregated from a larger number of IoT devices) as an elephant flow, and vice versa.In Section VII, we will show how mouse/elephant flows affect the privacyoverhead tradeoffs of our proposed shaping mechanism. C. Adversary Model We assume that a last-mile eavesdropper in Fig. 1 observes the packet stream coming out of the gateway and is interested in identifying the ongoing user activities within the household.Due to packet encryption, they can only observe the timing and sizes of successive packets.We suppose that the adversary can also obtain the set A + from other sources. In this work, we further assume an event-level adversary who is interested in the type and timing of an event/activity.Concretely, the adversary wants to know whether event ∈ + or ∈ + (event type) happened given that they observed something in time slot .For example, they can infer based on the transmitted packet size whether the user is checking the Nest camera's live feed (142B packet) vs being detected for motion (270B packet).The adversary can also discover the timing of an event based on the packet observing time.A 1117B packet observed at time (10pm) instead of (11pm) informs the adversary that the user is going to sleep at 10pm, since a 1117B packet is exclusively generated by the Sense Sleep monitor when it detects the user's sleep onset. Without traffic shaping, a simple receive-and-forward packet switch will output the exact same arrival which immediately gets exposed to the eavesdropper.This allows them to easily identify any event { ∈ : = } happened at any time and fully uncover the ongoing user activities, violating privacy. D. Shaping Mechanism To prevent the eavesdropper from inferring about private user activities, we design network traffic shaping mechanisms M at the packet switch to obfuscate the packet sizes and timing information.For ease of analysis, we let M : A → D satisfy a FCFS queuing discipline.It takes as input a length- prefix of the event packet stream ∈ A waiting to be served in order of arrival and outputs a same-length packet stream ∈ D for arbitrary .Here, can be specified beforehand as a fixed session time, or we can think of it as . . . FCFS Queue < l a t e x i t s h a 1 _ b a s e 6 4 = " 0 7 T 2 c z 3 i I e 4 8 t 5 w 9 growing indefinitely.The shaped output , which should convey less private user information, is also a sequence of random variables ∈T ∈ D = { 0 , 1 , 2 , . . ., } where D is the set of output packet sizes in bytes and we assume w.l.o.g. 
that 0 In this work, we let the shaping mechanism M perform "surgery" on packets from IoT devices such as splitting and reassembling, padding and delaying to map to , where can be smaller than .We assume an intermediate platform (e.g., a router connecting multiple local area IoT networks) that is trusted and shared by many households which can restore these manipulated packet streams before forwarding them to the intended application servers.In Section VIII, we will discuss how this assumption could be relaxed and leave for future work the design of systems which can handle untrusted settings or the absence of such platform. Since disguises the true arrival , the adversary becomes uncertain about the user activity instance.For example, if > 0 and = 0, the adversary would falsely believe that nothing has happened at time .For the entire shaped packet stream { }, we denote its output byte rate as, We also let be the size of the FCFS queue just after departure .If +1 > + +1 , M consumes +1 − +1 − amount of dummy bytes/packets to form the output +1 .Otherwise, M buffers the remaining bytes (if any) in the queue.Fig. 3 illustrates the dynamics of the FCFS queue during shaping over 3 consecutive time slots when the departures have the same size for example.Packets 1 , 2 , 3 encounter delays of 1 = 1, 2 = 3 = 0 time slots respectively and 3 − ( 1 + 2 + 3 ) dummy bytes are added for shaping. In light of resource constraints in IoT networks, we want to design M and minimize its shaping overhead while protecting privacy.To this end, we introduce importance overhead measures in the sequel.We will use them in the optimization of shapers later in Section V, as well as in the performance comparisons between different shapers in Section VII. E. Overhead Measures Transmission efficiency.We define the transmission efficiency of shaping mechanisms as the total number of information bytes (i.e., bytes in the input event packet stream) divided by the total number of bytes in transmission (i.e., bytes in the shaped output packet stream) during time period , where (3) and (4) are the byte rates of { } and { } respectively 3 .The extra byte rate (dummy bytes per time slot) needed to shape to for arbitrary is − = (1/ − 1) .We say that a shaper is more transmission efficient (higher ) if it needs less dummy traffic (lower 1/ − 1) to shape the same input traffic (same ).Delay overhead.We measure the delay overhead of a shaper as the average waiting time each event packet ∈I spends in the FCFS queue over the infinite horizon, where denotes the -th event packet in the input packet stream, and |I| (the cardinality of I) quantifies the total number of event packets during , that |I| → ∞ as → ∞.Queue size.By the FCFS queueing discipline, the evolution of the queue size over time depicted in Fig. 3 can be well described by the discrete-time version of Lindley's equation [37], assuming that we start with an empty queue 0 = 0. Define the average queue size across time as, It is well known from queueing theory that to prevent the queue from accumulating indefinitely, we need the following stability condition [38], This ensures for the shaping mechanism that both the average waiting time per packet and the average queue size across time converge to finite expectations ( The stability condition on the transmission efficiency in terms of the input and output byte rates is analogous to that on the server's utilization factor [39] in terms of customer arrival and departure rates. F. 
Assumptions To summarize, we make the following assumptions to create a model of the IoT traffic shaping system: • The input packet stream consists of variable-sized packets and is i.i.d.across time. • The traffic shaper maps to an output packet stream by performing "surgery" on packets in .• The adversary observes the output packet stream to make inference about .In the following section, we will first present the definition of DP on data streams and formalize our privacy model for 3 The same multiplier in the numerator and denominator is cancelled out.event packet streams.Particularly, our privacy guarantee will not depend on the i.i.d.assumption on : they will hold for any realization of .The i.i.d.assumption is only needed to construct convex programs to solve for the overhead-optimal shapers efficiently, which is of practical use.Moreover, we will demonstrate the advantage of our proposed shaper optimized under i.i.d.assumption when performed on real correlated traffic in Section VII-C.In Section IV onward we will look at shaping mechanisms with different guarantees according to the established DP model on event packet streams. III. PRIVACY MODEL As discussed in Section I-C, DP offers a quantifiable worstcase measure of privacy risk that is resistant to the change of prior distribution and the adversary's side information.We aim to design traffic shaping mechanisms M with DP guarantees to meaningfully and rigorously protect private information of continuously generated IoT network traffic and trade off privacy with shaping overhead.This section reviews the definition of DP on data streams.We then apply the definition to event packet streams to establish a formal privacy model on M. A. Differential Privacy on Data Streams In the framework of DP, a trusted curator holds sensitive information from a group of users, creating a dataset ∈ X where X denotes the universe of possible datasets.We think of the dataset as containing a set of rows with each row corresponding to a single user's data record.Two datasets , X ∈ X are called neighboring if they differ in a single row, i.e., a single user's data. In the settings of continuous observation [30], the dataset is updated by data streams, and the curator has to generate outputs continuously.A data stream is a time-ordered sequence of symbols 1 , 2 , . . .drawn from a domain S, where each symbol represents an event. ∈ S corresponds to the event happened in time slot and = ( 1 , 2 , . . ., ) denotes the -prefix of the data stream.Entries in are therefore associated with events, or actions taken by the users. Two differential privacy models -event-level and user-level privacy [30] are proposed for data streams.The former hides a single event whereas the latter hides all the events of a single user.The difference between the two privacy models lies in the definition of adjacency between stream prefixes.Specifically, and S are event-level adjacent if there exists at most one time slot that ≠ S ; and S are user-level adjacent if for any user ≠ S for arbitrary number of time slots. Let M : S → O denote a mechanism that takes as input a stream prefix ∈ S of arbitrary length and randomly outputs a transcript in some measurable set ⊆ O where O = range(M) is the output universe.DP on data streams is defined for the randomized mechanism M as follows. 
Definition 1.A randomized mechanism M operating on data streams satisfies event-level (user-level) -differential privacy, if for all measurable sets ⊆ O, all event-level (resp.userlevel) adjacent stream prefixes , S and all , it holds that, A T < l a t e x i t s h a 1 _ b a s e 6 4 = " A 4 3 p Z Differential privacy on data streams ensures that the distribution of the output reveals limited information about the input : for any other event or user-level adjacent input S , the output under S has a similar distribution to that under .The maximum distance between output distributions is bounded by in log scale, therefore -DP provides a strong worst-case guarantee.It also controls the tradeoff between the false-alarm (Type I) and missed-detection (Type II) errors for an adversary trying to make a hypothesis test between and S [40].Smaller means greater indistinguishability between output distributions given adjacent inputs and hence less privacy risk.We achieve perfect privacy when = 0, and essentially the output becomes independent from the input. B. Privacy Model on Event Packet Streams In the IoT setting, we are interested in scenarios where the data set is updated by the event packet stream { }.For this work, we focus on the event-level privacy model.As variable packet sizes and timing are different kinds of information, we further define separate notions of event-level adjacency for packet stream prefixes and à .Definition 2. Two packet stream prefixes and à are 1) event-level packet-size adjacent, if there is at most one time slot in which 0 < ≠ à .2) event-level packet-timing adjacent, if there is at most one event (non-zero) packet size appearing in both and à but in 2 different time slots ≠ respectively, that The adjacency definition captures the type and timing information an event-level adversary is interested in.We visualize 2 event-level adjacent pairs of and à in Fig. 4, with Fig. 4a showing the difference in the event packet sizes, hence the type of an event (e.g., motion detection vs checking camera feed) and Fig. 4b displaying the different timing of an event (e.g., user going to bed at 10pm vs 11pm).In order to obfuscate such differences from the adversary, we want the same random output by the shaping mechanism M : A → D to have similar probabilities coming from either or à .Applying Definition 1 to the event-level adjacent packet stream prefixes, we establish the privacy model for the randomized shaping mechanism M as follows. Definition 3. Let be given and let M : A → D denote a randomized shaping mechanism that takes as input a length- prefix of the event packet stream and outputs a same-length packet stream .Then M is event-level packet-size/packettiming -differentially private if for all measurable sets ⊆ D , all event-level packet-size/packet-timing adjacent stream prefixes , à and all , it holds that, Additionally, we use the following definition to abbreviate the privacy guarantees of the shaping mechanism M. Definition 4. A randomized shaping mechanism M is eventlevel ( , )-DP if it is event-level packet-size -DP and packet-timing -DP according to Definition 3. 
We say that a shaping mechanism is event-level private if it offers both event-level packet-size and packet-timing privacy.Note that however, an event-level private shaping mechanism cannot hide the presence or absence of a user.Heavy traffic may be generated in the presence of an active user and nothing is generated when the user is absent, creating packet stream prefixes that are user-level adjacent according to Section III-A.Guaranteeing user-level privacy then requires the shaping mechanism to send a lot of dummy traffic even when the user is absent [17].This is generally too costly in terms of shaping overhead, and the effort may easily be in vain when the user's location can be inferred from other sources.The event-level privacy model over infinite packet streams are therefore applicable and practical when faced with an eventlevel adversary. C. Memoryless Shaper There are two classes of shaping mechanisms: with memory and memoryless.Designing an overhead-efficient randomized shaper with memory involves finding the overhead-minimal stochastic mapping from all possible ∈ A to ∈ D .This requires performing very high dimensional thus computationally expensive optimization for large and quickly becomes intractable as → ∞.Nonetheless, the saving on the shaping overhead is only marginal [16]. In this paper, we focus on designing event-level DP shapers that are memoryless: ( | , −1 ) = ( | ).This simplifies the construction, optimization and analysis of the shaper with significantly reduced search space.Moreover, the current departure of the memoryless shaping mechanism only leaks information about the current arrival, whereas shaping mechanisms with memory leak additional information about all the past arrivals through the current output.The memoryless shaper can then be treated as a sequence {M } of independent mechanisms M : → , and we have, Next, we will show how to design a memoryless shaping mechanism that satisfies the event-level ( , )-DP guarantee to mask the packet sizes and timing information. IV. TRAFFIC SHAPING MECHANISMS We start by proposing a memoryless shaping mechanism called DPS (differentially-private shaper).We denote it by M DPS and prove its event-level ( , )-DP guarantee according to Definition 4. Furthermore, we show that under our IoT traffic model, different settings of ( , ) let the mechanism recover some existing shapers proposed in the literature.Specifically, we name 2 other shaping mechanisms PST (perfect privacy for both packet sizes and timing, by setting , = 0) and PPS (perfect privacy for packet sizes only, by setting = 0, = ∞) and denote them by M PST and M PPS , respectively.We will also highlight the differences in their privacy guarantees and overhead measures. A. Differentially Private Shaping Mechanism The DPS mechanism is a sequence {M By controlling the dependency between and via the channel , the DPS mechanism can make different choices of given to control the privacy leakage and shaping overhead.To guarantee DP for event packet streams following Definition 4, we let the channel satisfy a more stringent privacy model -local differential privacy (LDP) [41], [42]. The goal of designing an -LDP channel is to ensure that the adversary's likelihood of guessing that the packet size is ∈ A over ã ∈ A does not increase, multiplicatively, more than after seeing the obfuscated packet size ∈ D. 
The goal of designing an ε-LDP channel is to ensure that the adversary's likelihood of guessing that the packet size is a ∈ A over ã ∈ A does not increase, multiplicatively, by more than e^ε after seeing the obfuscated packet size d ∈ D. Therefore, in each time slot, the adversary is limited in what it can infer about the actual user activity from the eavesdropped packet size. The shaper M_DPS adopting an LDP memoryless channel has the privacy guarantee stated in Proposition 1. Proof. See Appendix A. The intuition behind the set of constraints is that: i) given two input packet sizes a ≠ ã, the channel will output the same packet size with similar probabilities, whose ratio is bounded by e^{ε_s} or e^{ε_t/2}; ii) for protecting timing information, the privacy budget ε_t is equally divided between the two time slots where event-level packet-timing adjacent prefixes A and Ã differ, as shown in Fig. 4b. The output byte rate and the transmission efficiency level η_DPS (16) of the DPS mechanism, given P, follow from the channel and the arrival statistics.

The complete tunability of the privacy levels (ε_s, ε_t) in protecting the packet sizes and timing information makes the DPS mechanism well adaptable to heterogeneous network conditions and user demands for privacy. With this, a traffic shaping system can control the privacy parameters to interpolate between a system that guarantees perfect privacy and one without any privacy protection. We can easily verify that setting ε_s, ε_t = ∞ in the constraints allows P to become an identity matrix (assuming A = D w.l.o.g.). The shaper with an identity channel matrix outputs D_t = A_t, ∀t, and offers no privacy protection. In the sequel, we look at two other nontrivial settings in the perfect privacy regime.

B. Perfect Privacy Shaping Mechanism

The PST mechanism is a special case of the DPS mechanism obtained by setting ε_s, ε_t = 0 in the constraints (14). By simple algebra, this setting forces P(d | a) = P(d | ã) for all a, ã ∈ A and all d ∈ D. Then the channel matrix becomes rank-one, P = 1·p^T, with all rows equal to the same probability vector p = [p_0, p_1, . . ., p_N], which defines a probability distribution p(·) on D. Essentially, the PST mechanism chooses departures D_t ∼ p(·) independently from the arrivals in every time slot and generates a standardized packet stream that will not leak any information about private user activities (Proposition 2). Proof. See Appendix B. The kind of standardization performed by PST is similar to the "fixed pattern masking" introduced by Iacovazzi and Baiocchi [16]. The output byte rate and transmission efficiency level of the PST mechanism given p are given in (18) and (19), respectively.

C. Perfect Privacy for Packet Sizes Only

In many cases, network conditions or user demands for privacy may place constraints on the delay and byte rate overhead which preclude perfect event-level privacy for both packet sizes and timing. In these scenarios, an alternative kind of shaping mechanism relaxes the privacy guarantee to focus on perfect event-level privacy for packet sizes only. Here, we describe such a shaping mechanism, PPS: it standardizes only the sizes in the event packet stream without changing the timing information. That is, the PPS mechanism will output packets only when event packets A_t, t ∈ I, arrive at the switch. Then, the output packet sizes D_t, t ∈ I, are drawn i.i.d. from a chosen distribution q(·) on D, defined similarly to p(·) in the PST mechanism, with q = [q_0, . . ., q_N] denoting the probability vector. If there is no arrival (A_t = 0), nothing is sent out (D_t = 0). The PPS mechanism is another special case of DPS with ε_s = 0, ε_t = ∞. One can easily show that it corresponds to a channel matrix with the first row equal to [1, 0, . . ., 0] and all remaining rows equal to q.
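The column-ratio intuition above can be checked mechanically for any of these matrices. The sketch below encodes one plausible reading of the constraints (the exact constraint set (14) is not reproduced in this text, so the bounds are assumptions): within each output column, ratios between two non-zero arrival sizes are limited by e^{ε_s}, and ratios involving the empty-slot input by e^{ε_t/2}. Under this reading, the PST rank-one matrix passes trivially and an identity matrix fails for any finite budget.

```python
import numpy as np

def satisfies_ldp(P, sizes, eps_s, eps_t):
    """Check column-wise ratio bounds on a right-stochastic channel matrix.
    This is an assumed reading of the LDP constraints, not the paper's (14)."""
    n = len(sizes)
    for j in range(P.shape[1]):                 # each output packet size
        for i in range(n):
            for k in range(n):
                if i == k:
                    continue
                zero_pair = (sizes[i] == 0) or (sizes[k] == 0)
                bound = np.exp(eps_t / 2) if zero_pair else np.exp(eps_s)
                if P[k, j] == 0 and P[i, j] > 0:
                    return False                # a zero entry blows up any finite budget
                if P[k, j] > 0 and P[i, j] / P[k, j] > bound + 1e-12:
                    return False
    return True

sizes = np.array([0, 32, 64])
P = np.array([[0.70, 0.20, 0.10],
              [0.10, 0.70, 0.20],
              [0.10, 0.20, 0.70]])
print(satisfies_ldp(P, sizes, eps_s=2.0, eps_t=4.0))   # True for this example
print(satisfies_ldp(np.eye(3), sizes, eps_s=2.0, eps_t=4.0))  # False: no privacy
```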
While preventing privacy leakage via packet sizes, PPS permits the timing information to remain unchanged and is equivalent to Wright's traffic morphing method [15] when packet fragmentation is allowed (Proposition 3). Proof. See Appendix C. The output byte rate and transmission efficiency level of the PPS mechanism given q are given in (21) and (22), respectively. Comparing (21) with (18) and (22) with (19), we see that for Λ ∈ (0, 1) (see (2)) and if q = p, the PPS byte rate equals Λ times the PST byte rate (and is hence smaller), while η_PPS = η_PST/Λ > η_PST. In general, a traffic shaping mechanism can be more transmission efficient when it satisfies a less restrictive privacy guarantee.

V. DELAY-OPTIMAL SHAPERS

In order to efficiently utilize the limited resources in IoT networks, it is crucial to optimize the DPS, PST and PPS mechanisms to incur minimal shaping overhead. In this section, we formulate the problem of finding overhead-optimal shapers as constrained optimizations: for a given transmission efficiency level η ∈ (0, 1) that ensures queue stability, we wish to find the optimal P*, p* and q* that minimize the delay overhead E[W]. Expressing and analyzing E[W] as a function of P*, p* or q* is, however, intractable. Because of packet splitting and padding, the waiting time of event packets depends heavily on both the past and future arrivals and departures. For example, using Fig. 3 again for illustration, consider the constant departures as random draws from p by the PST mechanism. The first arrival waits one time slot because: i) the queue was depleted before it arrived; ii) the first departure drawn from p was smaller than the arrival size; iii) a second event packet arrived in the following slot; and iv) only the subsequent departure drawn from p was large enough to clear the accumulated backlog. Due to such complicated dependencies, it is beyond reach to establish a clear relationship between E[W] and p.

Conversely, the discrete-time Lindley's equation (7), which describes the evolution of the queue size in terms of arrival and departure pairs (A_t, D_t), provides a straightforward mathematical model that allows for subsequent analysis. We rewrite it as Q_t = max(Q_{t−1} + A_t − D_t, 0), where D_t − A_t is i.i.d. across time by the i.i.d. assumption on A_t and the memoryless property of the mechanisms. For η = E[A_t]/E[D_t] ∈ (0, 1), the queue is stable, and according to [43] the expected queue size E[Q] can then be very efficiently and accurately approximated with the Wiener-Hopf factorization (WHF) method.

Based on Little's law [44], the smaller the queue size is, the less waiting time the event packets will experience. We therefore propose minimizing the expected queue size across time, E[Q], as a proxy for minimizing the expected delay per event packet, E[W], to find the delay-optimal shapers. We will also justify this choice by empirical results in Section VII. More importantly, we will show that E[Q] enjoys the nice property of being convex in the optimization variables P, p or q.

A. Delay-Optimal DPS

For given values of (ε_s, ε_t) and η ∈ (0, 1) and a given set of output packet sizes D, we can find the optimal max(ε_s, ε_t/2)-LDP channel matrix P* that minimizes the expected queue size E[Q] by solving the optimization problem (P_DPS). The first constraint makes sure that the transmission efficiency η_DPS (16) is at least η. The second constraint ensures a stable FCFS queue so that the optimal value is finite. The third constraint enforces a max(ε_s, ε_t/2)-LDP channel matrix for the DPS mechanism to satisfy event-level (ε_s, ε_t)-DP according to Proposition 1. The last two constraints restrict P to be right stochastic, where ⪰ denotes element-wise comparison.
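Since the objective E[Q] in (P_DPS) is defined through the Lindley recursion, it can also be estimated for any candidate channel by direct simulation, which is a convenient sanity check against the WHF approximation. The sketch below is a rough Monte Carlo illustration under the assumed form Q_t = max(Q_{t−1} + A_t − D_t, 0); the arrival PMF and channel are placeholders.

```python
import numpy as np

rng = np.random.default_rng(1)

def mean_queue_size(arrival_pmf, sizes, P, T=50_000):
    """Estimate E[Q] by simulating the discrete-time Lindley recursion."""
    A = rng.choice(sizes, size=T, p=arrival_pmf)              # i.i.d. arrivals
    rows = np.searchsorted(sizes, A)                          # channel row per arrival
    D = np.array([rng.choice(sizes, p=P[i]) for i in rows])   # memoryless departures
    Q, q = np.empty(T), 0.0
    for t in range(T):
        q = max(q + A[t] - D[t], 0.0)
        Q[t] = q
    return Q.mean()

sizes = np.array([0, 32, 64])
lam = np.array([0.6, 0.3, 0.1])            # hypothetical arrival PMF on A
P = np.array([[0.70, 0.20, 0.10],
              [0.10, 0.70, 0.20],
              [0.10, 0.20, 0.70]])
print(mean_queue_size(lam, sizes, P))
```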
1) Solution Overview: To utilize the DPS mechanism, the shaping system first gathers prior information about the traffic statistics of the networked IoT devices. It infers the set of possible packet sizes A and their arrival rates λ by observing the traffic offline for a period of time, during which the devices monitor user activities and generate traffic as usual but the system does not yet shape the traffic. The system can discretize the traffic by the minimum packet interval to ensure one packet per time slot and calculate λ as the empirical probability mass function (PMF) on A. Once a user specifies the transmission efficiency η ∈ (0, 1) and privacy preferences (ε_s, ε_t), the system solves (P_DPS) to find the optimal channel matrix P*, or P*_{D|A}(d|a), where D can be chosen to be the same as A. Now the system shapes the input traffic through a FCFS queue following the example in Fig. 3, whose input-output relationship is governed by P*. That is, in each time slot t, it samples an output size D_t given the arrival size A_t according to (13). Algorithm 1 describes how the DPS mechanism processes its input and output for shaping packetized traffic during each time slot t; a sketch of this per-slot operation is given at the end of this section. Every time the network of IoT devices changes or the user specifies a different set of privacy and efficiency parameters, the system updates A and λ and solves for P* again.

B. Delay-Optimal PST and PPS

We argued in Section IV that the PST (p) and PPS (q) shapers are special cases of the DPS mechanism. To find the delay-optimal PST (PPS) shaper, we can solve the same optimization problem (P_DPS) for the optimal P* with ε_s = ε_t = 0 (resp. ε_s = 0, ε_t = ∞) and then infer the corresponding optimal p* (resp. q*). Alternatively, we can reduce the number of optimization variables by directly constructing optimization problems (P_PST) on p and (P_PPS) on q. For simplicity of writing, we summarize the optimization of the DPS, PST and PPS mechanisms in Table II, along with the differences in their design, privacy guarantees and overhead measures. It turns out that solving for the optimal shapers amounts to convex programs.

Proposition 4. Given the set of output packet sizes D and a transmission efficiency level η ∈ (0, 1), the optimization problems (P_DPS), (P_PST) and (P_PPS) are convex programs. Proof. E[Q] is convex in P, p or q, and all constraints in (P_DPS), (P_PST) and (P_PPS) are affine. See Appendix D.

C. Privacy-Overhead Tradeoff

We can obtain the minimum achievable E[Q] for varying privacy parameters (ε_s, ε_t) and transmission efficiency level η, hence establishing the privacy-overhead tradeoff. The following theorem provides an important qualitative description of the privacy-overhead tradeoff, and its validity will be shown in Section VII under diverse experimental setups.

Theorem 1. Given the set of output packet sizes D and a feasible transmission efficiency level η < 1, the minimum achievable E[Q] in (P_DPS) increases when i) (ε_s, ε_t) decreases with η fixed and ii) η increases with (ε_s, ε_t) fixed. Proof. This follows from Proposition 4 and standard results based on strong duality and sensitivity analysis [45]. See Appendix E for a detailed proof.
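The per-slot shaping loop summarized in the Solution Overview above can be sketched as follows. The class is an illustration under stated assumptions (the splitting/merging rules and names are not the paper's Algorithm 1): real event bytes are queued FCFS, an output size is sampled from the optimized channel given the current arrival size, queued bytes are served up to that size, and the remainder is padded with dummy bytes.

```python
import numpy as np

rng = np.random.default_rng(2)

class DPSShaper:
    """Memoryless per-slot shaper with a FCFS byte queue (illustrative sketch)."""

    def __init__(self, sizes, P_star):
        self.sizes = np.asarray(sizes)
        self.P = np.asarray(P_star)
        self.queue = 0                      # backlog of real event bytes

    def step(self, arrival_size):
        self.queue += arrival_size                          # enqueue real bytes
        row = int(np.where(self.sizes == arrival_size)[0][0])
        d = int(rng.choice(self.sizes, p=self.P[row]))      # sampled output size
        real = min(d, self.queue)                           # real bytes served
        self.queue -= real
        dummy = d - real                                    # padding to reach size d
        return d, real, dummy

shaper = DPSShaper([0, 32, 64], [[0.7, 0.2, 0.1],
                                 [0.1, 0.7, 0.2],
                                 [0.1, 0.2, 0.7]])
for a in [0, 64, 0, 32]:
    print(shaper.step(a))   # (output size, real bytes, dummy bytes)
```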
A. Deterministic Policy for PST and PPS

A folk theorem in queueing theory states that the deterministic service-time distribution, with unit mass on a given mean service time, minimizes delay when service and inter-arrival times are mutually independent. Humblet [46] proved this formally using Jensen's inequality, the convexity of the max(·, 0) function in Lindley's equation for customer waiting times in a FCFS queue, and the independence between inter-arrival and service times.

In a similar fashion, we enforce a FCFS queueing discipline for the shapers and describe the evolution of the queue size by the discrete-time Lindley's equation (7) in terms of arrival (A_t) and departure (D_t) sizes. By analogy, we can make the same statement for the design of shaping mechanisms: when departure sizes D_t are chosen independently of the arrival sizes A_t, then for a given transmission efficiency level η, or equivalently an expected output packet size d* = E[A_t]/η, the deterministic policy that outputs packets with constant size D_t = d*, ∀t, minimizes the expected queue size E[Q]. As the A_t are i.i.d. across time, PST chooses D_t independently from A_t, and PPS selects D_t, t ∈ I, independently from A_t, t ∈ I; enforcing the deterministic policy on PST or PPS should therefore, on average, accumulate a shorter backlog in the queue than their non-deterministic counterparts.

Let PST* and PPS* be the deterministic versions of the PST and PPS mechanisms given η. That is, PST* generates D_t = d* for all t ∈ T and is in essence the discrete-time version of the traffic shaper by Apthorpe et al. [47] that maintains a constant departure rate in the network traffic leaving a smart home. Likewise, PPS* outputs D_t = d*/Λ for t ∈ I. Based on the above statement, if d*, d*/Λ ∈ D, then the optimal solutions to (P_PST) and (P_PPS) are exactly delta distributions: p*(d) = δ(d − d*) and q*(d) = δ(d − d*/Λ). In real systems, however, η may be set arbitrarily by resource-constrained users, and the resulting d*, d*/Λ may not be meaningful values for packet sizes (e.g., not an integer or exceeding the maximum transmission unit).

In Section VII, we will evaluate the effects of both the PST and PST* (PPS and PPS*) mechanisms as baseline approaches to guaranteeing perfect event-level privacy (resp. perfect event-level packet-size privacy). In the perfect-privacy regime, there naturally exists a tradeoff between the transmission efficiency level η and the expected queue size E[Q]. Here, we show such tradeoffs for the PST/PST* and PPS/PPS* mechanisms in Fig. 5 for one of the experimental setups from Section VII. We make the following observations:
• More dummy traffic helps deplete the queue: decreasing η leads to decreasing E[Q].
• The deterministic policies PST*/PPS* (black asterisk/blue cross lines) indeed result in smaller average queue sizes than the non-deterministic policies PST/PPS (red circle/green plus lines).
• PPS/PPS* achieve a less restrictive privacy guarantee than PST/PST* by introducing some dependency between the input and output. This yields significant savings on both the delay (smaller queue E[Q]) and the byte rate overhead (higher efficiency η).

As the DPS mechanism interpolates between the perfect-privacy PST and PPS shapers and the non-private shaper (e.g., one with an identity channel matrix), we are interested in how its privacy-overhead tradeoffs compare to those of the baseline approaches, and we will show the comparisons in the sequel.
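The folk-theorem argument can be checked numerically. The sketch below is an illustration (not the paper's experiment): it compares the average backlog of a PST-style shaper drawing departure sizes at random against a deterministic PST*-style shaper emitting the same mean size every slot, using the Lindley recursion from the previous section.

```python
import numpy as np

rng = np.random.default_rng(4)

def avg_backlog(arrivals, departures):
    """Average queue size under Q_t = max(Q_{t-1} + A_t - D_t, 0)."""
    q, total = 0.0, 0.0
    for a, d in zip(arrivals, departures):
        q = max(q + a - d, 0.0)
        total += q
    return total / len(arrivals)

T = 200_000
A = rng.choice([0, 32, 64], size=T, p=[0.6, 0.3, 0.1])     # mean 16 bytes/slot
d_star = 24.0                                              # common mean departure size
random_D = rng.choice([0.0, 48.0], size=T, p=[0.5, 0.5])   # PST-like: random, mean 24
constant_D = np.full(T, d_star)                            # PST*-like: deterministic

print("random departures  :", avg_backlog(A, random_D))
print("constant departures:", avg_backlog(A, constant_D))
```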
B. Pad-Only Policy

For extremely low-latency networks and delay-sensitive traffic (e.g., smart healthcare devices monitoring the health conditions of users should have timely communication with the intended application servers), we prefer zero-delay shaping mechanisms that protect privacy by adopting the pad-only policy: D_t ≥ A_t, ∀t. Here, we take a closer look at the effect of enforcing the pad-only policy on the PST, PPS and DPS mechanisms, and denote the resulting shapers as PST0, PPS0 and DPS0, respectively, with the superscript marking zero delay.

1) PST0 & PPS0: It is easy to see that the PST0 and PPS0 mechanisms essentially output packets with the largest size, D_t = a_max, for all t ∈ T and all t ∈ I, respectively. That is, p(d) = q(d) = δ(d − a_max). As a result, their transmission efficiency levels in (19) and (22) change to (24) and (25), respectively. We see that the PST0 shaper is very transmission inefficient, and more so when the incoming traffic is a mouse flow (i.e., the arrival rate of event packets Λ is small, which lowers the numerator and hence η_PST0) than an elephant flow. This is intuitive, since a mouse flow represents a scenario where private user activities only happen sporadically across time (contrary to an elephant flow) and presents more "variability" in the network traffic. An adversary can exploit this variability for more effective inference attacks. Shaping a mouse flow to guarantee perfect event-level privacy then becomes more costly in terms of dummy traffic.

The same is not necessarily true for the PPS0 shaper, since it preserves the timing information and its demand for dummy traffic is less affected by Λ (a small Λ reduces both the numerator and the denominator in (25), so η_PPS0 can increase or decrease). We will empirically validate this phenomenon in Section VII and show that it is generally true for privacy-preserving shapers with or without the pad-only policy.

2) DPS0: Since the input and output alphabets A and D are ordered in increasing sizes, enforcing the pad-only policy on the DPS mechanism means adding a structural constraint on the channel matrix P, namely that we only allow non-zero entries in the "upper triangle" (assuming D = A w.l.o.g.). Note that max(ε_s, ε_t/2)-LDP bounds the pair-wise ratios in each column of P by either e^{ε_s} or e^{ε_t/2} according to (14). For finite ε_s, ε_t < ∞, if any entry in a column is 0, then that whole column has to be 0, otherwise ε_s, ε_t become ∞. Along with the structural and right stochastic constraints, the channel matrix for a pad-only (ε_s, ε_t)-DP shaper can only have all ones in the last column, pushing ε_s, ε_t to 0. The (0, 0)-DPS0 mechanism then becomes identical to the PST0 mechanism. By analogy, the (0, ∞)-DPS0 mechanism is equivalent to the PPS0 shaper. The DPS mechanism thus encapsulates both the PST and PPS mechanisms, with or without the pad-only policy.

In the following section, we will evaluate the privacy-overhead tradeoffs of the DPS, PST/PST* and PPS/PPS* mechanisms on different types of traffic under various settings of the privacy parameters (ε_s, ε_t) and transmission efficiency level η. We need not explore values of η lower than η_PST0 or η_PPS0, which already yield zero delay for the PST or PPS shaper.
VII. EXPERIMENTAL RESULTS

We experiment on synthetic data and on packet traces from three smart home IoT devices (Sense Sleep monitor, Nest camera and WeMo switch) [35]. For synthetic data, we simulate packet traces with sizes drawn i.i.d. from a Zipf distribution with PMF Zipf(k; s, N) = (1/k^s) / Σ_{n=1}^{N} (1/n^s), which characterizes the frequency of the rank-k element out of a population of N elements. We assume that packet size a_i has rank i + 1 for every a_i ∈ A. We choose the exponent, or scale parameter, s ∈ {0.01, 1, 5} for the Zipf PMF, and set the possible packet sizes to be A = [0, 32, 64]. In this work, we assume D = A and leave the design and optimization of D for future work. For the IoT devices, we discretize the packet traces into 1 s time slots and keep only the event packet sizes (e.g., 270B and 142B packets from the Nest camera, triggered by motion detection and checking the camera feed, respectively). We then extract packet size PMFs from the preprocessed traces for optimizing the shapers.

By setting different values of the scale parameter s in the Zipf PMF, we are essentially synthesizing packet streams ranging from mouse to elephant flows. For smaller s, the Zipf PMF has a heavier tail, putting more probability mass on larger packets and thus creating heavier input traffic that exemplifies an elephant flow (higher event arrival rate Λ). An elephant flow with small s can also represent the increased traffic from a larger number of IoT devices, and vice versa.

A. Empirical Privacy-Overhead Tradeoffs

In Section V-C, we formally characterized the privacy-overhead tradeoff in Theorem 1 between the minimum achievable E[Q], the transmission efficiency η and the privacy levels (ε_s, ε_t). To validate their relationship, as well as to compare the empirical privacy-overhead tradeoffs of different shapers, we solve (P_DPS), (P_PST) and (P_PPS) under varying η, (ε_s, ε_t) and packet size PMFs to find the optimal distributions P*, p* and q*. We then use discrete-event simulation [48] to calculate the empirical overhead measures W̄ (6) and Q̄ (8) after running the optimized shapers DPS(P*), PST(p*) and PPS(q*) on packet traces (synthetically generated or from IoT devices) for a sufficiently large T = 100,000 time slots. We note that the empirical measure Q̄ is always consistent with the expected value E[Q] estimated by the WHF method; we omit the matching results due to the space limit.

In what follows, we let ε ∈ {0.01, 0.1, 1, 2, 5} and compare:
• (ε, ε)-DPS vs. PST/PST* shapers for η ∈ (η_PST0, 1),
• (ε, ∞)-DPS vs. PPS/PPS* shapers for η ∈ (η_PPS0, 1),
in terms of their empirical privacy-overhead (i.e., Q̄-η / W̄-η) tradeoffs, with results shown in Fig. 6 and Fig. 7, respectively. We stress that the lower (smaller Q̄/W̄) and the closer to the right (higher η) the tradeoff curves are, the less shaping overhead the mechanisms require for privacy protection. We then make the following observations:
1) The general trend of the Q̄-η tradeoffs (Fig. 6a-6c) is retained in the W̄-η tradeoffs (Fig. 6d-6f). Minimizing E[Q] indeed serves as a good proxy for minimizing E[W].
2) Q̄/W̄ increases when (ε_s, ε_t) decreases with η fixed and when η increases with (ε_s, ε_t) fixed. This verifies the privacy-overhead tradeoff characterized in Theorem 1. Guaranteeing (ε, ∞)-DP (Fig. 7a/7b/7c) instead of (ε, ε)-DP (resp. Fig. 6d/6e/6f) vastly reduces the shaping overhead.
3) Shapers become less overhead-efficient when guaranteeing the same level of privacy for a mouse flow than for an elephant flow. As the input traffic changes from an elephant flow to a mouse flow (i.e., the scale parameter s of the Zipf PMF increases from the left to the right plots), the shaping overhead increases (i.e., the tradeoff curves get higher and closer to the left).
4) The advantage of (ε, ε)-DPS over PST/PST* in terms of reducing shaping overhead is more evident when dealing with an elephant flow than a mouse flow: the curves in Fig. 6a/6d are further apart than those in Fig. 6c/6f. Intuitively, it is easier to hide among heavier traffic than lighter traffic.
5) The advantage of (ε, ∞)-DPS over PPS/PPS*, however, behaves in the opposite fashion. The relative distance between the tradeoff curves now increases from Fig. 7a to 7c. The arrival rate Λ of the input traffic becomes less impactful on the privacy-overhead tradeoffs of shapers that only protect packet-size information.
The last three observations extend the same arguments made for shapers with the pad-only policy in Section VI-B1.
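The synthetic traces used in this section can be generated as in the sketch below, which illustrates the Zipf construction described above (the sizes and scale parameters mirror the text; everything else is an assumption for demonstration).

```python
import numpy as np

rng = np.random.default_rng(3)

def zipf_pmf(n_ranks, s):
    """Zipf PMF over ranks 1..n_ranks with scale parameter s."""
    w = 1.0 / np.arange(1, n_ranks + 1) ** s
    return w / w.sum()

def synth_trace(sizes, s, T):
    """Draw T i.i.d. slot values from A, giving the size of rank k+1 probability
    Zipf(k+1; s, |A|). Smaller s flattens the PMF toward larger packets,
    emulating an elephant flow; larger s yields a mouse flow."""
    return rng.choice(sizes, size=T, p=zipf_pmf(len(sizes), s))

A = np.array([0, 32, 64])
mouse = synth_trace(A, s=5, T=10)        # light traffic, mostly empty slots
elephant = synth_trace(A, s=0.01, T=10)  # heavy traffic
print(mouse, elephant)
```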
B. Increasing Number of IoT Devices

As observed, heavier input traffic to the shapers yields better privacy-overhead tradeoffs. We want to understand whether increased traffic from a larger number of IoT devices will have the same effect. We therefore optimize the shapers on the packet size PMF from a single device (e.g., the Sense Sleep monitor) and on the aggregated PMFs from more devices, and plot their privacy-overhead tradeoffs in Fig. 8. We can see that the tradeoff curves get lower and closer to the right from Fig. 8a to Fig. 8c. The shapers optimized for traffic aggregated from more IoT devices require less shaping overhead than those optimized for individual traffic flows.

C. Performance on Bursty Traffic

When we shape real IoT traffic, which is often unpredictable or bursty, using shapers optimized for a different source distribution, a critical aspect of evaluating the effect of shaping is to see how the privacy-overhead tradeoff changes. We discussed in Section I-C that shapers designed with DP maintain the same privacy guarantees despite a change in source distributions (e.g., from i.i.d. to bursty). Yet it remains to be seen how such a change affects the amount of shaping overhead.

To this end, we solve for the optimal DPS and PST/PST* mechanisms based on the Nest camera's packet size PMF. We then run two packet streams through the optimized shapers: one is synthetically generated by i.i.d. draws from the PMF, and the other is the original bursty traffic. We plot the corresponding privacy-overhead tradeoffs in Fig. 9a and 9b, respectively. We see that traffic burstiness does not hurt the delay overhead by much when the shapers have enough dummy traffic at their disposal (η < 0.25), but it creates longer backlogs in the queue for all the shapers with limited dummy traffic (η > 0.25). In the latter region, however, the DPS mechanism introduces less additional delay overhead than the PST/PST* shapers.

D. Event Packet Streams Before/After Traffic Shaping

In Fig. 10, we visualize the original event packet stream of the Nest camera against its shaped traffic in the same window of 200 time slots, obtained by running it through the PST, PST* and (ε, ε)-DP shapers, respectively. We see from Fig. 10a that in time slots ∼70 and ∼320 there are two 270B packets corresponding to motion detection, and from time slot ∼130 to 280 there is a long burst of 142B packets indicating a time period when the user is checking the camera feed. From the output of the PST/PST* shapers in Fig. 10b and 10c, we cannot detect the presence of such event-indicating traffic. In Fig. 10d, the output of the (ε, ε)-DP shaper with ε = 5 obscures the events of motion detection, though it reveals the period of checking the camera feed to some extent. This highlights a limitation of the event-level DP model on infinite packet streams.

VIII. LIMITATIONS AND FUTURE DIRECTIONS

One way to overcome the limitation of the event-level DP model in hiding long bursts of event packets is to extend the definition of an event packet stream. That is, we can expand from single-packet events to burst-of-packets events by buffering the bursts and merging the enclosed packets, and the length of an indivisible time slot can be specified as the maximum duration of event bursts. We can then optimize and run a DP shaper on such a redefined event burst stream. This is like running the clock for the gateway output at a slower cycle than the network behind it -- the shaper generates packets at a slower rate (but with bigger payloads), imposing a fixed amount of delay (the cycle length) at baseline. A downside of this approach is that the cycle length as well as the onset times of event bursts are subject to the user's real-time interactions with IoT devices, and they are oftentimes impossible to know in the design phase.
Another interesting extension of the event-level privacy model is to look at the w-event privacy model [49] over infinite packet streams, which protects any event sequence occurring in a window of w consecutive time slots. It can protect any temporally constrained user activity from being disclosed, such as the period of checking the camera feed. This approach is rather different from the extension to event bursts, since the w-event privacy model protects any event burst (no longer than length w) irrespective of its onset time.

In fact, by sequential composition [50], an (ε_s, ε_t)-DP shaper can be shown to guarantee w-event (w·ε_s, w·ε_t)-DP, yet the privacy leakage grows linearly with the length of the time window w. One way to address this is to design a shaping mechanism with memory, P(D_t | A^t, D^{t−1}) = P(D_t | A_{[t−w+1:t]}, D_{[t−w+1:t−1]}), so that sublinear privacy leakage is achievable by carefully choosing departures that account for the dependency across time windows of length w. Yet this is challenging due to a much higher dimensional optimization space.

Hiding long bursts of events falls under the broader subject of dealing with correlated traffic. The optimization of our proposed DPS mechanism relies on the assumption that A_t is i.i.d. across time in order to be efficiently solved by convex programming and the WHF method. However, correlation in the input traffic can potentially be modeled and utilized to further improve the overhead efficiency of shaper design. For example, if both the system designer and the adversary know that a user sleeps strictly between 8pm-11pm, thus triggering an event packet stream with 1117B packets (generated by the Sense Sleep monitor) only observable between 8pm-11pm, then a time-constrained shaper can be designed to only shape the traffic between 8pm-11pm for reduced overhead.

The time window exemplifies the constraint specification in the framework of Blowfish privacy [51] to restrict the set of realizable input streams due to correlation. Comparably, the input stream can also be drawn from a distribution class (e.g., Markov chains) following Pufferfish privacy [52], for which more overhead-efficient shapers can possibly be designed. However, these privacy models would be hard to use in practice. On the one hand, they would be very device/user dependent, and the privacy and overhead guarantees would then be contingent on the device/user behaving typically. On the other hand, estimating the correlation from traffic data and optimizing shapers may become computationally expensive and even intractable.

Another assumption that our proposed shaping mechanism relies on is an intermediate platform trusted and shared by many households to reverse the packet "surgery". To relax this assumption, we can design and deploy the DPS mechanism at the device level: it can be implemented to shape the outgoing network traffic of individual IoT devices. We can think of each device as having its own dedicated FCFS queue and delay-optimal DP shaper. Designing a traffic shaping system in this way motivates the study of the optimal allocation of privacy budget and network resources, as well as the optimal scheduling of device outputs in a local-area IoT network.
IX. CONCLUSION

In this work, we motivate the need for designing network traffic shaping in IoT networks under the framework of DP. We establish a rigorous event-level DP model on discrete event packet streams and propose an event-level (ε_s, ε_t)-DP shaping mechanism which utilizes a discrete memoryless max(ε_s, ε_t/2)-LDP channel to protect both packet sizes and timing information from traffic analysis attacks. Under special settings of (ε_s, ε_t) and deterministic/pad-only policies, the DPS mechanism becomes equivalent to previous shaping schemes proposed in other contexts. All shapers work by generating the output packet stream in ways that are either independent of or dependent on the input traffic. The dependency introduced by the channel offers the DPS mechanism more degrees of freedom in trading off privacy for shaping overhead.

We empirically evaluate all shapers on synthetic data and packet traces from actual IoT devices. Under various types of input traffic, we discover interesting fundamental privacy-overhead tradeoffs: increased traffic from a larger number of IoT devices makes user privacy protection easier. The DPS mechanism not only improves on the privacy-overhead tradeoffs of the PST/PST* and PPS/PPS* mechanisms, but also handles bursty traffic better. This novel prototype for building a privacy-preserving and overhead-efficient traffic shaping system enables users to adapt it to their privacy demands and network conditions. It serves as a foundation for understanding and defending against more sophisticated traffic analysis attacks with a strong, formal and tunable privacy guarantee.

D. Proof of Proposition 4

Proof. All the constraints in (P_DPS) are affine. To reason about the convexity of the objective E[Q] as a function of P, we note that X_t = D_t − A_t, t = 1, 2, . . ., are a sequence of i.i.d. random variables parameterized by the same stochastic mapping P. As a result, {X_t(P)} satisfies strong stochastic convexity (SSCX) in P [53]. The recursive Lindley's equation (23) involves only the max(·, 0) function and the + operator, both of which are increasing and convex functions. By the preservation of convexity [45], Q_t is an increasing and convex function of the X_t's, and hence E[Q] is convex in P. The same argument applies to (P_PST) and (P_PPS). As the PST (p) and PPS (q) shapers are special cases of the DPS mechanism (P), E[Q] is also convex in p or q. The constraints on p and q are linear as well. Therefore, the optimization problems (P_DPS), (P_PST) and (P_PPS) are convex programs.

E. Proof of Theorem 1

Proof. Following Proposition 4, (P_DPS) is a convex program with all affine constraints, so we first show strong duality based on Slater's condition [45, Ch. 5.2.3], i.e., that there always exists a feasible point. We then perturb η and (ε_s, ε_t) in the original problem and argue about the changes in the optimal value E*[Q] by sensitivity analysis [45, Ch. 5.6]. For ε_s, ε_t ≥ 0, any rank-one channel matrix P = 1·p^T with an arbitrary probability vector p satisfies the privacy and right stochastic constraints. Then the first two constraints in (P_DPS) reduce to λ̄ < E_p[D] ≤ λ̄/η. It is easy to see that we can always find a probability vector p such that E_p[D] falls in this interval for any feasible η < 1. Therefore feasibility, and hence strong duality, holds for the convex program (P_DPS).
The other constraints in (P_DPS) not involving η and (ε_s, ε_t) must always hold, but in the sequel we do not make them explicit wherever applicable. Let g(·, ·) be the Lagrange dual function, defined as an infimum over the feasible set F of the original unperturbed problem (P_DPS). The inequality in (32) follows from (31) and the efficiency constraint under the perturbed level η + Δη for any feasible C. Likewise, the simplification to the inequality (33) results from the nonnegativity and stochasticity constraints on C. In the perturbed problem, (33) holds for any feasible C, and so it holds for the optimal C*. Meanwhile, the optimal value at perturbation (0, 0) is the optimal value of the original unperturbed problem. Thereby,
• If we increase the privacy guarantee by decreasing ε (Δε < 0) with η fixed, then the minimum expected queue size across time E*[Q] increases.
• If we increase the transmission efficiency level (Δη > 0) with ε fixed, then E*[Q] increases as well.
One may want to show the convexity of the perturbed optimal value in Δε or Δη following the standard procedure in sensitivity analysis. This is false, however, as the support of the perturbed optimal value function is not convex in either Δε or Δη. We omit the proof due to the space limit. The intuition is that when we convert (P_DPS) to its standard form, Δε and Δη do not appear on the right-hand side of the inequality constraints.

Fig. 1: Illustration of a smart home traffic shaping system.
Fig. 2: Aggregate packet stream arriving at the packet switch from the Sense Sleep monitor (red) and the Nest camera (green).
Fig. 3: Illustration of the FCFS queue during shaping over 3 consecutive time slots with same-size departures.
Fig. 4: Illustration of different pairs of event-level adjacent packet stream prefixes A and Ã for T = 5. (a) Packet-size adjacent. (b) Packet-timing adjacent.
Proposition 1. The shaping mechanism M_DPS with a max(ε_s, ε_t/2)-LDP memoryless channel satisfying the constraints in (14) guarantees event-level (ε_s, ε_t)-DP.
Fig. 5: Tradeoffs between the expected queue size E[Q] and the transmission efficiency level η for the PST/PST* and PPS/PPS* mechanisms. The lowest efficiency levels η_PST0 (24) and η_PPS0 (25) correspond to the PST and PPS shapers with the pad-only strategy, both yielding E[Q] = 0. The corresponding data points are not shown in the plot since the y-axis is in log scale.
Fig. 9: (a) Performed on a synthetically and i.i.d. generated packet stream. (b) Performed on the original bursty traffic.
Fig. 10: Comparison between the original event packet stream from the Nest camera and the traffic shaped by the PST, PST* and (ε, ε)-DPS mechanisms. In particular, we set η = 0.62 for all the shapers and ε = 5 for the DPS mechanism.

A. Proof of Proposition 1

We have the first lines in (27) and (28) from the memoryless property (12) and the last inequalities from the set of constraints (14). Hence the shaping mechanism M_DPS with a max(ε_s, ε_t/2)-LDP memoryless channel satisfying (14) guarantees event-level (ε_s, ε_t)-DP.

B. Proof of Proposition 2

Proof. For event-level packet-size or packet-timing adjacent prefixes A and Ã, and any realization of the mechanism output d ∈ D^T, the privacy loss of M_PST is Σ_{t=1}^{T} log [p(d_t)/p(d_t)] = 0. By Definition 3, PST guarantees perfect event-level privacy, or (0, 0)-DP.

C. Proof of Proposition 3

Proof. For two event-level packet-size adjacent prefixes A and Ã according to Definition 2, the privacy loss of M_PPS is Σ_{t∈I} log [q(d_t)/q(d_t)] + Σ_{t∈T\I} log(1/1) = 0. Hence the PPS mechanism guarantees perfect event-level packet-size privacy, or (0, ∞)-DP by Definition 3.

TABLE I: Comparison between existing network traffic shapers and our method in terms of privacy and overhead.
TABLE II: Comparison between the shaping mechanisms in terms of design and optimization, privacy guarantees and overhead measures.
Algorithm 1: DPS mechanism. Input: arrival packet at time t and the optimal channel P*. Output: departure packet at time t. Cache: FCFS queue holding Q_t bytes at time t. If an arrival packet is present, its size is recorded and its bytes are queued; each departure is padded with the required number of dummy bytes and pushed out.
2021-12-01T02:15:42.956Z
2021-11-29T00:00:00.000
{ "year": 2021, "sha1": "a47c90d60f9056ab9abdf67c908083436550941b", "oa_license": null, "oa_url": "http://arxiv.org/pdf/2111.14992", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "a47c90d60f9056ab9abdf67c908083436550941b", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Engineering", "Computer Science" ] }
232382876
pes2o/s2orc
v3-fos-license
The Combination of Tissue-Engineered Blood Vessel Constructs and Parallel Flow Chamber Provides a Potential Alternative to In Vivo Drug Testing Models Cardiovascular disease is a major cause of death globally. This has led to significant efforts to develop new anti-thrombotic therapies or re-purpose existing drugs to treat cardiovascular diseases. Due to difficulties of obtaining healthy human blood vessel tissues to recreate in vivo conditions, pre-clinical testing of these drugs currently requires significant use of animal experimentation, however, the successful translation of drugs from animal tests to use in humans is poor. Developing humanised drug test models that better replicate the human vasculature will help to develop anti-thrombotic therapies more rapidly. Tissue-engineered human blood vessel (TEBV) models were fabricated with biomimetic matrix and cellular components. The pro- and anti-aggregatory properties of both intact and FeCl3-injured TEBVs were assessed under physiological flow conditions using a modified parallel-plate flow chamber. These were perfused with fluorescently labelled human platelets and endothelial progenitor cells (EPCs), and their responses were monitored in real-time using fluorescent imaging. An endothelium-free TEBV exhibited the capacity to trigger platelet activation and aggregation in a shear stress-dependent manner, similar to the responses observed in vivo. Ketamine is commonly used as an anaesthetic in current in vivo models, but this drug significantly inhibited platelet aggregation on the injured TEBV. Atorvastatin was also shown to enhance EPC attachment on the injured TEBV. The TEBV, when perfused with human blood or blood components under physiological conditions, provides a powerful alternative to current in vivo drug testing models to assess their effects on thrombus formation and EPC recruitment. Introduction Cardiovascular diseases are among the leading causes of mortality and morbidity worldwide. These are caused by aberrant platelet activation caused by endothelial dysfunction and exposure of plasma to collagen-and tissue factor-rich atherosclerotic plaques. However, it is not possible to study this practically or ethically in human patients. Most studies assessing the effect of drugs on cardiovascular disease rely on animal models to predict and explain their effects in humans [1]. Different animal species have been used to evaluate certain features of cardiovascular disease, such as zebrafish, pigs, rabbits, and rodents. Mice have become the animal of choice for disease modelling given their genetic similarity to humans, their fast breeding rate, and well-established methods for creating genetic knock-outs [2,3]. Additionally, intravital microscopy allows the real-time examination of thrombus formation on artificial vessel injuries in response to ferric chloride (FeCl 3 ) or laser injury [4]. These arterial thrombosis models have become popular for examining the molecular mechanisms underlying thrombus formation and how these can be impacted by drug treatments. The interpretation of the data obtained from murine thrombosis models is complicated by the use of anaesthetics. A survey of investigators performing intravital microscopy in murine thrombosis models found that ketamine, xylazine, and pentobarbital are the most commonly used anaesthetics [5]. However, previous studies have demonstrated that each of these anaesthetics can have an inhibitory effect on platelet function [5][6][7]. 
For instance, ketamine inhibited platelet aggregation through the suppression of IP 3 formation and also by inhibiting thromboxane synthase activity [7,8]. Additionally, ketamine can also interfere with endothelial nitric oxide production, as well as smooth muscle Ca 2+ signalling [9,10]. This suggests that the use of ketamine in intravital microscopy studies could create a baseline inhibition of platelet function as well as modulation of normal haemostatic properties of the vessel wall, which could overestimate the effect of genetic knockouts or drug treatments on normal haemostatic responses. These shortcomings provide an opportunity to create alternative thrombosis models by recreating normal haemostatic conditions by flowing human blood through human tissue-engineered arterial constructs. Tissue-engineered arteries were initially produced to use as alternatives to autologous vessels for vascular grafting. Vascular tissue engineering was pioneered in 1986 by Weinberg and Bell, who generated the first tissueengineered blood vessels (TEBVs) by culturing vascular cells on a collagen-based scaffold [11]. Nearly 40 years later, there are few TEBVs currently being used in clinical application. However, great progress has been made in improving the biomimicry of TEBVs. A number of previous studies have demonstrated that it is possible to generate tissue-engineered arteries through a variety of methods that can withstand normal arterial blood flow conditions whilst replicating the functional properties of the native arteries [12][13][14]. This has been achieved through using a variety of approaches including the use of a variety of scaffold material both synthetic (e.g., polyvinyl alcohol and gelatin [15]) and natural extracellular matrix molecules (collagen, elastin [16]). The properties of these scaffolds can be further modified to increase their mechanical strength through compression or chemical crosslinking or made more porous by freeze-drying [16]. Furthermore, the use of bioreactors has been influential in producing ideal culture conditions for vascular cells to ensure they assume the cellular phenotypes found in vivo [17]. These approaches have created TEBVs that possess a number of desirable properties such as the ability to support physiological spiral laminar flow [15], to mechanically withstand physiological arterial blood pressures [15,16], and to support the growth of a healthy endothelial cell lining [17]. This is consistent with the key parameters required for a substrate for clinical vascular grafting. These studies have commonly assessed the ability of the TEBVs to withstand activation of haemostatic and inflammatory responses of blood cells flowing through them. However, to utilise TEBVs as an animal-free alternative to current in vivo arterial thrombosis models requires a demonstration that they are capable of eliciting appropriate cellular reactions upon damage and that drug treatments are able to modulate that response. Previously, we have utilised tissue engineering approaches to create human arterial models that replicate the normal haemostatic properties of the intimal and medial lining of human arteries [18]. This includes the use of an electrospun polylactic acid (PLA) nanofiber scaffold to create an intimal layer construct that provides contact guidance to ensure that the endothelial cells can be aligned in the direction of flow, similar to the native artery. 
The medial layer construct is formed by human coronary artery smooth muscle cells cultured within a collagen hydrogel. Musa et al. (2016) showed that their tissue-engineered blood vessels are able to replicate the anti-and pro-aggregatory properties of native arteries when the intimal layer is intact and absent, respectively [18]. Our real-time spectrofluorimetry measurements of cytosolic Ca 2+ signalling provided a sensitive method to assess platelet activation upon exposure to the tissue-engineered constructs. These results clearly demonstrate that the intimal, medial, and full blood vessel constructs replicate the in vivo ability to modulate platelet function [18]. The thrombus formation upon the surface of the construct indicated by aggregated DiOC 6 -labelled platelets under a fluorescent microscope can be visualized in the consequent examination. However, these studies were performed under non-physiological mixing conditions. We have not previously examined the ability of the constructs to support thrombus formation under physiologically relevant shear stress from perfusion of platelets. The flexibility of a layer-by-layer fabrication approach in tissue engineering in conjunction with a perfusion device offers a great opportunity to study endothelial dysfunction and repair mechanisms. Endothelial function is an important and independent predictor for the severity of cardiovascular disease. An impaired endothelium is a key driver in the development of cardiovascular disease [19]. Circulating bone marrowderived endothelial progenitor cells (EPCs) have been found to correlate to endothelial function and to aid in neovascularisation and re-endothelialisation of injured vessels, maintaining vascular function and homoeostasis [19,20]. In models of myocardial infarctions and arterial injury, EPCs have been shown to localize preferentially to sites of vascular lesions, after which they divide, proliferate, and become incorporated into the endothelial layer of existing vessels, and promote the outgrowth of new vascular networks. These cells also have an effect on surrounding cells by producing angiogenic growth factors [21,22]. The most common drugs used to treat/prevent the development of cardiovascular disease are statins, with atorvastatin being the most well-known. This drug has been in use for decades, and its effects have been extensively studied. Some of these include reduction of the accumulation of esterified cholesterol into macrophages, increase of endothelial nitric oxide (NO) synthase, reduction of the inflammatory process, increased stability of the atherosclerotic plaques, and restoration of platelets activity and of the coagulation process [23,24]. Despite the known pleiotropic actions of atorvastatin, there is currently limited data on the impact this drug has on EPC ability to mediate endothelium repair. In this study, we aimed to examine whether our tissue-engineered (TE) human arterial models were able to mimic the pro-and anti-aggregatory properties of the damaged and intact artery under physiological flow conditions. We also aimed to examine whether our tissue-engineered arterial constructs could support EPC recruitment and whether this could be modulated by drug treatments. This was achieved by incorporating the constructs within a commercially available parallel-plate flow chamber and perfusing them with washed human platelet suspensions at arterial shear stresses. 
We examined whether this biomimetic test model system could be used as a potential alternative to in vivo drug testing models in thrombosis and EPC homing. This system was used to perfuse platelets and various cell populations over the TE constructs at physiologically relevant or pathological shear stress, allowing the real-time profiling of their interactions, and for the evaluation of changes in both the surface and structure of the blood vessel, as well as changes in the perfusate. The impact of ketamine on platelet activation and the effect of atorvastatin on EPC homing when EPC being exposed to TEBVs with a FeCl 3 -induced lesion were investigated. Through these studies, we demonstrated that human tissue-engineered arterial constructs, when perfused with human blood or freshly prepared washed human platelet suspension under physiological conditions, provide a human model system that can be used to study the effect of drugs without the potential confounding impact of species differences and use of anaesthetics. Fabrication of 3D Tissue-Engineered Blood Vessel Constructs Fabrication of 3D tissue-engineered intimal layers (TEILs), media layers (TEMLs), and the complete tissue-engineered blood vessel was achieved using human umbilical vein endothelial cells (HUVECs) and human cardiac artery smooth muscle cells (HCASMCs), both obtained from GIBCO, Life Technologies. Cells were cultured with medium 200 and 231, respectively, also obtained from GIBCO, Life Technologies, and used between passage numbers 2 and 5. The construction of the TEIL, TEML, and TEBV constructs was performed using our previously described methodology [18], as such these protocols are outlined briefly below. Electrospinning Aligned nanofibers were made by dissolving Poly-L,D-lactic acid (96% L/4% D, inherent viscosity of 5.21 dL/g, Purac BV, Gorinchem, the Netherlands) (PLA) in a 7:3 mixture of chloroform and dimethylformamide (DMF) (Sigma, Welwyn Garden City, UK) into 2% solution. The operational parameters of nanofiber fabrication followed the established protocol [25]. In brief, this 2% PLA solution was deposited onto detachable metal collectors, comprised of two partially insulated steel blades (30 cm × 10 cm), and connected to a permanent copper plate with a steel wire. The two steel blades had a gap of 5 cm between them where the fibers were deposited. Deposition of the fibers involved connecting the permanent plate to a negative electrode, and a syringe containing the solution was connected to a positive electrode. The PLA was extruded through an 18G needle and delivered at a rate of 0.025 mL/min. The electrodes were electrified with a power supply charged at ±6 kV (Spellman HV, Pulborough, UK). Nanofibers were collected and affixed onto acetate frames and were sterilized by UV irradiation thrice per side before use in culture. The nanofiber diameter was measured as~500 µm and the mat thickness~3 µm [25]. The porosity of the mat was smaller than 1 µm since no endothelial cells were observed to migrate through the nanofiber layer [18]. TEML Assembly To create TEML constructs, HCASMCs, at a density of 5 × 10 5 cells /mL, were mixed with neutralized 3 mg/mL type I collagen (Corning) solution. Two hundred microlitres of this solution was loaded onto 0.5 cm × 2.0 cm filter paper frames, which fit the dimensions of the parallel-plate flow chamber. The formed TEML constructs were used when the HCASMCs attained typical spindle-shaped morphology and reached confluence. 
TEMLs were cultured in whole medium 231, with media changes every 2 days, for up to 10 days.

TEIL Assembly

To prepare TEIL constructs, a neutralized acellular collagen gel (3 mg/mL), having the same dimensions detailed above, was formed first. Once the gel had set, aligned PLA nanofibers [18], coated in 10 ng/mL fibronectin, were placed on the surface of the gel. HUVECs were then seeded at a density of 2 × 10^5 cells/mL on the nanofibers. The TEIL samples were cultured in whole medium 200, with media changes every 2 days, for 10 days to allow attainment of normal cell morphology and surface area coverage.

TEBV Assembly

This model was a combination of the TEIL and TEML. The TEML was created first, and HUVECs were seeded, as previously described, on fibronectin-coated PLA nanofibers after the HCASMCs had attained the desired spindle-shaped morphology. The complete TEBV was returned to culture with HCASMC and HUVEC whole media mixed 7:3. The schematic for TEML, TEIL, and TEBV assembly is shown in Figure 1.

Perfusion Gaskets

To facilitate the perfusion of our 3D vascular models, a specialized gasket was created. The gasket was manufactured using polydimethylsiloxane (PDMS). A circular ring with a diameter of 30 mm was first cut, and then a 25 mm (length) × 5 mm (width) × 3 mm (depth) rectangular opening was cut in the centre of the circle (Figure 2).

Parallel-Plate Flow Chamber and Shear Stress

The assembled flow chamber was used to generate laminar flow to exert physiological shear stress on the intimal surface of the TEBV. The dimensions of the gasket opening determine the dimensions of the flow channel. The equation used to determine the shear stress generated on the endothelial surface of the TE constructs is τ = 6µQ/(bh²), where µ is the viscosity of the fluid being perfused (1.5 cP), Q the flow rate (either 0.077 cm³/s or 0.007 cm³/s), b the width of the gasket opening (5 mm), and h the height between the upper surface of the construct and the top plate of the chamber (2.5 mm). The two fluid-flow rates used provided shear stresses of 22.2 dyne/cm² and 2.2 dyne/cm² for the performed experiments, which are consistent with arterial and venous shear rates, respectively [26,27].

Lesion Models

To mimic vascular injury, a FeCl3 lesion was created on the TEBV by dipping a 1 mm² square of filter paper in 10% FeCl3 and placing this onto the upper surface of the TE constructs for 1 min. After this, the TE constructs were washed with PBS/HBS to eliminate excess FeCl3, then topped up with fresh media. This mode of lesioning was also applied to TE constructs for EPC perfusion.

Platelet Preparation

This study was approved by the Keele University (UK) Research Ethics Committee (MH-200155, 1 May 2018). Blood was donated by healthy, drug-free volunteers who gave written informed consent. Blood was obtained by venepuncture. The blood was mixed with acid citrate dextrose (ACD; 85 mM sodium citrate, 78 mM citric acid, and 111 mM D-glucose) at a ratio of 5:1. Platelet-rich plasma (PRP) was obtained by soft descent centrifugation at 725 g for 8 min. After centrifugation, the PRP was collected and treated with aspirin (50 mM) and apyrase (0.1 U/mL). PRP was again centrifuged at 450 g for 20 min, then resuspended in supplemented HEPES-buffered saline (HBS; pH 7.4, 145 mM NaCl, 10 mM HEPES, 10 mM D-glucose, 5 mM KCl, 1 mM MgSO4) to reach a platelet density of 2 × 10^8 cells/mL. The HBS was supplemented with 1 mg/mL bovine serum albumin (BSA), 1.8 mg/mL glucose, 0.1 U/mL apyrase, and 200 µM CaCl2. Prior to perfusion, the assembled perfusion system without the TE construct was perfused with 1% BSA solution for 1 h to prevent unwanted platelet adhesion to the components of the perfusion system. The gasket and the spacers were incubated overnight with 1% BSA solution at room temperature.
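As a quick numerical check of the parallel-plate relation given above, the sketch below converts the two flow rates to wall shear stress. The formula is the standard parallel-plate expression; the effective channel height used here is an assumption chosen so that the result approximately reproduces the stresses quoted in the text.

```python
# tau = 6*mu*Q/(b*h^2); units: mu in poise, Q in cm^3/s, b and h in cm -> dyne/cm^2
mu = 0.015   # 1.5 cP expressed in poise
b = 0.5      # gasket opening width (5 mm)
h = 0.025    # assumed effective channel height (cm) approximately reproducing
             # the reported stresses; the text quotes the plate gap as 2.5 mm

def shear_stress(Q, mu=mu, b=b, h=h):
    return 6.0 * mu * Q / (b * h ** 2)

for Q in (0.077, 0.007):   # arterial and venous flow rates from the text
    print(f"Q = {Q} cm^3/s -> tau = {shear_stress(Q):.1f} dyne/cm^2")
```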
Prior to perfusion, the assembled perfusion system without the TE construct was perfused with 1% BSA solution for 1 h to prevent unwanted platelet adhesion to the components of the perfusion system. The gasket and the spacers were incubated overnight with 1% BSA solution at room temperature. EPC Isolation and Culture EPCs were isolated by collecting 60 mL of whole blood from healthy volunteers. To prevent coagulation, the blood was split into 2 falcon tubes with 5 mL of ACD in each. The blood-anticoagulant mix was then split into 15 mL falcon tubes and centrifuged at 700 g for 8 min. After centrifugation, the sequence of layers occurred as follows (seen from top to bottom): plasma, enriched cell fraction (interphase consisting of lymphocytes/peripheral bone marrow cells (PBMCs), erythrocytes, and granulocytes. The plasma fraction was carefully discarded, leaving approximately 0.5-1 mL above the interface. The enriched cell fraction was pooled into one tube and diluted 1:1 with PBS. To further separate the cell, i.e., eliminate residual plasma and Red Blood Cells (RBCs), the pooled fraction was carefully layered over 8 mL of Ficoll-Paque, ensuring no mixing of the layers, and centrifuged again for 20 min at 400 g. After centrifugation, any residual red blood cells are below the separation medium, and the enriched cell fraction should be immediately above it with diluted plasma and platelets above this. After isolation, the cell-rich fraction was diluted with PBS, then centrifuged again at 400 g for 10 min. After centrifugation, the supernatant was discarded, and the resultant pellet was resuspended in 2 mL of complete EPC media. The cell suspension was then split into 2 wells of a 12-well plate that had been coated with 2.5 µg/cm 2 fibronectin. On day 1, the contents of the wells were agitated and transferred to new wells. This was repeated for the next 3 days. Media was changed daily for the first 7 days then every 2-3 days for up to 20 days. Carboxyfluorescein succinimidyl ester (CFSE) dye was used to label EPC cells at a concentration of 2 µL/mL of cell suspension. Cells were incubated for 15 min at 37 • C, then centrifuged for 3 min at 300 g. The pellet was re-suspended in 5 mL of fresh supplemented media. The cell suspension was allowed to rest for 30 min at 37 • C before loading into the perfusion system. DiOC 6 To facilitate visualization of platelet adhesion and aggregation upon the TE constructs under flow conditions, platelets were labelled with DiOC 6 , a fluorescent membrane dye. Blood was mixed 5:1 with ACD (anticoagulant). The anticoagulant was mixed with the membrane dye to make a final concentration of 1µM, prior to the addition of whole blood. This mixture was then incubated for 10 min at room temperature, then centrifuged to obtain PRP. Centrifugation was done at 1500 g for 8 min. The resultant PRP was then treated with 100 µM aspirin and 0.1 U/mL apyrase. This was followed by a centrifugation wash at 350 g for 20 min. The platelet pellet was then re-suspended with supplemented HBS, creating a final cell density of 2 × 10 8 cells/mL. Ketamine Treatment To assess if the platelet responses might be affected by ketamine treatment, experiments were performed in which both platelets and the TE constructs were pre-treated with ketamine before the perfusion of platelets on TE constructs. 
TEMLs were incubated with 1 mM ketamine (Narketan) or HBS for 1 h at 37 °C, after which they were perfused with washed DiOC6-labelled platelets incubated with either 300 µM ketamine or an equivalent volume of the vehicle (HBS) under the same conditions stated above. Platelet aggregation on the perfused TE constructs was evaluated by fluorescence microscopy (Leica MSV269) at an excitation wavelength of 485 nm and emission of 501 nm. As a static comparison, TE constructs were treated with 1 mM ketamine and placed in a cuvette with DiOC6-labelled platelets treated with ketamine or HBS for 15 min at 37 °C, whilst untreated TEML constructs placed atop untreated platelet samples served as the control.

Platelet Aggregometry

Platelet aggregometry was performed using a modification of the previously published technique [28]. Following platelet incubation with TE constructs, 200 µL of the platelet suspension was transferred into a 96-well plate and then placed into a plate reader prewarmed to 37 °C (BioTek Synergy 2 microplate, Winooski, VT, USA). Baseline absorbance readings were taken once at a wavelength of 600 nm, obtaining an absolute absorbance reading post TE construct incubation. In the present assay, the plate reader was set up to use a fast-shaking mode between absorbance readings to aid sample mixing.

Atorvastatin Treatment

Following FeCl3 lesioning, the TE constructs were incubated with 60 µg/mL atorvastatin calcium trihydrate (Sanofi) for 3 and 5 h at 37 °C. This was followed by the perfusion of CFSE-labelled EPCs (without atorvastatin in the EPC perfusate as control) for 45 min in the parallel-plate chamber. Images were taken using a Leica inverted microscope (Leica MSV269) at an excitation wavelength of 485 nm and emission of 501 nm. Cell attachment was quantified with ImageJ.

Statistics and Data Analysis

Values stated are mean ± SEM of the number of observations (n) indicated. Analysis of statistical significance was performed using a two-tailed Student's t-test as well as two-way analysis of variance (ANOVA), confirmed using the Brown-Forsythe test. p < 0.05 was considered statistically significant.

Results

The adhesion and aggregation of platelets upon exposure to the TE constructs under dynamic flow conditions were assessed by monitoring fluorescence from DiOC6-labelled human platelets on the surface of the TE constructs (solid phase activation), as well as by monitoring the activation of platelets remaining in solution by platelet aggregometry (liquid phase activation). Three types of TE constructs were evaluated, TEIL, TEML, and TEBV, with the acellular collagen hydrogel as a negative control, as we have previously demonstrated this does not elicit platelet activation [18]. The homing effect of the drug atorvastatin on EPC attachment to these TE constructs under dynamic flow conditions was also assessed.

Cells in TE Constructs Attained Typical Morphology and Organisation

Consistent with our previous findings, it was possible to generate TEIL, TEML, and TEBV constructs with HUVECs and HCASMCs showing typical normal cellular morphology when grown atop (HUVECs) or within (HCASMCs) the collagen hydrogel scaffold using our previously published layer-by-layer fabrication technique [18]. Figure 3 demonstrates the effect of aligned nanofibers on HUVEC alignment, with cells showing more organised growth/orientation compared to culture flasks. Meanwhile, smooth muscle cells maintained a spindle-shaped morphology while embedded in the collagen gel (data not shown [18]).
TEML Supports Shear-Dependent Platelet Aggregation under Physiological Flow Conditions We first assessed whether the TEML is able to support platelet aggregation under two different shear stresses, indicative of those found in the arterial (22.2 dynes/cm²) and venous (2.2 dynes/cm²; [29]) circulation. The TEML construct is a model for an endothelium-denuded blood vessel and can therefore be used to assess the pro-aggregatory properties of the medial layer. Figure 4A,B show that acellular collagen gels did not trigger platelet activation under either low or high shear stress, consistent with our previous findings under static conditions [18]. When the TEML constructs were exposed to platelets perfused at venous shear stresses, sporadic platelet adhesion was observed (Figure 4C). The strong platelet aggregation observed under arterial shear stresses demonstrates that shear stress and fluid flow influence platelet adhesion (Figure 4D). These observations are consistent with a number of previous studies that have demonstrated a role for shear-dependent platelet aggregation in driving thrombus formation [30,31]. Platelets adhered and aggregated significantly when exposed to the medial layer of the constructs (Figure 4D). Since no aggregation was observed on the acellular hydrogels (Figure 4A,B), platelet aggregation was most likely triggered by neo-collagen produced by the embedded HCASMCs. This corresponds well with previous studies demonstrating that the pro-aggregatory properties of the TEML are mainly attributable to the native collagen secreted by the SMCs [32], and is consistent with our previous work under stirred conditions [18]. Additional synthesis and secretion of thrombogenic molecules may also contribute to platelet activation and aggregation. TEBVs Prevent Platelet Aggregation under Physiological Flow Conditions The endothelial lining of the native human artery produces platelet inhibitors such as nitric oxide (NO) and prostaglandins to prevent thrombosis. Upon vascular damage, the loss of the antithrombotic endothelial lining and the exposure of the prothrombotic medial layer trigger thrombus formation. To test whether the TEBV is able to replicate this endothelium-dependent modulation of the haemostatic properties of the construct under physiological flow conditions, TEBVs differing in the integrity of their endothelial cell layer were exposed to human platelet suspensions under arterial shear stress. In these experiments, we used (i) a TEBV with a fully confluent endothelial layer, (ii) a TEBV with a partially confluent endothelial layer, and (iii) a TEBV in which an intimal injury was triggered with FeCl3, a common injury model used in murine thrombosis studies [33]. Real-time fluorescence imaging revealed that TEBVs with an intact, confluent endothelial layer did not show platelet aggregation upon their surfaces over a 15 min period of perfusion (Figure 5I (A)). In contrast, the TEBV with a partially confluent endothelial layer exhibited limited platelet aggregation. Platelets adhered only in areas that were not covered with endothelial cells, allowing their direct contact with the medial layer. However, not all the areas lacking endothelial coverage showed platelet aggregation, suggesting that the endothelial cells on the TEIL may be capable of secreting sufficient anti-thrombotic molecules to prevent platelet aggregation (Figure 5I (B,C)).
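The arterial (22.2 dynes/cm²) and venous (2.2 dynes/cm²) conditions used above are set, in a parallel-plate chamber, by the perfusate viscosity, the channel geometry, and the volumetric flow rate through the standard Newtonian relation τ = 6μQ/(wh²). The sketch below simply inverts that relation to estimate the flow rate needed for a target wall shear stress; the viscosity and the channel width and height used here are illustrative placeholders, not the actual gasket dimensions or perfusate properties of this study.

```python
def flow_rate_for_shear(tau_dyn_cm2, mu_poise=0.035, width_cm=1.0, height_cm=0.025):
    """Volumetric flow rate Q (mL/s) that gives wall shear stress tau in a
    parallel-plate chamber, from tau = 6*mu*Q/(w*h**2) for fully developed,
    laminar, Newtonian flow. All geometry and viscosity values are assumptions."""
    return tau_dyn_cm2 * width_cm * height_cm ** 2 / (6.0 * mu_poise)

for tau in (2.2, 22.2):  # venous vs arterial wall shear stress, dyn/cm^2
    q_ml_s = flow_rate_for_shear(tau)
    print(f"{tau:5.1f} dyn/cm^2 -> {q_ml_s * 60:.2f} mL/min")
```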
Dramatic contrast observations of platelets' aggregation behaviours on FeCl 3 -lesioned TEBV samples were revealed as shown in Figure 5I (D), in which massive platelet aggregates are attached to the constructs. The insert picture shows the dense morphology of the aggregated platelets, and the low magnification image shows the overall localisation of the aggregates. The strip-like aggregate location was larger than the lesion area (1 mm 2 ), implying that cytokines produced by lesioned endothelial layer and exposed subendothelial proteins (collagens in our model) could be circulated away from the lesion site, triggering aggregation away from the lesion site. FeCl 3 -mediated endothelial cell injury allowed exposure of platelets to the pro-aggregatory medial layer in the construct, providing a reliable FeCl 3 -triggered arterial injury model. These qualitative results correspond well with the comparison of the quantitative aggregation state of the platelets before and after perfusion, in which FeCl 3 injury could be seen to trigger aggregation of platelets within the platelet suspension ( Figure 5II). In regards to the integrity of the endothelial layer, it can be observed that measurements of the absorbance, before and after perfusion, do not present a significant difference in the TEBV with a full endothelial layer. This indicates that platelet aggregation is prevented by the presence of an intact endothelial layer. On the contrary, significant differences in platelet aggregation were observed in FeCl 3 -treated TEBVs. This is consistent with platelets producing and releasing autocrine activators to recruit platelets to the growing thrombi, thus triggering platelet aggregation within the platelet suspension. Ketamine Inhibited Platelet Aggregation at Arterial Shear Stress The data above demonstrate that the TEML, and the FeCl 3 -treated TEBV, trigger platelet aggregation through the exposure of the pro-aggregatory medial layer in the endothelial denuded section of these constructs. Thus, our TEBV perfusion system can be used in conjunction with the FeCl 3 injury model, which is commonly used in murine thrombosis models [33]. As our human TEBV model does not require the addition of anaesthetics to our blood samples, experiments were performed to assess if the addition of ketamine, the most commonly used anaesthetic in murine thrombosis models, could artificially alter the platelet aggregatory responses seen. As the TEML provides a more pronounced aggregatory response due to the absence of an endothelial lining, we used this system to investigate whether ketamine could impact thrombus formation by human platelets under physiological flow conditions. DiOC 6 -labelled platelets were treated with 300 µM ketamine prior to perfusion over the TEML surface under arterial shear stress (22.2 dynes/cm 2 ). Significant platelet aggregation was found in the TEML perfused at high shear stress without ketamine treatment ( Figure 6). The platelets treated with ketamine were found to lose their ability to adhere to the TEML surface. These results indicate that ketamine-treated platelets ( Figure 6I (B)) were less reactive than untreated platelets ( Figure 6I (A)), which showed significant aggregation on the surface. Ketamine not only significantly inhibited platelet aggregation on the construct surface but also inhibited their activation within the surrounding platelet suspension, as demonstrated by the aggregation state of platelets before and after exposure to the TEML ( Figure 6II). 
This is likely due to the known effect of ketamine inhibiting platelet Ca 2+ signalling [34,35]. This would prevent dense granule secretion, which would in turn prevent the activation of these cells in the perfused platelet suspension. To further confirm the inhibitory effect of ketamine on platelets, the platelets treated or untreated with ketamine were exposed to corresponding TEMLs in cuvette holders under continuous magnetic stirring. Representative images are shown in Figure 7. Fluorescent imaging of the TEML surface exposed to DiOC 6 -labelled platelets showed that samples treated with ketamine had fewer platelet aggregates appearing on the TEML surface ( Figure 7D-F). The untreated samples ( Figure 7A-C) displayed greater adhesion, as well as formation of multiple platelet aggregates. Atorvastatin Increases EPC Attachment Atorvastatin has been reported to increase circulating numbers of EPCs [23]. To evaluate whether atorvastatin also has an effect on the recruitment and attachment of perfused EPCs, our three TE construct variants were lesioned with FeCl 3 , then incubated with 60 µg/mL atorvastatin for 5 h. Constructs were then perfused with EPCs for 45 min at 22.2 dynes/cm 2 . EPCs were perfused at a density of 1 × 10 4 cells/mL. After perfusion, constructs were imaged and attached cells quantified (Figure 8). Across all the models shown here, it was evident that atorvastatin incubation increased the number of cells that attached to the lesioned surfaces of the constructs ( Figure 8II). We have previously demonstrated the pro aggregatory properties of the TEML, and Figure 8 suggests that atorvastatin increases these pro-aggregatory properties. The data presented here suggest that without either the endothelial (TEML) or medial (TEIL) layers, the response is stronger than when both layers are present, suggesting a synergistic effect between the medial and intimal layers in terms of modulating cell recruitment. The almost steady state of attachment of EPCs on the TEML suggests that the FeCl 3 lesion is not the driving factor for cell attachment but rather time-dependent signalling/cytokine production by the representative cells and the presence of atorvastatin. Discussion Being able to rapidly and effectively screen novel and pre-existing drug therapies for the treatment of cardiovascular disease in a human model system would provide a significant advance in our abilities to treat patients at risk of a thrombotic event. In this paper, we demonstrate that our human TEBVs are able to effectively trigger both the activation of thrombus formation and EPC recruitment to vascular injury under physiological flow conditions. Through the use of this humanised experimental system, we should be able to better determine effective drug concentrations and combinations prior to clinical trials. Additionally, it would reduce our need to perform costly, ethically challenging preclinical trials on animals, which require the use of anaesthetics that may significantly impact the results of these trials. Thus, tissue engineered human blood vessel models show promise in improving the translational potential of preclinical studies of drug delivery, drug action, and drug discovery in pharmaceutical research. Through developing PDMS sample holders and perfusion gaskets, we were able to successfully modify a commercially available parallel flow chamber to incorporate our previously described TEBV. 
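Attached cells in these experiments were quantified with ImageJ. For readers who prefer a scripted pipeline, the following is a rough Python analogue that counts labelled cells in a fluorescence field by Otsu thresholding and connected-component labelling; the thresholding approach and the minimum object size are illustrative assumptions, not the settings used in this study.

```python
from skimage import io, filters, measure

def count_fluorescent_cells(image_path, min_area_px=50):
    """Count bright objects (e.g. CFSE-labelled EPCs) in a grayscale field."""
    img = io.imread(image_path, as_gray=True)
    mask = img > filters.threshold_otsu(img)       # global Otsu threshold
    labels = measure.label(mask)                   # connected components
    return sum(1 for region in measure.regionprops(labels)
               if region.area >= min_area_px)      # drop debris below the size cut-off
```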
This allowed us to examine whether these constructs are able to support thrombus activation and EPC recruitment under physiological flow conditions. In this study, we confirmed that, similar to static conditions, our acellular type I rat collagen hydrogel is unable to support significant human platelet activation under physiological flow conditions. However, our TEML constructs have been shown to produce type I and III neo-collagen that is able to trigger significant platelet activation [18]. Here we demonstrate that these are able to support significant platelet activation under arterial shear stresses but not under those more typically found in large veins. This is consistent with previous work [36,37] demonstrating that thrombus formation is regulated by shear-dependent platelet aggregation, thus confirming that our TEBV can replicate the normal haemostatic properties of native blood vessels. In contrast, the presence of an intact intimal layer completely blocks the activation of human platelets perfused over the surface of the TEBV. However, when the endothelial layer was impaired, either by incomplete coverage of endothelial cells (via a shortened culture period) or via FeCl 3 -induced injury, the TEBV constructs were able to trigger an effective haemostatic reaction consistent with that seen in the native artery ( Figure 5). FeCl 3 -injured TEML also displayed platelet aggregation under flow, an observation that was absent on uninjured constructs. Location of FeCl 3 application had no impact on this observation, demonstrating that the observed effect is due to endothelial damage caused by FeCl 3 and not via the presence of FeCl 3 itself [33,38]. This can be concluded as no enhancement of platelet activation and aggregation observed upon FeCl 3 treatment of the endothelial-free TEML constructs (data not shown). The data presented here support the traditional explanation that FeCl 3 -induced injury results in thrombus formation in murine arteries through endothelial denudation, facilitating platelet activation upon the exposure of sub-endothelial collagen. These findings demonstrate that this test model can be used to study platelet inhibition and activation in a convenient, operator-friendly, and dynamic manner. The image analysis of the adhered platelets on the TE constructs, and assessment of platelet aggregation in liquid phase using aggregometry, allows the extraction of both qualitative and quantitative datasets to assess ex vivo human thrombus formation. This study demonstrated that ketamine exerted a strong negative effect on platelet aggregation and activation. This corresponds well with previous in vitro and in vivo studies that have investigated the underlying mechanisms of ketamine's inhibition of platelet function [6]. In our experiments, we observed that ketamine almost completely inhibits thrombus formation upon the TEML construct. It is possible that this effect is caused by the inhibition of dense granule secretion by ketamine, as autocrine signalling molecules released from here are known to be crucial to the recruitment of circulating platelets onto the surface of forming thrombi [7,39]. This inhibitory effect could artefactually alter the size and structure of thrombi seen in current animal thrombosis models, potentially leading to an overestimation of the effectiveness of putative anti-platelet therapies. 
This is consistent with previous findings that show that the use of different anaesthetics differentially impacts the efficacy of integrin αIIbβ3 blockers in murine thrombosis models [39], thus providing initial evidence that our model system is a potential alternative to current in vivo studies. A more detailed side-by-side comparison examining the impact of different anaesthetics on the thrombotic response in current in vivo models, and in the TEBV model presented here, will be required to fully validate the model system. These findings highlight the value of using tissue-engineered human blood vessels for drug testing. By using human cells and eliminating the need for anaesthesia, we should be able to accurately model the processes of haemostasis and vascular repair to improve the translational potential of any findings. Additional advantages of our model include a reduction in the use of animal thrombosis models, elimination of the need for costly intravital microscopy equipment, and a lower cost of TE construct production compared to housing mouse colonies. We also used the TEBV to demonstrate that atorvastatin enhances recruitment and attachment of perfused EPCs. Although previous studies have demonstrated that atorvastatin increases circulating numbers of EPCs [19], there has been limited study on its ability to modulate EPC recruitment to the damaged vascular wall, which may partly underlie the beneficial effects of this drug in preventing acute cardiovascular events. Possible mechanisms by which atorvastatin might enhance EPC recruitment to the damaged TEBV is via increased bioavailability of NO [40] or activation of matrix metalloproteinase-2 and 9 (MMP2 and MMP9) [41]. It is also possible that the SDF-1 CXCR4 axis is also involved in recruiting EPCs to sites of vascular injury. This theory is supported by the findings by Luo et al., 2018 [42]. They found that NO production was induced by SDF-1, which triggers multiple signalling pathways, resulting in chemokine-induced changes of the EPC cytoskeleton leading to enhanced cell migration [42]. The value of our models is that various doses/concentrations of different drugs, as well as combinations of drugs, can be tested faster than conventional models and also allows for real-time monitoring without measures such as anaesthesia being needed. The ease of assembly also makes it possible to combine multiple cell types and can be adapted for any species. In comparison to other in vitro models [43,44], our models also allow for real-time visualisation of cell attachment due to the nature of the perfusion chamber used, containing both a top and bottom window, as well as the open-faced nature of the models themselves. Conclusions The layer-by-layer assembly of human blood vessel models provides a convenient and reliable research tool to investigate the interaction of blood components, such as platelets and circulated progenitor cells, with a blood vessel. This provides a simple method to assess the impact of a variety of drug interactions on haemostasis and vascular repair. The parallel-plate flow chamber, plus the adjustable dimensions of the PDMS gasket, enabled incorporation of 3D tissues into the chamber, permitting their exposure to perfusion with blood components at different physiological shear stresses. The labelled cells allow the monitoring of cellular activation and adhesion in real-time. The intact, confluent intimal layer can inhibit platelet activation, whilst the partially formed intima did not. 
The medial layer with newly formed collagen triggered platelet aggregation, whilst collagen gel made from rat skin collagen type I did not. The presence of a similar concentration of ketamine used in current in vivo thrombosis models inhibited the ability of human platelets to adhere to the TEML surface. The tissue-engineered vessels could also be used to demonstrate that atorvastatin is able to enhance the homing capabilities of EPCs by improving their ability to adhere to the damaged tissue-engineered blood vessel under flow conditions. Here we show that the combination of tissue-engineered arterial constructs and a parallel-plate flow chamber can be used to effectively simulate the haemostatic and vascular repair processes ex vivo. Therefore, these results indicate that this model system is able to provide a potential alternative to in vivo testing models. This will also permit us to test the predicted mechanisms of action of a selected anti-thrombotic drug without the need for animal models and may be modified further to simulate other clinical conditions.
Microwave ablation with continued EGFR tyrosine kinase inhibitor therapy prolongs disease control in non‐small‐cell lung cancers with acquired resistance to EGFR tyrosine kinase inhibitors Background Although patients with EGFR‐mutant non‐small‐cell lung cancer (NSCLC) benefit from treatment with EGFR‐tyrosine kinase inhibitors (TKIs), outcomes are limited by the eventual development of acquired resistance. We conducted a retrospective study to evaluate the efficacy and feasibility of EGFR‐TKI therapy beyond focal progression, associated with microwave ablation. Methods Patients with metastatic EGFR‐mutant NSCLC treated with EGFR‐TKIs at our institutions from May 2012 to December 2017 were identified. Patients with single lesion progression, treated with MWA, and continually administered EGFR‐TKI therapy until further progression, were included in the study. Initial response to target therapy, median progression‐free survival (PFS1), and first progression site were recorded. The median time to progression after local therapy (PFS2) was also assessed. Overall survival was calculated from the initiation of EGFR‐TKIs to the date of final follow‐up or death. Results Fifteen out of 205 patients (10%) satisfied the inclusion criteria. Local therapy was well tolerated, and complete ablation was performed in 11 (73.3%) patients. The median PFS1 was 9.5 months (range 6–41), and the median PFS2 was 8 months (range 3–24). The corresponding 6 and 12 month PFS rates were 73.3% and 26.7%, respectively. Median overall survival was 23 months (range 15–64). Conclusion The longer disease control observed in our patients suggests that continuation of EGFR‐TKI beyond focal progression associated to microwave ablation is an efficacious therapeutic strategy. Introduction EGFR-tyrosine kinase inhibitors (TKIs), such as gefitinib, erlotinib, or afatinib, are the standard first-line therapy for patients with advanced and metastatic non-small cell lung cancer (NSCLC) harboring sensitive EGFR mutations. 1 Compared with standard chemotherapy, EGFR-TKI treatment has shown a significant improvement in progression-free survival (PFS), objective response rates (ORR), and quality of life in multiple prospective phase III studies. [2][3][4][5] However, patients who initially respond to treatment often ultimately develop acquired resistance to EGFR-TKIs, with median PFS of 9-13 months. [2][3][4][5] Despite clinical evidence of progression on EGFR-TKI therapy, growth can often be indolent and asymptomatic, and may not necessitate an immediate switch in therapy. Continued EGFR inhibition appears to provide continued clinical benefit, particularly when the disease is controllable with local therapy options, such as radiotherapy, surgery, or both. [6][7][8][9][10] Microwave ablation (MWA), as a local therapy strategy, has been used as an alternative treatment for patients with advanced and medically inoperable stage I NSCLC. 11,12 This study describes our experience of using local MWA while continuing the same targeted therapy to treat EGFRmutant (MT) metastatic or recurrent NSCLC patients with disease progression confined to a single site. Patients and eligibility We conducted a retrospective analysis of NSCLC patients in our institutions between May 2012 and December 2017 who developed focal disease progression in a single lesion during therapy with an EGFR-TKI and were then continuously treated with the EGFR-TKI combined with locoregional MWA to the site of progression until further progression was observed. 
The inclusion criteria were: (i) histologically confirmed EGFR-MT recurrent or metastatic stage IIIB/IV NSCLC; (ii) a tumor harboring an EGFR mutation (examined either through direct sequencing or allele-specific PCR assays) known to be associated with objective clinical benefit (partial response [PR] or stable disease [SD] longer than 6 months) from treatment with an EGFR-TKI (such as gefitinib, erlotinib, or afatinib); (iii) focal disease progression in a single lesion while on continuous treatment with an EGFR-TKI; and (iv) willingness to provide written informed consent. The institutional review board approved this retrospective study. Treatment methods All enrolled patients were orally administered 150 mg of erlotinib, 250 mg of gefitinib, or 40 mg of afatinib daily. Patients underwent routine chest and abdominal computed tomography (CT) scans or positron emission tomography scans every one to two months to assess the local response according to the Response Evaluation Criteria in Solid Tumors (RECIST). 13 Additional procedures, including CT, magnetic resonance imaging, and bone scintigraphy, were applied to evaluate metastatic sites. Patients continued oral EGFR-TKI therapy during the MWA intervals until disease progression, death, or the appearance of intolerable toxicity. If their oncologist and interventional radiologist deemed it safe, patients underwent a biopsy at the site of their progressive disease before MWA to elucidate the mechanisms of acquired drug resistance. Microwave ablation For MWA, we used a commercially available system (ECO-2450B MWA, ECO Microwave Institute, Nanjing, China) and a 14-gauge cooled-shaft antenna (FORSEA, Vision Microwave Electronic Institute, Nanjing, China). The output power was generally set at 50-70 W. If the tumor could not be covered in one ablation session, owing to its size, location, and geometry, multiple sequential ablations were performed to achieve complete necrosis. Following treatment, CT scanning was again performed to evaluate the immediate necrotic conditions after ablation and to examine whether there were any complications, such as bleeding or pneumothorax. Response evaluation Primary technical success was defined as a complete lack of enhancement in the ablation zone on initial follow-up contrast CT. A thin (< 5 mm), symmetric rim of peripheral enhancement at the ablation zone was considered to indicate benign peritumoral enhancement. Irregular nodular enhancement (> 15 HU) at the ablation site was considered to indicate recurrent or residual disease. The response to EGFR-TKIs was assessed according to RECIST version 1.1. Statistical analysis Progression-free survival was determined according to the Kaplan-Meier method. First PFS (PFS1) was measured from the time of initiation of targeted therapy to the first progression of disease. Second PFS (PFS2) was measured from the date of focal progression until further progression of disease (defined by RECIST) or death from any cause. Overall survival (OS) was calculated from the date of initiation of the EGFR-TKI to the date of death. OS was censored at the date of the last visit for patients whose deaths could not be confirmed. SPSS version 16.0 (SPSS Inc., Chicago, IL, USA) was used for statistical analysis. Patient characteristics The median age of the included patients was 53 years (range 29-81); eight (53%) patients were female; 11 were never-smokers; and 33% (5/15) had received at least one chemotherapy regimen before commencing EGFR-TKI treatment (Table 1).
All patients had an Eastern Cooperative Oncology Group (ECOG) performance status (PS) of 0-2 before MWA was performed. Fourteen patients had adenocarcinomas, and one patient had squamous cell carcinoma. All patients harbored EGFR-sensitive mutations (9 exon 19 deletions, 5 exon 21 L858R mutations, and 1 exon 18 G719X). Seven patients were treated with erlotinib 150 mg/day (two patients in the CTONG 0901 clinical trial), six with gefitinib 250 mg/day, and two with afatinib 40 mg/day (both in the LUX-LUNG 6 clinical trial). Patient characteristics and treatment history are listed in Table 1. Ten patients underwent repeat tumor tissue biopsies to elucidate the mechanisms of acquired drug resistance to EGFR-TKI before MWA. Six (60%) patients acquired the exon 20 EGFR mutation T790M, two (20%) developed MET amplification, and one developed small cell histologic transformation. The other biopsy did not reveal any new mutations. Response to therapy, survival, and toxicity At the first response assessment, two patients (13%) had achieved a complete response (CR) to treatment, eight (53%) a PR, and five (34%) had SD. The cutoff date for follow-up was December 2017, and the median follow-up duration was 17 months from the initial TKI therapy to physician assessment of progressive disease (PD) (range 9-64 months). At the time of the data cutoff, seven patients (46.7%) exhibited physician assessed PD and four (16.7%) had died. Ten patients (67%) first experienced progression in the lung, four (28%) in the liver, and one (7%) in the left adrenal gland during PFS1. The median time from PFS1 to the start of MWA was 2.8 weeks. The median PFS 1, measured from the start of frontline TKI therapy until focal disease progression, was 9.5 months (range 6-41) (Fig 1). All 15 patients were treated with MWA to the single site of focal PD. Fifteen patients underwent 15 MWA sessions corresponding to 20 antennas for 15 progressed sites. The mean size of the metastatic sites was 3.3 cm (range 1.5-6.8 cm). Complete ablation was observed in 11 (73.3%) patients, and incomplete ablation in 4 (26.7%). Pain was the most common complication, occurring in 33.3% (5/15). Postoperative pneumothorax occurred in two cases (7.1%) but did not require chest tube drainage. Most MWA procedures were well tolerated. No patients died during the procedure or within 30 days after MWA. The median PFS2, measured from the date of diagnosis of focal progression until further progression (defined by RECIST) or death from any cause, was 8 months (range 3-24). The 6 and 12 month PFS2 rates were 73.3% and 26.7%, respectively (Fig 2). The median OS was 23 months (range 15-64). The most common adverse event caused by TKIs was grade 1 or 2 rash. No grade III or IV toxicities were reported. The toxicities observed during the first EGFR-TKI treatment continued but did not worsen during the study period. Discussion EGFR-TKIs currently represent the cornerstone of therapy for NSCLC. Premature discontinuation of EGFR-TKI therapy may result in rapid progression or disease flare, with reintroduction of TKI therapy leading to decreased tumor growth. 14,15 Many patients diagnosed with EGFR-MT cancers can safely continue the original therapy with their original EGFR inhibitor beyond the first signs of radiographic progression. 
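The survival endpoints reported above were estimated with the Kaplan-Meier method in SPSS 16.0. Purely for illustration, the sketch below reproduces the same kind of estimate with the open-source lifelines package; the durations and event indicators are hypothetical placeholders, not the study data.

```python
from lifelines import KaplanMeierFitter

# Hypothetical PFS2 durations (months) and event flags (1 = progression/death, 0 = censored)
pfs2_months = [3, 5, 6, 7, 8, 8, 9, 10, 12, 14, 16, 18, 20, 22, 24]
progressed  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 0, 0, 1, 0]

kmf = KaplanMeierFitter()
kmf.fit(pfs2_months, event_observed=progressed, label="PFS2")
print(kmf.median_survival_time_)                  # median PFS2 estimate
print(kmf.survival_function_at_times([6, 12]))    # 6- and 12-month PFS2 rates
```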
A retrospective analysis of 41 of patients treated with first-line erlotinib showed that 21 (50%) were able to delay a change in systemic therapy for > 3 months after RECIST progression and 21% could delay a treatment switch for more than 12 months. 16 The prospective clinical trial ASPIRATION showed that patients could safely receive treatment with erlotinib for a median of 3.1 months after initial progression and again, patients who responded well to first-line erlotinib were Figure 1 Median progression-free survival 1 (PFS1), measured from the initiation of frontline tyrosine kinase inhibitor therapy until focal disease progression, was 9.5 months (range 6-41). Figure 2 Median progression-free survival 2 (PFS2), measured from the date of diagnosis of focal progression until further progression or death from any cause, was 8 months (range 3-24). Thoracic Cancer 9 (2018) 1012-1017 most likely to benefit from post-progression therapy. 17 This study confirmed the prospective feasibility of continuing erlotinib therapy in selected patients following RECIST PD without undue toxic effects. In some cases, when patients with EGFR-MT NSCLC progress on EGFR-TKIs, but the progression only occurs in a limited number of sites (oligoprogressive disease), it may be reasonable to consider locoregional therapy to the sites of resistant disease and continue the original EGFR-TKI. This approach is supported by several retrospective analyses. In a cohort of 27 patients with central nervous system (CNS) and/or extra-CNS oligoprogressive disease, Weickhardt et al. reported that continuing EGFR-TKI therapy after disease progression in association with local ablative therapy (surgery and radiotherapy) postponed the initiation of second-line chemotherapy by 3 months in 19 patients and 12 months in 8. 7 Moreover, Yu et al. reported that patients with EGFR-MT lung cancers with acquired resistance to EGFR-TKI therapy can be treated with either radiation, surgery, or radiofrequency ablation to the progressive sites and continue on the original targeted therapy for a median of 10 months. 8 Radiation therapy of isolated CNS progression, 6 a single RECIST progressive lesion, 18 or PD of skeletal regions 10 in patients with EGFR-MT NSCLC treated with EGFR-TKIs and continued systemic administration of the TKI have been reported, and 80 days to 10.9 months additional disease control achieved. Microwave ablation has been reported as a local therapy for patients with continued EGFR-TKI in advanced NSCLC that developed extra-CNS oligoprogressive disease during EGFR-TKI treatment. In a retrospective study, Ni et al. compared the outcomes in two groups of patients who had progressed on EGFR-TKI treatment. Thirty-nine patients received MWA as local therapy for the progression sites and continued on the same TKIs (MWA group), while 26 patients switched to cytotoxic chemotherapy after progression. 19 The results show that continuing EGFR-TKI therapy with MWA beyond disease progression significantly prolonged OS (MWA group median OS 27.7, cytotoxic chemotherapy group median OS 20.0 months). Similar to Ni et al., we found that maintaining the EGFR pathway blockade, despite focal disease progression, is a fundamental strategy; we observed a PFS2 of 8 months with 6 and 12 month PFS rates of 73.3% and 26.7%, respectively. 
It is plausible to consider that a TKI-resistant clone may develop in the progression sites of disease, which are only a small fraction of the total alleles, while the remainder of the cancer burden remains sensitive to EGFR-TKI therapy. 20 Patients with EGFR-MT disease who progress often experience a disease flare when the EGFR-TKI is discontinued, 15 and re-challenge of these patients with the same EGFR-TKI after only a short time off therapy can lead to a second response. 21,22 This theory could partly explain the effectiveness of local ablative therapy while continuing the same targeted therapy after acquired resistance. The two most frequent mechanisms of resistance to EGFR-TKIs are the T790M mutation in exon 20 of the EGFR gene and MET amplification. 23 Re-biopsy after development of acquired resistance and genomic analysis of progression sites should be included as routine because they may provide useful information for tailoring subsequent treatment strategies. In conclusion, MWA with continuous administration of EGFR-TKIs after the determination of PD in a single progression site might represent an effective treatment option. Further prospective clinical trials are required in patients who develop local progression to confirm whether the treatment used in the present study is beneficial. Disclosure No authors report any conflict of interest.
Studies on heavy metal removal efficiency and antibacterial activity of chitosan prepared from shrimp shell waste Chitosan, a natural biopolymer composed of a linear polysaccharide of α(1–4)-linked 2-amino-2-deoxy-β-D-glucopyranose, was synthesized by deacetylation of chitin, which is one of the major structural elements that form the exoskeleton of crustaceans such as shrimp. The present study was undertaken to prepare chitosan from shrimp shell waste. The physicochemical properties, such as the degree of deacetylation (74.82 %), ash content (2.28 %), and yield (17 %) of the prepared chitosan, indicated that shrimp shell waste is a good source of chitosan. Functional properties such as the water-binding capacity (1,136 %) and fat-binding capacity (772 %) of the prepared chitosan are in close agreement with those of commercially available chitosan. The Fourier transform infrared (FTIR) spectrum shows characteristic peaks of amide at 1,629.85 cm−1 and hydroxyl at 3,450.65 cm−1. The X-ray diffraction pattern was employed to characterize the crystallinity of the prepared chitosan and indicated two characteristic peaks at 10° and 20° (2θ). Scanning electron microscopy analysis was performed to determine the surface morphology. The heavy metal removal efficiency of the prepared chitosan was determined using an atomic absorption spectrophotometer. Chitosan was found to be effective in removing the metal ions Cu(II), Zn(II), Fe(II) and Cr(IV) from industrial effluent. The antibacterial activity of the prepared chitosan was also determined against Xanthomonas sp. isolated from leaves affected with citrus canker. Introduction Chitosan is one of the most important derivatives of chitin, which is the second most abundant natural biopolymer found on earth after cellulose (No and Meyers 1989) and is a major component of the shells of crustaceans such as crabs and shrimps. Chitosan can be obtained by N-deacetylation of chitin, and it is a co-polymer of glucosamine and N-acetylglucosamine units linked by 1-4 glucosidic bonds (Fig. 1). Chitosan is a fiber like cellulose, but unlike plant fibers it possesses some unique properties, including the ability to form films, optical and structural characteristics, and much more. Chitosan has the ability to chemically bind negatively charged fats, lipids, and bile acids, owing to the presence of a positive ionic charge (Sandford 1992). In acidic conditions (pH < 6), chitosan becomes water soluble, which enables the formation of biocompatible and very often biodegradable polymers with optimized properties in homogeneous solutions. Chitosan, being a non-toxic, biodegradable, and biocompatible polysaccharide polymer, has received enormous worldwide attention as one of the promising renewable polymeric materials for its extensive applications in industrial and biomedical areas such as paper production, textile finishes, photographic products, cements, heavy metal chelation, wastewater treatment, and fiber and film formation (Rathke and Hudson 1994). It can also be used in biomedical industries for enzyme immobilization and purification, in chemical plants for wastewater treatment, and in food industries for food formulations as a binding, gelling, thickening, and stabilizing agent (Knorr 1984). The fact that chitosan can be readily converted into fibers, films, coatings, and beads, as well as powders and solutions, further enhances its applications. The functional properties of chitosan are dependent on its molecular weight and its viscosity (No and Lee 1995).
The presence of both free hydroxyl and amine groups enables chitosan to be modified readily to prepare different chitosan derivatives (Kurita 2001), giving sophisticated functional polymers with exquisite properties quite different from those of synthetic polymers. With its positive charge, chitosan can be used for the coagulation and recovery of proteinaceous materials present in food processing operations (Knorr 1991). Chitosan has largely been employed as a non-toxic flocculant in the treatment of organically polluted wastewater and as a chelating agent for the removal of toxic (heavy and reactive) metals from industrial wastewater. Some metals, such as chromium, exist in aqueous solutions as anions (Rhazi et al. 2002). Chitin can be effectively extracted from prawn shells following deproteinization using 5 % NaOH and demineralization using 1 % HCl. Low molecular mass chitosan samples with a degree of deacetylation (DD) above 64 % and an Mw of the major component below 10⁴ can be obtained by treating the chitin with 50 % NaOH at 100°C for up to 10 h (Mohammed et al. 2012). At pH close to neutral, the amine groups of chitosan bind to metal cations. At lower pH, it is able to bind more anions by electrostatic attraction, as chitosan becomes more protonated (Guibal 2004). Chitosan can readily be used as a biosorbent, as it is a cheaply available cationic biopolymer. Chitosan has antimicrobial, haemostatic, and anti-tumor activities, accelerates wound healing, and can be used in tissue-engineering scaffolds and for drug delivery (Burkatovskaya et al. 2006). The antimicrobial and antifungal activity of chitosan is largely because of its polycationic nature (Ziani et al. 2009 and Choi et al. 2001). It displays a broad spectrum of antibacterial activity against both gram-positive and gram-negative bacteria, and also antifungal activity against Aspergillus niger, Alternaria alternata, Rhizopus oryzae, Phomopsis asparagi, and Rhizopus stolonifer (Guerra-Sánchez et al. 2009; Zhong et al. 2007; Ziani et al. 2009). Chitin exhibited a bacteriostatic effect on the gram-negative bacteria Escherichia coli ATCC 25922, Vibrio cholerae, Shigella dysenteriae, and Bacteroides fragilis (Benhabiles et al. 2012). The current research aims to prepare chitosan from shrimp shell waste and to characterize the prepared chitosan both qualitatively and quantitatively (Fig. 1 shows the molecular structure of chitosan (Jayakumar et al. 2011)). Other applications, such as antibacterial activity against Xanthomonas sp. and metal removal efficiency, have also been studied for the prepared chitosan. Materials Shrimp shell waste material was collected from a local market in Vellore. Chemicals such as hydrochloric acid and sodium hydroxide pellets were procured from Hi-Media Laboratory, Mumbai. Distilled water was used throughout the process. Preparation of chitosan The shrimp shells obtained from the local market in Vellore (Fig. 2a) were first suspended in 4 % HCl at room temperature at a ratio of 1:14 (w/v) for 36 h. This demineralized the shells, after which they were washed with water to remove acid and calcium chloride. Deproteinization of the shells was done by treating the demineralized shells with 5 % NaOH at 90°C for 24 h at a solvent-to-solid ratio of 12:1 (v/w). After incubation, the shells were washed to neutrality in running tap water and sun dried. The product obtained was chitin. Chitosan preparation involves the deacetylation of the obtained chitin (Dutta et al. 2004).
Deacetylation of chitin involves the removal of acetyl groups from chitin; this was done by employing a 70 % NaOH solution at a solid-to-solvent ratio of 1:14 (w/v), incubated at room temperature for 72 h. Stirring is mandatory to obtain a homogeneous reaction (Fig. 3a-c). The residue obtained after 72 h was washed with running tap water to neutrality and rinsed with deionized water. It was then filtered, sun dried, and finely ground (Fig. 4a). The resultant whitish flakes obtained after grinding are chitosan (Fig. 4b). Determination of chitosan yield The yield was determined from the dry weight of the shrimp shells before treatment and the dry weight of the prepared chitosan. Determination of ash content The ash content of the prepared chitosan was determined by placing 0.5706 g of chitosan into a previously ignited, cooled, and tared crucible. The samples were heated in a muffle furnace preheated to 600°C for 6 h. The crucibles were allowed to cool in the furnace to below 200°C and then placed into desiccators with a vented top. The crucible and ash were weighed (AOAC 1990). Determination of moisture content The moisture content was determined by employing the gravimetric method (Black 1965). The water mass was determined by drying the sample to constant weight and weighing the sample before and after drying. The water mass (or weight) was the difference between the weights of the wet and oven-dry samples. The moisture content was then calculated using the following relationship: Moisture content (%) = [Wet weight (g) − Dry weight (g)] / Wet weight (g) × 100. Determination of solubility The solubility of the prepared chitosan was determined by adding 200 mg of chitosan to 200 ml of water; the same method was followed with a 1 % acetic acid solution. FTIR analysis The samples of prepared chitosan were characterized in KBr pellets using an infrared spectrophotometer in the range of 400-4,000 cm⁻¹. The DD of the chitosan was determined using a Fourier transform infrared (FTIR) instrument over the frequency range 4,000-400 cm⁻¹. The equation of Struszczyk (1987) was used, in which the absorbances A1629.85 and A3450.65 are the absolute heights of the absorption bands of the amide and hydroxyl groups at 1,629.85 and 3,450.65 cm⁻¹, respectively. Determination of WBC This property of chitosan was determined using the modified method of Wang and Kinsella (1976). For the water-binding capacity (WBC), 0.5 g of the chitosan sample was placed in an initially weighed 50 ml centrifuge tube, 10 ml of water was added, and the contents were mixed on a vortex mixer for 1 min to disperse the sample. The contents were then left at ambient temperature for 30 min with intermittent shaking for 5 s every 10 min, and then centrifuged at 3,200 rpm for 25 min. After centrifugation, the supernatant was decanted, the tube was weighed again, and the WBC was calculated using the following relationship: WBC (%) = water bound (g) / initial sample weight (g) × 100. Determination of FBC The fat-binding capacity (FBC) of the prepared chitosan was calculated using the modified method of Wang and Kinsella (1976). For FBC, 0.5 g of the chitosan sample was placed in an initially weighed 50 ml centrifuge tube, 10 ml of gingelly oil was added, and the contents were mixed on a vortex mixer for 1 min to disperse the sample. The contents were then left at ambient temperature for 30 min with intermittent shaking for 5 s every 10 min, and then centrifuged at 3,200 rpm for 25 min.
After the centrifugation the supernatant was decanted and the tube was weighed again and FBC was calculated using the following relationship. XRD analysis The prepared chitosan was characterized by X-ray diffraction (XRD) technique using an X-ray diffractometer (Bruker Germany, D8 Advance, 2.2 KW Cu Anode, Ceramic X-ray) with CuKa radiation (k = 1.5406 Ǻ ). The measurement was in the scanning range of 5-70 at a scanning speed of 50 s -1 . SEM analysis Chitosan prepared from shrimp shell waste was examined by scanning electron microscopy (SEM) having a magnification range of 5,000 and accelerating voltage 20 kV. Determination of heavy metals removal efficiency in industrial effluents by AAS 0.1 g of chitosan was mixed with 40 ml of industrial effluent (obtained from leather industry located at Ranipet) and its pH was measured, followed by an incubation of 3 h at 22°C. The original effluent was used as control. The industrial effluent mixed with chitosan was kept for incubation for 3 h at 22°C. The contents were then centrifuged at 7,000 rpm (revolutions per minute) for 5 min, and supernatant was filtered using Whatman filter paper no 2. The metal ions namely Cr(IV), Fe(II), Zn(II) and Cu(II) were analyzed for their residual metal concentration using atomic absorption spectrophotometer (AAS) (VARIAN, AA240). The standards of these metals were prepared (Gamage and Shahidi 2007). Determination of inhibitory activity of chitosan against Xanthomonas sp. Leaves showing symptoms of cankerous growth were plucked from lemon tree (Citrus limon) and were surface sterilized with sodium sulfite for consecutively seven times followed by distilled water. Leaves were crushed in mortar and pestle with 1 ml deionized water and the extract obtained was spread on nutrient agar plates supplemented with 5 % sucrose. Method employed for evaluating the antimicrobial activity was growth inhibition in liquid medium. The antimicrobial effect of prepared chitosan was studied in liquid nutrient medium. Nutrient broth supplemented with 5 % sucrose was used. The flasks were marked as blank that contained only the media, control (standard) that had the bacterial culture only and the test which contained the prepared chitosan and Xanthomonas sp. cultures. The freshly grown inoculums were allowed to incubate in the presence of 0.2 g of chitosan to observe the bacterial growth pattern at 310 K (37°C) and 150 rpm. In liquid medium, growth of Xanthomonas sp. was indexed by measuring the optical density (OD). Optical density measurements were carried out at k max = 600 nm after every 1 h interval up to 24 h. Graph was plotted to interpret the results. Yield The prepared chitosan had a percentage yield of 17 % as shown in Table 1, which was at par when compared to the percentage yield obtained by (Brzeski 1982) who reported 14 % yield of chitosan from krill and showed no significant difference to the percentage yield of 18.6 % from prawn waste (Alimuniar and Zainuddin 1992). Ash content The prepared chitosan had an ash content of 2.28 % as shown in Table 1, which when compared to commercial chitosan which had an ash content of 2 % shows that the chitosan prepared had a standard percentage of ash content which can be used for commercial applications, as the ash content in chitosan is an important parameter that affects its solubility, viscosity and also other important characteristics. 
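Several of the quantities defined in this paper's methods are simple ratios (moisture content, WBC, FBC), and the AAS comparison of untreated and chitosan-treated effluent reduces to a percentage removal. The sketch below encodes these relations directly; every numeric value in the example is a placeholder, not a measured value from this study.

```python
def moisture_percent(wet_g, dry_g):
    """Moisture content (%) = (wet weight - dry weight) / wet weight x 100."""
    return (wet_g - dry_g) / wet_g * 100.0

def binding_capacity_percent(bound_g, sample_g):
    """WBC or FBC (%) = mass of water/fat bound (g) / initial sample weight (g) x 100."""
    return bound_g / sample_g * 100.0

def removal_efficiency_percent(c_untreated, c_treated):
    """Metal removal (%) from AAS readings of untreated vs chitosan-treated effluent."""
    return (c_untreated - c_treated) / c_untreated * 100.0

if __name__ == "__main__":
    print(moisture_percent(0.500, 0.494))          # ~1.2 % moisture (placeholder masses)
    print(binding_capacity_percent(5.68, 0.5))     # ~1,136 %, a WBC-like value
    print(removal_efficiency_percent(6.5, 0.9))    # ~86 % removal (hypothetical Zn data)
```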
Moisture content The moisture content of chitosan obtained from shrimp shells was measured to be 1.25 % as shown in Table 1, which is in agreement with (Islam et al. 2011 andHossein et al. 2008) who reported moisture content in the range of 1-1.30 obtained from brine shrimp shells. Although Li et al. (1992) reported that commercial chitosan products may contain \10 % moisture content. Solubility The prepared chitosan from shrimp shells waste was found to be soluble in 1 % acetic acid solution and partially soluble in water as shown in Table 1. Degree of deacetylation In the present study DD of the prepared chitosan was found to be 74.82 % (Table 1). It is an important parameter that influences other properties like solubility, chemical reactivity and biodegradability. DD of the commercially available chitosan has values that range between 75 and 85 %. The value of DD depends on various factors such as the source and procedure of preparation and the values ranges from 30 to 95 % (Martino et al. 2005) and also on the type of analytical methods employed, sample preparation, type of instruments used, and other conditions may also influence the analysis of DD (Khan et al. 2002). WBC and FBC Water-binding capacity and FBC are functional properties that vary with the method of preparation. Chitosan prepared from shrimp shells in the present study has WBC and FBC of 1,136 and 772 % (Table 1) and these are in agreement with studies reported by (Rout 2001). XRD analysis The XRD pattern of chitosan prepared from shrimp shells waste illustrates two characteristic broad diffraction peaks at (2h) = 10°and 20°that are typical fingerprints of semicrystalline chitosan as shown in Fig. 5 (Bangyekan et al. 2006). The XRD pattern of standard chitosan procured from Sigma Aldrich shows similar peaks as shown in Fig. 6. The peaks around 2h = 10°and 2h = 20°are related to crystal I and crystal II in chitosan structure (Ebru et al. 2007;Marguerite 2006) and both these peaks attributes a high degree of crystallinity to the prepared chitosan (Julkapli and Akil 2007) as shown in Fig. 5. FTIR analysis The structure of the prepared chitosan was confirmed by FTIR analysis. The spectra of chitosan shows a broad absorption band in the region of 3,450.65 cm -1 that corresponds to OH stretching vibrations of water and hydroxyls and NH stretching vibrations of free amino groups as shown in Fig. 7. The band observed at 2,924.09 and 2,852.72 corresponds to asymmetric stretching of CH 3 and CH 2 in the prepared chitosan (Guo et al. 2005). The intensive peak around 1,629.85 cm -1 corresponds to bending vibration of NH 2 which is a characteristic feature of chitosan polysaccharide and also indicates the occurrence of deacetylation (Zhang et al. 2011 andRadhakumary et al. 2003). SEM analysis The SEM micrograph illustrates the morphology of the prepared chitosan from shrimp shells. The micrographs showed non-homogenous and non-smooth surface as shown in Fig. 8. Heavy metals removal efficiency in industrial effluents by AAS Industrial effluent obtained from Ranipet region contained traces of heavy metals namely Cu(II), Zn(II), Fe(II) and Cr(IV) that was confirmed by AAS. The results indicated that the prepared chitosan has the ability to adsorb the metal ions that were present in the industrial effluent as shown in Table 2. Out of all the metal ions Cu(II) was best absorbed Zinc toxicity causes problems like nausea, vomiting, diarrhea and also sometimes abdominal cramps (Elinder and Piscator 1979). 
Zinc present in the industrial effluent was successfully adsorbed by the prepared chitosan, with around 86.15 % removal. The AAS results strongly indicate the removal of the metal ions, where sample 1 is the untreated effluent and sample 2 is the treated one. Fe(II), which is responsible for unpleasant organoleptic properties in drinking water (Muzzarelli et al. 1989), was also adsorbed by the prepared chitosan, although to a lower extent of around 65.2 %. A comparison of the chitosan-treated and untreated effluents is shown in Fig. 9 (Cu, copper; Cr, chromium; Fe, iron; Zn, zinc). Therefore, it can be concluded that the prepared chitosan has the potential to be used as an adsorbent in the treatment of industrial wastewater. Inhibitory activity of chitosan on Xanthomonas sp. Optical density (OD) measurements were recorded and turbidity was observed to evaluate the inhibitory activity of the chitosan. Growth of Xanthomonas sp. was inhibited by the chitosan in the liquid medium. Very little turbidity was observed in the test flask, which contained both the chitosan and the organism, whereas the standard (control), which had only the organism, was turbid, and the increase in its OD readings showed that growth had occurred. A graph was plotted to determine the difference in the growth patterns of the test and control, as shown in Fig. 10. No activity was observed in the blank. Conclusion The observations of the present study indicate that chitosan was successfully prepared from shrimp shell waste. The functional and physicochemical properties, together with the XRD and FTIR analyses, showed that the prepared chitosan can be used commercially and can be incorporated into food and drug preparations. The prepared chitosan was found to be effective in removing metals from industrial effluent, and the results clearly indicated that the metal ion concentrations were reduced to nearly negligible levels. Inhibition of the growth of Xanthomonas sp. was observed in the presence of the chitosan prepared from shrimp shells, so chitosan has the potential to be used as an antibacterial agent to control plant diseases. The method employed in this study prepares chitosan very economically, and it can also help to control pollution, as shrimp shell waste, which would otherwise be discarded, is being used.
Development of Basic Housekeeping Virtual Reality Learning Module For Students The Virtual Reality (VR) Basic Housekeeping Module helps students to gain practical experience even though they are studying from home. Polimedia, as a vocational campus that prioritizes practice in transferring knowledge to students, must look for new methods so that students continue to gain knowledge. The VR learning module development method that the author uses is the Waterfall SDLC (System Development Life Cycle) method. The system design stages in this research are explained through process design using use cases and sequence diagrams. The result of the design is a VR application. Testing of the system interface was rated as good based on an assessment of three aspects (usability, user, and interaction aspects). Introduction In the field of education, technology plays an important role. One example is virtual reality technology, which gives students a simulation to learn science in a setting that looks like reality. Virtual reality is a computer simulation technology that allows users to interact with a surrounding environment (an imitation of a real environment that exists only in the user's imagination) in which objects can be explored as if in the real world. Consequently, virtual reality technology allows users to engage with the environment around them as if they were doing so in the real world. Virtual reality is increasingly being used in learning, as are visual (image), audio, and video (multimedia) learning media. The technology in question is "Virtual Reality" (VR). Students who become users can be active during learning because they are able to be involved in the learning activities themselves. They focus more on the activities because, by using VR technology, students experience them directly, as if they were doing something in the real world without any distractions. That is why there is a need for an application that makes learning easier, such as virtual reality. This application is a virtual reality simulation of basic housekeeping learning for hotel management study program students. The application simulates a hotel room and the stages of housekeeping activities according to the correct procedures. The main objective of this research is to create an early-development basic housekeeping learning module in the form of a virtual reality application. It is hoped that in the future this learning module can also be developed for other courses to support virtual practical learning before students practice directly in real life. Material and Methods The system development stages are carried out by applying the steps contained in the System Development Life Cycle (SDLC). These stages include: Systems Analysis, the process of analyzing and defining problems and possible solutions for organizational systems and processes; System Design, which covers designing the output, input, file structure, programs, procedures, hardware, and software needed to support the system; System Development and Testing, which builds the software needed to support the system, carries out testing accurately, and installs and tests the hardware and operating software; System Implementation, the transition stage from the old system to the new system, carrying out training and guidance as necessary;
The remaining stages are Operations and Maintenance, carried out to support information system operations and to make changes or add facilities, and System Evaluation, which assesses the extent to which the system has been built and how well it has been operated. The design stage provides a complete design of the system to be built. The system design stages in this research are explained through process design using use cases and sequence diagrams.

Result

Based on the design carried out in this study, the result obtained is a Housekeeping VR module application. The resulting application output is as follows. Title Screen: this page contains an introduction and instructions for starting the module. Hotel Room Basic Housekeeping Simulation: after choosing to start the program, the user enters the step-by-step simulation of the basic housekeeping procedure. Menu Action Choice: at each step of the procedure, the user has to choose the action that matches the correct basic housekeeping procedure.

Discussion

Testing of the application is carried out through process testing and system interface testing. Process testing is performed by running each process provided in the system to ensure that there are no errors in either the data processing or the calculations of the VR application. Based on the results of the tests, the system runs properly and there are no errors in any data processing. Testing of the system interface is carried out through a questionnaire given to 30 students of the hotel management study program. The results of the system interface assessment questionnaire are summarized in Table 1 (System Interface Assessment Aspects).

Conclusion

Based on the tests of the Basic Housekeeping VR application, it is concluded that system testing against the process criteria produced no errors while the program runs, and the system interface was assessed as good.
Linear polarization holography Polarization holography is a newly researched field, that has gained traction with the development of tensor theory. It primarily focuses on the interaction between polarization waves and photosensitive materials. The extraordinary capabilities in modulating the amplitude, phase, and polarization of light have resulted in several new applications, such as holographic storage technology, multichannel polarization multiplexing, vector beams, and optical functional devices. In this paper, fundamental research on polarization holography with linear polarized wave, a component of the theory of polarization holography, has been reviewed. Primarily, the effect of various polarization changes on the linear and nonlinear polarization characteristics of reconstructed wave under continuous exposure and during holographic recording and reconstruction have been focused upon. The polarization modulation realized using these polarization characteristics exhibits unusual functionalities, rendering polarization holography as an attractive research topic in many fields of applications. This paper aims to provide readers with new insights and broaden the application of polarization holography in more scientific and technological research fields. Introduction In traditional holography 1 , owing to the intensity distribution of two interference waves, including amplitude and phase, being recorded, only the same components of the polarization state of the two interference waves are considered. However, the actual polarization states of the two interference waves are ignored. In contrast, in polarization holography, the amplitude and phase of the two waves as well as the polarization states of the two waves are recorded 2−8 . For this reason, polarization holography is expected to have more abundant reconstruction char-acteristics and a wide range of applications. Polarization holography is a new research field. The previous theory is based on Jones matrix formalism 9 , wherein the angle between two lights that interfere with each other should be small, and the results are limited under the paraxial approximation 10 . In the subsequent decades, there have been few reports on this topic. However, since the tensor theory of polarization holography, wherein the response of the recording material to the polarized wave is considered as a tensor, was proposed by K Kuroda 11 , research on polarized holography has garnered attention, and has made significant progress. D Babara et al. applied the theory to a data storage system and found that the recording density can be increased by modulating the state of polarization 12, 13 . A Wu et al. discovered the conditions of the null reconstruction effect (NRE) employing circular polarization holography in 2015, wherein no reconstruction wave was observed there even when the Bragg condition was satisfied 14 . The NRE was experimentally observed by T Todorov et al. in 1986 15 and theoretically explained by Huang et al. using paraxial coupled mode theory in 1995 16 . S H Lin et al. also observed this phenomenon under a paraxial approximation 17 , wherein the experiment and theory incorporated a small cross angle between the signal and reference wave, which resulted in the NRE. However, in addition to the small cross angle, certain other factors that resulted in the NRE were determined by X Tan et al. 
Consequently, the NRE of the polarization hologram was also observed with a large recording angle, successfully overcoming the hinderance encountered owing to the need for paraxial approximation. 19 and elliptical 20 polarization holography in 2015 and 2019, respectively. The condition of faithful reconstruction effect (FRE), wherein the polarization state of the reconstructed wave is maintained similar to that of the signal wave, is another important research target. In tensor theory, the polarization state of the reconstructed wave is affected by the exposure energy under general conditions. Typically, a balanced exposure condition is used. J Zang 21 , P Qi 22 , A Wu 14 , X Xu 23 , Y Zhang 24 , and Z Huang 25,26 , detected the balance point by introducing complementary polarization states in the case of linear, circular, and elliptical polarization holography in the period ranging from 2015 to 2021. However, certain problems were encountered and the balance point could not be determined accurately because the condition has a close relationship and setup limitation with the recorded energy and the intersection angle. Consequently, J Y Wang et al. proposed a method for measuring the balance point in 2021 27 , and achieved polarization modulation of the reconstructed wave under unbalanced exposure conditions. However, in certain special cases, the polarization characteristics of the reconstructed wave can be independent of the exposure energy. In 2017, J Zang et al. used a particular crossing angle of 90° with a linearly polarized wave to eliminate the exposure energy requirement 19 . Y Hong et al. proposed a new method to achieve FRE under an arbitrary crossing angle, and an arbitrary polarization state of the signal 28 . Between 2020 to 2021, P Qi, Z Huang and J Y Wang studied the FRE that is independent of exposure energy under an interference angle of 90° 29,30 and arbitrary interference angle 31 . There have been several applications in this regard. Considering the NRE of linearly polarized wave, which is independent of exposure energy, J Zang successively proposed dual-and four-channel polarization multiplexing methods 19,32 , whereas considering the FRE of linearly polarized wave, L Huang proposed a method to generate the vector beams 33 . Until now, with regard to tensor theory, FRE and NRE of polarization holography with two linear, circular and elliptical polarized waves have been researched. However, the details of their polarization holography behaviors under different polarized states are yet to be analyzed and discussed. Consequently, this paper attempts to analyze and subsequently discuss the reconstruction characteristics of polarization holography recorded by two linearly polarized state waves. Linear polarization is an essential state, and any complex polarized state can be combined by a linearly polarized state. Many studies on reconstruction characteristics have been reported, for example, the study by S H Lin et al. 17 , multichannel recording at 90° by J Zang et al. 19,32 , and the study of orthogonal reconstruction effect (ORE) by C Wu et al. 34 . Further, it has also been applied to data storage 35−43 , and polarization holography for high-density storage for which the representative works are the dual-and four-channel polarization multiplexing scheme proposed by J Zang. The scheme can separate and reconstruct the data pages loaded into the channel by adjusting the polarization state, which deeply taps the value of polarization information of light. 
In this review, the characteristics and behavior of linear polarization holography have been introduced in detail, including the principle involved and applications. The rest of the paper has been organized into 3 sections. Section Tensor theory of linear polarization holography introduces the theoretical basis for linear polarization holography and derives the expression of the reconstructed wave. The reconstruction characteristics were divided into two categories according to whether they were affected by exposure energy. Section Reconstruction characteristics independent of exposure energy introduces the reconstruction characteristics that are independent of the exposure energy. By adjusting the polarization state in the polarization holographic recording and reconstruction, the unbalanced influence of the exposure energy on the change in refractive index and the birefringence effect of the polarization-sensitive medium can be eliminated. Further, the polarization characteristics of the reconstructed wave under different interference angles were analyzed. Subsequently, certain applications have been introduced, using FRE, ORE, and NRE to achieve multichannel polarization multiplexing, and to generate vector beams. In polarization multiplexing, the amplitude and phase information of the light can be multiplexed simultaneously. In addition, combining multiplexing technologies, such as angle, shift, and wavelength, can double the density of holographic storage. Thus, by designing the dynamic exposure system, a vector beam with polarization difference distribution can be generated. However, owing to its polarization and phase singularity, a dark hole was observed in the center. Section Reconstruction characteristics related to exposure energy introduces the reconstruction characteristics affected by exposure energy, which have been divided into balanced and unbalanced exposure conditions. Under balanced conditions, certain interesting polarization characteristics have been introduced under different interference angles, whereas, under unbalanced conditions, a method for measuring the exposure response coefficient of polarization-sensitive media has been introduced. Finally, this article provides conclusions and future application prospects. Tensor theory of linear polarization holography According to the tensor theory of polarization holography, which was proposed by K Kuroda and based on a simple molecular model 11 , in the recording process, as shown in Fig.1(a), the signal and reference waves interfere with each other, and the interference field can be described as where G + and G − are the vector amplitudes, respectively, q + and q − are the wave vectors of the signal and reference waves, respectively, and r is the position vector. The dielectric tensor of the recording material after exposure can be defined as where n 0 is the refractive index of the material before exposure, and 1 is the unit tensor. Further, scalars A and B are coefficients of photoinduced isotropic and anisotropic refractive index changes in the material, which are correlated to the intensity and polarization grating, respectively, and their values are influenced by the exposure energy 21,27 . In the reconstruction process, as shown in Fig. 1(b), the retrieved reference wave is represented as F − exp(ik − ·r), where F − is the vector amplitude and k − is the wave vector of the reading wave. 
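The display equations referred to above, Eqs. (1) and (2), did not survive in this text. As a hedged sketch only, the standard forms in Kuroda-type tensor polarization holography can be written in LaTeX as below; the exact notation of the original (in particular whether the change is expressed for the full tensor or only its photoinduced part) is an assumption here.

% Recording field: superposition of signal and reference waves (cf. Eq. (1))
\mathbf{E}(\mathbf{r}) = \mathbf{G}_{+}\,e^{\,i\,\mathbf{q}_{+}\cdot\mathbf{r}} + \mathbf{G}_{-}\,e^{\,i\,\mathbf{q}_{-}\cdot\mathbf{r}}

% Dielectric tensor after exposure (cf. Eq. (2)): isotropic part with coefficient A
% plus the anisotropic, polarization-dependent part with coefficient B
\boldsymbol{\varepsilon} = n_{0}^{2}\,\mathbf{1}
  + A\,(\mathbf{E}\cdot\mathbf{E}^{*})\,\mathbf{1}
  + B\,\bigl(\mathbf{E}\otimes\mathbf{E}^{*} + \mathbf{E}^{*}\otimes\mathbf{E}\bigr)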
Further, according to the tensor theory of polarization holography 11 , the reconstructed wave can be represented as where k + is wave vector of the reconstructed wave. Equations (3) and (4) are only the general expressions for the reconstructed waves by polarization holography, and the physical meaning must be revealed according to the actual polarization state of the waves. In this paper, we discuss the case of a linearly polarized wave only. Assume that the two orthogonal basic unit vectors of linearly polarized waves are p and s, as shown in Fig. 2. They are perpendicular and parallel to the incident plane, respectively. During the recording and reconstruction process, the signal, reference, and reading waves are linearly polarized wave, and α, β, and γ are the orientation angles between their polarized wave vector directions and p direction, respectively. Therefore, the polarization states can be expressed by the cosine and sine components of the p and s directions, respectively. where p + , p − and p' − are the basic unit p vectors of the signal, reference, and reading waves, respectively, as shown in Fig. 3. These basic unit vectors can be represented as According to the principle of holography, if the Bragg condition is satisfied with the same wavelength during the recording and reconstruction processes, the following equations should also be established: where θ' and θ are the angles of the signal and reference waves, respectively. Here θ = θ + + θ − , as shown in Fig. 3. As the basic unit p vector is perpendicular to the wave vector k. Equations (12) and (13) can be transformed into: where p' + is the basic unit p vectors of the reconstructed wave. Further, the polarization states of the reconstructed wave can be expressed as 31 : Equation (16) is the general expression for the waves reconstructed by the polarization holography of full linearly polarized waves. There are two parts in Eq. (16). The first part is the faithful reconstructed wave of the signal G + , whose amplitude is modified by Bcos(β −γ). In contrast, the second part, which is interwoven by all wave components, is the extra content of the reconstructed wave, where the ratio of the dielectric tensor coefficients A to B affects the polarization state of the reconstructed wave, and A/B is defined as the exposure response coefficient 27 . As the physical meanings and the reconstruction characteristics are not sufficiently clear, Recording material they should be discussed from the perspectives of certain special cases of their angles and combinations of each other. The polarization characteristics of the reconstructed waves were divided into two categories. The first category is the reconstruction characteristics that are independent of the exposure energy, which implies that the polarization state of the reconstructed light is not affected by the exposure energy (A/B=any). In contrast, the second category is the reconstruction characteristics that are related to the exposure energy, that is, the polarization state of the reconstructed wave changes with the exposure energy, and the A/B value is required to satisfy a specific one. The polarization-sensitive media used were all PQ/PMMA (phenanthrenequinone-doped polymethyl methacrylate) 44−48 and the optical properties of the materials used in the different experiments varied according to the preparation process. Reconstruction characteristics independent of exposure energy The polarization state of the reconstructed wave is independent of the exposure energy. 
Consequently, its theoretical expression incorporates the s- and p-polarized components in the expression of the reconstructed wave with the same dielectric tensor coefficients A and B. To identify its physical properties, three cases were considered: θ = 90°, θ = 0°, and 0° < θ < 90°.

θ = 90°

When the interference angle is 90°, FRE or NRE independent of the exposure energy can be realized by the reconstructed wave under the most relaxed conditions. Further, these interesting reconstruction characteristics can be applied to multichannel polarization multiplexing and to generating vector beams. The applications related to these characteristics are described in section Application. However, only cubic materials can be used to satisfy the requirement of a 90° interference angle, which is caused by the refractive index of the PQ/PMMA material (n = 1.492). Under this condition (θ = 90°), Eq. (16) reduces to Eq. (17). Herein, the reconstructed wave consists of two parts: the signal component and an s-polarized component affected by the exposure energy (A + B) and the polarization angles. When one among the signal, reference, and reading waves comprises only a p-polarized component, or when the signal wave is s-polarized, a faithful reconstructed wave of the signal G+ can be obtained. The reconstruction characteristics of this case are summarized in Table 1. Among these polarization characteristics, the s- and p-polarized components in the expression of the reconstructed wave carry the same dielectric tensor coefficient, thereby rendering FRE independent of the exposure energy 31. Further, the amplitude of the reconstructed wave is affected by the polarization angles of the reference and reading waves; if the two are the same, the amplitude of the reconstructed wave is the largest, while the amplitude is the smallest when their states are orthogonal to each other. In particular, NRE that is independent of exposure energy can be realized when pure p-polarized waves are present in the signal, reference, and reading waves, respectively (rows 2, 3, and 4 in Table 1) 29,49. The NRE was verified by J Zang 19,32 through an experiment, wherein the hologram was recorded at an interference angle of 90° by p-polarized signal and reference waves. The hologram was then read using a linearly polarized wave with a polarization angle of 0-360°, and the data obtained are shown in Fig. 4. As shown in Fig. 4, the normalized diffraction efficiency (NDE) attains a maximum value for reading-wave polarization angles of 0°, 180°, or 360° (p-polarized). In contrast, the minimum value is obtained at 90° or 270° (s-polarized). In addition, the ratio of the maximum to the minimum value of the NDE is SNR = 64:1. Thus, the NRE of linear polarization holography was verified in this experiment. In 2020, P Qi verified the FRE and NRE of a signal wave with an arbitrary polarization angle at a 90° interference angle 29, and the effect of the polarization angle of the reading wave on the reconstructed wave is illustrated in Fig. 5. The parameters β and α used in their study have the same meaning as α and γ in this paper. In the experiment, signal waves with different polarization angles were used (three cases are introduced in this paper: α = 0°, 45°, and 90°), and the reference wave was p-polarized (β = 0°). The results show that for a reading wave with an arbitrary polarization angle, the NDE is related to the polarization angle of the reading wave and proportional to cos²γ (Fig. 5(a)).
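The cos²γ dependence just described can be summarized compactly in LaTeX; the proportionality constant η₀ below is introduced only for illustration and is not from the original text.

% NDE of the reconstructed wave versus reading-wave polarization angle gamma
% (p-polarized reference, 90-degree interference angle), as described above
\eta(\gamma) = \eta_{0}\cos^{2}\gamma,
\qquad \eta\ \text{maximal at}\ \gamma = 0^{\circ},\,180^{\circ}\ (\text{reading wave parallel to the reference}),
\qquad \eta = 0\ \text{at}\ \gamma = 90^{\circ},\,270^{\circ}\ (\text{NRE}).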
NRE was realized when the reading wave was orthogonal to the reference wave. Moreover, as shown in Fig. 5(b), FRE can be realized under a reading wave with an arbitrary polarization angle. The conclusion arrived at by P Qi is consistent with that of row 3 in Table 1.

θ = 0°

When the interference angle is 0°, the reconstructed wave can obtain the highest diffraction efficiency, and no NRE exists. Simultaneously, FRE or ORE independent of the exposure energy can be achieved under relatively relaxed conditions. Moreover, these reconstruction characteristics can still be applied to polarization multiplexing, resulting in advantages for miniaturized optical devices because the interference angle between the signal and reference paths is zero 36,37,50,51. Under this condition (θ = 0°), the two beams are coaxial, and Eq. (16) becomes Eq. (18). On analyzing Eq. (18), the expression of the reconstructed wave is found to be complex, being the coupled state of the signal, reference, and reading waves. However, polarization modulation that is independent of exposure energy can be achieved by controlling the polarization states used for recording and reconstruction. Therefore, under the condition of coaxial recording, the reconstructed wave follows these laws. When the polarization angles of the signal, reference, and reading waves satisfy a special relationship (same or orthogonal), as shown in Table 2, the reconstructed wave can achieve FRE or ORE that is independent of the exposure energy. Consequently, with the aim of arriving at the conclusions in Table 2, we conducted simulations, focusing on the variation of the polarization angle and NDE of the reconstructed wave with the polarization angle of the reading wave. The simulation results are shown in Fig. 6. As shown in Fig. 6, when the polarization angle of the reference wave is orthogonal to that of the signal wave (red line), the polarization angle (Fig. 6(a)) and NDE (Fig. 6(b)) of the reconstructed wave exhibit a linear change under reading waves with different polarization angles. Moreover, this linear change is independent of the exposure energy. Under such recording conditions, FRE or ORE can be realized as long as the polarization angle of the reading wave is the same as, or orthogonal to, that of the reference wave. This result is the same as that in rows 1 and 2 of Table 2. The blue line in Fig. 6 represents another situation, in which the polarization angle of the signal wave is the same as that of the reference wave. Under these conditions, the polarization angle (Fig. 6(a)) and NDE (Fig. 6(b)) of the reconstructed wave exhibit a nonlinear change as the polarization angle of the reading wave changes. Further, two exposure conditions were simulated (A/B = 10 and A/B = 0.3), and it was found that this nonlinear change is related to the exposure energy (A/B). In a similar manner, FRE or ORE can still be achieved when the polarization angle of the reading wave is the same as, or orthogonal to, that of the reference wave, which corresponds to rows 3 and 4 of Table 2.

0° < θ < 90°

When the interference angle is between 0° and 90°, NRE that is independent of the exposure energy does not exist. Moreover, strict conditions must be satisfied for the reconstructed wave to achieve FRE or ORE that is independent of exposure energy, and the polarization and interference angles must be constrained simultaneously. The conditions required to achieve FRE or ORE under a general interference angle are listed in Table 3 31.
To realize FRE or ORE independent of the exposure energy, the four conditions listed in Table 3 must be satisfied. Further, the polarization angle of the reading wave should also satisfy the corresponding conditions. When θ = 90°, the conditions required for faithful reproduction in Table 3 are consistent with those in Table 1, and the ORE in Table 3 becomes NRE. In contrast, for θ = 0°, the conditions required for FRE and ORE in Table 3 are consistent with those in Table 2. Thus, it can be concluded that Table 3 reflects the general laws of the Recording Reconstruction Fig. 7. In the experiment, a signal wave with polarization angles of 60° and 120° was used, while the reference and reading waves had the same polarization angles. The polarization angles of the reference and reading waves were changed simultaneously (0 -180°) for multiple experiments, wherein every 10° polarization angle was changed into an unexposed area on the PQ/PMMA material. The parameters γ, α, and β used in their study have the same meanings as α, β, and γ used in this paper. In Fig. 7, it can be observed that when the reconstructed wave achieves FRE, the polarization angles of the reference and reading waves correspond to the theoretical values calculated in Table 3. Consequently, this experimental conclusion proves that in a polarization-sensitive medium, for a fixed interference angle, there always exists two reading waves with different polarization angles, which can result in the reconstructed wave achieving FRE independent of the exposure energy. The ORE of a polarization hologram in special cases was studied by C Wu 34 and verified for various interference angles (θ = 15.8°, 26.2°, 38.1°, and 58.5°). In the experiment, the signal wave was s-polarized, while the reference wave was p-polarized. Figure 8 shows the changes observed in the s-and p-polarized components in the reconstructed wave with the polarization angle of the reading wave under various interference angles. The parameters of α used in their study is the same as γ used in this paper. In Fig. 8, it can be observed that when the fast-axis angle of HWP2 is 45°, 135°, 225°, or 315° (the reading wave is s-polarized), the normalized power of the s-polarized component in the reconstructed wave is the lowest, while that of the p-polarized component is the highest, thereby indicating that the ORE was realized. Further, the normalized power of the ORE was found to be inversely proportional to cos 2 θ. Thus, these phenomena conform to the ORE law listed in Table 3. We now discuss the polarization characteristics of the reconstructed waves that are independent of the exposure energy under various interference angles. Among them, FRE can be realized at any interference angle. In contrast, the ORE can be realized at an interference angle other than 90°, while the NRE can only be realized at an interference angle of 90°. Moreover, the FRE condition at a 90° interference angle was observed to be the most relaxed, and the underlying cause of this phenomenon is the NRE at an interference angle of 90°. That is, the NRE can be superimposed with FRE without affecting the polarization state of the reconstructed wave, which consequently increases the degree of freedom of the conditions required to realize FRE. Furthermore, the diffraction efficiency of the reconstructed wave produced by ORE was found to decrease with an increase in the interference angle, attaining the lowest value at an interference angle of 90 °, that is, NRE occurs. 
17,19,32,83 mechanism in polarization holography. Thus, we now introduce polarization multiplexing technology using FRE or ORE. The scheme is presented in Table 4, wherein the amplitude information of the reconstructed wave is determined by the signal wave. Channel Recording Reconstruction As shown in Table 4, the FRE is employed for recording the intensity and polarization holograms at the same position on a polarization-sensitive medium. In the reconstruction process, two holograms were read using an s-polarized wave, and subsequently, the two recorded holograms were faithfully reconstructed. The amplitude information of the respective signal waves are carried by the s-and p-polarized components in the reconstructed wave. Thereafter, the two types of amplitude information can be separated using PBS to realize the dual-channel effect. The researches of D Barda and T Ochiai are based on this effect 12,13 , as well as for the dual-channel polarization multiplexing implemented by C Li and W D Koek using circularly polarized waves 81,82 . Consequently, the ORE can be used to achieve a similar dual-channel multiplexing technology provided the reading wave is ppolarized. Regarding the method using FER or ORE polarization multiplexing can be achieved only if the intensity and polarization holograms in the reconstructed wave are separated. This offers the advantage of the ability to change the interference angle, and the highest diffraction efficiency can be obtained under the coaxial interference condition, which is conducive to improving the SNR 36,37 . However, in general, the induced refractive index change of the intensity hologram is greater than the birefringence effect of the polarization hologram (A/B>−1) 21,27 , which implies that the reconstructed image quality of the intensity hologram is better than that of the polarization hologram. This phenomenon was observed in the research of D Barda 12 . To solve this problem, in 2017, J Zang proposed a pure polarization holographic dual-channel polarization multiplexing scheme based on the NRE of linearly polarized wave 19 . The experimental scheme is presented in Table 5 19 . Channel Recording Reconstruction For convenience, the physical symbols were unified. The symbols U S , U R , F, and U F used by J Zang 19 represent the signal wave G + , reference wave G − , reading wave F − , and reconstructed wave F + , respectively, while the symbols α and β represent the dielectric tensor coefficients A and B, respectively. All the amplitude information of the reconstructed wave originates from the signal wave. As shown in Table 5, two orthogonal polarization holograms are recorded at the same position as that of the PQ/PMMA material. When the reading wave is s-polarized, the first hologram obtained is NRE, whereas the second hologram is FRE, which carried the amplitude information of the second hologram. The first hologram can be extracted separately when the reading wave is ppolarized. This scheme offers the advantage of presenting two reconstructed images with the same dielectric tensor coefficient, implying that the two holograms can obtain the same NDE when they are being reconstructed. The experimental device used for polarization multiplexing is shown in Fig. 9. In a similar manner, owing to the NRE of circularly polarized waves, polarization multiplexing can be realized at an interference angle of 0°1 4 . 
Therefore, in these research 17,83 , recording two orthogonal circularly polarized holograms under the condition of paraxial approximation can aid in the realization of dualchannel polarization multiplexing technology. Moreover, the principle of dual-channel polarization multiplexing is the same as that of linear polarization holography. In the experiment, the wavelength of the laser used was 532 nm and the cubic PQ/PMMA material was used with a bottom surface of 10 mm ×10 mm. Further, the interference angle was 90°. In addition, in the recording stage, the signal wave used in the first hologram was spolarized, while the reference wave was p-polarized. Whereas, in the second hologram, the polarization states of the signal and reference waves were interchanged. Further, in the reconstruction process, two holograms were irradiated with a reading wave that satisfied Bragg conditions, and the generated reconstructed wave was split by PBS2 and consequently captured by two PMs. By changing the polarization angle of the reading wave (0-360°), the data obtained are shown in Fig. 10. The first hologram was s-polarized in the reconstruc-ted wave (Zang's article mistakenly considered it as second hologram). The NDE attained a maximum value when the reading wave was in the vicinity of 0°, 180°, and 360° (p-polarized), and a minimum value when the reading wave was approximately 90° or 270° (s-polarized). In contrast, the second hologram was p-polarized, and its corresponding changes were opposite to that of the first hologram; that is, when the NDE of one hologram was the largest, for the other hologram it was the smallest, thereby realizing the dual-channel function of polarization multiplexing. In addition, the best SNR was approximately 18∶1, which was attained when the polarization angle of the reading wave was 180°. Based on the conclusion of the dual channel results, in 2019, J Zang proposed a four-channel polarization multiplexing technology. The four-channel method is also based on the NRE of linear polarization holography. This scheme is presented in Table 6 32 . Channel Recording Reconstruction As shown in Table 6, J Zang used an interference angle of 90°, and the four holographic channels used were H n (n=1, 2, 3, 4). First, the amplitude data pages were loaded into the signal waves of channels H 1 and H 2 , and Fig. 11. As shown in Fig. 11, J Zang used an interference angle of 90° and the laser wavelength was 532 nm. The laser was divided into a signal path (s-polarized) and a reference wave (p-polarized) through PBS1. Further, the signal wave was divided into two orthogonal polarization channels using PBS2. The SLMs were placed in the two orthogonal polarization channels to load the amplitude information, while the HWP3 was adjusted to render the two channels of equal intensity. Consequently, in the experiment, HWP2 was adjusted to make the reference wave s-polarized, as shown in Fig. 12(a) and 12(b), and the A and B patterns were loaded on SLM1 and SLM2 in the signal path, respectively. Subsequently, after combin-ing through the BS, the first hologram was recorded with the reference wave of s-polarized in PQ/PMMA. Thereafter, the reference wave was p-polarized, as shown in Fig. 12(c) and 12(d), and the C and D patterns were loaded on SLM1 and SLM2 in the signal path, respectively. The second hologram was recorded at the same position using the same operation. The experimental data are shown in Fig. 12. The experimental results are shown in Fig. 12(e-h). 
Patterns A and B were received on CMOS1 and CMOS2 when reading the two holograms using the s-polarized wave, respectively. In contrast, when reading with p-polarized, the A and B patterns disappeared, and C and D patterns were received on CMOS1 and CMOS2, respectively. The contrast of the reconstructed B pattern was found to be lower than that of the recorded pattern when compared with the image before reconstruction, as shown in Fig. 12(a-d). This is because pattern B is an intensity hologram while the patterns A, C, and D are polarization holograms. Consequently, the difference in diffraction efficiency between the two types of holograms caused the B pattern to be overexposed 21,27 . J Zang recorded two holograms at the same position as the PQ/PMMA material, which included four different amplitude data pages. Thus, by changing the polarization state of the reading wave, four data pages with different amplitudes can be separated and reconstructed. In addition, the reconstructed image exhibited high fidelity and good contrast and can be used for holographic storage, thereby realizing four-channel polarization multiplexing technology. Vector beams Vector beams have been used in many fields, including 33 , designed a dynamic exposure system to successfully record, and read out the vector beams in a cubic PQ/PMMA material using the FRE of linear polarization. The device that generates the vector beams is shown in Fig. 13. The experiment uses a laser with a wavelength of 532 nm, and the beam is divided into signal (s-polarized) and reference (p-polarized) paths following its passage through the PBS. The interference angle between the signal and reference waves was 90°, and the intensities were 1 mW and 500 μW, respectively. In the signal path, the dynamic exposure system comprised an HWP and an angle aperture of 0.2°. Further, the polarization angle of the linearly polarized wave that passed through the angle aperture was changed by rotating the HWP. Thus, with the rotation of the angular aperture (0-360°), the vector beam was gradually recorded in the PQ/PMMA material. During the recording process, the polarization angle of the signal wave constantly changed, while the reference wave was fixed to p-polarized. Whereas, during the reconstruction process, a p-polarized reading wave was used to illuminate the hologram and the generated vector beam was received by CMOS. It is evident from Table 1 and the conclusions of P Qi 29 that under these recording and reconstruction conditions, the signal wave can be faithfully reproduced for any polarization angle. Therefore, in theory, with the rotation of the HWP and angle aperture, the various polarization states after passing through the angle aperture can be recorded in PQ/PMMA and subsequently faithfully reconstructed. The recording time was defined as the time required for the angle aperture to rotate by 360°. In the experiment, the order of the vector beams recorded in the PQ/PMMA material can be changed by controlling the relative rotation speed of the HWP and angle aperture, which is expressed as 33 : where θ p is the polarization angle after passing through the angle aperture, m is the order of the vector beams, θ H is the polarization angle after the HWP, and θ 0 is a constant that describes the initial polarization state at θ H =0. To generate m-order vector light, the rotation speed of the HWP must be m/2 times that of the angle aperture. 
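Equation (19) itself is not reproduced in this text. The stated m/2 speed ratio, combined with the standard fact that a rotating half-wave plate turns a linear polarization at twice its own rotation rate, suggests the relation sketched below; treat it as a hedged reconstruction rather than the authors' exact expression (θ_a, the azimuthal position of the angular aperture, is notation introduced here).

% Rotation-speed rule stated in the text (the HWP turns at m/2 times the aperture rate):
\omega_{\mathrm{HWP}} = \tfrac{m}{2}\,\omega_{\mathrm{aperture}}
% Since a rotating half-wave plate turns a linear polarization at twice its own rate,
% the polarization angle recorded at aperture azimuth \theta_a then advances as
\theta_{p}(\theta_{a}) = m\,\theta_{a} + \theta_{0},
% i.e. an m-th order vector beam is written after one full revolution of the aperture.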
Thus, using this method, L Huang recorded and reconstructed the first- and second-order vector beams in PQ/PMMA bulk material (this article introduces the second-order vector beams). The second-order vector beams can be obtained when the rotating speed of the HWP is the same as that of the angle aperture (speed: 1°/s). The intensity and polarization distributions of the reconstructed wave are shown in Fig. 14. A polarization singularity can be observed at the center of the vector beams, and the intensity distribution of the field is annular 97. As shown in Fig. 14(f), the intensity distribution of the reconstructed wave obtained from the experiment was consistent with the simulated value. Further, as evident in Fig. 14(g-j), the polarization characteristics of the vector beams were tested by adding P3 to the reconstruction path, and the beam was divided into four lobes. The higher the order, the more lobes, and the corresponding experimental results are consistent with the simulated values. Thus, this experiment indicates that using polarization holography to record and generate a vector beam in a polarization-sensitive medium is feasible.

Reconstruction characteristics related to exposure energy

As discussed previously, certain polarization characteristics of the reconstructed wave are independent of the exposure energy. However, in general, the polarization state of the reconstructed wave is affected by the exposure energy. In the expression of the reconstructed wave, Eq. (16), the dielectric tensor coefficients A and B are values related to the exposure energy. Thus, once the recording and reconstruction conditions are fixed, the polarization state of the reconstructed wave is determined by the value of A/B. Further, balanced or unbalanced exposure conditions can be used to realize the polarization modulation of the reconstructed wave.

Balanced condition

The balanced condition, Eq. (20), corresponds to A/B = -1, that is, A + B = 0. Equation (20) indicates that the contribution of the intensity hologram to the reconstructed wave is equal to that of the polarization hologram. However, these contributions depend on the material, intensity, and exposure energy. When Eq. (20) is satisfied, Eq. (16) can be simplified to Eq. (21), where G'+ is the orthogonal state of the signal wave. This result indicates that a new linearly polarized state is reconstructed. In 2014, J Zang studied this and explained the factors influencing the attainment of a balanced exposure condition 21. In the experiment, the laser wavelength was 532 nm, and the interference angle was 41°. Further, both the reference and reading waves were s-polarized, and the signal wave adopted three types of linearly polarized waves with polarization angles of 90°, 0°, and 45°, respectively. The experimental data are shown in Fig. 15. Figure 15(a) and 15(b) exhibit pure intensity and pure polarization holograms, respectively. Certain obvious differences exist between the two holograms, which is the primary reason why achieving FRE of the reconstructed wave under normal conditions is a challenge. It is also the primary reason for the difference in the quality of the reconstructed images of different channels in polarization multiplexing 12,32. As shown in Fig. 15(c), when the polarization angle of the signal wave is 45°, both the intensity and the polarization holograms contribute to the reconstructed wave.
However, the difference between the intensity and polarization holograms causes the s-polarized component in the reconstructed wave to be larger than the p-polarized component, thereby rendering the realization of FRE a challenge. J Zang highlighted that the reason for the huge difference in PQ/PMMA film is the difference between the two types of photo-induced effects: induced refractive index change and birefringence. The intensity hologram is affected by the induced refractive index change (Fig. 15(a)). In contrast, the polarization hologram is affected by the birefringence effect (Fig. 15(b)). Consequently, K Kuroda proposed that when the two effects are balanced, the polarization state of the signal wave can be correctly reconstructed. As shown in Fig. 15(c), when the exposure energy is approximately 11 J/cm², the s-polarized component is equal to the p-polarized one. Moreover, the two photoinduced effects are then balanced, and the theoretical expression in tensor polarization holography is A/B = -1. Thus, this experiment demonstrates that one of the ways to achieve FRE involves controlling the exposure energy to balance the two effects. In the case of the balanced exposure condition, the polarization characteristics of the reconstructed wave follow the same law at various interference angles. The reconstructed wave is expressed as Eq. (21). Further, the FRE, ORE, and NRE are all determined by the polarization states of the reference and reading waves, and are unrelated to the polarization state of the signal wave. Thus, the interference angle only affects the diffraction efficiency of ORE. As summarized in Table 7, in the case of the balanced exposure condition, the reconstructed wave can realize FRE when the polarization angle of the reading wave is the same as that of the reference wave. In contrast, when the polarization angle of the reading wave is orthogonal to that of the reference wave, the reconstructed wave can realize ORE in the case of non-orthogonal interference (θ ≠ 90°), and can achieve NRE in the case of orthogonal interference (θ = 90°). Further, the diffraction efficiency of ORE is inversely proportional to the interference angle. Thus, in the case of the balanced exposure condition, the reconstructed wave exhibits certain interesting reconstruction characteristics, which can be utilized for applications in the field of optical functional devices 98−101. Consequently, we divided it into three cases: θ = 90°, θ = 0°, and 0° < θ < 90°.

θ = 90°

In the case of the balanced exposure condition and a 90° interference angle, the polarization angle of the reconstructed wave is not related to the reading wave, and is the same as that of the signal wave. However, the diffraction efficiency of the reconstructed wave is affected by the polarization angle of the reading wave, which results in the NRE of the reconstructed wave. When the interference angle is 90°, only the first term in Eq. (21) remains. On replacing G+ with its component form, we obtain Eq. (22). To visually analyze the reconstruction characteristics, Eq. (22) was simulated. Figure 16 presents the results, which were obtained from a simulation at a 90° interference angle using a signal wave with a 60° polarization angle and a p-polarized reference wave. In Fig. 16(a), the NDE of the reconstructed wave can be observed to be the highest when the polarization angle of the reading wave is the same as that of the reference wave.
However, when their polarization angles are orthogonal to each other, the amplitude of the reconstructed wave is zero (NRE). The s- and p-polarized waves exhibit a similar trend with respect to the polarization angle of the reading wave, and the ratio of the s- to p-polarized waves is always equal to 3, which ensures that the polarization angle of the reconstructed wave is always 60° (Fig. 16(b)). These phenomena conform to those presented in Table 7. Further, the hologram formed under this condition has the same function as a polarizer.

θ = 0°

Under the balanced exposure condition and a 0° interference angle, the diffraction efficiency of the reconstructed wave is the highest and remains unchanged. The sum of the polarization angles of the reconstructed and reading waves is a fixed value. As cosθ = 1, Eq. (21) can be transformed into Eq. (23) 22. The result shows that a new linearly polarized state is reconstructed, and the polarization angle is α + β − γ, as shown by P Qi 22. The results indicate that the sum of the polarization angles of the reconstructed and reading waves is a fixed value, and that the amplitude of the reconstructed wave is a constant. P Qi highlighted that at an interference angle of 0°, the reconstructed wave satisfies Eq. (23) under any exposure condition when the polarization state of the signal wave is orthogonal to that of the reference wave; this phenomenon is shown in Fig. 6. In a similar manner, Eq. (23) was simulated at a 0° interference angle using a signal wave with a 60° polarization angle and a p-polarized reference wave. The results are presented in Fig. 17. When the reconstructed wave realizes FRE or ORE, the law followed is consistent with that presented in Table 7. Further, the NDE of the reconstructed wave remains unchanged when the polarization angle of the reading wave changes, while the polarization angle of the reconstructed wave exhibits a linear change. Consequently, the hologram formed under this condition has the same function as a half-wave plate.

0° < θ < 90°

When θ is between 0° and 90°, there is no change in Eq. (21); FRE and ORE can be achieved, and no NRE occurs. However, the reconstruction characteristics of the reconstructed wave, such as the NDE and polarization angle, are both affected by the polarization angle of the reading wave and present a nonlinear change. Consequently, Eq. (21) was simulated at a 40° interference angle using a signal wave with a 60° polarization angle and a p-polarized reference wave. The results are presented in Fig. 18. As shown in Fig. 18(a), in general, changing the polarization angle of the reading wave causes a simultaneous change in the polarization angle and the NDE of the reconstructed wave, resulting in a peak dislocation of the s- and p-polarized components in the reconstructed wave. As discussed earlier, when the reading wave has the same polarization angle as the reference wave, the NDE of the reconstructed wave is the highest and FRE can be realized. In contrast, when they are orthogonal, the NDE is the lowest and ORE can be realized. Further, as shown in Fig. 18(b), the polarization angle of the reconstructed wave changes nonlinearly.

Unbalanced condition

In our earlier discussions, we introduced certain interesting polarization characteristics in the case of balanced conditions. However, the most common exposure conditions are in an unbalanced state.
To realize polarization modulation of the reconstructed wave in the case of unbalanced conditions, the exposure response coefficient (A/B) of the polarization-sensitive medium must be measured. Thus, we analyze this aspect. The unbalanced condition is: In an unbalanced condition of exposure, the diffraction efficiency of the intensities and polarization holograms in the reconstructed wave are different. In 2021, J Y Wang proposed a method for measuring the exposure response coefficient A/B 27 . Consequently, the variation of the exposure response coefficient of PQ/PMMA material with exposure energy was tested using this method, and the NRE with non-orthogonal polarization reading was realized. In the experiment, both the reference and reading waves were s-polarized. Signal waves with different polarization angles were used for several experiments (0°, 15°, 30°, 45°, 60°, 75°, and 90°). Under such recording and reading conditions, according to Eq. (16), the expression of the exposure response coefficient A/B is 27 : where χ is the polarization angle of the reconstructed wave. The experimental results are presented in Fig. 19. As shown in Fig. 19(a), under different recording conditions, the polarization angle of the reconstructed wave varied with the exposure energy. Thus, considering the data of the polarization angle of the reconstructed wave and Eq. (25), the variation in the exposure response coefficient with exposure energy can be obtained. As shown in Fig. 19(b), the exposure response coefficient A/B changes with the increase of exposure energy, and exhibits a different trend under signal waves with different polarization angles, with the variation range being 0. 3-8.7. However, at the beginning of the exposure, the A/B under different recording conditions had approximately the same initial value (8.4). In contrast, when the exposure energy was lower than 140 J/cm 2 , the average value of A/B decreased from 8.4 to 3.1. During this period, the change rule of A/B was almost independent of the polarization angle of the signal wave indicating that at the initial stage of exposure, the intensity and polarization holograms of PQ/PMMA materials encounter difficulties on attaining the balanced condition of exposure (A/B≠-1) under any recording conditions, and they are in an unbalanced condition. Concurrently, different recording conditions have almost no effect on this unbalanced state. Based on this special phenomenon, J Y Wang used an unbalanced condition (A/B=8.4) to realize the NRE of non-orthogonal linear polarization holography. The method uses the condition of A/B=8.4 to make the s-and p-polarized components equal to zero in Eq. (16). Consequently, the diffraction efficiency of the reconstructed wave varied with the exposure energy, as shown in Fig. 20. As shown in Fig. 20, the maximum diffraction efficiency was 0.005 at exposure energy of 800 J/cm 2 . Such a low diffraction efficiency indicates that the reconstructed wave realized NRE. In a similar manner, this also proves that the measured exposure response coefficient A/B value is accurate. This experiment relies on the exposure response coefficient to achieve NRE in the state of non-orthogonal interference, recording, and reading. Thus, this conclusion is expected to break through the limitation of a 90° interference angle, and polarization multiplexing dual-and four-channel technology can be realized under general interference angles. 
Conclusion and outlook In linear polarization holography, we focused on the changes in the polarization state and NDE of the reconstructed wave, including FRE, ORE, and NRE. Among the reconstruction characteristics that are independent of exposure energy, the polarization characteristics change linearly with the exposure energy, which is achieved by constraining the polarization state during the holographic recording and reconstruction process. In general, the polarization characteristics of the reconstructed wave are affected by the exposure energy and present a nonlinear change. Combined with these reconstruction characteristics, multichannel polarization multiplexing technology or vector beam generation can be carried out. However, these applications are still in the preliminary stage, which combination with phase modulation can be further studied for improving storage capacity, or generating vector beams with phase vortices. Moreover, the reconstructed wave exhibits linear polarization characteristics in the case of a special conditions (θ = 0° or 90°, A/B = -1), and nonlinear characteristics under general conditions (0° < θ < 90°, A/B ≠ -1). These can provide references in the analysis of hologram noise or the characteristics of polarization and diffraction efficiency of holographic gratings 36,37,102,103 . In addition, it is expected to make metamaterials with anisotropic refractive index distribution through multiple exposure, realizing the modulation of the amplitude, phase, polarization and propagation direction of light, which can allow potential applications such as optical metasurfaces, photonic crystal, all-optical logic gate, polarization sensor and so on. Consequently, it is conducive to the production of linear and nonlinear optical functional devices with low-cost planar structures, and planar optical elements with a customer-design function are possible owing to its properties. In terms of theoretical research, the changes of rotation direction and ellipticity of circularly or elliptically polarized waves in polarization holography are complex, and accompanied by some special reconstruction characteristics 24 . Therefore, the research of elliptical polarization holography is challenging. At present, the main challenge comes from the polarization sensitive media. Looking for a material with high diffraction efficiency for polarization holograms, which simultaneously has stable optical properties, ideal absorption and transmission coefficients, as well as low volume shrinkage is the basis for the application of polarization holography.
New approaches to studying morphological details of intramolluscan stages of Angiostrongylus vasorum

Angiostrongylus vasorum is a pulmonary artery parasite of domestic and wild canids. In molluscs, the intermediate hosts, first-stage larvae (L1) are found from the first day after infection, second-stage larvae (L2) on the 8th day, and third-stage larvae (L3) on the 30th day. We evaluated L1, L2 and L3 recovered by the Baermann technique from Achatina fulica infected with 1000 L1. Fifty larvae per stage were incubated with anti-β-tubulin, anti-α-tubulin, anti-α-actin, anti-β-actin and anti-collagen antibodies, and then with Alexa 633. Fifty larvae per stage were observed with picrosirius red and with Oil Red O. In the anterior region of L1, the beginning of chitinous rod development was observed, together with the initial portion of the intestine and the genital primordium. In the anterior region of L2, the papillae and chitinous rods juxtaposed to the mouth were observed, and the intestine was larger than that of L1. The L3 musculature is well defined and, next to the chitinous rods, there are two rounded structures arranged distally from each other. The whole extension of the intestine, the genital primordium and intense cellularity in the distal portion of L3 were also observed. With picrosirius red, the musculature of L1, L2 and L3 could be observed, as could the nerve ganglia in L3. Oil Red O revealed that L1, L2 and L3 store energy in lipid droplets.

Introduction

Angiostrongylus vasorum affects the right ventricle of the heart and the pulmonary arteries in dogs, red foxes and other canids (Mozzer & Lima, 2015). In humans, A. cantonensis causes eosinophilic meningitis and A. costaricensis triggers abdominal eosinophilic ileocolitis (Thiengo et al., 2013). The complete life cycle of these nematodes takes place in their definitive hosts, which are usually rodents. The eggs settle in the intestinal submucosa, while the adults lodge in the mesenteric arteries. Achatina fulica, the giant African snail, was introduced into Brazil in 1980 and is currently present in 23 Brazilian states, including Minas Gerais (Barçante et al., 2005; Thiengo et al., 2013). It has been found naturally infected with A. cantonensis in Pernambuco, Rio de Janeiro, Santa Catarina, Espírito Santo and Amazonas (Thiengo et al., 2013). The eosinophilic meningitis cases in Espírito Santo are related to the ingestion of these molluscs. Achatina fulica can also be infected with A. costaricensis and A. vasorum (Neuhauss et al., 2007; Coaglio et al., 2016). In Brazil, A. fulica has been found naturally infected with other nematodes of veterinary importance: Aelurostrongylus abstrusus, a cat lung parasite, and Rhabditis sp. and Strongyluris sp., parasites found in the intestine of amphibians and reptiles (Oliveira et al., 2010). Given this, the role of A. fulica in public health must be considered, as there are reports of natural infection involving this mollusc in several regions of Brazil; it is therefore important to know the development of the different intramolluscan stages of the nematode parasite, as this knowledge could help in creating control strategies. Treatment of angiostrongyliasis with nematicides generally does not produce good results, and new therapies are greatly needed. Although recent "omic" data on parasitic nematodes accelerate the identification of potential drug targets (Mccoy et al., 2017), the validation of these targets in functional experiments remains a challenge because few techniques allow functional characterization.
Therefore, the use of different morphological markers for parasitic nematodes may help morphological observations of these nematodes and may contribute to "omic" studies. In the present study, we analyzed the morphology and morphometry of L1, L2, and L3 larvae of A. vasorum recovered from Achatina fulica by assessing the cytoskeleton, connective tissue and lipid distribution of these larvae using cytoskeleton and collagen antibodies and staining with picrosirius red and Oil Red O, respectively. Obtaining intramolluscan stages of Angiostrongylus vasorum The strain of A. vasorum that was used is maintained in dogs at the experimental kennel of the Institute of Biological Sciences of the Federal University of Minas Gerais (ICB/UFMG) (Lima et al., 1985). To obtain L1 larvae, feces collected from the dogs that maintain the strain were subjected to the Baermann technique. This study and maintenance of the strain in animal hosts was approved by the ethics committee for animal use of UFMG (CEUA/UFMG) under the number 147/2011. Then, 1 mL of water containing 1000 L1 was placed in each of specimen of infection containers of width 5 cm and height 5 cm. Specimens of the mollusc Achatina fulica of mean size of 4 cm were individually introduced into each infection container (one per container) so that they would be exposed to L1. They were left there for 24 h, at the room temperature of 23 °C (RT). After this period, these molluscs were transferred to a rectangular terrarium containing autoclaved soil supplemented with calcium carbonate. Water and lettuce were provided ad libitum. The terrariums were cleaned three times a week. The remaining larvae in the infection containers were quantified as described by Pereira et al. (2006). Fifty molluscs were used to recover L1 (1 day post-infection, dpi), 50 to recover L2 (8 dpi) and 50 to recover L3 (30 dpi). The L1, L2 and L3 larvae were recovered under a Baermann's technique, as proposed by Coaglio et al. (2016). Morphological evaluation of A. vasorum larval stages 350 larvae per stage (L1, L2 and L3) were recovered with the help of a stereoscopic microscope (25x) and were stored in a 2 mL polypropylene tube containing 300 μl of PBS. Then, 900 μl of 4% paraformaldehyde was added and the samples were incubated for 24 h at 4 °C. To remove excess fixative, the samples were washed 3x with water by means of centrifugation at 180 g for 10 min. At the end of this, the larvae were resuspended in 900 μl water and used as described below. 50 larvae per stage (L1, L2 and L3) were used as a control that was detected by means of ligth microscopy. To allow small pores to be formed in the cuticle of the nematodes, the tubes with the larval stages were placed in a 10 ml beaker with 95% ethyl alcohol for 4 min at -80 °C. Subsequently, the tubes were placed in water for 10 min at RT, followed by slight agitation on a microplate shaker for 30 min at RT. After this period, the samples were centrifuged at 450 g for 1 min. After this procedure, 0.5% saponin was added and the samples were incubated for 10 min at RT under gentle shaking. They were then washed 3x with water by means of centrifugation at 180 g for 10 min. Next, each sample received 200 μl of BTB (borate triton β-mercaptoethanol solution_(Borate buffer/25mM; Triton X-100/0.5%; β-mercaptoethanol/3% in the final concentration) and was incubated for 1 h at RT, under gentle shaking (Biomixer Ts-2000A shaker). Subsequently, the samples were washed with BTB by means of centrifugation at 450 g for 1 min. 
After a third hour with BTB, the samples were again washed 3x by means of centrifugation at 220 g for 2 min and, following the removal of BTB, each sample was resuspended in borate triton (BT) solution_(Borate buffer/25mM; Triton X-100/0.5% in the final concentration). Following the removal of excess BT as described above, the cytoskeleton content and distribution of the larvae (L1, L2 and L3) was investigated. Fifty larvae of each stage were incubated individually with 50 μl of each of the following primary antibodies (Santa Cruz Biotechnology) diluted 1:500 in PBS containing bovine serum albumin (BSA): anti-β-actin (sc 69879), anti-α-tubulin (sc 5286), anti-β-tubulin (mouse-sc 55529) and anti-collagen (Cat. No. sc-9855, Santa Cruz Biotechnology, CA). The samples were incubated overnight at 4 °C under gentle agitation. They were then centrifuged at 220 g for 2 min. The supernatant was discarded and 250 μl of the blocking solution (4% BSA/PBS) was added for a further 1 h 30 min at 4 °C, under gentle shaking. Excess dye was removed by centrifuging the samples at 220 g for 2 min. The procedure was repeated four times. Then, 50 μl of the secondary antibody conjugated to Alexa 633 (goat anti-mouse, Invitrogen/Cat. No. A-21052) diluted at 1:500 in PBS was added to the samples and incubated for 2 h in the dark at RT, under gentle shaking. To remove the excess antibodies, the samples were washed 4x with PBS at 220 g for 2 min. The samples were mounted between slides and coverslips and images were captured using an epifluorescence microscope_model Axiovert 200M APOTOME (Carl Zeiss, Germany). To investigate the lipid content and distribution, 50 larvae of each stage were incubated with the Oil Red O dye at a ratio of 3:1 for 24 h at RT. The larvae were mounted between a slide and a coverslip and observed under an Olympus  BX41 ligth microscope coupled to an Olympus  DP12 digital camera. Following this, picrosirius red (Junqueira et al., 1979) (3:1 in PBS) was added to 50 larvae of each stage for 24 h at 4 °C. The larvae were then centrifuged at 220 g in PBS 2x to remove the excess dye. The samples were mounted between slides and coverslips, and images were obtained via epifluorescence microscopy with Apotome, as described above, because picrosirius red can be detected at the red wavelength. First-stage larvae (L1) The LI specimens that were recovered were thin and transparent, measuring 332 ± 12 μm in length and 11 ± 1 μm in width. The anterior end was rounded and the tail was curved ventrally with an unguiform appendage, as seen using ligth microscopy ( Figure 1A). Anti-α-tubulin antibodies showed the nerve ring, the intestinal region and the anal opening ( Figure 1B). Although some structures were not identified through this antibody, there were two oval structures next to the chitinous rods that were not detected by means of ligth microscope. These structures measured 5 μm 2 , with an average of 2 μm in length and 0.5 μm in width ( Figure 1B). The nerve ring was located 70 μm from the anterior end of the larva and was 6 μm in length and 7 μm in width ( Figure 1B). The width of the initial portion of the intestine was 6 μm and the total length of the intestine was 158 μm ( Figure 1B). Anti-β-tubulin antibodies labelled the chitinous rods and intestine well, while the nerve ring was poorly shown by them. The chitinous rods measured 7 μm in length and the anal opening was located 25 μm from the final portion of the tail. 
It was also revealed that the distance from the tip of the tail to the slit that is present in this region was 4 μm ( Figure 1C). Using anti-collagen antibodies, collagen fibers were detected in the structures of L1, including the intestine, chitinous rods and nerve ring ( Figure 1D). The distance from the sheath of L1 to the wall of the body was 1 μm ( Figure 1D). When labeled with anti-α-tubulin, anti-collagen and anti-β-tubulin, L1 showed prominence of the posterior portion, especially the intestine ( Figure 1B-D). Initially, with epifluorescence microscopy without Apotome, staining with picrosirius red only revealed a few structures. However, after using the Apotome, it was possible to detect the chitinous rods, the nerve ring, the esophagus and the musculature around it, the intestine and the genital primordium of L1. Picrosirius red staining showed an oval-shaped genital primordium measuring 6 μm in length and located at 69 μm from the anal opening and 99 μm from the tip of the tail. The L1 musculature was 2 μm thick and had small oval-shaped cellular structures of 3 μm in length ( Figure 2A). Staining with Oil Red O showed intense presence of lipids throughout the body of L1 ( Figure 2B). Second-stage larvae (L2) The L2 larvae were located in the tissues of the intermediate host and were characterized by being slightly mobile, arched ('C' shaped) and brownish in color due to the presence of granules inside the intestinal cells. They measured about 420 ± 23 μm in length and 12 ± 2 μm in width and had two sheaths that occupied the entire interior space, as shown by ligth microscopy ( Figure 3A). Anti-α-tubulin antibodies revealed an esophagus of 147 μm in length. When dilated, the esophagus measured 12 μm in width and 27 μm in length. The nerve ring in L2 had a rectangular shape, measuring 15 μm in width, and was located 80 μm away from the tip of the anterior portion. The anti-α-tubulin antibodies also showed that the intestine as a compact mass of 27 μm in width, occupying almost the whole width of the larva, differently from L1, whose intestine was observed as a filament ( Figure 3B). Anti-β-tubulin antibodies marked the intestine more intensely than the esophagus. Every segment of the intestine of L2 was formed by small rounded cells ( Figure 3C). Only the gut became labelled with anti-β-actin antibodies and, when the Apotome was used, we observed two small papillae in the anterior portion, about 2 μm wide and 5 μm apart from each other. The oral opening measured 15 μm ( Figure 4C). The esophageal wall stained with picrosirius red ( Figure 4A). This morphological detail was not observed when L2 were incubated with the antibodies. The intestine was not shown with this dye when epifluorescence was used alone; however, after using the Apotome attachment, we were able to view the structures of the intestine, along with the esophageal composition, the excretory pore (30 μm in length), the genital primordium and the anal opening ( Figure 4A). Picrosirius red staining also revealed that the genital primordium was not as well-defined in L2 as it is in L1. Rather, it was a cluster of cells, 17 μm in length and 6 μm in width, located 180 μm away from the end of the tail ( Figure 4A). Lipids were recognized by means of the Oil Red O dye, throughout the body of L2 ( Figure 4B). Third-stage larvae (L3) L3 appeared free from sheaths and was clearer and more transparent than the other stages, with two chitinous rods arranged longitudinally at the anterior end. 
The tail ended with a digitiform appendix, and the larvae measured about 470 ± 25 μm in length and 19 ± 1 μm in width, as shown by light microscopy (Figure 5A). Anti-collagen antibodies also revealed the nerve ring (Figure 5B). Anti-α-tubulin antibodies revealed the chitinous rods, nerve ring, esophagus and intestine. The genital primordium and anal opening were labelled less intensely than was observed when the other stages were analyzed. The lateral cell layer was composed of rounded cells (Figure 5C and 5D). The 13 μm-long chitinous rods were 2 μm apart and presented a bulb at their base measuring 1.5 μm in length and 1 μm in width. The nerve ring was 64 μm away from the anterior end of the larvae, measuring 18 μm in length by 8 μm in width (Figure 5D). The length of the esophagus was 192 μm and it had a dilation in the final portion, which was 46 μm long and 20 μm wide (Figure 5D). The initial width of the intestine (19 μm) could be noted in the portion attached to the esophagus; the intestine measured 260 μm in length and 20 μm in width. Anti-β-tubulin antibodies marked the chitinous rods, nerve ring, intestine and the genital primordium (Figure 5C and 5D). The Oil Red O dye weakly stained the gut (Figure 5E). Figure 3. (A) "C"-shaped L2 presenting the second sheath (thin white arrow) in bright field, with the distance from the mouth to the end of the esophagus and the extent of the intestine indicated; (B) labeling with anti-α-tubulin antibodies, showing the esophagus, a dilation in its final portion, the nerve ring and the width of the intestine; (C) L2 with anti-β-tubulin antibodies, showing the intestine, the anal opening at its end and the length of the anal opening. Staining with picrosirius red dye revealed nerve ganglia, which were oval-shaped structures spaced symmetrically from the esophagus (Figure 6A and 6B). The dye also showed that nerve cells existed in the anterior region and were similar to the amphidial pouch found in some adult nematodes (Figure 6A). Furthermore, the genital primordium was 156 μm away from the end of the tail: it was 21 μm in length by 6 μm in width and was composed of 15 rounded cells (Figure 6B). The excretory pore (32 μm in length) extended into the esophagus and was located 66 μm away from the oral opening (Figure 6B). The labeling also revealed the full extent of the intestine (Figure 6B). In the final median portion of the intestine, oval-shaped cells were observed, symmetrically spaced 17 μm from one another (Figure 6B). Table 1 shows the morphological information on the intramolluscan larval stages of A. vasorum. Although the chitinous rods were present from the L1 stage onwards, they were twice as large in L3 as those found in the other two stages. The nerve ring was three times as long in L3 (18 μm) as in L1 (6 μm), but only 3 μm longer than in L2. The genital primordium developed greatly between the stages: it was 2.5 times smaller in L1 than in L2 and 3.5 times bigger in L3 than in L1. L2 showed the greatest distance between the genital primordium and the tail. The esophagus was more developed in L3.
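The fold-differences summarized above can be reproduced directly from the measurements reported in the Results. The short Python sketch below does so using only values quoted in the text; the dictionary layout and variable names are ours, and Table 1 itself is not reproduced here.

```python
# Measurements (micrometres) taken from the Results text above.
measurements_um = {
    "chitinous_rods": {"L1": 7, "L3": 13},
    "nerve_ring": {"L1": 6, "L2": 15, "L3": 18},
    "genital_primordium": {"L1": 6, "L2": 17, "L3": 21},
}

def fold_change(structure, stage_a, stage_b):
    """How many times larger the structure is in stage_b than in stage_a."""
    sizes = measurements_um[structure]
    return sizes[stage_b] / sizes[stage_a]

print(f"Chitinous rods, L3 vs L1: {fold_change('chitinous_rods', 'L1', 'L3'):.1f}x")
print(f"Nerve ring, L3 vs L1: {fold_change('nerve_ring', 'L1', 'L3'):.1f}x")
print(f"Genital primordium, L3 vs L1: {fold_change('genital_primordium', 'L1', 'L3'):.1f}x")
```

Running it gives roughly 1.9-fold for the chitinous rods, 3.0-fold for the nerve ring and 3.5-fold for the genital primordium between L1 and L3, in line with the comparisons stated above.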
Figure 4. (A) L2 stained with picrosirius red, with the genital primordium, the distance from the genital primordium to the final portion of the tail and the excretory pore indicated; (B) L2 stained with Oil Red O, with dark spots showing lipid droplets; (C) intestine viewed using the Apotome attachment, with the regions labelled by the anti-collagen, anti-β-tubulin and anti-β-actin antibodies indicated, together with two spherical structures at the anterior end of the larva revealed with anti-β-actin antibodies. Discussion Angiostrongylus vasorum undergoes two molts inside its mollusc hosts, developing from L1 to L2 and then from L2 to L3. The larval stages of A. vasorum recovered from A. fulica were similar to those found in Arion sp., Omalonyx matheroni and Subulina octona, since L1 showed a mean length of 335 μm, L2 a mean length of 420 μm and L3 a mean length of 485 μm (Guilhon & Cens, 1973; Bessa et al., 2000; Mozzer et al., 2011). The morphological details of the development of the digestive and reproductive tracts, which begins at the larval stages of A. vasorum, have not been well studied so far. Here, we described these details for the L1, L2 and L3 larval stages, which are not well observed through optical field microscopy. In L1, the antibodies and dyes used here revealed the oral opening, the chitinous rods and a pair of rounded structures close to these, along with the nerve ring, esophagus, intestine and anal opening. Although anti-β-actin antibodies gave the least intense labeling, they detected the intestine and nerve ring well. We also observed prominent areas throughout the interior of L1. Using picrosirius red, we were able to detect the initial formation of the chitinous rods in the anterior region of L1. The morphology revealed here suggests that the chitinous rods are part of the oral cavity, indicating that the oral cavity is in the process of formation. This structure seems to be part of the digestive system of L1 and would thus constitute a narrow, almost closed stoma. In L3, however, a dilation between the two structures was observed. These structures were defined as chitinous rods, and the dilation between them gives a stomal appearance to this apparent oral opening. The esophagus detected in L1 has the rhabditiform shape that is characteristic of first-stage larvae, as shown by Rosen et al. (1970). The L1 intestine was well characterized at 1 dpi, in the final third of the larvae, ending in the anal opening. This observation differs from what was presented by Lv et al. (2009), who found that the intestine of the L1 larvae of A. cantonensis had a dilated appearance only five days after infecting Pomacea canaliculata. The tail of L1 has an unguiform appendage, which is the main characteristic differentiating the L1 stage of Metastrongylidae from other parasitic nematodes (Rosen et al., 1970). By using these markers in L2, we were able to show a pair of rounded structures, as well as the nerve ring, esophagus, gut (which presented as a compact mass occupying virtually all of the posterior portion of the larva), genital primordium and anal opening. In the oral opening, two protuberances that were not present in the other stages were found.
The morphology of L2 is the least studied so far, probably due to the difficulty in recovering these larvae from the molluscs' tissues. Nevertheless, it is known that L2 has two sheaths: one that originated from L1 and is being eliminated; and the other that is under development to form the sheath in L3 (Guilhon & Afghahi, 1969;Bessa et al., 2000;Coaglio et al., 2016). The oral opening, the chitinous rods (anti-α-tubulin, anti-β-tubulin and picrosirius red), the nervous system (anti-α-tubulin, anti-β-tubulin, anti-collagen and picrosirius red) the excretory pore (picrosirius red), the esophagus (picrosirius red), the intestine (anti-α-tubulin and anti-β-tubulin), the genital primordium (picrosirius red), the anal opening (picrosirius red) and lipid reserves in the intestine (Oil rRed O) were all detected in L3. In the final portion of the tail, after the anal opening, we observed a set of cells with symmetrical sizes that may form the structures that will arise in the future stages of the parasite in the definitive host. The esophagus formed a bulb close to the intestine. Morphological information regarding structures such as the chitinous rods, nerve ring, excretory pore, genital primordium and esophagus has been reported for L1, L2 and L3 larvae of some metastrongyloids during intramolluscan development (López et al., 2005;Rebello et al., 2013;Coaglio et al., 2016;Colella et al., 2017). However, that information did not have the degree of detail presented here. Indeed, the present study reveals novel observations regarding chitinous rods in L1 and L2, nerve ganglia in L3, spherical structures located near the mouth region of L2 and lipid reserves, intense cellularity and body cavities in all three stages. The infecting stage is L3. Angiostrongylus vasorum is characterized by the presence of a pair of chitinous rods near the mouth. Our data suggest that these structures are present from the L1 stage onwards but only reach full development in L3 when they show a space between each other. They may constitute the stoma in the adult parasite. The parasites A. cantonensis, A. costaricensis and A. abstrusus also present structures similar to the chitinous rods found in A. vasorum that are characteristic for L3. To date, there have not been any reports from any morphological studies characterizing the internal morphology of intramolluscan nematodes (Ash, 1970;Hata & Kojima, 1990;Lv et al., 2009;Ohlweiler et al., 2010). The muscle structure that forms the intestine consists of a thick cellular assembly that ends in the anal opening. Also, the body musculature of L3 is denser than that of L1. Picrosirius red strongly labelled the final portion of the intestine of L3 and therefore may be considered to be a useful tool for studying intramolluscan nematode development. The tail ends with a digitiform appendix, which a characteristic of L3 that is common to A. vasorum, A. costaricensis and A. cantonensis (Ash, 1970;Guilhon & Cens, 1973;Hata & Kojima, 1990;Bessa et al., 2000). This study was the first to use antibodies to better characterize the structures of A. vasorum larvae. Other studies have used a similar strategy with antibody markers to characterize the structures of Schistosoma mansoni (Collins et al., 2011), Echinostoma paraensei (Souza et al., 2011), Ascaris suum (Fellowes et al., 1999;Williamson et al., 2009) and Caenorhabditis elegans (Albeg et al., 2011). 
Our study pioneered the use of picrosirius red to morphologically characterize parasites, since this dye had so far only been used to observe the production of collagen in histological tissue sections. For example, picrosirius red had been used on mouse liver to analyze its response when infected with Schistosoma mansoni (Lenzi et al., 1999) and on rat liver infected with Plasmodium berghei (Haque et al., 2011). This low-cost dye successfully stained L1, L2 and L3, thereby revealing almost all the structures that were also observed using antibodies. Staining with picrosirius red and labelling with cytoskeleton and collagen antibodies provided a more realistic view than did optical field microscopy, because these tools showed details about how the larvae are actually structured. The dyes and antibodies used here generated more detailed information about the larval structures. Therefore, future studies using these tools may help expand the current knowledge about the biological role of the structures of the larval stages of A. vasorum. Staining the larvae of A. vasorum with Oil Red O showed the main lipid reserve sites at each larval stage. Saturated lipids are the most important energy reserves in nematodes and account for 11-67% of the dry weight of the parasite's juvenile phase, whereas neutral fats such as triglycerides are the main form of energy storage in adults (Andaló et al., 2011). The lipid reserves observed in L1 were uniformly distributed throughout the body. Lipid droplets are the main storage sites for neutral lipids (Xu et al., 2012). The lipid reserve of L2, which remains immobile in the host's tissue, was found in the whole body. Mendonça et al. (2008) found similar results when staining L2 of A. costaricensis with Oil Red O. L3, which is the most active larval stage, spends more of its energy reserves than do the other two stages. Consequently, L3 has fewer lipid droplets concentrated in the intestine than are observed in the other larval stages. Conclusions Using anti-β-tubulin, anti-α-tubulin, anti-collagen and anti-β-actin antibodies, along with picrosirius red and Oil Red O staining, we revealed here unprecedented details regarding the internal morphology and morphometry of the intramolluscan larval stages of A. vasorum that should help expand the current understanding about the biology of this parasite. The picrosirius red dye revealed the musculature of A. vasorum and nerve ganglia in L3. The picrosirius red dye was better for morphological and morphometric analysis of A. vasorum L1, L2 and L3 in A. fulica, and it was also a cost-effective dye. Staining L1, L2 and L3 of A. vasorum with Oil Red O revealed that they store energy in the form of lipid droplets that change their main location as the larvae progress through their development.
2020-07-02T10:06:27.798Z
2020-06-26T00:00:00.000
{ "year": 2020, "sha1": "e1ec1606502da4b85adf3df61ba4f46e0c35096d", "oa_license": "CCBY", "oa_url": "http://www.scielo.br/pdf/rbpv/v29n2/1984-2961-rbpv-29-2-e000420.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "788104d3c69917f319b148810abe23f206f21174", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
237477154
pes2o/s2orc
v3-fos-license
Biological treatment in resistant adult-onset Still’s disease: A single-center, retrospective cohort study Objectives The aim of this study was to assess the demographic and clinical characteristics of patients with adult-onset Still’s disease (AOSD) under biological treatment. Patients and methods This retrospective cohort study included a total of 19 AOSD patients (13 males, 6 females; median age: 37 years; range, 28 to 52 years) who received biological drugs due to refractory disease between January 2008 and January 2020. The data of the patients were obtained from the patient files. The response to the treatment was evaluated based on clinical and laboratory assessments at third and sixth follow-up visits. Results Interleukin (IL)-1 inhibitor was prescribed for 13 (68.4%) patients and IL-6 inhibitor prescribed for six (31.6%) patients. Seventeen (89.5%) patients experienced clinical remission. Conclusion Biological drugs seem to be effective for AOSD patients who are resistant to conventional therapies. Due to the administration methods and the high costs of these drugs, however, tapering the treatment should be considered, after remission is achieved. It is well known that proinflammatory cytokines such as ferritin, interleukin (IL)-1, IL-6, IL-8, IL-18, tumor necrosis factor-alpha (TNF-a), and interferon-gamma (IFN-g) are responsible for manifestations of AOSD. 1,3,4 Macrophage activation syndrome, amyloidosis, disseminated intravascular coagulation, thrombotic thrombocytopenic purpura, microangiopathy, diffuse alveolar hemorrhage, and death may be seen due to the unsuppressed disease activity and continuing proinflammatory cytokine release. 1 Although the primary treatment option for AOSD is corticosteroids, it may be insufficient for one-third of patients. 5 Conventional immunosuppressive drugs (methotrexate, cyclosporine, leflunomide) may be necessary for remission induction and tapering corticosteroids. 5 Biological drugs may be required for refractory disease. Due to the well-known effects of IL-1, IL-6, and TNF-a in the pathogenesis of the disease, inhibition of these pathways are favorable treatment options. 6 There is a limited number of data in the literature regarding biological drug usage in refractory AOSD, and the clinical manifestations affecting the preference of biological drugs. In the present study, we aimed to assess the demographic and clinical characteristics of the patients with AOSD receiving biological drugs who were resistant to conventional therapies. PATIENTS AND METHODS This single-center, retrospective cohort study was conducted at Ankara Gulhane Training and Research Hospital, Rheumatology outpatient clinic between January 2008 and January 2020. A total of 59 patients with AOSD were screened. A total of 19 AOSD patients (13 males, 6 females; median age: 37 years; range, 28 to 52 years) who were resistant to conventional treatment and under biological treatment were included in the study. Data regarding the demographic and clinical characteristics of the patients and treatment regimens were received from the patient files. All patients were diagnosed with AOSD according to the Yamaguchi et al.'s criteria. 7 The patients with missing data and without follow-up were excluded. Malignancies, infectious diseases and other inflammatory diseases were also excluded, before the diagnosis of AOSD. A written informed consent was obtained from each patient. 
The study protocol was approved by the Gülhane Training and Research Hospital Ethics Committee (No: 2020-301, Date: 30/06/2020). The study was conducted in accordance with the principles of the Declaration of Helsinki. Biological drugs were prescribed to patients with clinically and laboratory-active disease. Before starting a biological drug, all patients had received at least one conventional therapy (corticosteroids, methotrexate, leflunomide, and cyclosporine-A). Anakinra (an IL-1 inhibitor) and tocilizumab (an IL-6 inhibitor) were used as biological therapy, depending on the clinical and laboratory findings of the patients. Clinical remission was defined as the absence of clinical and laboratory findings of active disease for at least two consecutive months. A flare was defined as the need for additional treatment or an increase in the dosage of the currently used drugs due to new clinical and laboratory activation in a patient in remission. Disease with flares was accepted as refractory disease. Resistant disease was defined as ongoing disease activity, regardless of treatment, for at least two consecutive months. The definitions of disease activity were based on the available studies in the literature. 8,9 Statistical analysis Statistical analysis was performed using SPSS for Windows version 11.5 (SPSS Inc., Chicago, IL, USA). The Kolmogorov-Smirnov test was used to assess the normality assumption. Normally distributed continuous variables were expressed as mean ± standard deviation (SD), while non-normally distributed continuous variables were expressed as median (interquartile range [25th-75th percentiles]). Categorical variables were expressed as numbers and frequencies. were negative in all patients. Conventional immunosuppressive drugs used to treat AOSD were methotrexate (n=16, 84.2%), leflunomide (n=7, 36.8%), and cyclosporine-A (n=6, 31.6%). All patients were using corticosteroids before or during the first three months of the biological treatment (Table 1). Seventeen (89.5%) patients achieved clinical and laboratory remission, of whom 11 (84.6%) were using anakinra and six (100%) were using tocilizumab. Anakinra treatment Anakinra was used in 13 (68.4%) patients. Eight (61.5%) patients were in the 16-35-year age group, in which AOSD is more common. The median duration of anakinra treatment was 14 (range, 5 to 85) months. The clinical manifestations of the patients in this group were generally systemic: fever (100%), sore throat (92.3%), arthralgia (84.6%), elevation of transaminases or hepatosplenomegaly (53.8%), and salmon-pink rash (46.2%) were observed. The patients in the anakinra group received methotrexate and cyclosporine as conventional immunosuppressive drugs (Table 1). The other patients received anakinra as the first biological drug. All patients used anakinra at a dose of 100 mg/day. One (7.7%) patient receiving anakinra had a disease relapse at 10 years of treatment. Disease remission occurred in the first month of treatment in this patient after the anakinra dose was increased to 200 mg/day. Corticosteroids were discontinued in all patients during the first six months of follow-up, except for one (7.7%) patient with liver transplantation. Anakinra was discontinued in five (45.5%) of 11 patients who were in remission. The median time to cessation of anakinra was 14 (range, 5 to 85) months. Two (18.2%) of these patients were using methotrexate as maintenance therapy, and three (27.3%) patients were being followed without treatment (Table 2).
One (7.7%) patient had a cutaneous reaction at the anakinra injection site, which improved with antihistaminic therapy. No other side effects were observed. Tocilizumab treatment Tocilizumab was used in six (31.6%) patients. There were no patients under 35 years of age in the tocilizumab group. The median treatment duration for tocilizumab was 22 (range, 9 to 43.5) months. Resistant polyarticular manifestations, as well as systemic ones, were common in the tocilizumab group. The rate of fever was 100%, arthritis 83.4%, and arthralgia, hepatosplenomegaly, sore throat, and salmon-pink rash 66.7% each in the tocilizumab group. Methotrexate was the most frequently prescribed conventional immunosuppressive drug. The second conventional immunosuppressive drug prescribed before the introduction of a biological drug was leflunomide (Table 1). In the first three months, all of the patients in the tocilizumab group achieved remission. Corticosteroids were discontinued in the first six months of follow-up. No adverse reaction was seen in the tocilizumab group. DISCUSSION Corticosteroids and conventional therapies are usually successful in controlling disease activity in patients with AOSD. However, in a considerable number of patients, life-threatening clinical manifestations may occur due to ongoing disease activity. When the disease activity cannot be suppressed with conventional therapies, biological drugs, which inhibit the pathogenetic cytokine pathways responsible for the clinical findings of the disease, may be a treatment option. The current study showed that tocilizumab was predominantly preferred for patients with refractory articular manifestations. Adult-onset Still's disease is an autoinflammatory disease which presents in genetically predisposed individuals with involvement of the innate and adaptive immune systems. 10 Although many cytokines play a role in the development of the clinical findings of AOSD, IL-1b is the main cytokine responsible for the clinical manifestations. 11 Triggering factors, such as infections or environmental factors, lead to the secretion and activation of the proinflammatory cytokines IL-1b and IL-18 by provoking dysregulation of NOD-like receptor 3 protein (NLRP3). Also, Toll-like receptor 7 stimulates dendritic cells to activate neutrophil migration by inducing T helper 17 responses. IL-1b may induce TNF-a, IL-6, and IL-8 secretion. 12 IL-6 induces the production of ferritin from hepatocytes, leading to the burst of clinical findings such as fever and salmon-pink rash. On the other hand, IL-18 activates the secretion of IFN-g, which acts as the main cytokine of macrophage activation syndrome. 10,13,14 Although many cytokines are implicated in the formation of the broad clinical spectrum of AOSD, IL-1, IL-6, and TNF-a in particular are target cytokines for the treatment of patients whose clinical findings cannot be suppressed with conventional therapies. 15,16 Still's disease was first described by Sir George Frederick Still in 1897 as systemic juvenile arthritis. 2 The adult form of Still's disease is a rare entity, which is commonly seen among 16- to 35-year-old female patients. 2,17,18 The current study found a higher rate of male and older patients. Also, all of the patients who received tocilizumab were older than the patients in the anakinra group, and the chronic polyarticular form of the disease was more common among them. Kalyoncu et al. 19 reported that male sex, young age, and having polyarthritis were related to a chronic disease course and refractory disease.
The higher rate of both male patients and patients with chronic articular disease courses in the current study may be related to the poor prognostic factors which lead to the occurrence of resistant disease. Additionally, a 56-year-old patient was diagnosed with lung cancer at the sixth month of the tocilizumab therapy, indicating that older patients with AOSD should be followed carefully for the development of new-onset malignancies. Also, AOSD may present as a paraneoplastic syndrome. 20 Thus, a thorough screening for malignancy is required in patients with AOSD. The arthritis prevalence was lower in the current study than the studies performed with AOSD patients using conventional therapies. 10,19 The patterns of arthritis were mainly oligoarticular or monoarticular in the studies evaluating conventional therapies, whereas, in the current study, all of the patients had refractory polyarthritis. The patients with monoarthritis and oligoarthritis may have benefited from conventional therapies, whereas polyarthritis may be resistant to conventional therapies. 19 Also, the rate of the patients with arthritis was higher in the tocilizumab group than the anakinra group; however, the difference was not statistically significant. A recent study conducted in Italy investigated the efficacy of IL-1 inhibitory treatment in patients with AOSD and showed that the prevalence of systemic manifestations was 74.2%, 21 similar to our study. Also, they reported an improvement in chronic articular disease with high Disease Activity Score 28 (DAS28). On the other hand, the literature data regarding the effect of IL-1 inhibitors on articular manifestations are controversial. Besides, few studies have shown that IL-1 inhibitors may not be sufficient for controlling the chronic articular disease, as well as controlling systemic disease. [22][23][24] A different pathway other than IL-1 may be responsible for chronic articular manifestations. In the pathogenesis of the chronic articular form of AOSD, which resembles rheumatoid arthritis, TNF-a and IL-6 may play a more crucial role than IL-1. 25 Although the elevation of transaminases can be frequently observed in the course of AOSD, fulminant hepatic failure is a rare manifestation. Anakinra was prescribed for a patient (Patient No: 9) with hepatic failure who required liver transplantation. The patient is still under follow-up with remission and with normal transaminases under the treatment of low-dose corticosteroid, a calcineurin inhibitor, and anakinra. 26 Although the data regarding the use of anakinra in the patients with liver transplantation are limited, the patient was treated based on the data of the efficacy and safety of anakinra among the patients with renal transplantation. Similar to the previous studies, the most commonly preferred conventional immunosuppressive drug before the commencement of biological drug was methotrexate. 23,27,28 Cyclosporine was the second most common preferred drug for patients with hepatic transaminase elevations and who were receiving anakinra. Leflunomide was the drug secondly prescribed for the patients with mainly articular symptoms and who were receiving tocilizumab, consistent with the literature. 22,27 Fewer adverse reactions were observed in the current study, compared to previous studies. 22,29,30 There is no randomized-controlled study investigating the use of subcutaneous tocilizumab in patients with AOSD and most data are retrieved from case series. 
29,30 In the current study, three patients were using subcutaneous tocilizumab, two of them received subcutaneous form as the first administration, and one received subcutaneous form after achieving remission with intravenous form. A patient's treatment was discontinued after lung cancer was diagnosed. The other two patients were under follow-up with remission with subcutaneous form of tocilizumab, and no adverse events were observed. Two patients in the study groups died from active disease and multiorgan failure eventually. The rest of the patients were followed with remission. The rate of the patients in remission and whose biological therapy was discontinued due to the remission were higher than the results of the Colafrancesco et al.'s study. 21 In the literature, the data for cessation tocilizumab in the patients with remission are based on case reports. Frequently, lengthening the dosing interval or reducing the dose were preferred methods. 29 In the study presented by Reihl Crnogaj et al., 28 three of four patients had disease flares after cessation of tocilizumab due to the remission of the disease. This study has certain limitations. First, a small number of patients were included in the study. Second, there was no control group who were using only conventional therapies. Further large-scale studies including those using both conventional and biological therapies may provide more accurate results on the treatment, clinical, and laboratory course of the disease. In conclusion, in the course of AOSD, biological drugs may be rarely required for patients with active disease and arthritis resistant to conventional therapies. However, many cytokines play a role in the pathogenesis of the disease, inhibition of the main cytokines with biological drugs is crucial. Using IL-1 inhibitors for the improvement of mainly systemic symptoms and using IL-6 inhibitors for the improvement of mainly chronic articular symptoms seem to be rational. Besides, due to both the potential adverse events and the high costs of the drugs, reducing the dose, lengthening the dosing interval, and ceasing the drugs should be the key points to be considered for patients with remission. Declaration of conflicting interests The authors declared no conflicts of interest with respect to the authorship and/or publication of this article.
2021-09-11T09:30:24.496Z
2021-06-24T00:00:00.000
{ "year": 2021, "sha1": "f18c82e7a3f070819dd3f1feb9f03deb4db97325", "oa_license": "CCBYNC", "oa_url": "https://archivesofrheumatology.org/full-text-pdf/1304", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "f3920c9721c81f16f740d681a946e46498f0573a", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
257596203
pes2o/s2orc
v3-fos-license
The effects of 4,6-Dichloro-2-Sodiooxy-1,3,5-Triazine on the Fibrillation Propensity of Lyocell Fibers ABSTRACT A dichlorotriazine reactive crosslinker has been used to reduce the fibrillation of Lyocell. The fibrillation propensity of Lyocell fibers treated with 4,6-dichloro-2-sodiooxy-1,3,5-triazine and the mechanism of the treatment were studied, and the mechanical and dyeing properties of the treated Lyocell fibers were evaluated. The results showed that a three-step reaction process gives Lyocell a more uniform reduction in fibrillation propensity: the first step is the physical mixing of the triazine with the cellulose, and the second and third steps are the stepwise reactions of cellulose with the two chlorines of the dichlorotriazine under different conditions. At the end of the reaction, non-fibrillating Lyocell fibers were produced. In addition, the structure, mechanical properties and dyeing properties of the Lyocell fibers were not affected by the cross-linking reaction. In conclusion, this study provides a more uniform and economically reasonable route for the preparation of commercial non-fibrillating Lyocell fibers. Introduction Lyocell fibers are regarded as a green fiber with good application prospects in the 21st century. They are spun from a cellulose solution in an N-methylmorpholine-N-oxide system by dry-jet wet spinning (Jiang et al. 2020; Poongodi et al. 2021). However, the main problem that restricts the further promotion of Lyocell fibers is their pronounced fibrillation behavior. The crystal units of Lyocell fibers form a highly oriented arrangement through dense stacking (Gerhard et al. 2013; Okugawa, Yuguchi, and Yamane 2021; Sawada et al. 2021), so that the crystal units have a strong binding force in the longitudinal direction but only a weak transverse force (Zhang, Okubayashi, and Bechtold 2005). When the fiber is subjected to frictional forces, the crystallites or microfibrils slide past each other in the longitudinal direction and produce a fibrillation tendency (Schurz and Lenz 1994). In addition, small molecules such as water and ethanol can promote this process (Okugawa, Yuguchi, and Yamane 2020, 2021). Process control alone can hardly solve the fibrillation problem of Lyocell fibers effectively (Mi et al. 2015; Mortimer and Péguy 1996). Cellulase and alkali pre-treatments have been used to decrease the fibrillation of Lyocell fibers (Natarajan, Rajan, and Subrata 2022; Periyasamy 2020). Nevertheless, in these processes chemicals are used merely to remove microfibrils from the Lyocell surface (Natarajan, Rajan, and Subrata 2022), so they do not change the fibrillation propensity of Lyocell fibers permanently. The most mature way to solve this problem is to form covalent bonds between cellulose molecules through cross-linking (Bates et al. 2006; Bates, Phillips, and Renfrew 2007; Natarajan, Rajan, and Subrata 2022; Perepelkin 2007). For example, Tencel-A100 and Tencel-LF are produced by this technology (Taylor 2016). However, there are a few problems with all kinds of cross-linked fibers. Fibers produced with acrylamide-based crosslinkers have defects such as excessive formaldehyde content (Faizan et al. 2018; Jaturapiree et al. 2011; Petersen 1987), and this kind of reactive crosslinker has potential toxicity (Bates, Phillips, and Renfrew 2007), so it has been gradually phased out. Resin reactive crosslinkers cause problems such as the degradation of fabric performance (Faizan et al.
2018; Petersen 1987). Dichlorotriazine reactive crosslinkers have also had problems such as a nonuniform reduction in fibrillation propensity. In this paper, the fibrillation propensity of Lyocell fibers treated with 4,6-Dichloro-2-sodiooxy-1,3,5-triazine and the mechanism of the treatment were studied systematically in order to mitigate the production cost and enhance the uniformity of the decreased fibrillation propensity. Firstly, the effect of different process technologies on the fibrillation propensity was studied. Then, a new cross-linking process was proposed by studying the reaction mechanism and optimizing the process parameters. Finally, non-fibrillating Lyocell fibers were continuously prepared on the beltline; the resulting process has the characteristics of low cost, good uniformity and excellent mechanical properties, and it provides an important reference for the industrialization of non-fibrillating Lyocell fibers. Preparation of non-fibrillating Lyocell fibers The non-fibrillating Lyocell fibers were prepared using the self-made production test line (Figure 2). The fibers pass over the hot roll at a running speed of 40 m/min. Steam is introduced to prevent the moisture on the fibers from evaporating, and temperature sensors are set to control the fiber temperature through the hot-rolling temperature. Depending on the pH value required for the experiment, sodium hydroxide, sodium carbonate or sodium bicarbonate was mixed with a 1% 4,6-Dichloro-2-sodiooxy-1,3,5-triazine solution, which was then added to the Lyocell fibers at a soaking ratio of 1:5. Accurate control of the pH value, reaction time and reaction temperature is essential for carrying out the cross-linking reaction: the pH value is controlled by the content of sodium hydroxide and other reagents in the added solution; the reaction time is controlled by the running speed and the winding distance of the fibers on the roller; and the reaction temperature is controlled by the hot-rolling temperature. Wet abrasion resistance measurement The propensity to fibrillate was determined from the wet abrasion resistance time (Bates et al. 2006; Bates, Phillips, and Renfrew 2007). The basic principle of this method is that a single fiber is rubbed constantly under wet conditions, and the time to wear through differs for Lyocell fibers with different fibrillation propensities; a longer time indicates a lower fibrillation propensity. As shown in Figure 3, the fibers are stretched under a certain weight and are constantly rubbed by a roller. A layer of cotton cloth is uniformly wound around the surface of the roller to provide friction. The time needed to break the fibers is taken as the wet abrasion resistance time. FTIR analysis A Nicolet-10 Fourier transform infrared spectrometer was used to record the infrared spectra of the fibers. The IR spectra were scanned over the wave number range of 4000-500 cm−1. XPS analysis XPS spectra were measured by X-ray photoelectron spectroscopy (ESCALAB 250, USA). The samples were irradiated with monochromatic Al Kα radiation (100 eV) using a spot size of 500 µm × 500 µm. In addition, high-resolution XPS spectra of N 1s and Cl 2p were recorded with a pass energy of 30 eV and an energy step size of 0.100 eV, from which the surface chemical compositions were obtained. To ensure reproducibility, the samples were analyzed in duplicate or triplicate, and data analysis was performed using the instrument software.
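Before moving on to the remaining characterization methods, it is worth making the timing control from the preparation section explicit: the reaction time at each temperature is set indirectly, through the 40 m/min line speed and the length of fiber path wound on the heated roller. The Python sketch below shows that relation; the path lengths are hypothetical values chosen only to illustrate how a contact time of about 2 min would be obtained, not figures reported in the paper.

```python
LINE_SPEED_M_PER_MIN = 40.0  # line speed taken from the text

def residence_time_min(path_length_m, speed_m_per_min=LINE_SPEED_M_PER_MIN):
    """Time the running fiber spends on a heated roller section of the given path length."""
    return path_length_m / speed_m_per_min

# Hypothetical roller path lengths, for illustration only.
for stage, path_m in [("second step (50 degC)", 80.0), ("third step (90 degC)", 80.0)]:
    print(f"{stage}: {residence_time_min(path_m):.1f} min of contact")
```

With an 80 m path at 40 m/min, the residence time works out to 2 min, which is the order of contact time used for the cross-linking steps reported later.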
XRD analysis X-ray diffraction patterns of the Lyocell fibers were measured by a reflection method and recorded on an X-ray diffraction apparatus (Siemens-Bruker D5000, Germany) using Cu Kα radiation. Scattered radiation was detected in the range of 2θ = 5-60° at a scan step size of 0.05°. Tensile mechanical properties The mechanical properties of the fibers were evaluated with an XQ-1 tensile tester (Donghua University, China). The test length was 20 mm and the tensile speed was 5 mm/min. To ensure reproducibility, the mean values of 50 samples were taken. Analysis of fiber morphology by SEM The apparent morphology of the Lyocell fibers was examined by SEM (S-4700, Japan). The fiber surfaces were coated with approximately 20 nm of copper to make the samples more conductive and suitable for SEM analysis. The SEM was operated at 10 kV. Dyeing Lyocell fibers were dyed in a dyeing machine (HBC-24, China) with 0.5% on mass of fibers (o.m.f.) of the specified dye at a liquor-to-goods ratio of 30:1, following the commercial conditions recommended by the dye manufacturers. Fibers were initially immersed in liquor containing the dye and 20 g/L sodium sulfate before the temperature was raised to 30°C, and the machine was run at this temperature for 15 min. Ten grams per liter of sodium carbonate was then added, the temperature was raised to 60°C at a rate of 1°C/min, and dyeing continued for another 60 min. The fibers were then removed from the dye bath and rinsed thoroughly in deionized water prior to after-soaping, final rinsing and air drying. For all application methods, at the end of the dyeing and soaping, the fibers were removed from the bath and the absorbance of the liquor was measured at the wavelength of maximum absorption. The substantivity (S) and fixation (F) of the compound for the dried substrate were then calculated by the reference method (Phillips, Reisel, and Renfrew 2008). The effect of production process on the fibrillation propensity of Lyocell fibers The chemical reaction between dichlorotriazine and cellulose is a nucleophilic substitution reaction. The first step of the reaction is the nucleophilic addition of the hydroxyl groups of the cellulose to the carbon atoms with the lowest electron cloud density. Owing to the high electronegativity of the chlorine atoms, the electron cloud density is lowest at the carbons adjacent to the chlorine atoms. The first step of the reaction generates a negatively charged intermediate product. In the second step of the reaction, the chlorine atom leaves the intermediate and a covalently bonded product is formed. Therefore, when one of the chlorine atoms undergoes nucleophilic substitution and is eliminated, the electron cloud density of the remaining reactive carbon atom increases, which may make the reaction between the other chlorine and cellulose more difficult (Ibbett et al. 2010; Phillips, Reisel, and Renfrew 2008). In addition, the reactivity between dichlorotriazine and cellulose increases with increasing temperature, and no reaction occurs at low temperature (Ibbett et al. 2010; Phillips, Reisel, and Renfrew 2008). Therefore, the technological process has an essential effect on the fibrillation propensity of Lyocell fibers. There are four common processes, as shown in Figure 4(a).
After the first-step reaction of process 3, the Lyocell fibers show no reduction in fibrillation propensity. After the second, heated reaction, the fibers show decreased fibrillation (Figure 4(b)), and the uniformity of process 3 is higher than that of process 2 (Figure 4(c)). This is because the cross-linking reaction does not occur at the lower temperature, which allows the 4,6-Dichloro-2-sodiooxy-1,3,5-triazine to be distributed uniformly on the fibers and thus enhances the reduction in fibrillation propensity. Process 4 has no influence on the fibrillation propensity in the first and second reactions; the decreased fibrillation propensity appears during the final heating stage (Figure 4(b)), and both the decreased fibrillation propensity and its uniformity are significantly improved compared with the other processes (Figure 4(c)). The decreased fibrillation propensity of process 4 is better than that of process 3, with a degree of confidence greater than 98% by one-way ANOVA. This is because the gradient heating process avoids the hydrolysis of 4,6-Dichloro-2-sodiooxy-1,3,5-triazine, which helps to improve the final decreased fibrillation propensity and its uniformity. Therefore, the most reasonable process is the three-step reaction (process 4). The first step is the uniform distribution of the 4,6-Dichloro-2-sodiooxy-1,3,5-triazine at low temperature, to improve the non-fibrillating uniformity of the Lyocell. The second step is to generate the first chemical bond between one chlorine of the 4,6-Dichloro-2-sodiooxy-1,3,5-triazine and the cellulose at medium temperature. The third step is to generate the second chemical bond between the other chlorine of the 4,6-Dichloro-2-sodiooxy-1,3,5-triazine and the cellulose at high temperature. At the end of the reaction, non-fibrillating Lyocell fibers are produced. This process provides a more uniform effect and an economical route for the preparation of commercial non-fibrillating Lyocell fibers. Next, the process steps are optimized. Effect of different conditions on the Lyocell fibrillation propensity For the reaction sequence of process 4 described above, the reaction conditions were optimized in this section. Effect of reaction temperature on the Lyocell fibrillation propensity The reaction temperature affects the reactivity between dichlorotriazine and cellulose and determines the ultimate non-fibrillating propensity and uniformity. The influence of the temperature of the first step on the non-fibrillating propensity and uniformity is shown in Figure 5(a). In the range of 20-50°C, there is no significant difference in the final non-fibrillating propensity at different temperatures. However, an excessively high temperature reduces the uniformity of the non-fibrillating behavior considerably. This is because an increase in temperature leads to an early chemical reaction while the 4,6-Dichloro-2-sodiooxy-1,3,5-triazine has not yet been evenly distributed, resulting in a significant decline in non-fibrillating uniformity. Following consideration of the energy consumption, the optimal reaction temperatures of the first, second and third steps are 30°C, 50°C and 90°C, respectively.
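For reference, the optimized three-step schedule can be written down as a simple configuration. The temperatures come from this subsection, while the pH and time values (pH 13, about 2 min for steps 2 and 3) are those reported in the following subsection; the data structure and field names below are ours, for illustration only, and are not anything used by the authors.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Step:
    temperature_c: float
    ph: Optional[float]       # pH has no significant effect on the distribution step
    time_min: Optional[float]

# Optimised schedule consolidating values reported in this and the next subsection.
THREE_STEP_PROCESS = [
    Step(temperature_c=30.0, ph=None, time_min=None),  # uniform triazine distribution
    Step(temperature_c=50.0, ph=13.0, time_min=2.0),   # first chlorine reacts with cellulose
    Step(temperature_c=90.0, ph=13.0, time_min=2.0),   # second chlorine reacts with cellulose
]

for number, step in enumerate(THREE_STEP_PROCESS, start=1):
    print(f"step {number}: {step}")
```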
Effect of reaction pH value and reaction time on the Lyocell fibrillation propensity In general, an increase in pH value increases the amount of ionized cellulose and the reactivity of the cross-linking reaction, but if the pH value is too high, it promotes fibrillation of the Lyocell fibers. The first-step reaction mainly consists of the uniform distribution of the 4,6-Dichloro-2-sodiooxy-1,3,5-triazine, and the pH value of the system has no significant effect on this process. Therefore, the pH values of the second and third reactions were investigated. Figure 6(a) shows the effect of the pH value of the second reaction on the non-fibrillating propensity. Under conditions of pH 12.0-13.5, the cross-linking reaction can be completed within 2 min. The non-fibrillating propensity of the Lyocell fibers first increases and then decreases as the pH value increases, because an increase in pH value increases the reactivity between dichlorotriazine and cellulose, but a high pH can promote fibrillation of the Lyocell. Figure 6(b) indicates the effect of the pH value on the non-fibrillating propensity in the third-step reaction. It can be seen that the non-fibrillating propensity increases significantly with increasing pH value over the range 12.0-13.5. In addition, the pH value influences the reaction time. When the pH value is 12, the reaction is quite slow. When the pH value is 12.5, the reaction takes a long time to complete, which is difficult to accommodate in practical production. When the pH value is 13 or 13.5, the reaction can be completed in a short time, and there is no significant difference between these two values. Following consideration of the energy consumption, the best pH value of the second-step reaction is 13, and the best pH value of the third-step reaction is also 13. The structure and reaction mechanism of the cross-linked Lyocell during preparation Non-fibrillating Lyocell fibers were prepared using the optimum process described previously (process 4). The optimal reaction conditions of the first, second and third steps are 30°C, 50°C (pH = 13, 2 min) and 90°C (pH = 13, 2 min), respectively. In this section, the reaction mechanism is discussed. FTIR and XPS test In order to understand the process in depth, the state of the Lyocell fibers at the different stages of the "three-step process" was characterized by FTIR spectroscopy, XPS spectroscopy and XRD. As shown in Figure 7(a), the characteristic peak at 1690 cm−1 corresponds to the stretching vibration of the triazine ring skeleton and of its double bonds, and no new characteristic peak appeared in the FTIR spectra after the first reaction step. This shows that the 4,6-Dichloro-2-sodiooxy-1,3,5-triazine did not react with the cellulose in the first reaction step. In addition, the infrared absorption frequency and vibration strength of an organic molecule depend on the force constant of the specific chemical bond, and the force constant in turn depends on the way in which the electrons are distributed in the molecule. The inductive effect of the chlorine atoms increases the force constant of the double bond and therefore increases the vibration frequency. This is why the wavenumber of the stretching vibration peak of the triazine ring and its double bonds is relatively high.
A new characteristic peak appeared at 1560 cm⁻¹ during the second step of the reaction. It shows that the cellulose reacts with the first chlorine atom of the 4,6-Dichloro-2-sodiooxy-1,3,5-triazine. At the end of this reaction, the 4,6-Dichloro-2-sodiooxy-1,3,5-triazine has eliminated one chlorine atom, leading to a decrease in the wavenumber. Therefore, the wavenumber of the stretching vibration peak of the triazine ring and the double bond decreased from 1690 cm⁻¹ to 1560 cm⁻¹.
The characteristic peak at 1560 cm⁻¹ disappeared when the third-step reaction was carried out. It shows that the chemical reaction occurs between cellulose and the second chlorine atom of 4,6-Dichloro-2-sodiooxy-1,3,5-triazine. At the end of this reaction, the 4,6-Dichloro-2-sodiooxy-1,3,5-triazine has eliminated both chlorine atoms, leading to a decrease in the force constant of the double bond, and the wavenumber decreases accordingly. Therefore, there is no obvious characteristic peak in the FTIR spectrum.
High-resolution scan XPS spectra of N 1s and Cl 2p are shown in Figure 7(b). The phenomenon in Figure 7(b) verifies the conclusion of Figure 7(a). There are chlorine atoms on the Lyocell fibers after the second-step reaction, and the chlorine atoms on the Lyocell fibers disappear after the third-step reaction. Nitrogen is visible on the Lyocell fibers after the second-step reaction because of the reaction between 4,6-Dichloro-2-sodiooxy-1,3,5-triazine and cellulose, and the nitrogen content does not vary after the third-step reaction. It is worth noting that the signal intensity of the spectrum is weak due to the minimal element content.
XRD test
Figure 8 shows the XRD results of the Lyocell fibers at different stages. After calculation, the Crystallinity Index (CrI) did not change significantly during the whole process. This shows that the reaction occurs in the amorphous region or on the surface of the crystalline region, and does not change the crystalline structure of Lyocell.
SEM test
The reference method promoted fibrillation of the fibers (Mi et al. 2015). SEM results showed that non-fibrillating Lyocell fibers were obtained after the third-step reaction (Figure 9). Before that, the Lyocell fibers did not have the capability of decreased fibrillation.
Reaction mechanism
In summary, the first step of the "three-step process" is to distribute the 4,6-Dichloro-2-sodiooxy-1,3,5-triazine at low temperature, where no chemical reaction has yet occurred. The second step is to generate the first chemical bond between cellulose and 4,6-Dichloro-2-sodiooxy-1,3,5-triazine at medium temperature. In the third step, cellulose and 4,6-Dichloro-2-sodiooxy-1,3,5-triazine generate the second chemical bond at high temperature and finally produce the non-fibrillating propensity. This process is illustrated in Figure 10.
Mechanical and fibrillation properties
Mechanical properties are one of the main factors restricting the application of Lyocell fibers. Table 1 presents the results of fiber tests of Lyocell treated with 4,6-Dichloro-2-sodiooxy-1,3,5-triazine. As a result of cross-linking, the tenacity and elongation of the Lyocell fibers are reduced, but within an acceptable range. In addition, there were no significant differences in the non-fibrillating propensity between this study and the products sold on the market (Lyocell-LF and Lyocell-A100).
Dyeing capacity of non-fibrillating Lyocell fibers
Dyeing performance is one of the main factors restricting the application of Lyocell fibers. Therefore, this section explores the influence of the cross-linking process on the dyeing performance of Lyocell fibers. Photographs of the dyed fibers are presented in Figure 11. The dyeing properties of the different Lyocell fibers were estimated by measuring dye substantivity (S) and fixation (F) (Table 2). In addition, the fibrillation properties after dyeing were also analyzed (Table 2).
The non-fibrillating ability of the cross-linked fibers does not change significantly regardless of which dye is used. The substantivity (S) and fixation (F) values (Table 2) indicate that the dyeing capacity of Lyocell and non-fibrillating Lyocell fibers is comparable and suitable for an effective commercial treatment. In the visual analysis of the dyed samples presented in Figure 11, no differences are observed. This aspect is supported by the values for substantivity (S) and fixation (F). This shows that the cross-linking reaction process does not affect the dyeing performance of the Lyocell fibers. In addition, there were no significant differences in the dyeing capacity between the fibers treated according to the procedure presented in this study and the products sold on the market (Lenzing Lyocell LF).
Conclusion
In order to improve the non-fibrillating propensity and uniformity of non-fibrillating Lyocell fibers, the influence of the cross-linking process, the reaction mechanism and the parameter optimization were studied in this article. The cross-linking process has a significant effect on the non-fibrillating propensity and uniformity of Lyocell fibers. A direct heating process results in a significant decline in the non-fibrillating propensity of Lyocell fibers, whereas a low-temperature step promotes the uniform distribution of 4,6-Dichloro-2-sodiooxy-1,3,5-triazine and improves the uniformity of non-fibrillating Lyocell fibers.
Uniform and excellent non-fibrillating Lyocell fibers can be prepared by a three-step reaction process. The FTIR and XPS results showed that the 4,6-Dichloro-2-sodiooxy-1,3,5-triazine was evenly distributed on the fibers at low temperature. The 4,6-Dichloro-2-sodiooxy-1,3,5-triazine and cellulose form the first and second chemical bonds at medium and high temperatures, respectively, and non-fibrillating Lyocell fibers are produced after the formation of the second chemical bond. In addition, the chemical reactions occur on the surface of the crystalline region or in the amorphous region of the cellulose. The chemical reaction process does not change the structure of the Lyocell fiber, and it has little effect on the mechanical properties and dyeing capacity of the Lyocell fibers. Overall, this study provides a better process for the preparation of non-fibrillating Lyocell fibers.
Figure 3. Test method of wet abrasion resistance.
Figure 4(b) shows the change curve of the fibrillation propensity of Lyocell fibers under the different processes, and Figure 4(c) shows the final fibrillation propensity of the fibers for the different processes.
Figure 4. Comparison of different process routes: (a) temperature rise curve; (b) change curve of fibrillation propensity; (c) final fibrillation propensity.
Figure 5. Effect of reaction temperature on non-fibrillating propensity: (a) first-step reaction: the temperature of the first step is variable, the temperature of the second step is 50°C, and the temperature of the third step is 90°C; (b) second-step reaction: the temperature of the first step is 20°C, the temperature of the second step is variable, and the temperature of the third step is 90°C; (c) third-step reaction: the temperature of the first step is 20°C, the temperature of the second step is 50°C, and the temperature of the third step is variable.
Figure 6. Time-varying curve of non-fibrillating propensity under different pH values: (a) second-step reaction; (b) third-step reaction. (The temperatures of the three steps are 30, 50 and 90°C, respectively.)
Figure 9. Microscopic photographs of Lyocell after fibril induction for 30 min in 1 g/l NaOH solution: (a) after the first step; (b) after the second step; and (c) after the third step.
Table 1. Mechanical properties and fibrillation properties for different Lyocell fibers.
2023-03-18T15:11:13.746Z
2023-03-16T00:00:00.000
{ "year": 2023, "sha1": "0eeec0bc1259debb4df4d57328fce49bf7b504f6", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1080/15440478.2023.2181270", "oa_status": "GOLD", "pdf_src": "TaylorAndFrancis", "pdf_hash": "1635e195746f52bc6fa27135b31e934f19c073d3", "s2fieldsofstudy": [ "Materials Science" ], "extfieldsofstudy": [] }
257212101
pes2o/s2orc
v3-fos-license
Oral giant cell tumor or giant cell granuloma: How to know?
Introduction
The distinction between giant cell tumors and giant cell granulomas is challenging, as both entities have overlapping diagnostic criteria, especially in oral locations. The two entities have similar clinical and radiological presentations, but they differ in their prognoses.
Objective
The main objective of this study was to list the clinical, radiological, histological, and prognostic features of maxillomandibular giant cell tumor and giant cell granuloma cases in order to assess their value as diagnostic referral factors that may allow the distinction between maxillomandibular giant cell granuloma and giant cell tumor.
Study design
Data on maxillomandibular giant cell granulomas and giant cell tumors were assessed through a scoping review and a pre-existing systematic review of the literature. We also carried out a bicentric retrospective study.
Results
Several criteria facilitate the differential diagnosis, such as age, size, locularity and the presence of necrosis zones, but not gender. The most discriminating factors were symptomatology (reported in 72% of GCTs but only 15% of GCGs) and the distribution pattern of giant cells in the stroma (homogeneously dispersed in 80% of GCTs versus grouped in clusters in 86.7% of GCGs). Recurrences were more often described for giant cell tumors than for giant cell granulomas. Malignant transformation and pulmonary metastasis were exclusively reported for giant cell tumors.
Conclusion
As clinical and radiological elements are not sufficient to distinguish between these two entities, immunohistochemistry and molecular genetics can provide diagnostic biomarkers to distinguish giant cell granulomas from giant cell tumors in the oral cavity. We have attempted to define the main criteria for the differentiation of giant cell tumor and giant cell granuloma and propose a decision tree for the management of single maxillomandibular giant cell lesions.
Introduction
Giant cell-rich lesions of bone represent a heterogeneous group of multinucleated giant cell proliferations of osteoclastic type. In the maxillomandibular region, some are commonly found, such as central or peripheral giant cell granulomas (GCGs), cherubism, aneurysmal bone cyst (KOA), and the brown tumor of hyperparathyroidism (BHT), while others are more rarely found, such as giant cell tumors of the bone (GCTs) and the giant cell tenosynovial tumor of the temporomandibular joint. The GCT was described in 1818 by Cooper and Travers and is listed in two forms in the general classification of bone and soft tissue tumors that was modified in 2020: the bone form, classified among osteoclastic giant cell-rich tumors, malignant or of intermediate malignancy (locally aggressive, rarely metastatic), and the "soft tissue" form, classified among the so-called fibrohistiocytic tumors of intermediate malignancy [1,2]. Bone GCTs are either benign or malignant primary tumors. According to some authors, the appearance of malignancy may be present from the early stages of tumor development (primary malignancy) or may follow the transformation of a benign GCT [3]. There is currently no consensus on this subject. GCTs account for 4-9% of primary bone tumors. They usually occur in the epiphyses of long bones (70%). Craniofacial localizations are rare (2-4% of cases), mainly in the sphenoid bone and more occasionally in the temporal bone [4].
Regarding GCGs, Jaffe introduced in 1953 the "reparative giant cell granuloma", which he defined as a strictly benign form of giant cell lesion, distinct from the GCT by its exclusive maxillomandibular location and non-neoplastic nature, suggesting a post-traumatic or infectious repair process [5]. The GCG finds its place in the 2017 WHO classification of odontogenic and maxillofacial bone tumors among "giant cell lesions and bone cysts," with a distinction between its central (intraosseous) and peripheral (soft tissue-dependent) forms [6]. Central GCGs, of bony origin, represent 1-7% of benign maxillomandibular lesions. Peripheral GCG, also known as giant cell epulis, is a variant with a gingival origin and is three to five times more common than its intraosseous counterpart [7].
For a long time, GCTs and GCGs were considered a single entity varying in degree of aggressiveness. The diagnosis was in favor of GCG when the location was maxillomandibular, whereas tumors located in extragnathic locations were considered GCTs. This dichotomy was subsequently challenged by the description of maxillomandibular giant cell lesions with malignant or metastatic progression, which are not included in the definition of GCGs [3]. In addition, there have been reports of extragnathic GCGs, mainly involving the small bones of the hands and feet [8]. Thus, it was no longer possible to consider that the location determined the diagnosis. It now seems to be accepted that these lesions are two distinct entities with similar clinical and radiological presentations but different prognoses. GCGs have a recurrence rate of approximately 10-15% with no associated metastasis, whereas GCTs have a high local recurrence rate (approximately 25%) with a risk of malignant transformation and metastatic progression [3,8,9]. Thus, these two entities do not warrant the same therapeutic management. Histologically, pathologists are often confronted with overlapping histologic features, and it is often difficult to make a definitive diagnosis. The distinction between giant cell tumors (GCTs) and giant cell granulomas (GCGs) is therefore challenging, as both entities have overlapping diagnostic criteria, especially in oral locations. The main objective of this work was to identify the diagnostic factors that can distinguish these two entities in oral locations.
Materials and methods
We chose a scoping review, as it was the most appropriate way to analyze this topic [10]. A collection of data from maxillomandibular GCTs and GCGs was assessed by means of a scoping review (SR) and a retrospective bicentric study (RS).
Research question
The research question was: "Are there clinical and/or radiological and/or histological criteria to distinguish between the diagnosis of maxillomandibular giant cell granulomas and giant cell tumors?"
Search strategy
The methodology of the SR met the PRISMA requirements (Fig. 1). An electronic search was undertaken in the Medline (PubMed), JOMOS and Cochrane databases, between February and April 2021, without restriction on the date of inclusion. After discussion between the reviewers, the most relevant keywords were selected.
Eligibility, inclusion and exclusion criteria
Eligibility criteria included articles reporting cases of GCTs of maxillary or mandibular location, including multiple tumors. Where the location or nature of the lesion was not specified in the title or abstract, the article was retained for full reading.
Exclusion criteria were publications reporting cases of brown tumors of hyperparathyroidism and syndromic diseases (Noonan syndrome, cherubism and other syndromes associated with facial malformations), mixed tumors with a giant cell component, other cervicofacial locations of giant cell tumors (including the temporomandibular joint or nearby structures (condyle, coronoid process), because of the risk of confusion with a giant cell tenosynovial tumor or pigmented villonodular synovitis), review articles, animal or in vitro studies, studies published in a language other than French or English, and full texts that were not accessible via inter-university loans.
Data collection and analysis
After exclusion of duplicates, the titles and abstracts of the articles found in the databases were independently evaluated by two reviewers in accordance with the inclusion and exclusion criteria. If the titles and abstracts were deemed relevant, the full text continued to the selection phase. In case of disagreement, the full text was re-read to reach an agreement between the two reviewers. Potentially eligible articles were then subject to evaluation of the full text.
Data extraction
The data extracted were listed in a predefined analysis table validated by three reviewers, which included epidemiological, clinical, radiological, histological, treatment and follow-up criteria.
Methodology of the systematic review on giant cell granuloma
Two systematic reviews of the literature (SRLs) on GCGs published in 2018 by Chrcanovic et al. on more than 5000 patients were used as references for GCGs [11,12] (Table 1B).
Methodology of the retrospective study
2.3.1. Eligibility criteria and non-inclusion criteria
To be included in our RS, cases had to be granulomas or giant cell tumors of maxillary or mandibular location (except the coronoid process, condyle and temporomandibular joint), histologically diagnosed on a biopsy or excisional specimen, and without restriction on the date of inclusion. Patients excluded from the RS were patients with brown tumors of hyperparathyroidism, non-maxillary or non-mandibular locations, cases without histological diagnoses, and mixed lesions associating another histological component with the giant cell lesion.
Clinical data collection and analysis
The data extracted were listed in a predefined analysis table validated by all three evaluators, which included: epidemiological criteria, clinical aspect, radiological data and histological/immunohistological descriptions.
Administrative aspects
Our RS was reported at the level of each investigation center. It bears the name "TuGra_CéGé_CavOrale" and the identification number 20210723162142 at the Pitié Salpêtrière hospital and the number PASS21-250 at the Timone and Conception hospitals in Marseille.
Statistical analyses
Descriptive analyses were performed to analyze the data extracted from our SR and from the selected articles. Statistical comparison was not feasible because of the large difference in numbers (29 cases of GCTs in our SR compared with 5094 cases of GCGs in the SRLs of Chrcanovic et al.). Likewise, statistical analysis was not possible due to the difference in the number of participants in each group (SR vs RS), which did not meet the requirements for statistical tests.
Results
Concerning the SR on GCTs, the database search yielded 181 results. After analysis, 29 articles were retained, and the reading of the full texts allowed the inclusion of 19 articles. Six additional articles were included from the bibliographic references of the articles selected by this search.
A total of 25 articles, which included data from 29 cases and covered a publication period from 1953 to 2020, were employed. The selected articles were all single case reports (1-2 cases) (Table 1A). Regarding GCGs, we used the two SRLs on GCGs published by Chrcanovic et al., as specified in the Materials and methods section (Table 1B) [11,12]. The RS included 35 patients, with no reported cases of GCT.
Table 1. Level of evidence and grading: (A) articles of the scoping review (regarding giant cell tumors, GCTs); (B) articles of the systematic reviews (regarding giant cell granulomas, GCGs).
Epidemiological, clinical, radiological, and histological data from the SR on GCTs, the two SRLs on GCGs and the RS are reported in Table 2 and analyzed below.
Epidemiological data
Gender did not appear to contribute to the differential diagnosis. A female predominance was found in Chrcanovic's SRLs [11,12], whereas our study (RS) found a male predominance. For GCTs, our SR did not find a gender predominance (Table 2, line 1). On average, between our SR on GCTs and the two SRLs on GCGs, subjects with GCGs were more than 10 years younger than subjects with GCTs. In our RS, we did not find this difference, and the averages between GCTs and GCGs are similar (Table 2, line 2). The most discriminating factor was symptomatology, which was reported in 72% of cases of GCTs (SR), while symptomatology was described in only 15% of cases in Chrcanovic's SRLs [11,12] and found in 23% of cases in our RS (Table 2).
Radiological features
Some radiological aspects showed differences. Indeed, GCTs in our SR were mostly multilocular (83.3%), which was not the case for GCGs in the SRLs of Chrcanovic (38.2%) (Table 2, line 8). The cortical bone was always thinned in the GCTs of our SR, and very often in the SRLs of Chrcanovic on GCGs (84.3%) (Table 2, line 9). Finally, perforation of the cortical bone appeared in only about half of the GCT samples in our SR (53.8%) and in the SRLs of Chrcanovic (50.9%) (Table 2, line 10).
Histological features
Histological data were not reported in the SRLs on GCGs. Comparing the data from our SR on GCTs and our RS on GCGs, giant cells (GCs) were homogeneously dispersed in most GCTs (80% in the SR), which was not the case for GCGs (13.3% in our RS) (Fig. 2A; Table 2, upper line 11); in GCGs, the giant cells were predominantly grouped in clusters (86.7% in our RS, versus 20% in our SR on GCTs) (Fig. 2B; Table 2, lower line 11). The vascular component of the stroma is very dominant in the GCGs of our RS (69.6%), whereas this component is less important in the GCTs of our SR (46.1%). Moreover, the collagenous component is almost twice as important in the GCTs of our SR (30.8%) as in the GCGs of our RS (17.4%) (Table 2, line 12). The presence of hemosiderin deposits or areas of hemorrhage was found in 17.2% of GCTs in our SR and in 47.1% of GCGs in our RS (Fig. 2) (Table 2, line 13), and osteoid deposits or bone neoformation were found in 40% of GCTs in our SR and in 74.1% of GCGs in our RS (Table 2).
Prognosis
Recurrence of GCGs was described in 13% of cases in Chrcanovic's SRLs [11,12] and 32% in our RS, whereas GCTs recurred in 48% of cases in our SR (Table 3, line 1). In contrast to GCGs, malignant transformation and lung metastasis have been described for GCTs, in 10.3% and 7% of cases respectively, and never in GCGs in our RS and SR (Table 3, lines 2 and 3).
Discussion
Thus, it appears that specific criteria, such as age, size, symptomatology, locularity, the pattern of GC distribution and the presence of necrosis, can guide the differential diagnosis of GCGs and GCTs even in orofacial localizations. However, it may be difficult to reach a definite diagnosis when all criteria do not match. Currently, with the exception of our analysis, comparative data on maxillomandibular locations are lacking, but comparative studies of gnathic GCGs versus extragnathic GCTs have been performed and may help to highlight the distinction between these two entities.
Knowledge and differentiation with extragnathic GCTs
Regarding the age of patients, some authors find that extragnathic GCTs occur in older patients than gnathic GCGs [13,14]. Our work confirms this distinction for maxillomandibular localizations of GCTs. In the literature, extragnathic GCT series frequently report GCTs as painful, whereas GCG is widely described as painless, except in its aggressive form [15,16]. However, some authors agree that pain is not a clinical distinguishing criterion [9,13,17,18]. We have observed in our SR that the GCTs were much more often symptomatic (pain in particular) than GCGs.
Radiologically, some authors find that extragnathic GCTs and gnathic GCGs are very similar, characterized by an eccentric, non-mineralized osteolysis zone. On Magnetic Resonance Imaging (MRI), both entities show an attenuation similar to that of muscle and a low to intermediate signal intensity in T1 and T2 weighting [14]. Several authors state that GCTs and GCGs are indistinguishable on imaging, regardless of their location [14,19]. In our SR, locularity on imaging seems to be a factor that can guide the diagnosis between GCGs and GCTs. In the literature, GCG is described as a radiolucent lesion, both unilocular and multilocular. Loculations are more frequent in large and/or mandibular lesions [13,20].
Histologically, it is commonly described by several authors that multinucleated giant cells are often clustered around areas of hemorrhage in GCGs, whereas they are more evenly distributed in extragnathic GCTs [9,15,21,22]. In their study comparing 37 extragnathic GCTs and 24 GCGs of the jaw, Wang et al. found that 13.5% of GCTs showed areas of necrosis, while these were absent in all GCGs [23]. According to our results, we tend to agree with the conclusion of Atiyeh et al. that, histologically, the giant cell distribution pattern is the most reliable and reproducible parameter for the differential diagnosis between these two entities [17].
Contribution of immunohistochemistry and molecular biology
The advent of immunohistochemistry and molecular biology offered hope for improving the diagnosis of lesions for which histology was insufficient. Thus, many immunohistochemical markers have been tested to understand the origin of these various giant cell lesions. As there are no immunohistochemical marker studies of GCGs and GCTs at the orofacial level, we have analyzed, from works in the literature, the immunohistochemical markers used in the study of extragnathic GCTs and gnathic GCGs (Table 4). Only P63 and OCT4 appear to be useful for the differential diagnosis, with immunopositivity noted in GCTs [24,25]. P63 is a homologue of the tumor suppressor p53 and plays a pivotal role in the development of the epithelium, craniofacial structures and limbs. It is widely used as a diagnostic aid in breast and prostate cancer [24].
Fig. 3. Proposal of a decision chart for the diagnosis of a single giant cell lesion of maxillo-mandibular location.
OCT-4 is a DNA-binding transcription factor that plays a critical role in the development and self-renewal of embryonic stem cells and has been linked to oncogenic processes [25]. The main finding for the diagnosis of GCTs is the identification of somatic mutations in the H3F3A gene, one of the two genes coding for histone H3 [9,26]. Depending on the study, somatic H3F3A gene mutations affect 92-96% of patients with extragnathic GCTs [27,28]. In contrast, histone gene mutations were not identified in a series of Asian patients with extragnathic GCTs, but the authors reported somatic IDH2 (isocitrate dehydrogenase gene) mutations in 80% of patients [29]. This mutation has not been found in a series of Caucasian patients [27]. Gomes et al. demonstrated that GCGs do not share the H3F3A p.Gly34Trp or p.Gly34Leu mutations reported in extragnathic GCTs, but did find KRAS, FGFR1 and TRPV4 "gain-of-function" somatic mutations in 72% of sporadic cases [30,31]. Thus, by using these markers on giant cell lesions of the oral cavity, it might be possible to distinguish GCTs from GCGs even at the orofacial level.
Initial assessment of a giant cell lesion of the jaw
Based on these findings, although some parameters overlap, we have attempted to define the main criteria for the differentiation of GCTs and GCGs and propose a decision tree for the management of single maxillomandibular GC lesions, which still needs to be evaluated by further clinical studies (Fig. 3). Hyperparathyroidism should first be excluded in order to eliminate the diagnosis of a brown tumor of hyperparathyroidism, which is clinically and histologically similar to GCGs and GCTs and whose treatment is based on metabolic normalization [4,32]. GCG, especially when multiple and early-onset (usually before the age of 20), is described in association with RASopathies, i.e. genetic syndromes caused by germline mutations of the "RAS/MAPK" signaling pathway, such as Noonan syndrome, neurofibromatosis type 1 or LEOPARD syndrome [26,42]. Thus, the search for an underlying pathology must be in the foreground for multiple or multifocal lesions.
Management of GCTs and GCGs
Current management of GCGs is based on surgical resection combined with supported curettage or peripheral osteotomy to reduce the risk of recurrence [43]. For aggressive lesions, an "en bloc" resection may be considered [11]. Several pharmacological treatments are described as effective alternatives to the surgical management of GCGs. These are essentially subcutaneous injections of calcitonin or alpha interferon and intra-lesional injections of corticosteroids, although they have significant drawbacks (long duration of treatment, need for additional surgery in case of ineffectiveness, and side effects) [14,44]. Denosumab is still not well described as a treatment for GCGs in the literature, but may become a therapeutic option for some patients [18]. Regarding GCTs, the ideal treatment remains controversial. It is based on curettage or wider excision, depending on the tumor extension into the soft tissues [45,46]. "En bloc" resection, which has long been described as the reference treatment, now seems to be reserved for GCTs with wide soft tissue extension, owing to the resulting morbidity [45,46]. For non-operable or metastatic GCTs, denosumab is currently the treatment of choice. It can also be used as a neoadjuvant treatment to reduce tumor size and thus surgery-related morbidity.
The use of radiotherapy and chemotherapy is nowadays rare and reserved for cases of malignant GCTs [45,46]. Risk of bias We ensured that the samples of the included studies were independent, i.e. that there was no overlap between the clinical cases presented. This work has a publication bias as only validated cases were published. In addition, the clinical and histological descriptions were often unique to each author. Finally, a confusion bias could exist between the diagnostic terms used, the methodology of diagnostic attribution being therefore sometimes difficult to verify. In addition, the comparison of data from the literature and our SR is a limitation of analysis. At last, the articles of the SR have weak scientific evidence, with all articles being grade C, level 4 (Table 1), which justifies not performing a systematic review and meta-analysis. Conclusion All the characteristics found to distinguish the two entities remain "orientation" factors and may be insufficient to make a definite diagnosis. In cases where certain elements may point to GCTs, immunohistochemistry and especially molecular genetics seem indispensable. However, further studies are needed to better understand and distinguish these two entities. Author contribution statement E. Hoarau, R. Lan, J. Rochefort: Conceived and designed the experiments; Performed the experiments; Analyzed and interpreted the data; Contributed reagents, materials, analysis tools or data; Wrote the paper. P. Quilhot: Conceived and designed the experiments; Contributed reagents, materials, analysis tools or data; Wrote the paper. V. Baaroun, G. Lescaille, F. Campana: Analyzed and interpreted the data; Contributed reagents, materials, analysis tools or data; Wrote the paper. Funding statement This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors. Data availability statement Data included in article/supplementary material/referenced in article. Declaration of interest's statement The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
2023-02-27T16:09:46.723Z
2023-02-01T00:00:00.000
{ "year": 2023, "sha1": "07b815d8955eb8ec1981c0b1a7a1475a77cd60fb", "oa_license": "CCBYNCND", "oa_url": "https://doi.org/10.1016/j.heliyon.2023.e14087", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "edf8ffaa1fb49dbb79e8e9338ee2bf1245a4ae34", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
125632708
pes2o/s2orc
v3-fos-license
Ambiguity in renormalization of X-junction between Luttinger liquid wires
We study a four-lead junction of semi-infinite wires by using the fermionic representation in the scattering state formalism. A model of spinless fermions with short-range Luttinger-liquid-type interaction is used. We find the renormalization group (RG) equations for the conductances of the system in the first order of the fermionic interaction. In contrast to the well-known cases of two-lead and three-lead junctions, we show that the RG equations cannot be expressed solely in terms of conductances. The appearing ambiguity is resolved by choosing a certain sign defined at the level of the S-matrix, but hidden in the conductances. The origin of this Z_2 symmetry is traced back to the particle-hole symmetry of our Hamiltonian. We demonstrate two distinct RG flows from any point in the space of conductances. The scaling exponents at the fixed points are not affected by these observations.
Introduction
One-dimensional (1D) systems of interacting fermions have served as toy models for theorists for more than fifty years. [1] The advances in technology and fabrication during the last two decades have propelled these models to the forefront of current research in physics at the nanoscale. Examples of such systems include carbon nanotubes, [2] 1D edge states in the quantum Hall regime [3] and in topological insulators. [4,5] One-dimensional conductors and junctions between them should be important ingredients of any future electronic devices. [6]
It is well known that the transparency (conductance) of such junctions to the electric current is subject to renormalization due to the interaction between fermions within the wire. For the physical case of electronic repulsion, this conductance tends to zero at low temperatures and applied voltages according to a power law, with the exponent defined by the interaction strength. There are two approaches developed for the description of this conductance renormalization. One approach uses the so-called bosonization technique, which treats the interaction between electrons in the bulk of the wire exactly and considers the junction as a boundary. [7] The system is described in terms of chiral fermionic densities (currents), and the boundary conditions are imposed onto these currents. Fixed points of the boundary action are determined, and perturbations around them are classified according to their scaling dimensions. [8,9] In this way one judges the (in)stability of various fixed points, i.e. the limiting values of the conductance obtained during the renormalization procedure.
Another approach to the renormalization of the conductance was first formulated in the limit of weak interaction in the bulk. In this approach one describes an arbitrary junction via its unitary S-matrix, defined in the absence of interaction. [10] For N semi-wires meeting at one junction, this matrix belongs to the U(N) group. The fermion wave functions are given in a scattering state formalism, and corrections to the S-matrix are subsequently calculated. [11] These corrections are logarithmically divergent, which eventually leads to the renormalization group (RG) equation for the S-matrix and conductance. This approach was essentially improved in [12,13] by taking into account higher orders of the interaction and subleading logarithms in the perturbation theory for the conductance. By summing up the appearing series one obtains the non-perturbative RG equation for the conductance, whose solutions nicely reproduce those scaling exponents which are known exactly from the bosonization approach.
Up to now, the main focus of theoretical analysis has concerned the simpler cases N = 2 and N = 3, which describe an impurity in one wire and a Y-junction with three leads, respectively. The case of the four-lead junction with N = 4 has received less attention, partly because of the difficulties in the description of the S-matrix and the ensuing analysis of the RG equation even in the lowest order of perturbation theory. We mention here recent studies of special model cases of the S-matrix with N = 4 in the discussion of regular networks of Luttinger liquids [14], superconducting hybrid junctions [15], and tunneling between two helical edge states of topological insulators [16].
We notice that in all previous studies of two-lead and three-lead junctions the RG equations written in terms of the S-matrix and in terms of the conductance matrix (defined by the squares of the matrix elements, |S_ij|^2) were equivalent. The same equivalence held in the previously considered simpler model cases of four-lead junctions as well. However, there is no one-to-one correspondence for N ≥ 4 between the matrix of conductances and the S-matrix, even after removal of trivial phase factors. Mathematically, it is known as an ambiguity in restoring the unitary matrices from the unistochastic ones. [17,18] In general, this ambiguity can even be continuous and three-dimensional, while in our model below we only find a double discrete ambiguity, which is a proven minimum for a symmetric S-matrix in the U(4) group. [18]
The main goal of our paper is to demonstrate this ambiguity already in the first-order RG equations. We show that for a particular, physically motivated class of S-matrices the description of the junction in terms of conductances is incomplete. We find two possible RG flows starting from any point in the conductances' space. The RG fixed points and the scaling exponents are not influenced by this ambiguity.
The plan of the paper is as follows. We define our model in Sec. 2, we explain the notion of reduced conductances in Sec. 3, and the RG equations are discussed in Sec. 4. The ambiguity of the RG equations is revealed and discussed in Sec. 5, and we present our concluding remarks in Sec. 6.
The model
We consider a model of interacting spinless fermions describing two quantum wires connected by a junction in the middle of the wires. Alternatively, we speak of four semi-wires connected at a single spot. In the continuum limit, linearizing the spectrum at the Fermi energy and including a forward scattering interaction of strength g_j in wire j, we may write the Tomonaga-Luttinger liquid Hamiltonian in the representation of incoming and outgoing waves in lead j (fermion operators ψ_j,in, ψ_j,out); this is the Hamiltonian referred to as Eq. (1) below. Here Ψ_in = (ψ_1,in, ψ_2,in, ψ_3,in, ψ_4,in) denotes a vector operator of incoming fermions, and the corresponding vector of outgoing fermions is expressed through the S-matrix as Ψ_out(x) = S · Ψ_in(−x). In the chiral representation we are using, positions on the negative (positive) semi-axis correspond to incoming (outgoing) waves. We consider quantum wires of finite length L, contacted by reservoirs. The transition from wire to reservoir is assumed to be adiabatic (i.e. it produces no additional potential scattering). The junction is assumed to have a microscopic extension l of the order of the Fermi wavelength. Interaction effects inside the junction are neglected. This is expressed by the window function Θ(x; −L, −l) = 1 if −L < x < −l, and zero otherwise. The regions x < −L are thus regarded as reservoirs or leads, labeled j = 1, 2, 3, 4.
We put the Fermi velocity v_F = 1 from now on. The interaction term of the Hamiltonian is expressed in terms of the density operators ρ_j,in = Ψ† ρ_j Ψ and ρ_j,out = Ψ† ρ̃_j Ψ, where ρ̃_j = S† · ρ_j · S and the density matrices are given by (ρ_j)_αβ = δ_αβ δ_αj and (ρ̃_j)_αβ = S†_αj S_jβ. The S-matrix describes the scattering at the junction and belongs to the U(4) group.
Reduced conductances
In the linear response regime our system is characterized by the matrix of conductances defined by I_i = C_ij V_j, with the current I_i flowing in wire i and the voltage V_j applied to wire j. Current conservation, Σ_i I_i = 0, and the absence of a response to an equal change in all voltages result in Kirchhoff's rules, Σ_i C_ij = Σ_j C_ij = 0. This suggests that we can choose more convenient linear combinations of I_i, V_j, reducing the number of independent components in C_ij. In the d.c. limit, the Kubo formula gives the conductances directly in terms of the S-matrix [19].
An appropriate representation for the reduced conductance matrix may be constructed by using the generators of the U(4) Cartan subalgebra, which are three traceless diagonal matrices and one unit matrix. We define matrices µ_j, j = 1, ..., 4, with the property Tr(µ_j µ_k) = 2δ_jk. The densities are expressed as ρ_j = (1/√2) Σ_k R_jk µ_k, which defines the 4×4 matrix R; with this normalization R is orthogonal, R R^T = 1. The outgoing amplitudes are expressed in a similar form with µ_j replaced by µ̃_j = S† · µ_j · S. It also means [20] that we now work with particular linear combinations of the currents and voltages. We could use more physical combinations, V_1 − V_2 etc., without the additional factors 1/2, 1/√8, see [19]; however, this is irrelevant for our purposes below. The reduced conductance matrix in such a basis is determined by G = R C R^T and has the structure given in Eq. (6) below. In the presence of interactions, the structure of the last expression is unchanged, but the elements vary. The main effect in the d.c. limit can be described by the renormalization of the S-matrix, [19] which translates into the renormalized quantity Y^R, where the superscript R shows that we work in the basis µ_j instead of ρ_j, and the superscript r denotes that the quantity is fully renormalized by interactions. From now on we assume that all quantities are renormalized and drop this latter superscript.
Renormalization group equations
The renormalization of the conductances by the interaction is determined by first calculating the correction terms in each order of perturbation theory in g_j. We are in particular interested in the scale-dependent contributions proportional to Λ = ln(L/l), where L and l are the two lengths characterizing the interaction region in the wires, Eq. (1). The above expression for Λ corresponds to vanishing temperature, T ≪ v_F/L, and for higher temperatures we should replace Λ = ln(l^−1 / max[L^−1, T/v_F]). In the lowest order of perturbation theory, the scale-dependent contribution to the conductances is proportional to Tr(W_jk W_lm) g_ml Λ [21], where the W_jk are a set of sixteen 4×4 matrices (products of W's are matrix products), g_ml = g_m δ_ml is the matrix of interaction constants, and the trace operation Tr is defined with respect to the 4×4 matrix space of the W's.
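Before turning these first-order corrections into explicit flow equations, it may help to make the conductance bookkeeping of the previous section concrete. The short sketch below builds a conductance matrix from a sample U(4) S-matrix, checks the Kirchhoff sum rules, and forms a reduced matrix G = R C R^T. Two ingredients are assumptions rather than reproductions of the displayed formulas, which are not available in the text above: the d.c. conductances are taken in the standard Landauer-Büttiker form C_ij = (e²/h)(δ_ij − |S_ij|²), and the orthogonal matrix R is one possible choice (with the factors 1/2 mentioned above) whose last row is the uniform combination of leads.

```python
# Sketch: conductance matrix of a four-lead junction and its reduced form
# G = R C R^T. The Landauer-Buettiker form of C and the explicit choice of R
# below are assumptions consistent with the properties stated in the text,
# not the paper's own displayed equations.
import numpy as np

rng = np.random.default_rng(0)

# A generic U(4) S-matrix from the QR decomposition of a complex Gaussian matrix.
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
S, _ = np.linalg.qr(A)

# Assumed d.c. conductances (units of e^2/h): C_ij = delta_ij - |S_ij|^2.
C = np.eye(4) - np.abs(S) ** 2
assert np.allclose(C.sum(axis=0), 0) and np.allclose(C.sum(axis=1), 0)  # Kirchhoff's rules

# One possible orthonormal set of current/voltage combinations; the last row is
# the uniform combination, so total-current conservation empties the fourth
# row and column of the reduced matrix.
R = np.array([[1, 1, -1, -1],
              [1, -1, 1, -1],
              [1, -1, -1, 1],
              [1, 1, 1, 1]]) / 2.0
assert np.allclose(R @ R.T, np.eye(4))

G = R @ C @ R.T   # reduced conductance matrix, cf. G = R C R^T in the text
print(np.round(G, 3))
```

With this choice the fourth row and column of G vanish identically, which is the structural property of the reduced matrix used in the following derivation.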
We multiply G_ij by R^T from the left and by R from the right to get the components of Y^R. Differentiating these results with respect to Λ (and then putting Λ = 0), we find the RG equations in the first order in the interaction, Eq. (9). The number of non-zero matrices W^R_jk = {R^T · W · R}_jk is reduced to nine in the most general case, since W^R_j4 = W^R_4j = 0; we also have g^R_ml = {R^T · g · R}_ml. The matrices W^R_jk are best evaluated with the aid of computer algebra.
RG flow ambiguity phenomenon
We continue our analysis with appropriate choices of the interaction g_ij and of the S-matrix parametrization. Let the interaction constants first be equal in both wires, g = diag(g, g, g, g), and let us parametrize the S-matrix by three angles as in Eq. (10). This parametrization describes the transmission and reflection in each wire (α_1, α_2) and the hopping between the wires (β). The above Eq. (7) is invariant upon "rephasing", i.e. the multiplication of S from both sides by unitary matrices of the form diag(e^{iγ_1}, e^{iγ_2}, e^{iγ_3}, e^{iγ_4}). Without loss of generality we may therefore fix these phases. The computation of the reduced conductance matrix (6) then yields the components listed in Eq. (12). The matrix form of the RG equation (9) is now written as a set of coupled RG equations for the components of Y^R in terms of the initial variables, Eq. (13). Our natural desire is to express these equations entirely in terms of the conductances, as was successfully done in our previous studies for two and three wires connected by a junction [12,22,13].
We have three independent components of the reduced conductance matrix, denoted a_1, a_2, b in Eq. (12). The attempt to write the right-hand side of the RG equations (13) in terms of a_1, a_2, b faces the ambiguity problem in the term ∝ sin α_1 sin α_2. The sign of this term depends on the ranges of the angles: if α_1 and α_2 both belong to (0, π) or both to (−π, 0), then sin α_1 sin α_2 is positive, but if α_1 belongs to (0, π) and α_2 to (−π, 0) (or vice versa), then the discussed term is negative. Notice that the values of the conductances a_1 and a_2 are not affected by the changes of sign α_1 → −α_1 and α_2 → −α_2, respectively. This change of sign corresponds to complex conjugation of some elements of the S-matrix (10), namely r_i → r_i* and t_i → t_i*, which cannot be compensated by "rephasing" operations.
One may ask what is the internal discrete symmetry manifesting itself at the level of the RG equations. To answer this, we consider two decoupled (β = 0) Luttinger liquids with impurities. Standard calculations [12] show that (at least in the lowest Born approximation) the phase α_j is equal to U_bs/v_F, with U_bs the backscattering amplitude off the impurity in the jth wire. The sign of U_bs is unimportant, as only the square |U_bs|^2 defines the conductance [7]. When allowing hopping between the wires, we begin to feel the difference between two cases: i) the scattering potentials in both wires are of the same sign, i.e. both bumps or both pits, or ii) the scattering potentials in the wires are of different signs, i.e. a pit in one wire and a bump in another. Another explanation of the sign question in Eq. (13) concerns the particle-hole symmetry of the Hamiltonian (1). At the level of decoupled wires the sign of U_bs (and α_j) is changed when passing to the hole description, Ψ_in → Ψ†_in, etc. One may then regard the above sign question as arising from the possibility of performing the particle-hole transformation in one wire.
We see that the RG equations cannot be defined in terms of the conductances only. Generally, we have two different RG flows for the conductances, and the choice between them should be made on the basis of the initial phases of the S-matrix, α_1, α_2. Further analysis of the RG equations shows that the ambiguity does not influence the position of the fixed points (FPs). We have four FPs parametrized by a_1 = ±1, a_2 = ±1, b = 1, which read as α_{1,2} = 0, π, β = 0 in terms of the angles. These FPs correspond to the simple cases of two separated wires with perfect transmission or reflection in each of them. The fifth FP is defined by a_1 = a_2 = b = 0 and is discussed below.
To clarify the character of the different FPs, we generalize our consideration and allow for different interaction constants in the two wires. The representation of the RG equations purely in terms of conductances is cumbersome, and we rewrite them in terms of the angles [19], Eq. (15). The same ambiguity of the RG equations in terms of the conductances is seen again in (15). One can change, e.g., α_2 → −α_2 without changing the conductances but altering the RG flows.
As before, we observe four universal FPs, α_{1,2} = 0, π, β = 0 (i.e. a_{1,2} = ±1, b = 1). Only one of these four is a stable FP, and it is determined by the sign of the interaction in the individual wires. According to the usual expectations [7,10], we have for the stable FP: a_j = sign g_j. In addition, we find a fifth non-universal FP, which is never stable and whose position attains a compact form in terms of the conductances, Eq. (16). This FP is in the physical domain at |a_1| < 1, which happens for g_1 g_2 > 0.
We illustrate our findings in Fig. 3, where we show the body of conductances, i.e. the set of allowed conductance values (a_1, a_2, b), and possible RG trajectories starting from the same parametric point (a_1, a_2, b) for different values of g_1, g_2. We see that the stable FP depends on the quadrant in the (g_1, g_2) plane, similar to the situation with the Y-junction [21]. We also demonstrate the existence of two possible RG flows leading to the same stable FP (the darker one is for the plus sign in the ambiguity term, the lighter one for the minus sign). One can verify that the scaling exponents near the FPs are not affected by the discussed sign ambiguity. The two possible RG flows result only in different prefactors in the scaling dependence of the conductances. For instance, for repulsive interaction in both wires, g_1, g_2 > 0, the behavior of the conductances near the stable FP follows the scaling form of Eq. (17). Starting far away from this vicinity, the two possible RG trajectories will end at different points α_j, β in (17), but the scaling law is the same.
Figure 3: Fixed points and RG trajectories in conductance space. Two possible trajectories lead to the attractive FP, determined by the signs of the interaction in the wires. It is seen that the RG flows of the conductances are non-monotonic functions of the scaling variable. The fifth non-universal FP appears when g_1 g_2 > 0, its position being defined by Eq. (16).
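The statement that the conductances do not fix the S-matrix can be checked numerically. The sketch below takes a generic U(4) matrix S — not the specific parametrization (10), so this only illustrates the general unistochastic ambiguity — forms its complex conjugate S*, and verifies that both give identical conductance entries |S_ij|² while no rephasing S → D₁ S D₂ with diagonal phase matrices can map one onto the other; the rephasing-invariant phase of S₁₂S₂₃S₃₁(S₁₃S₂₁S₃₂)* serves as the witness.

```python
# Sketch: two inequivalent unitary matrices with identical |S_ij|^2.
# A generic U(4) matrix is used instead of the specific parametrization of the
# paper, so this only illustrates the general unistochastic ambiguity.
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
S, _ = np.linalg.qr(A)          # generic unitary S
S_conj = S.conj()               # candidate "partner" matrix

# Identical unistochastic (conductance) data ...
assert np.allclose(np.abs(S) ** 2, np.abs(S_conj) ** 2)

# ... but not related by rephasing: the phase of S_12 S_23 S_31 (S_13 S_21 S_32)*
# is invariant under S -> D1 S D2 with diagonal phase matrices D1, D2,
# and it flips sign under complex conjugation.
def rephasing_invariant(M):
    return np.angle(M[0, 1] * M[1, 2] * M[2, 0]
                    * np.conj(M[0, 2] * M[1, 0] * M[2, 1]))

phi, phi_c = rephasing_invariant(S), rephasing_invariant(S_conj)
print(f"invariant phase: {phi:+.4f} vs {phi_c:+.4f}")
# For a generic S the two phases differ (phi_c = -phi), so S and S_conj carry
# the same conductances but do not describe the same junction.
```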
Discussion
In this paper we have studied the four-lead junction of spinless Luttinger liquid wires using the fermionic representation in the scattering state formalism. The interaction in the wires leads to the renormalization of the conductances of the system, expressed via the absolute values of the S-matrix elements. The RG equation for the conductances can usually be formulated entirely in terms of conductances, which was explicitly checked for general two-lead and three-lead junctions. In this paper we demonstrate that in the case of four-lead junctions the RG equations possess a sign ambiguity that cannot be resolved in terms of the conductances. From a mathematical viewpoint, this ambiguity is an intrinsic property of the U(4) group and is most easily seen in the sign choice which defines the left- and right-isoclinic subgroups of SO(4) ⊂ U(4). From a physical viewpoint, the ambiguity may be traced back to the particle-hole symmetry of our Hamiltonian. This results in two possible non-monotonic dependences of the renormalized conductances as functions of the scaling variable.
We note that had we used the bosonization approach in our analysis, we would not have observed the ambiguity in question. This is because bosonization starts with the FPs of the RG equations and analyzes the scaling dimensions of perturbations around them. We showed above that the scaling dimensions (exponents) do not depend on the choice of RG flow, and it is only the prefactors of the scaling exponentials that are determined by the particular RG trajectory leading to the close vicinity of the FP from distant points in parameter space.
We believe that the discussed ambiguity may be an important issue in the further theoretical investigation and experimental manipulation of X-junctions between quantum wires.
Figure 1: A four-lead junction of quantum wires, corresponding to the Hamiltonian (1).
Figure 2: A point contact of two wires is shown schematically, illustrating the discrete symmetries in our choice of S-matrix, Eq. (10).
2015-04-06T15:14:47.000Z
2015-04-06T00:00:00.000
{ "year": 2015, "sha1": "ce04999ec2bb10213d14aef13a03cd0fa4c0db82", "oa_license": null, "oa_url": "https://arxiv.org/pdf/1504.01301", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "ce04999ec2bb10213d14aef13a03cd0fa4c0db82", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics", "Mathematics" ] }
240427026
pes2o/s2orc
v3-fos-license
Characterizing the deep pumping-induced subsidence against metro tunnel using vertically distributed fiber-optic sensing
Continuous pumping of groundwater will induce uneven ground settlement, which may adversely affect nearby metro tunnels. In this paper, taking Nantong Metro Line 1 crossing the Nantong Port Water Plant as an example, surface level measurements and subsurface deformation monitoring using vertically distributed fiber-optic sensing are implemented to acquire the surface and subsurface settlement under emergency water supply conditions. The fiber-optic cable, vertically buried in the constant-temperature layer, is used to measure the subsurface strain field and deduce the deformation of each stratum. The monitoring results show that, during the pumping, the deformation of the aquifer and the ground surface is linearly compressed with time; after the pumping, the ground surface continues to settle linearly at a slower rate for about 50 days, followed by a slow linear rebound, whereas the aquifer rebounds logarithmically. In addition, deep pumping causes the deformation of the aquifers to be much greater than the surface settlement; the surface settlement lags behind the settlement of the aquifers by 1-2 months, and the surface rebound recovery exhibits a similar delay. Fitting models were derived to predict the maximum settlement and curvature radius of the site, which indicate that the adverse effects on the metro tunnel are not negligible once the continuous pumping exceeds 15 days. These insights can serve as a reference for practitioners in the control of urban subsidence.
Introduction
Ground subsidence is a hazardous environmental geology issue which not only reduces stratum elevations but also causes damage to buildings and infrastructure (Herrera-García et al. 2021; Pacheco-Martínez et al. 2013). Moreover, for metro tunnels, one of the most common linear underground infrastructures in cities, the longitudinal differential deformation induced by ground subsidence can lower their safety, durability, and waterproofing performance (Peng et al. 2017; Wang et al. 2016). The variation of the groundwater level induced by human activities is a major cause of differential subsidence in urban areas (Edalat et al. 2020; Xu et al. 2016b; Xue et al. 2005). Particularly, for those cities which have to withdraw underground water for water supply, long-term pumping activities in the water source area might induce serious subsidence problems (Chai et al. 2004; Othman and Abotalib 2019).
Regarding differential settlement, former studies mainly focused on its impacts on surface buildings. However, recent studies of ground subsidence against metro tunnels have revealed that, compared with the negative impacts imposed by the settlement of the ground surface, those induced by subsurface deformation are more significant for underground infrastructure (Shen et al. 2014). Zheng et al. (2014) studied the stratified settlement caused by the extraction of confined water using field tests and found that the deformation of the phreatic layers is less than that of the confined aquifer layers, which differs from the acknowledged settlement law caused by dewatering near the ground surface. Note that a growing consensus suggests that ground subsidence lags behind the pumping activity (Kearns et al. 2015), and the duration of land subsidence induced by deep pumping is longer than that induced by surface pumping (Cui and Jia 2018).
However, current theoretical models of pumping-induced settlement remain unable to fully characterize the above-mentioned influencing factors (Budhu and Adiyaman 2010; Wang et al. 2018; Xu et al. 2012; Zhang et al. 2017; Zhou et al. 2017). Therefore, for scenarios involving the extraction of deep groundwater, current models still have to be carefully calibrated against field measurements (Shen and Xu 2011; Xu et al. 2016a).
As is known, ground subsidence can be monitored by a variety of measures (Poland et al. 2006), such as leveling (Abidin et al. 2001), GPS (Baldi et al. 2009; Choudhury et al. 2018; Hu et al. 2006; Mousavi et al. 2001), InSAR (Calderhead et al. 2011; Motagh et al. 2017), and their combinations (Galloway and Burbey 2011; Saleh and Becker 2018). Note that none of these measures can acquire layered subsidence measurements; even where layered settlement marks are available, because layered settlement meters are arranged at fixed points, their discrete measurements cannot finely characterize the subsurface deformation field (Jiang et al. 2016). Distributed fiber-optic sensing (DFOS), a novel monitoring technique, can obtain the strain field along the sensing cable. Although DFOS has been employed to monitor the subsurface deformation field of Shengze, an abnormal post-dewatering subsiding area in Suzhou, China (Gu et al. 2018; Wu et al. 2021; Zhang et al. 2018a), few works have used DFOS to monitor the evolution of the subsurface deformation field during rapid pumping, let alone assess its negative impacts on metro tunnels. The application of DFOS in metro tunnels has mainly concerned structural health monitoring (Gómez et al. 2020; Gue et al. 2015); owing to temperature fluctuations near the surface, the technique has not been used to evaluate changes in the stratigraphic environment around the tunnel.
In this paper, taking a groundwater plant near Nantong Port, Jiangsu Province of China, as an example, a test of deep multi-well dewatering was implemented to verify the applicability and feasibility of the DFOS technique for monitoring the variation of the subsurface deformation field. Feature extraction on the DFOS measurements was also performed to assess the impacts of subsurface settlement on the metro tunnel. Fitting equations were deduced to shed light on the evolutionary trend of the surface and subsurface deformation fields during and after the pumping, which can be used to predict the long-term impacts on the metro tunnel.
Principle of distributed fiber-optic sensing (DFOS)
A variety of DFOS techniques can be used for strain field monitoring. In this paper, the Brillouin optical time-domain reflectometer (BOTDR) is used. The principle of the BOTDR is based on the change in the scattered light caused by nonlinear interactions between the incident light and the phonons that are thermally excited within the light propagation medium. When this occurs in an optical fiber, the backscattered light experiences a frequency shift (the Brillouin frequency), which depends on the temperature and strain environment of the fiber (Wu et al. 2015). Compared with other types of scattered light, a substantial advantage of Brillouin scattering is that its frequency shift caused by temperature is only 0.002%/°C, which is much smaller than that caused by strain.
Therefore, while measuring the Brillouin frequency shift induced by strain, the influence of the temperature on the Brillouin frequency shift can be neglected if the changes of temperature are within 2 °C. The relationship between the Brillouin frequency shift and the strain of the optical fiber yields:

v_B(ε) = v_B(0) + (dv_B(ε)/dε)·ε (1)

where v_B(ε) is the Brillouin frequency shift at strain ε, v_B(0) is the Brillouin frequency shift without strain, and dv_B(ε)/dε is the strain coefficient; the proportional coefficient of strain, at a wavelength of 1.55 μm, is approximately 0.5 GHz/%. Based on this relationship, the strain distributed along the sensing optical fiber can be measured. Given the monitoring scenario of land subsidence, the deformation of the soil column caused by compression or rebound down to depth h can be calculated from the strain measured along the sensing optical cable, which yields:

d(h) = ∫_0^h ε(z) dz (2)

In this test the design drilling depth of the optical fiber borehole is 230 m; a metal-reinforced single-core cable (MRC, model NZS-DSS-C02) and a fixed-point cable (FPC, model NZS-DSS-C08) were respectively installed in the monitoring hole to measure the stratum deformation. The structures of these two types of fiber sensors are shown in Fig. 1. The MRC, which effectively protects the optical fibers with several metal reinforcers, has good coupling and uniformity with the soil owing to the screw structure of the sensor surface. The FPC, with a fixed-point design, can be used to measure spatially inhomogeneous and discontinuous sections (Gu et al. 2018;Shi 2017). The DFOS can effectively sense the deformation at different locations along the fiber. When the fiber is fully coupled with the surrounding strata, the fiber at different locations reflects the strains of the strata at the corresponding depths; the strains reflected at different locations represent the local strains of the fiber. The optical fiber signal collected by the interrogator is discrete, and the amount of data is related to the fiber measurement length and resolution. To efficiently extract the morphological distributions along the underground depth, the measurements ought to be organized in the form of a space-time matrix B (Sun et al. 2014). Given that the total number of sampling points along the optical fiber is n and the total number of sampling times is m, B is a two-dimensional matrix with n rows and m columns, which yields:

B = [ε_ij] (i = 1, …, n; j = 1, …, m) (3)

where the element ε_ij is the measured strain at the sampling point of depth index i at sampling time j. The matrix can visualize the deformation distribution at different depths over time. When using the DFOS for subsidence monitoring, a submatrix B_s of the space-time matrix B is usually extracted to characterize the local distribution of the stratum deformation field, which yields:

B_s = [ε_ij] (i = s, …, t; j = u, …, v) (4)

where the time interval of the sampling points of the submatrix B_s is [u, v] and the sampling depth interval is [s, t]. The submatrix B_s can also be represented as a group of column vectors, which yields:

B_s = [E_u, E_{u+1}, …, E_v] (5)

where

E_j = [ε_sj, ε_{s+1,j}, …, ε_tj]ᵀ (6)

The column vector E_j represents the strain vector acquired by the DFOS over the depth range [s, t] at sampling time j. The column vector E_j at a certain time is substituted into Eq. (2) to obtain the ground deformation Δd over the specific depth range [s, t], which yields:

Δd = Σ_{i=s}^{t} ε_ij·h_i (7)

where h_i is the length of a certain measured micro-element section of the DFOS in the formation. The strains in the local area of the stratum can thus be superimposed along the fiber to obtain the deformation of the stratum within a specific depth range.
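To make this workflow concrete, the following Python sketch assembles BOTDR readings into the space-time matrix B and converts a strain vector E_j into a layer deformation Δd according to Eq. (7). It is only a minimal illustration: the frequency-to-strain conversion assumes the 0.5 GHz/% coefficient quoted above, and the array sizes, depth spacing and depth indices are hypothetical placeholders rather than the actual survey layout.

```python
import numpy as np

# Brillouin strain coefficient at 1.55 um: ~0.5 GHz per 1 % strain (Eq. 1).
STRAIN_COEFF_GHZ_PER_PERCENT = 0.5

def frequency_shift_to_strain(v_b_ghz, v_b0_ghz):
    """Convert measured Brillouin frequencies (GHz) to strain in microstrain."""
    strain_percent = (v_b_ghz - v_b0_ghz) / STRAIN_COEFF_GHZ_PER_PERCENT
    return strain_percent * 1.0e4  # 1 % strain = 10,000 microstrain

def layer_deformation_mm(B, depths_m, s, t, j):
    """Deformation of the stratum between depth indices s..t at sampling time j (Eq. 7).

    B        : (n, m) space-time strain matrix, in microstrain
    depths_m : (n,) sampling depths along the fibre, in metres
    """
    E_j = B[s:t + 1, j]                    # strain vector over the depth window
    h_i = np.gradient(depths_m[s:t + 1])   # length of each measured micro-element (m)
    return np.sum(E_j * 1.0e-6 * h_i) * 1.0e3   # microstrain * m -> mm

# Hypothetical layout: n depth samples x m survey dates, constant-temperature layer only.
n, m = 220, 12
depths = np.linspace(10.0, 230.0, n)
B = np.zeros((n, m))           # would be filled with BOTDR strain readings
d_layer = layer_deformation_mm(B, depths, s=170, t=n - 1, j=5)
print("Cumulative deformation of the selected layer (mm):", d_layer)
```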
Conditions of engineering geology and hydrogeology Nantong, a coastal city in eastern China, has planned to build four metro lines. Among those, the planned section between the Jianghai Avenue Station and the Bus Station of Metro Line 1 will travel beneath the emergency water source of the Nantong Port Water Plant. The tunnel has a circular section with a burial depth of 21 m. The lining structure has an inner diameter of 5.5 m and a thickness of 0.35 m. The strata at this depth were less subject to surface temperature fluctuations, and the DFOS can effectively monitor the compressional strata deformation. In accordance with the relevant specifications (Gao et al. 2010), the cumulative settlement of the layer where the tunnel is located should not exceed 20 mm, and the curvature radius of the longitudinal deformation curve is not less than 15,000 m. Given the emergency water supply, a large amount of groundwater will be extracted from the aquifer. This may induce subsidence within the overlying strata, which might in turn pose some threats to the operational safety of the metro. This paper characterizes the adverse impacts of emergency pumping on the metro tunnel by monitoring the subsurface deformation field induced by a test of multi-well dewatering. Nantong is located in the alluvial plain of the Yangtze River Delta, widely covered by the Quaternary strata. The thickness of the strata ranges from 200 to 360 m, which is composed of a set of multiple sedimentary cycles with alternating sand and clay layers. The sand layer is thicker and contains coarse particles, which is conducive to the enrichment and transport of pore groundwater. The pumping test was performed on the south bank of the estuary of the Tonglv Canal into the Yangtze River; as shown in Fig. 2, the groundwater resources are abundant. The geographic location of the testing site is 32°00′55″-32°01′16″ N latitude and 120°49′11″-120°49′33″ E longitude, with a site altitude of about 4.0 m. Nantong has a humid subtropical monsoon climate with an annual average temperature of 16 °C, precipitation of 1036 mm, and evaporation of 1392 mm. The Tonglv Canal that crosses the test site is a hydraulic facility that has been artificially expanded in recent decades to bring water from the nearby Yangtze River, so its water flow is mainly influenced by the Yangtze River. Table 1 lists the physical and mechanical parameters of the strata of the test site according to the preliminary investigation works. As seen from the table, the aquifers are mainly composed of sandy soils, mixed with silty clay aquitards. Among those, the silt layer at a depth of 50 m attains a great amount of water content, as well as a small compression modulus, which is the aquitard between the phreatic aquifer (PA) and the first confined aquifer (CA1). The permeability varies between the silty clay and silty sand interlayers within both ranges of 120-150 m and 180-190 m. Note both interlayers might impede the transit of groundwater, which causes the inconsistency between the deformation fields of the subsurface and the ground surface. The groundwater is mainly loose rocks porewater, mostly stored in sand layers. 
In accordance with the storage conditions, the groundwater can be divided into five aquifer groups from top to bottom, namely, the phreatic aquifer (PA), the first confined aquifer (CA1), the second confined aquifer (CA2), the third confined aquifer (CA3) and the fourth confined aquifer (CA4), as shown in Fig. 3. All adjacent aquifers are hydraulically connected, with lateral recharge from the Yangtze River and infiltration recharge between aquifers dominating along the river, and artificial pumping is the main drainage route. Note that only CA4 is not included in this pumping test. The PA consists of the silty clay, silty sand, and fine sand of the Yangtze Delta phase of the Holocene (Q h ), occupying depths shallower than about 50 m. The depth of the water level, seasonally varying from 1 to 3 m, is affected by atmospheric precipitation and surface runoff. The PA is characterized by coarse particles in the upper and lower sections and fine particles in the middle section along the vertical direction; some sections of the lower aquifer are connected to CA1. The dewatering amount of a single well is about 10-20 m 3 /day, with poor water quality and thus little exploitation. The CA1 consists of alluvial and marine loose sands of the Upper Pleistocene (Q p3 ), with a burial depth ranging from 50 to 110 m. The lithology of the aquifer mainly consists of pebbles, gravel, coarse sand, medium sand, fine sand, and silty sand; these soil particles, from coarse to fine, are vertically distributed from bottom to top. The aquifer has high permeability and thus an ample groundwater supply, and it is closely connected to the upper aquifer PA and the lower aquifer CA2. The dewatering amount of a single well is about 2000-3000 m 3 /day, also with poor water quality. The CA2 consists of the fine sand and silty sand layers of the fluvial and estuarine sediments of the Middle Pleistocene (Q p2 ), buried from 130 to 150 m. Note that the water barrier between CA1 and CA2 is partially missing. Due to the thin layer thickness and discontinuous distribution, its water content is small and its distribution varies widely. The dewatering amount of a single well is about 300-3000 m 3 /day, still with poor water quality. Aquifer CA3 consists of gravelly medium sand, fine sand, and locally gravelly cobble of river-lake sediments of the Lower Pleistocene (Q p1 ), whose buried depth ranges from 180 to 240 m, with an uneven thickness ranging from 20 to 100 m. The dewatering amount of a single well is generally over 2000 m 3 /day. Influenced by the distribution of thicker aquifers near CA2, CA3 is relatively independent of the upper aquifers. Both the quality and quantity of the groundwater are good and rich, which makes CA3 the main exploited freshwater aquifer of Nantong City. Note that in this test, the groundwater is extracted from aquifer CA3, with a well depth of 225 m. Due to the flat topography of the test site, the distribution of the groundwater level is uniform, the hydraulic gradient is small, and groundwater runoff is slow. The filling layer at the canal is missing due to artificial expansion, and the water is more likely to infiltrate into the PA. The thickness of CA2 is largest near the riverbank and decreases towards the southern part of the test site.
The thickness distribution of CA3 is the opposite of that of CA2, with the maximum thickness in the southern part of the site. Test layout and schedule Fifteen pumping wells of the Nantong Port Water Plant near Metro Line 1, labeled W1 to W15, were selected for this test. All wells were pumped at a rate of 8 × 10 4 L/h under the emergency water supply conditions, from August 9 to 16, 2018. The layout of the pumping wells and monitoring points is shown in Fig. 4. Two well groups exist in the test site, namely, the south well group, W1-W7, located near the water plant on the southern side, and the north well group, W8-W15, at the river bank on the northern side. In addition, 23 monitoring points of ground surface settlement, labeled S1-S23, were deployed near the metro line and both pumping well groups. Given that the subsurface deformation field might differ from that of the ground surface, two adjacent boreholes (D1), with a depth of 230 m and a diameter of 129 mm, were deliberately drilled, and a metal-reinforced single-core cable (MRC) and a fixed-point cable (FPC) were respectively laid inside to measure the vertical subsurface deformation field. Figure 5 illustrates the measuring layout of the DFOS monitoring system. The ends of the MRC and FPC were connected to a BOTDR interrogator, which can process and record the strain field data along the optical cable. The parameters of the interrogator are listed in Table 2. The fiber was installed with a heavy guide at its end, and a suitable housing and pulling cable were installed on the guide. When the hole was formed, the heavy guide with the fiber-optic cable fixed to it was lowered into the hole by gravity through the pulling cable. During the lowering of the fiber, the fiber was tied and fixed to the steel cable every 2-3 m to avoid stressing the optical fiber. When the cable was lowered into place, the optical fiber was pre-stretched and its top end was fixed at the hole opening to maintain a tight tension. While keeping the fiber in tension, we backfilled the borehole and checked the signal status of the fiber to ensure the verticality of the buried fiber. Backfilling the borehole requires pre-stretching the fiber to ensure its perpendicularity; former studies have shown that a certain degree of stretching does not affect the quality of the monitoring by the fiber (Zhang et al. 2018a). To synchronize the deformation of the fiber with the subsurface strata, the optical fiber should be buried 1 year before the test. A soft aggregate of fine sand and clay, similar to the site strata, was used as the backfill material in the borehole. The deformation modulus of the backfill soil was adjusted with different ratios of fine sand and clay, so that the backfill soil had the same deformation modulus as the surrounding strata at different depths in the borehole (Zhang et al. 2020). The backfill gradually consolidated over time and the optical fiber could be fully coupled with the surrounding soil under the action of the subsurface envelope pressure. Former studies have shown that beyond a certain envelope pressure and burial depth, the optical fiber and the soil exhibit strong coupling (Zhang et al. 2018b). Three rounds of DFOS measurements were acquired for calibration 261 days (Nov. 21, 2017), 227 days (Dec. 25, 2017), and 205 days (Jan. 16, 2018) before the pumping, respectively, and no engineering disturbance activities occurred on the site during this period.
The signal gradually stabilized 227 days before the pumping (Dec. 25, 2017), which indicates that the fiber had become sufficiently coupled to the surrounding strata. Three items, namely the well water level, the surface settlement, and the subsurface deformation field, were monitored during the test. The monitoring of the water level of the wells continued until no obvious variation could be observed. Three leveling calibrations were conducted 59 days (June 11, 2018), 42 days (June 28, 2018), and 23 days (July 17, 2018) before the pumping, respectively. The monitoring schedule is depicted in Table 3. Note that three rounds of DFOS measurements were collected on each measurement date. Water level variation From Aug. 9 to 16, 2018, group pumping was conducted synchronously on the 15 wells shown in Fig. 4. Thirty days after the pumping stopped, the water level tended to be stable. The measurements show that before pumping, the initial water levels of the 15 wells were almost the same, approximately − 16.1 to − 17.5 m. The water level dropped sharply during the pumping, while the decline rate gradually slowed; the water level attained its minimum on the seventh day of pumping. Figure 6a shows the distribution of the water level. As noted from the figure, a total level drop of 11.39-16.50 m had occurred by Aug. 16. Subsequently, a sharp rebound of the water level occurred on Aug. 17, right after the pumping, while the rebound rate slowed markedly from Aug. 22. On August 30, 2 weeks after the pumping, the water level was almost restored to its initial value, with a residual drawdown of only 0.05-0.65 m (Fig. 6b). The distribution of the recovery values of the water level is shown in Fig. 6c. Note that the water level distribution of each well was approximately the same as the initial level, reflecting the strong groundwater recovery capability of the test site. As seen from Fig. 6, the greatest decline of the water level occurred near the center of the northern wells (W13), which is located near both the river bank and the metro line. The rise and drop of the water level exhibit similar distribution patterns, suggesting that the soil permeability of the west side is greater than that of the east side and that the groundwater on the east side is recharged more strongly. Figure 7 shows the distribution patterns of the ground surface settlement during the pumping. As seen from Fig. 7a, at the initial stage of pumping, a large settlement occurred on the west side of the north well group, and a small range of settlement also occurred around the south well group. The greatest settlement occurred at measuring point S9, with a value of 2.9 mm; only tiny settlements occurred elsewhere. As seen from Fig. 7b, with the continuous pumping, a small amount of settlement appeared over a wide range of the site, and the settlement area on the west side of the north well group enlarged slightly. The maximum settlement occurred at measuring point S22, with a value of 3.1 mm. In addition, the settlement values on the east side of the embankment and around the south well group are both small, suggesting a good supply of groundwater. Figure 8 shows the distribution pattern of settlement within the 5 months after the pumping. As shown in Fig. 8a, within 2 weeks after the end of pumping, the settlement range further enlarged. The maximum subsidence during this period occurred on the west side of the river bank (S23), with a value of 3.3 mm.
This phenomenon indicates that the settlement behavior lagged behind the deep pumping activity. As seen from Fig. 8b, the settlement areas gradually merged into a large continuous settlement zone, which is similar to the distribution pattern of the variation of water levels in the wells. A large settlement occurred on the west side of the north well group and its distribution is continuous. The maximum settlement during this period occurred in the central area of the river bank (S14), with a value of 3.8 mm, which was also the maximum settlement value during the whole test. This phenomenon further reflects that ground settlement lags behind the deep pumping. As seen from Fig. 8c, the settlement area did not vary significantly 3 months after the pumping. However, a notably concentrated settlement occurred at measuring point S21, on the west side of the embankment, with a value of 3.4 mm. As seen from Fig. 8d, the settlement at measuring point S21 gradually dissipated and its range expanded; the settlement value decreased to 1.3 mm, and the overall settlement of the site tended to stabilize. Note that no obvious ground settlement occurred in this test, suggesting that the existence of multiple aquitards impeded the free transfer of groundwater between different aquifers. Also, the test results indicate that the permeability of the strata on the west side is greater than that on the east side. Characterization of the subsurface deformation field To further study the influence of the groundwater barrier on the deformation connectivity of the strata, the subsurface deformation values acquired by DFOS were assembled into the space-time matrix B. Note that the buried depth of the constant-temperature layer of the test site is 10-230 m for the MRC and 20-230 m for the FPC; thus, the measurements of the strain field in the non-thermostatic layer near the surface were excluded owing to the measuring uncertainty induced by the temperature variation in the variable-temperature layer. The submatrices of the constant-temperature layer were therefore extracted to plot the strain field contours during and after the pumping, as shown in Fig. 9. As can be noted from the figure, the strain is concentrated within the aquifer layers shown in Fig. 5. Specifically, restricted by the aquiclude of the clayey layer at a buried depth of 150-180 m, the greater strain mainly occurs within aquifer CA3 at a buried depth of 180-230 m. As seen from Fig. 9a, during the pumping, the strain values of the MRC within the buried depth of 10-180 m were tiny. Obvious compressive strain occurred within aquifer CA3, which corresponds to an average daily subsidence of approximately 1.61-2.87 mm calculated by Eq. (7). As seen from Fig. 9b, the strain value of the FPC from 20 to 140 m was close to 0, but fluctuated markedly from 140 to 170 m, with a strain value of about − 30 με; this depth range is consistent with the local clay interlayer distribution near 150 m. Significant compressive subsidence was also detected by the FPC in the strata of aquifer CA3. The results monitored by the MRC and the FPC had a similar trend during pumping. The groundwater level dropped rapidly in the initial stage of pumping, and the rate of the level drop started to slow down after 3 h; then, the water level dropped continuously at an ever-slowing rate as pumping continued. The DFOS monitoring data indicate that CA3 synchronously produced large compressional deformation, and CA2 also produced a small amount of compressional deformation.
This phenomenon reflects that the main source of groundwater is CA3 and that the groundwater is mainly recharged laterally. As seen from Fig. 9c, d, after the pumping, the strata strain field, below the buried depth of 180 m, varied slightly, while the significant variation of the strata strain field occurred within 180-230 m underground. Although the strata exhibited compressive strain at the immediate end of pumping, tensile strain started occurring in 1 week, suggesting the existence of an obvious and rapid stratum rebound and the rebound evolved from top to bottom. One month after the end of pumping, the rebound extended from the upper part to the entire aquifer CA3. Four months after the end of pumping, the rebound rate slowed down owing to the gradual recharging of the groundwater. The stratum rebound of aquifer CA3 measured by MRC ranges from 1.89 to 2.15 mm, calculated by Eq. (7), while that of FPC measurements lies within 1.24-2.39 mm. The water level rebounded rapidly within 1 day after the end of pumping, and then rebounded slowly with a gradually slowing trend, and rebounded to the initial state after 21 days. The DFOS monitoring data shows that both CA2 and CA3 stop compressive deformation trend within 1 week after the end of pumping, and CA3 shows a significant rebound deformation after 2 months during the postpumping, indicating that the rebound of the compressive water formation lags behind the return of water level. The stratigraphic strain field is converted to the accumulated subsurface deformation in accordance with Eq. (7). Figure 10 compares variation trends of both cumulative subsurface deformation of aquifer CA3 monitored by DFOS and the corresponding surface settlement monitored by the leveling over time. As seen from the figure, the surface settlement at point D1 increased linearly during the pumping process and remained in that increasing trend for 2 days after the end of pumping. Then it entered a slow settling process and reached the maximum settling value of 3.2 mm on day 50. The other monitoring points also showed slow settlement or remained unchanged at that time. After day 50, the monitoring points started to rebound and the site entered the recovery period. Meanwhile, a sharp compressive deformation of aquifer CA3 occurred during the pumping, and the deformation varied linearly with time. The deformation value reached the maximum of 18.24 mm on the first day after the end of the pumping, and then a rebound occurred, whose rate slowed down with time. The deformation of CA3 returned to the initial state 5 months after the end of pumping. Compared with the subsurface deformation of CA3, the surface settlement is smaller and lagged about 1-2 months, suggesting the subsidence induced by the deep pumping is gradually transmitted to the surface. Analysis of cumulative ground settlement As seen from Fig. 10, during the pumping state, aquifer CA3 and surface settlement increased linearly over time; during the postpumping stage, the surface settlement continued to increase linearly, but the rate decreased, while aquifer CA3 exhibited a nonlinear rebound trend. Note linear functions are used to fit aquifer CA3 and surface settlement trends during the pumping. The logarithmic function is used to fit the rebound trend of CA3 during the postpumping, and a piecewise linear function is used to fit the surface subsidence and the rebound trend of the postpumping stage, respectively. 
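Such piecewise fits can be reproduced with an ordinary least-squares routine. The following Python sketch illustrates the idea for the two stages described above (the fitted relations themselves are given as Equations (8) and (9) below); the model forms follow the description in the text, while the sample arrays are synthetic placeholders rather than the monitored data.

```python
import numpy as np
from scipy.optimize import curve_fit

def linear(t, a, b):
    """Linear compression during pumping: d(t) = a*t + b (mm, t in days)."""
    return a * t + b

def log_rebound(t, c, k, t_end=7.0):
    """Logarithmic rebound after the end of pumping at t_end days: d(t) = c + k*ln(t - t_end + 1)."""
    return c + k * np.log(t - t_end + 1.0)

# Synthetic placeholder series; in practice these come from the DFOS and leveling records.
t_pump = np.arange(1.0, 8.0)                        # pumping days
d_pump = -2.6 * t_pump + 0.1 * np.random.randn(7)   # aquifer compression (mm)
(a, b), _ = curve_fit(linear, t_pump, d_pump)

t_post = np.linspace(8.0, 150.0, 40)                # days after the start of the test
d_post = log_rebound(t_post, c=-18.2, k=3.7) + 0.1 * np.random.randn(40)
(c, k), _ = curve_fit(lambda t, c, k: log_rebound(t, c, k), t_post, d_post)

print(f"pumping stage: d(t) = {a:.2f} t + {b:.2f}")
print(f"rebound stage: d(t) = {c:.2f} + {k:.2f} ln(t - 7 + 1)")
```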
Equations (8) and (9) are the fitting functions of the deformation trends of CA3 and the ground surface, respectively, where d_d(t) (in mm) is the cumulative deformation of aquifer CA3 and t (in days) is the duration. Equation (8) suggests that continuous pumping within the aquifer would induce sharp subsurface deformation; the underground rebound follows a logarithmic trend, and the rebound rate slows down with time. In Equation (9), d_s(t) (in mm) is the cumulative settlement of the surface and t (in days) is the duration. Equation (9) shows that the surface underwent linear settlement over time during the pumping and remained in that increasing trend for 2 days after the end of pumping. It then entered a slow settling process and reached the maximum settlement value on day 50, which comprises the major part of the total settlement. Subsequently, the surface rebounded linearly with time, and the time needed to recover to the initial state was the same as that of CA3, suggesting that the surface settlement after the pumping is closely related to the subsurface deformation. Note that the surface settlement was divided into two phases: the former was closely associated with the pumping and the latter was closely related to the CA3 stratigraphic deformation. The duration of the surface settlement was nearly 50 days, which is about 1/3 of the time required for the surface rebound, and the time required for both aquifer CA3 and the surface to recover to their initial state was about 150 days. When the water source is pumped continuously for t_p days (t_p > 7) and the duration for the formation to recover to its initial state is t_t, then according to Eq. (8), t_t satisfies

72.39 − 13.69 ln(t_t − t_p + 7) + 42.57 + 2.4(t_p − 7) = 0 (t_p > 7) (10)

Equation (10) is solved to obtain the total recovery duration of the strata t_t and the duration of ground surface settlement t_s (Eq. (11)). Substituting into Eq. (9) gives the maximum surface settlement (Eq. (12)). According to Eq. (12), the maximum surface settlement increases approximately exponentially with the duration of the pumping. Once the dewatering lasts for more than 22 days, the ground settlement will reach 20.5 mm, exceeding the safety limit of 20 mm given by the relevant specifications (Gao et al. 2010). Therefore, the duration of continuous dewatering should not exceed 22 days. Analysis of the curvature radius along the metro line A metro tunnel, as a typical linearly distributed structure, is more sensitive to uneven deformation of the ground along the line. From Figs. 7 and 8, the settlement differences in the area traversed by this metro line were more significant than those in other areas. The settlement data from the monitoring points along the metro line were selected to draw the trend plot of cumulative surface settlement along the metro line over time by the linear interpolation method, as shown in Fig. 11. The settlement was much less than the specification limit of 20 mm (Gao et al. 2010). Influenced by the distribution of the water wells, the maximum settlement of the northern well group was larger and more concentrated than that of the southern well group. The maximum settlement point was located near K17 + 800, with a value of 3.8 mm, which occurred 1 month after the end of pumping. The settlement of the southern well group was more dispersed and smaller in value, but its influence range was larger. In the southern well group area, there were two large settlement points, near K17 + 950 and K18 + 150, respectively.
The minimum settlement point appeared near K18 + 050, between the two larger settlement points; the rebound first appeared there 3 months after pumping, and the amount of rebound increased with time. By 140 days after pumping, a rebound had occurred in all strata along the metro line. In addition to the effect of the absolute settlement values, excessive relative settlement reduced the curvature radius of the subsidence profile. The curvature radius at each monitoring point along the metro line, induced by the longitudinal deformation of the tunnel, was calculated by the three-point method (Cupec et al. 2009). Statistically, a relatively small curvature radius occurred at three points (S14, S3, and S4). The variation trends of the curvature radius at those three points are plotted in Fig. 12. As seen from the figure, the minimum curvature radius along the metro line occurred at settlement groove 2 on September 11, with a radius of 3.89 × 10 6 m. Note that this is much greater than the 1.5 × 10 4 m defined by the specification, indicating that the short-term dewatering activity has little influence on the tunnel. Also note that the curvature radius of those key points decreases exponentially with time during the pumping; once the pumping lasts more than 15 days, the curvature radius will become less than the standard value. Within the first week after the end of the pumping, the curvature radius rose rapidly and then fluctuated steadily. Although the groundwater recharge at the test site was adequate and the strata recovered well, excessive pumping could still have a negative impact on the strata where the tunnel is located. Based on the DFOS monitoring results, the pumping had a great impact on the strata of the pumped aquifer, so pumping should be avoided in the aquifer where the tunnel is located. From Eq. (8), the CA3 strata were compressed linearly during pumping. When the total amount of pumping is constant, decreasing the pumping rate will slow down the compression, while shortening the pumping time will make the compression period shorter. For an emergency water source, the pumping time should therefore be as short as possible while still providing the required emergency water quantity. Table 4 lists, as calculated by Eq. (8), the pumping durations of all wells at the maximum pumping rate (8 × 10 4 L/h) together with the corresponding full recovery times of the formation. From Table 4, the recovery time of the ground deformation does not increase linearly with the pumping duration; the longer the duration of pumping, the more difficult it is for the strata to recover. Therefore, the normal duration of emergency pumping should not exceed 1 day, and under extreme conditions it should not exceed 2 days. There should be a 1-month interval between pumping activities to maintain a dynamic equilibrium state for the formation deformation. Conclusions (1) The maximum surface settlement was about 3.8 mm, and the minimum curvature radius was 3.89 × 10 6 m, which indicates that the 1-week pumping did not have a significant negative impact on the metro tunnel. (2) A large settlement occurred on the west side of the test site, which is similar to the distribution of the groundwater level variation, suggesting that the settlement distribution is affected by the formation permeability and the groundwater rechargeability.
(3) During pumping, both the aquifer and the ground surface are compressed linearly with time; the deformation of the aquifers is much greater than the surface settlement, and the surface settlement lags behind the settlement of the aquifer by 1-2 months. (4) After pumping, the ground surface continues to settle linearly at a slower rate for about 50 days, followed by a slow linear rebound; the aquifer rebounds logarithmically over about 5 months. (5) Continuous pumping of the groundwater source for more than 15 days can cause significant deformations in the deep strata that are transmitted to the surface; the normal duration of emergency pumping should not exceed 1 day, and under extreme conditions it should not exceed 2 days.
Down-regulation of respiration in pear fruit depends on temperature The respiration rate inside plant organs such as pear fruit is controlled both by diffusion limitation of oxygen and through additional regulatory processes in a temperature-dependent manner. Introduction Gas exchange in plant organs relies on diffusion, causing gas to move from a high to a low concentration according to Fick's law. Limited gas diffusion inside bulky organs (roots, fruits and tubers) may affect metabolic processes such as respiration, and lead to metabolic changes (Denison, 1992;Drew, 1997;Geigenberger et al., 2000;Franck et al., 2007;Ho et al., 2010a;Armstrong and Beckett, 2011a;Verboven et al., 2012). In seeds, restriction of gas diffusion during development or germination may result in hypoxic conditions (Rolletschek et al., 2003;Borisjuk and Rolletschek, 2009;Verboven et al., 2013) locally affecting respiratory activity. In fruit, the high resistance to gas transport of cortex tissue and the high respiration rate associated with ripening induce local anoxia during controlled atmosphere (CA) storage (Lammertyn et al., 2003;Ho et al., 2010a, 2013). In such anoxic stress conditions, metabolism is likely to switch from the respiratory to the fermentation pathway, causing physiological disorders (Franck et al., 2007;Herremans et al., 2013;Ho et al., 2013). Decreasing respiratory O 2 consumption in response to a reduction in the available O 2 has been described in different plant tissues, including seeds (van Dongen et al., 2004), fruit (Lammertyn et al., 2001, 2003;Ho et al., 2010b, 2011), and roots (Geigenberger et al., 2000;Gupta et al., 2009;Zabalza et al., 2009;Armstrong and Beckett, 2011a). While the respiratory metabolism is likely to be affected by local anoxia due to limited gas transport (Armstrong and Beckett, 2011a), active regulation of the respiratory metabolism may also play a role (Gupta et al., 2009;Zabalza et al., 2009). The arguments for both hypotheses have been reviewed (Gupta et al., 2009;Armstrong and Beckett, 2011b;Nikoloski and van Dongen, 2011). Changes in the respiratory metabolism might also be explained by a combination of both mechanisms (Gupta et al., 2009); however, this has not yet been investigated in detail. As invasive measurement of gas concentrations in pear fruit is difficult without distorting and damaging the tissue, and thus affecting the gas exchange process itself, mathematical models provide a useful alternative. Such reaction-diffusion-type models have been commonly constructed by combining Michaelis-Menten respiration kinetics with Fick's diffusion equation (Lammertyn et al., 2003;Ho et al., 2008, 2011, 2016;Armstrong and Beckett, 2011a). In earlier modelling works, diffusion coefficients and respiration parameters were explicitly and experimentally measured (Lammertyn et al., 2003;Ho et al., 2008). Armstrong and Beckett (2011a) did not experimentally determine diffusion coefficients and respiration rate, but estimated these from the data by means of a model describing O 2 diffusion in roots using a multicylindrical geometry. As the experimental measurement of diffusion coefficients is prone to errors due to artefacts induced by cutting of the samples (flooding the intercellular spaces by leaking of the cell content), Ho et al. (2011, 2016) developed a method to compute these coefficients using microscale diffusion models and 3-D synchrotron X-ray tomography images of the tissue.
For the respiration kinetics, the maximal rates were considered to vary with temperature while the Michaelis-Menten constants were shown to be relatively independent of temperature (Hertog et al., 1998). Lammertyn et al., (2001) used a K m O , 2 value of 0.14 μM (0.011 kPa O 2 ), averaged from values cited for isolated mitochondria from several plant species. Ho et al. (2013) estimated the K m O , 2 of apple tissue to be within the range 0.13-0.17 kPa. The maximal rate coefficients in the Michaelis-Menten expression are typically obtained by fitting the model to respiration measurements (Hertog et al., 1998;Lammertyn et al., 2001;Armstrong and Beckett, 2011a;Ho et al., 2013). Simulations with these reaction-diffusion models consistently predicted the existence of considerable gas gradients in the fruit, and local O 2 concentrations in the centre of the fruit approaching the K m of cytochrome c oxidase (COX), the rate-limiting enzyme in the oxidative respiration pathway. This suggests a passive regulation of the respiratory metabolism, but without excluding other regulatory mechanisms, for example at the transcriptome level as a consequence of a putative oxygen sensor. Such an oxygen sensor has been found recently in Arabidopsis thaliana (Licausi et al., 2011), but its existence has yet to be shown in pear fruit. Typically, diffusion limitation is more pronounced at higher temperatures when the respiration rate is high. If the temperature is reduced (i.e. from 20 °C to 0 °C), the respiratory activity decreases correspondingly . Any gradients of the O 2 concentration in the fruit would then be reduced and the O 2 concentration would become close to uniform throughout the fruit. This provides direct experimental control over internal O 2 concentration of the uniform fruit by changing the fruit external atmosphere, thereby potentially exposing other regulatory mechanisms of the respiration pathway. In this article, we aim to improve our understanding of the response of the oxidative respiratory metabolism of pear fruit to external O 2 levels by challenging our previously developed gas exchange model (Ho et al., 2010a(Ho et al., , 2011 with new experimental data on pear respiration. The objectives, therefore, were (i) to evaluate whether changes of the respiratory activity under hypoxic conditions are due to diffusion limitation and/ or active down-regulation; and (ii) to update our modelling concepts regarding the response of respiration to O 2 concentrations at low temperature accordingly. We consider pear as it has a dense tissue with a high diffusion resistance inducing large internal gas gradients. Also, pears are commercially stored under low oxygen conditions that aggravate hypoxic conditions inside the fruit. Fruit Pear fruit (Pyrus communis L. cv. 'Conference') were picked from the experimental orchard of the Research Station of pcfruit (Velm, Belgium) on 9 September 2010 and 24 August 2014. In 2016 independent validations, fruit which were harvested on 12 September 2016 were purchased commercially from a local fruit co-operative. For the experiments in 2010, fruit were cooled and stored under ambient air at -1 °C. For the experiments in 2014, fruit were cooled and stored according to commercial protocols for a period of 21 d at -1 °C followed by CA conditions (2.5 kPa O 2 , 0.7 kPa CO 2 , -1 °C). For the experiment in 2016, fruit were stored for 2 months under regular air at -1 °C. 
Picking data and cooling procedures were according to optimal commercial practices used for long-term storage of fruit determined by the Flanders Centre of Postharvest Technology (VCBT, Belgium). Respiration measurements A first experiment (Experiment A) was used to determine the maximal O 2 consumption rate and the maximal fermentative CO 2 production rate of fruit at different temperatures (see Supplementary Table S1 at JXB online). Fruit were placed in 1.7 litre glass jars (two fruit per jar ~0.43 litres) and flushed for 24 h. The gas mixture contained 21 kPa O 2 , 0 kPa CO 2 , and 79 kPa N 2 for measuring the maximal O 2 consumption rate, and 0 kPa O 2 , 0 kPa CO 2 , and 100 kPa N 2 for measuring the maximal fermentative CO 2 production rate. The experiment was carried out at 20, 10, and 0 °C. Four repetitions were carried out following the methodology of Ho et al. (2010b). In brief, flushing was arrested after 24 h, jars were closed, and changes of the O 2 and CO 2 partial pressures were measured over time by a gas analyser (Checkmate II, PBI, Dansensor, Denmark). The gas partial pressures were converted to molar concentrations following the ideal gas law. The respiration rate was calculated from the difference in gas concentration and the time lag between two measurements, and expressed in μmol per unit fruit volume (m 3 ) per time (s). Experiments B and C were used to determine kinetic parameters relating the response of respiration to O 2 level. In experiment B, we investigated the response of respiration to abruptly changing O 2 levels at low temperature. Samples were taken from CA storage and stored under regular air at 0 °C for 1 d before starting the experiment. Fruit were placed in 1.7 litre glass jars (two fruit per jar) at 0 °C at O 2 levels that were dynamically varied during a period of 18 d measuring the respiration rate daily. After each respiration measurement, the flushing was restarted. During the first 3 d, the O 2 level was set to 20 kPa. Then the O 2 level was reduced to 15 kPa O 2 (experiment B1) and 5 kPa O 2 (experiment B2), respectively. After 13 d, the O 2 level was increased again to 20 kPa. The temperature and CO 2 level were kept at 0 °C and 0 kPa, respectively. Three repetitions were carried out. In addition, an experiment (B3) was conducted to measure the respiration rate (two repetitions) under dynamically changing O 2 levels from 20 kPa to 8, 4, 20, and 4 kPa after 2, 4, 6, and 14 d, respectively (Supplementary Table S1). In experiment C, we investigated changes in respiration when the O 2 was slowly decreasing. Samples were taken from fruit stored under CA conditions (2.5 kPa O 2 , 0.7 kPa CO 2 , -1°C). Fruit were placed in 1.7 litre glass jars (two fruit per jar) and flushed with a gas mixture of 7 kPa O 2 , 0 kPa CO 2 at 0 °C for 24 h. Then, the jars were closed and the changing gas conditions were measured during 14 d from which the respiration rate was calculated. The air pressures in the closed jars were also monitored by a pressure sensor (DPI 142, GE Druck, Germany, accuracy ±0.01%). Three repetitions were carried out. Experiment D was carried out to validate the model. Samples were taken from fruit stored under normal atmosphere conditions (21 kPa O 2 , 0 kPa CO 2 , -1°C). Fruit were placed in 1.7 litre glass jars (two fruit per jar) and flushed with a gas mixture for 24 h. 
The mixture contained 21 kPa O 2 , 0 kPa CO 2 , and 79 kPa N 2 for the experiments at 20 °C (D3, D5) and 10 °C (D1, D4, D6), 11 kPa O 2 , 0 kPa CO 2 , and 89 kPa N 2 for the experiments at 5 °C (D7), and 7 kPa O 2 , 0 kPa CO 2 , and 93 kPa N 2 for the experiments at 0 °C (D2, D8). Then, the jars were closed and the changing gas conditions were measured during 6, 10, 14, and 15 d for the experiments at 20, 10, 5, and 0 °C, respectively. Four replicate measurements were carried out. We used available respiration data of pear harvested in 2010 (Ho et al., 2015) and data of pear harvested in 2014 and 2016 for validation. In all cases, fruit were stored in the dark inside cold rooms during the incubation period for the various respiration measurements. Reaction-diffusion model for intact fruit A previously developed reaction-diffusion model (Ho et al., 2008, 2011;Verboven et al., 2012) was used to describe the overall gas exchange of intact fruit in response to the externally applied O 2 level:

α_i ∂C_i/∂t = ∇·(D_i ∇C_i) + R_i (1)

with α_i the gas capacity of component i (O 2 and CO 2 ) of the tissue (Ho et al., 2008, 2011;Verboven et al., 2012), C_i (µmol m −3 ) the concentration of component i, D_i (m 2 s −1 ) the apparent diffusion coefficient, R_i (mol m −3 s −1 ) the production term of gas component i related to O 2 consumption or CO 2 production, ∇ (m −1 ) the gradient operator, and t (s) time. Based on preliminary calculations, we found that permeation could be neglected. The gas capacity α_i is defined as (Ho et al., 2008, 2011;Verboven et al., 2012):

α_i = C_i,tissue/C_i = ε + (1 − ε)·H_i·R·T (2)

where ε is the fractional porosity of the tissue, and C_i (µmol m −3 ) and C_i,tissue (µmol m −3 ) are the concentrations of gas component i in the gas phase and the tissue, respectively. The concentration of the compound in the liquid phase of fruit tissue normally follows Henry's law, represented by the constant H_i (mol m −3 Pa −1 ). R (8.314 J mol −1 K −1 ) is the universal gas constant and T (K) the temperature. At the fruit surface, the following boundary condition was assumed:

−D_i ∇C_i·n = h_i (C_i − C_i,∞) (3)

with n the outward normal to the surface, the index ∞ referring to the gas concentration of the ambient atmosphere, and h_i the skin permeability for gas i (m s −1 ) (see Table 1). The gases within the headspace of a closed jar were assumed to be uniformly distributed given their fast diffusivities in air (typically five orders of magnitude higher than in fruit). Therefore, for an intact fruit placed in the closed jar, the O 2 and CO 2 concentrations in the headspace of the jar change in response to the respiration of the pear fruit and were modelled as follows:

V_air dC_i,∞/dt = ∫_V_fruit R_i dV (4)

where V_fruit (m 3 ) and V_air (m 3 ) are the volume of the fruit and the free air volume of the jar, respectively. The term on the right-hand side expresses the respiration of the entire fruit. (Table 1, not reproduced here, summarizes the model parameters, including the tissue diffusivities, the skin permeabilities and the respiration parameters, together with the sources from which each value was taken.)
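To illustrate how Equations (1)-(3) generate internal O 2 gradients, the following Python sketch solves a strongly simplified version of the model: a single gas (O 2 ), a spherically symmetric fruit, a Michaelis-Menten consumption term without CO 2 inhibition, and an explicit finite-difference march to steady state instead of the finite element solution used in the paper. All numerical values are order-of-magnitude placeholders and not the calibrated parameters of Table 1.

```python
import numpy as np

# Placeholder parameters (order of magnitude only, not the calibrated values).
R_fruit = 0.03     # fruit radius (m)
D       = 2.0e-10  # apparent O2 diffusivity of the tissue (m2 s-1)
h_skin  = 7.0e-7   # skin permeability for O2 (m s-1)
Vm      = 5.0e-5   # maximal O2 consumption rate (mol m-3 s-1)
Km      = 0.07     # Michaelis-Menten constant (mol m-3)
alpha   = 0.1      # gas capacity of the tissue
C_inf   = 8.6      # ambient O2 concentration (mol m-3), roughly 21 kPa at 20 degC

n  = 101
r  = np.linspace(0.0, R_fruit, n)
dr = r[1] - r[0]
r_mid = 0.5 * (r[:-1] + r[1:])
C  = np.full(n, C_inf)
dt = 0.1 * alpha * dr**2 / D              # conservative explicit time step

for _ in range(300000):                   # march towards steady state
    lap = np.zeros(n)
    lap[0] = 6.0 * (C[1] - C[0]) / dr**2  # symmetry condition at the centre
    lap[1:-1] = (r_mid[1:]**2 * (C[2:] - C[1:-1])
                 - r_mid[:-1]**2 * (C[1:-1] - C[:-2])) / (r[1:-1]**2 * dr**2)
    sink = Vm * C / (Km + C)              # Michaelis-Menten O2 consumption
    C[:-1] += dt / alpha * (D * lap[:-1] - sink[:-1])
    # Robin condition at the skin (Eq. 3): -D dC/dr = h_skin (C - C_inf)
    C[-1] = (C[-2] + dr * h_skin / D * C_inf) / (1.0 + dr * h_skin / D)
    np.clip(C, 0.0, None, out=C)

print("O2 at the centre (mol m-3): %.3f, at the surface: %.3f" % (C[0], C[-1]))
```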
Equations 1-4 were numerically solved using the finite element method (Comsol 3.5, Comsol AB, Stockholm) on a 3-D pear geometry generated by means of the shape generator (Rogge et al., 2015). Response of respiration to the external O 2 level at low temperature A non-competitive inhibition model (Hertog et al., 1998;Lammertyn et al., 2001;Ho et al., 2010b) was used to describe the consumption of O 2 by respiration as formulated by Equation 5:

R_O2 = V_m,O2·C_O2/[(K_m,O2 + C_O2)·(1 + C_CO2/K_mn,CO2)] (5)

with V_m,O2 (µmol m −3 s −1 ) the maximal O 2 consumption rate, K_m,O2 (µmol m −3 ) the Michaelis-Menten constant for O 2 , and K_mn,CO2 (µmol m −3 ) the Michaelis-Menten constant of the non-competitive CO 2 inhibition of respiration. The equation for the production rate of CO 2 consists of an oxidative respiration part and a fermentative part (Peppelenbos and van't Leven, 1996;Ho et al., 2010b):

R_CO2 = r_q,ox·R_O2 + V_m,f,CO2/(1 + C_O2/K_m,f,O2) (6)

with V_m,f,CO2 (µmol m −3 s −1 ) the maximum fermentative CO 2 production rate, K_m,f,O2 (µmol m −3 ) the Michaelis-Menten constant of O 2 inhibition on fermentative CO 2 production, r_q,ox the respiration quotient at high O 2 partial pressure, and R_CO2 (µmol m −3 s −1 ) the CO 2 production rate of the sample. To account for a regulatory mechanism that would adapt the maximal respiration rate V_m,O2 in response to changing O 2 levels, we assumed that a sensor would be activated by O 2 , resulting in a signal transduction cascade that eventually would change the amount of enzymes involved in respiration (Fig. 1). A decrease of the O 2 level would alter V_m,O2 due to adjustment of the balance between enzyme synthesis and degradation according to Supplementary Protocol S1 and Supplementary Fig. S1:

dV_m,O2/dt = k_d·(V_R − V_m,O2) (7)

V_R = V_R,1 + (V_R,2 − V_R,1)·p_O2^m/(K_H + p_O2^m) (8)

with k_d the rate constant of the adjustment, K_H the affinity constant of V_m,O2 to O 2 , p_O2 the O 2 partial pressure, and m the number of O 2 molecules aggregating one signal molecule. V_R,2 in Equation 8 is the maximal O 2 consumption rate in the presence of O 2 , while V_R,1 is a base affinity for O 2 ; V_R is the maximal O 2 consumption rate at a steady O 2 condition. ΔV_R = V_R,2 − V_R,1 is the amplitude of the regulation of the maximal respiration rate by O 2 . Equations 7 and 8 imply that V_m,O2 may vary depending on the O 2 concentration in a hyperbolic way between V_R,1 and V_R,2 , and that this change is not abrupt but follows an exponential (first-order) response. Equations 1-6 will further be called the 'gas exchange model' and Equations 1-8 the 'adapted gas exchange model'. The maximal O 2 consumption rate V_m,O2 and the maximal fermentative CO 2 production rate V_m,f,CO2 are temperature dependent and were assumed to follow Arrhenius's law (Hertog et al., 1998). Model parameters The apparent O 2 and CO 2 diffusivities of the tissue were computed from microscale simulations in small cubical samples obtained from synchrotron radiation X-ray tomography images as described by Ho et al. (2015) (see Table 1). Diffusivities of the tissue depend not only on the porosity but also on the degree of connectivity of the pores, since gas-phase diffusion occurs mainly through the connected pores and not through the dead (unconnected) pores. We did not need to differentiate between the cortex and the ovary ground tissue since the ovary ground tissue is located in the fruit core and its size is much smaller than that of the fruit. The Michaelis-Menten constant, which is a ratio of rate constants, would be expected to be relatively independent of temperature (Hertog et al., 1998). The values of K_m,O2 and K_m,f,O2 were therefore assumed to be independent of temperature and were taken from Ho et al. (2013) (Table 1). V_m,f,CO2 was obtained from the CO 2 production rate measured at 0 kPa O 2 (data of experiment A).
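A short numerical sketch may help clarify how the respiration kinetics and the proposed V_m,O2 regulation, Equations (5)-(8) as reconstructed above, behave. The constants marked as placeholders below are invented for illustration; only r_q,ox, k_d, K_H, m, the ratio V_R,1/V_R,2 and the 0 °C value of V_R,2 correspond to values reported in this paper.

```python
import numpy as np

# Kinetic constants: values marked "placeholder" are illustrative only.
Vm_ref   = 18.4e-6   # V_R,2 at 0 degC (mol m-3 s-1), as reported for 21 kPa O2
Km_O2    = 0.07      # Michaelis-Menten constant for O2 (mol m-3), placeholder
Kmn_CO2  = 0.40      # non-competitive CO2 inhibition constant (mol m-3), placeholder
rq_ox    = 0.77      # oxidative respiration quotient (experiment A)
Vm_f_CO2 = 2.0e-6    # maximal fermentative CO2 production (mol m-3 s-1), placeholder
Km_f_O2  = 0.30      # O2 inhibition constant of fermentation (mol m-3), placeholder

def R_O2(C_O2, C_CO2, Vm_O2):
    """O2 consumption with non-competitive CO2 inhibition (Eq. 5 form)."""
    return Vm_O2 * C_O2 / ((Km_O2 + C_O2) * (1.0 + C_CO2 / Kmn_CO2))

def R_CO2(C_O2, C_CO2, Vm_O2):
    """CO2 production: oxidative part plus O2-inhibited fermentation (Eq. 6 form)."""
    return rq_ox * R_O2(C_O2, C_CO2, Vm_O2) + Vm_f_CO2 / (1.0 + C_O2 / Km_f_O2)

# Down-regulation of Vm_O2 (Eqs 7-8 form): first-order relaxation towards a
# hyperbolic steady-state value V_R(pO2); regulation estimates as reported at 0 degC.
k_d, K_H, m = 1.30, 23.3, 2                    # d-1, kPa^m, -
V_R1, V_R2 = 0.34 * Vm_ref, Vm_ref

def V_R(p_O2_kPa):
    return V_R1 + (V_R2 - V_R1) * p_O2_kPa**m / (K_H + p_O2_kPa**m)

def step_Vm(Vm_O2, p_O2_kPa, dt_days):
    """Exact update of dVm/dt = k_d (V_R - Vm) over dt_days at constant pO2."""
    Vr = V_R(p_O2_kPa)
    return Vr + (Vm_O2 - Vr) * np.exp(-k_d * dt_days)

# Example: slow adjustment of Vm_O2 after a step change from 20 kPa to 5 kPa O2.
Vm = V_R(20.0)
for day in range(10):
    Vm = step_Vm(Vm, 5.0, dt_days=1.0)
print("Vm_O2 after 10 d at 5 kPa, as a fraction of V_R,2: %.2f" % (Vm / V_R2))
```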
The maximal O 2 consumption rate V R,2 was obtained from the O 2 consumption rate measured at 21 kPa O 2 (data of experiment A) in which the O 2 consumption rate was assumed to be saturated. The parameters k d , K H , and V R,1 of the adapted gas exchange model were estimated by minimizing the squared difference between O 2 the consumption rates predicted by Equations 1-8 and those measured from experiments B and C using a non-linear least squares estimation program written in Matlab (The Mathworks, Inc., USA) integrated with Comsol Multiphysics v. 3.5. In the estimation, a 2-D axisymmetrical model of the pear was implemented. Note that simulated results obtained from the 2-D axisymmetrical geometry were similar to those obtained from the 3-D geometry. However, the model with the 2-D axisymmetrical geometry had a low number of degree of freedoms, hence the computational time of the estimation was reduced considerably. The effect of temperature on the respiration parameters was not considered in the estimation since experiments B and C were carried out at constant temperature (0 °C). The data of experiment D were used for validation purposes only. Factors affecting fruit respiration The validation experiment was carried out in a closed system. Response of respiration as a function of the O 2 level was in fact affected by three main factors, namely accumulation of CO 2 during respiration, regulation of the respiratory metabolism, and O 2 diffusion limitation. A series of simulations was carried out to analyse the relative contribution of these factors to decreasing the fruit respiration rate. consumption rate of the fruit during the entire simulation for normalization purposes. The relative effect of down-regulation f DR was computed from: The relative effect of the remaining diffusion limitation f DL on the O 2 consumption rate was calculated from: Temperature dependency of respiration capacity The maximal O 2 consumption rate V m O , 2 and the maximal fermentative CO 2 production rate V m f CO , , 2 measured at different temperatures are shown in Table 2 The results are shown in Fig. 2. The maximal respiration rates were significantly affected by the temperature since the estimated values of the activation rates E a,V m,O 2 and E a,V m,f,CO 2 were equal to 73.4 ± 3.8 kJ mol −1 and 58.5 ± 3.3 kJ mol −1 , respectively. V m O , 2 and V m f CO , , 2 exponentially increased with increasing temperature. The predicted V m O , 2 values at 0 °C and 5 °C were 16% and 66% larger than those measured at the same temperatures, respectively. In addition, some variability between the measured V m O , 2 and V m f CO , , 2 in different seasons was observed. Therefore, in the model for a particular season, the input maximal rates were taken from the measured data in the same season to compensate for seasonal differences that otherwise would obscure the effects of changing O 2 levels in relation to temperature that we were interested in. Diffusion limitation affects respiration of intact fruit at high temperature In the next step, the gas exchange model was used to evaluate whether respiration was diffusion limited by comparing simulated and measured values of respiration at 10 °C and 20 °C (Fig. 3) inside the pear changed from steep in the beginning to shallow at the end of the experiment (Fig. 4). There was good agreement between simulated and measured values of respiration rates. 
The simulation results suggest that at 20 °C the large respiration rate caused a rapid depletion of O 2 towards the centre of the fruit. In combination with the diffusion resistance of the fruit cortex and skin tissue, this caused a steep O 2 gradient inside the fruit. As a consequence, the limited O 2 availability in the centre of the fruit reduced the local and thus also the overall respiration rate (Fig. 3A, D). At 10 °C, the respiration rate was considerably smaller (Fig. 3B, E). Since the diffusivity of O 2 and CO 2 is only slightly affected by temperature, the relative rate of O 2 transport compared with consumption was higher than at 20 °C, and the O 2 (and CO 2 ) gradient was shallower (Fig. 4). [Fig. 3 caption: R_O2 and R_CO2 are the O 2 consumption rate and CO 2 production rate, respectively. Open circles indicate measurements (experiment D). Solid, dashed, and dotted lines correspond to simulations with an assumed ΔV/V_R,2 of 0, 0.21, and 0.66, respectively. The ratio ΔV/V_R,2 represents the amplitude of the regulation of the maximal respiration rate by O 2 (see Supplementary Protocols S1, S3 for its derivation). The maximal O 2 consumption rate V_R,2 at 20, 10, and 0 °C was measured at 21 kPa O 2 , 0 kPa CO 2 , and was equal to 230, 71.4, and 18.4 µmol m −3 s −1 , respectively.] The O 2 concentration in the jar at which the O 2 consumption rates decreased to half their maximal values was equal to 1.8 kPa and 4.9 kPa at 10 °C and 20 °C, respectively. The duration of the experiments at 20 °C and 10 °C was 5 d and 7 d, respectively (Fig. 3G, H). To evaluate whether the maximal respiration rate varied during the experiment, respiration measurements were carried out at 21 kPa O 2 , 0 kPa CO 2 , and 10 °C on four consecutive days. The respiration rate after 3 d had increased slightly, by 3%, but was not significantly different from that at day 1 (Supplementary Fig. S2). Down-regulation of respiration at low temperature At 0 °C the respiration rate was very low (one order of magnitude lower than respiration at 20 °C), and it would take a considerable amount of time to deplete the O 2 in the jar. We therefore started the experiment at an O 2 concentration of 7 kPa. As the rate of O 2 diffusion was now much larger than that of O 2 consumption, the O 2 concentration profile was almost uniform and there were hardly any gradients (Fig. 4). The rate-limiting enzyme of respiration is believed to be COX (Armstrong and Beckett, 2011a), and we thus expected a Michaelis-Menten-like behaviour with a saturating O 2 consumption rate at O 2 levels larger than the K m of COX. Surprisingly, the measurements showed a clear linear decrease of both the O 2 consumption rate and the CO 2 production rate with decreasing O 2 levels down to 1 kPa (Fig. 3C). The ratio of R_CO2 to R_O2 was >1 at an O 2 level lower than 0.5 kPa (Fig. 3F). By assuming a constant V_m,O2, the gas exchange model predicted a Michaelis-Menten-like overall respiration rate R_O2 that saturated at sufficiently large O 2 concentrations, which was not consistent with the measurements (Fig. 3C, F). We thus modified our model to incorporate an adaption of V_m,O2 to the O 2 level in the jar. Dynamic adaption of V_m,O2 to O 2 level at low temperature In the next step we estimated the parameters k_d, K_H, and V_R,1 of Equations 7 and 8 using the combined data of experiments B and C (Supplementary Table S1).
At 0 °C, V_R,2 in Equation 8 was set equal to the measured O 2 consumption rate at 21 kPa O 2 , 0 kPa CO 2 , and 79 kPa N 2 . We observed that the O 2 consumption rate at 21 kPa O 2 and 0 kPa CO 2 was not constant but decreased during long-term storage at -1 °C (Supplementary Fig. S3), although at a rather slow pace (-3.64 × 10 -2 µmol m −3 s −1 d −1 ). Hence, V_R,2 was set equal to the measured O 2 consumption rate at the initial time of the experiment, assuming it to be constant for the duration of the simulated storage period of 15 d (Supplementary Fig. S3). A good agreement between the fitted respiration rates and the corresponding measurements was observed (Figs 5, 6). The change of R_O2 upon an abrupt decrease of the O 2 level from 20 kPa to 5 kPa O 2 (Fig. 5A, D) was large; when O 2 decreased from 20 kPa to 15 kPa O 2 , it was hardly visible (Fig. 5B, E). [Fig. 4 caption: simulated O 2 (A) and CO 2 (B) partial pressures inside pear fruit in a closed jar at 20 °C (I), 10 °C (II), and 0 °C (III) at different times. The initial atmosphere composition was 21 kPa O 2 , 0 kPa CO 2 , and 79 kPa N 2 (I and II), and 6 kPa O 2 , 0 kPa CO 2 , and 94 kPa N 2 (III). The contour graphs in (A) and (B) represent the O 2 and CO 2 partial pressures inside the pear, respectively. The ratio ΔV/V_R,2 was set to 0.21, 0.21, and 0.66 at 20, 10, and 0 °C, respectively.] We found that the adapted respiration model with m equal to 2 gave a better fit to the observed data than that with m equal to 1 (Supplementary Figs S4, S5). R 2 , a criterion for the goodness of fit (see definition in Supplementary Protocol S4), was 0.677 and 0.753 for the adapted respiration model with m equal to 1 and 2, respectively. Therefore, only estimated parameters with m equal to 2 were considered. The estimated values of k_d and K_H were 1.30 ± 0.23 d −1 and 23.3 ± 5.3 kPa 2 , respectively (Table 2). The estimated value of K_H implied that V_m,O2 was reduced to half of V_R,2 at a constant O 2 level of 4.8 kPa (Supplementary Fig. S6A). The estimated value of k_d suggested that the time for V_m,O2 − V_R to decrease to 37% of its initial value in response to a sudden drop in the O 2 concentration was 0.77 d (Supplementary Fig. S6B). The estimated value of V_R,1 was 0.34 ± 0.06 × V_R,2 . All in all, these results indicate that even at O 2 concentrations much larger than the K m of COX, the respiration rate of the fruit is reduced, presumably due to down-regulation of key enzymes of the respiration pathway. We also tested the alternative hypothesis that K_m,O2 varied at low temperature while V_m,O2 remained constant. The estimated value of K_m,O2 was then 2.04 ± 0.01 kPa, which is much larger than that of pear cell protoplasts and isolated mitochondria. Since the R 2 of this model (0.668) was lower than that of the adapted respiration model with m equal to 2 (0.753) and the fit was also worse (Supplementary Figs S7, S8), we rejected this alternative hypothesis. Adaption of respiration in response to O 2 level at different temperatures We further tested the hypothesis that, while the adaption of respiration to O 2 levels was considerable at low temperature, it was relatively insignificant at high temperature. As can be seen from our model analysis, ΔV at 0 °C was 66% of the total maximal respiration rate V_R,2 . We further simulated different responses of respiration to O 2 level with ΔV/V_R,2 equal to 0, 0.21, and 0.66, respectively.
At 20 °C and 10 °C, the simulated results were comparable with the measured values when ΔV/V R,2 was low (0 or 0.21) (Fig. 3). In contrast, at 0 °C, the model fitted the measured data best for ΔV/V R,2 equal to 0.66. Replicate measurements of the respiration rate in response to different O 2 levels were additionally carried out in 2016 at 5, 10, and 20 °C ( Supplementary Fig. S9). Again the simulated O 2 and CO 2 consumption rates at 10 °C and 20 °C with ΔV/V R,2 equal to 0 or 0.21 fitted the data well ( Supplementary Fig. S9A-D) while at 5 °C a ΔV/V R,2 of 0.66 gave the best fit ( Supplementary Fig. S9E, F). Our simulation results confirmed that down-regulation was temperature dependent and significant at low temperature. Factors affecting fruit respiration The relative importance of the different factors affecting the respiration rate under decreasing O 2 levels is shown in Fig. 7. At 20 °C, respiration was rapidly reduced by diffusion limitations at a high respiration rate when the O 2 concentration decreased. The effect was found to increase predominantly when the O 2 partial pressure decreased to <12 kPa (Fig. 7A). At 10 °C, respiration was slightly reduced by the accumulating CO 2 concentration when the experiment evolved and the O 2 concentration decreased. When the O 2 partial pressure decreased below 4 kPa, diffusion limitations caused a progressively steep decline of the O 2 consumption curve. Note that at this stage, although the accumulated CO 2 level was high, CO 2 inhibition of respiration was much smaller than that caused by diffusion limitation (Fig. 7B). At 0 °C, the inhibition of respiration by CO 2 as shown in Fig. 7C was less profound due to the limited accumulation of CO 2 . Also, the simulation results showed that diffusion limitations at O 2 levels >2 kPa did not affect respiration much. A bilinear decrease was found. The first decrease was the adaptive response of the respiration rate at the O 2 level >2 kPa probably due to down-regulation. When the O 2 level was <2 kPa, however, the rate of O 2 diffusion through tissue became more predominantly limiting as the decline of the O 2 consumption rate became sharper until it reached zero. Potential of gas exchange model in a systematic study of respiration response to O 2 level Armstrong and Beckett (2011a) have shown the effect of O 2 diffusion on respiration of roots using a reaction-diffusion model on a multicylindrical geometry. These authors did not explicitly determine diffusion coefficients. To better understand the response of respiration to O 2 level, we analysed the respiration behaviour by means of a reaction-diffusion model that incorporated more detailed Michaelis-Menten kinetics for O 2 and CO 2 consumption and production, respectively, and that accounted for CO 2 inhibition effects. The corresponding diffusion coefficients were calculated by means of a microscale model following Ho et al. (2016). While earlier modelling work did show an obvious internal concentration gradient in pear at low temperature (Ho et al., 2008), this did not become visible in the current work. The gradients predicted before came from an underestimation of the experimentally measured diffusivities due to artefacts induced by cutting of the samples, inundating the intercellular spaces by leaking of the cell content (Ho et al., 2016). 
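The reaction-diffusion idea referred to above can be illustrated with a deliberately simplified numerical sketch: O2 diffusing into a sphere of tissue while being consumed by Michaelis-Menten respiration. All parameter values below are illustrative placeholders rather than the calibrated 'Conference' pear properties, and the CO2 field, skin resistance and respiratory regulation are omitted.

```python
import numpy as np

# Simplified 1-D spherical reaction-diffusion sketch: O2 diffuses into fruit tissue
# and is consumed by Michaelis-Menten respiration. All values are illustrative
# placeholders, not the calibrated pear properties.

R_fruit = 0.03        # m, fruit radius (illustrative)
D_eff   = 2.0e-10     # m2 s-1, effective O2 diffusivity in tissue (illustrative)
Vm      = 2.3e-4      # mol m-3 s-1 (= 230 µmol m-3 s-1, the 20 °C maximal rate)
Km      = 0.05        # mol m-3, Michaelis constant in concentration units (illustrative)
c_surf  = 0.28        # mol m-3, O2 concentration just inside the surface (illustrative)

N  = 100
r  = np.linspace(0.0, R_fruit, N)
dr = r[1] - r[0]
c  = np.full(N, c_surf)

dt = 0.1 * dr**2 / D_eff                      # stable explicit time step
for _ in range(200_000):                      # march towards steady state
    lap = np.zeros(N)
    lap[1:-1] = (c[2:] - 2.0 * c[1:-1] + c[:-2]) / dr**2 \
                + (2.0 / r[1:-1]) * (c[2:] - c[:-2]) / (2.0 * dr)
    lap[0] = 6.0 * (c[1] - c[0]) / dr**2      # symmetry condition at the centre
    resp = Vm * c / (Km + c)                  # Michaelis-Menten sink
    c[:-1] += dt * (D_eff * lap[:-1] - resp[:-1])
    c[-1] = c_surf                            # fixed surface concentration
    c = np.clip(c, 0.0, None)

print("centre O2 / surface O2:", c[0] / c_surf)   # << 1 at high Vm: a steep gradient develops
```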
The current model predictions are more reliable as they are based on improved diffusion properties determined from microscale simulations based on 3-D synchrotron microtomography images (Ho et al., 2016). Simulations showed that at 10 °C and above, the overall respiratory activity of the fruit was predicted well by a gas diffusion model incorporating Michaelis-Menten kinetics to describe respiration, suggesting that under these conditions respiration is mainly controlled by diffusion limitations. Similar results were reported by Armstrong and Beckett (2011a) for root pieces. While respiratory down-regulation was not clearly found in our measurement and simulation results at high temperature, such an effect could have been masked by an increase of V_m,O2 caused by fruit ripening during the course of the experiment. However, this was not the case, as the measurements showed that V_m,O2 did not change significantly during the experimental period (Supplementary Fig. S2). The observed decrease of respiration was therefore mainly due to diffusion limitation and the inhibitory effect of accumulated CO2 (Fig. 7A, B). The value of r_q was equal to 0.77, indicating that the measured CO2 production rate was lower than the measured O2 consumption rate. Note that r_q for ideal respiration on a carbohydrate substrate is considered equal to 1, and CO2 solubilization might cause an underestimation of the CO2 production rate. Our simulations with the input parameter r_q equal to 1 predicted much larger CO2 production rates than those measured, whereas the CO2 production rate profiles predicted with r_q equal to 0.77 agreed well with the measured profiles (Fig. 3D-F). The simulations therefore indicated that the measured CO2 production rate is likely to be largely unaffected by CO2 solubility. Down-regulation may explain the response of respiration to O2 level at low temperature The conventional Michaelis-Menten-like model for gas exchange assumes that the respiration rate of plant cells already saturates at an O2 level as low as 1.5 kPa in the tissue. When we measured respiration rates at 0 °C (Fig. 3C, F) and 5 °C (Supplementary Fig. S9E, F) to minimize the effect of O2 diffusion, we found that the O2 consumption rate was considerably reduced in response to decreasing O2 levels well above 1.5 kPa (nine times the K_m of the tissue). In the experiments, the cooling capacity of our experimental cold rooms was sufficiently large to decrease the temperature from 20 °C to 0 or 5 °C within hours and to keep the fruit at a constant and almost uniform temperature (Supplementary Protocol S5; Supplementary Fig. S10). Note that K_m represents the O2 concentration at which the respiration rate reaches half of its maximum value (Armstrong and Beckett, 2011a). K_m might be temperature dependent, similar to the maximum respiration rate (Kruse et al., 2011). However, Hertog et al. (1998) proposed that K_m, being a ratio of rate constants, is relatively independent of temperature when the activation energies of the individual rate constants are similar. The value of K_m is 0.17 kPa O2 for tissue, 0.14 kPa O2 for cell protoplasts (Lammertyn et al., 2001), and 0.10-1 µM (~4.5 × 10⁻³ to 4.5 × 10⁻² kPa) for COX [0.10-0.12 µM (~4.5 × 10⁻³ to 5 × 10⁻³ kPa), Rawsthorne and Larue (1986); 1 µM (~4.5 × 10⁻² kPa), Taiz and Zeiger (1993); and 0.14 µM (~6 × 10⁻³ kPa), Millar et al. (1994)].
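The µM-to-kPa conversions quoted above for the K_m of COX can be reproduced approximately with Henry's law. The solubility value used below is an assumption (roughly that of O2 in water near 0 °C), since the text does not state which solubility was applied.

```python
# Hedged check of the µM-to-kPa conversions quoted for the K_m of COX.
# Assumption: an O2 solubility in water of ~2.2e-3 mol L-1 atm-1 (a value near 0 °C);
# the text does not state which solubility was actually used.

O2_SOLUBILITY = 2.2e-3     # mol L-1 atm-1 (assumed)
KPA_PER_ATM   = 101.325

def uM_to_kPa(conc_uM):
    """Equilibrium O2 partial pressure (kPa) for a dissolved concentration in µM."""
    return conc_uM * 1e-6 / O2_SOLUBILITY * KPA_PER_ATM

for c in (0.10, 0.14, 1.0):
    print(f"{c:4.2f} µM -> {uM_to_kPa(c):.1e} kPa")
# ~4.6e-03, ~6.4e-03 and ~4.6e-02 kPa, close to the ~4.5e-3, ~6e-3 and ~4.5e-2 kPa quoted above
```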
The K_m described in the model was much larger than that obtained from isolated mitochondria, since the K_m for tissue accounted for diffusion barriers through the cell wall, the cell membrane, and within the cytosol. Simulations with a two-compartment model (core and cortex) and different combinations of O2 diffusivities and V_max values were also carried out (Supplementary Fig. S11). Respiration was assumed to follow conventional Michaelis-Menten kinetics without a regulatory mechanism. While at 10 °C the model fitted the data well, this was not the case at 0 °C for any of the aforementioned parameter combinations. The experimental data contradicted simulation results obtained with the gas exchange model incorporating conventional Michaelis-Menten-based respiration kinetics, suggesting an additional reduction in respiration rate beyond the substrate effects already accounted for. When we modified the respiration kinetics to allow V_m,O2 to change as a function of the O2 level, rather than keeping V_m,O2 constant, the simulations fitted the measurements well (Fig. 3C, E, F; Supplementary Fig. S9E, F). This indicated that additional regulatory effects on the respiration pathways are likely to occur. The dynamics of regulatory and signalling pathways in the cell were modelled by reaction kinetics at the transcriptome level. We assumed that an O2 signal could modulate the biosynthesis of respiratory enzymes in the cell through activation of an O2 receptor. The response of the maximal respiration rate was taken to be proportional to the change in the amount of enzymes involved in respiration. We assumed that a decrease of the O2 level would alter the maximal respiration rate through an adjustment of the balance between enzyme synthesis and degradation (see Fig. 1; Supplementary Protocol S1; Supplementary Fig. S1). So, fundamentally, the model allows a bidirectional change in enzyme activity. We observed a relatively slow adaption to changing O2 levels, with an estimated k_d of 1.30 d⁻¹. Our results showed that the response of V_m,O2 to changing O2 levels was more sensitive at low than at high O2 levels (see Figs 3, 4; Supplementary Figs S9, S12). Zabalza et al. (2009) found that the respiratory demand of pea root at 25 °C was 300 nmol g⁻¹ min⁻¹ O2 (equivalent to 5 × 10⁴ µmol m⁻³ s⁻¹). For barley root at 25 °C, the respiratory demand was observed to be 100 µmol g⁻¹ h⁻¹ O2 (equivalent to 2.7 × 10⁵ µmol m⁻³ s⁻¹) (Gupta et al., 2009). These values are considerably larger than that of the pear fruit measured in this study (230 µmol m⁻³ s⁻¹ at 20 °C). This difference is due to the fact that mature but pre-climacteric pear fruit are much less metabolically active than roots, which are actively involved in uptake processes. Respiratory down-regulation in plant tissues has been suggested by Gupta et al. (2009) and Zabalza et al. (2009). Zabalza et al. (2009) observed a slow but linear decrease of the respiratory rate of roots of pea and Arabidopsis with decreasing O2 levels down to ~4 kPa, below which it declined steeply. However, the respiratory demand of the pea and Arabidopsis roots in the experiments performed by these authors was more than an order of magnitude larger than the pear respiration measured at 20 °C in this study, so that the scavenging of oxygen from their system was complete in <2 h.
Since down-regulation might require long exposure at specific O 2 levels, a change of respiration to O 2 levels has been alternatively suggested by substantial diffusion limitation on O 2 supply when O 2 respiratory demand was high (Armstrong and Beckett, 2011a). At low temperature due to low respiration demand, the adaptive response of the respiration rate to O 2 levels was shown to be due to down-regulation rather than diffusion limitation on O 2 supply (Figs 3C, E, F, 4; Supplementary Figs S13, S14). This has implications with respect to commercial storage of pear fruit under hypoxic conditions ['controlled atmosphere (CA) storage']. Abruptly and drastically changing the O 2 level is known to cause browning and cavity formation in pear, probably because respiration may consume most O 2 in the centre of the pear (Verlinden et al., 2002). This may create near anoxic conditions initiating a chain of events eventually causing the symptoms of the disorder. Adaption of the fruit to low O 2 levels by reducing the respiration rate would eventually result in less severe O 2 concentrations in the centre of the fruit and a reduction in the symptoms. This procedure is in fact applied in practice and may be further optimized. While the model was developed using 'Conference' pear data, it should be extended to other pear cultivars or to 'Conference' pears grown under different agronomic/climate conditions that may well affect both fruit respiration and microstructural properties. Note that the gas transport model that was used herein assumes that gas transport properties are uniform and isotropic, and that the respiration kinetics do not depend on position. Future research should incorporate more realistic features into the model and investigate their effect on gas transport. AOX might play a role in regulation of respiration The alternative oxidase (AOX) has been proposed to play a role in adaption of respiration to O 2 level within mitochondria (Szal et al., 2003;McDonald and Vanlerberghe, 2006;Gupta et al., 2009). At a short-term temperature change from 17 °C to 36 °C, the ratio of alternative respiration to total respiration was reported to be relatively constant and ~0.21-0.30 for different leaves of Nigella sativa, Cucurbita pepo, and Vicia faba (Macfarlane et al., 2009). However, partitioning of electrons via the alternative respiration pathway has been shown to be increased after long-term cold acclimation in some species (Gonzalez-Meler, 1999;Fung et al., 2004;Sugie et al., 2006). Our simulation results showed that the magnitude of regulation of respiration in response to O 2 level was relatively low at high temperature but significantly high at low temperature. Note that we did not explicitly model distinct AOX and COX pathways. If the AOX pathway is indeed responsible for the regulatory effects by O 2 , ΔV can be interpreted as its capacity, while the time and O 2 responses are lumped in the parameters k d and K H . Assuming that at 10 °C and 20 °C the amplitude of the regulation of the respiration rate, ΔV, was 0.21 times the total maximal respiration rate V R,2 , we found good agreement between simulation and measurements. This magnitude was similar to the partition of the AOX pathway to the total respiration at high temperature (0.21-0.30 from 17 °C to 36 °C, Macfarlane et al., 2009). At 0 °C, ΔV was found to be 0.66 times the total maximal respiration rate V R,2 . Likewise, the ratio of the AOX pathway to the total respiration for maize leaves (Zea mays L. 
cv Penjalina) growing at 25 °C was reported to be 0.25 but increased to 0.6 after 5 d at 5 °C (chilled) (Ribas-Carbo et al., 2000). These results indicate that the regulation of AOX might be involved in the response of respiration to changing O2 levels at low temperature. Supplementary data Supplementary data are available at JXB online. Protocol S1. Modelling the response of V_m,O2 to O2 level. Protocol S2. Temperature dependency of respiration capacity. Protocol S3. Amplitude of regulation of maximal respiration rate by O2. Protocol S4. Criterion for goodness of fit of the model. Protocol S5. Heat conduction model. Table S1. Description of data sets used in calibration and validation of model. Fig. S1. Proposed reactions and modelled equations describing response of receptor, enzyme, and respiration to O2 level. Fig. S2. O2 consumption rate of intact pear fruit as a function of time at 20 kPa O2, 0 kPa CO2 at 10 °C. Fig. S13. Simulated V_m,O2 of pear fruit in the closed jar at 20 °C (I), 10 °C (II), and 0 °C (III) and different times. Fig. S14. Simulated O2 and CO2 gas partial pressure profiles from the centre to the surface along the radial direction in the closed jar at 20 °C (I), 10 °C (II), and 0 °C (III) at different times.
2018-04-03T01:33:43.113Z
2018-01-31T00:00:00.000
{ "year": 2018, "sha1": "2be07752b6986f4f8f6dd4ecf2f1c99c77bfdc1d", "oa_license": "CCBY", "oa_url": "https://academic.oup.com/jxb/article-pdf/69/8/2049/25090462/ery031.pdf", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "2be07752b6986f4f8f6dd4ecf2f1c99c77bfdc1d", "s2fieldsofstudy": [ "Agricultural And Food Sciences" ], "extfieldsofstudy": [ "Chemistry", "Medicine" ] }
219590200
pes2o/s2orc
v3-fos-license
Ibuprofen mediates histone modification to diminish cancer cell stemness properties via a COX2-dependent manner Background The anticancer potential of ibuprofen has created a broad interest to explore the clinical benefits of ibuprofen in cancer therapy. However, the current understanding of the molecular mechanisms involved in the anticancer potential of ibuprofen remains limited. Methods Cancer stemness assays to validate ibuprofen function in vitro and in vivo. Histone modification assays to check the effect of ibuprofen on histone acetylation/methylation, as well as the activity of HDAC and KDM6A/B. Inhibitors’ in vivo assays to evaluate therapeutic effects of various inhibitors’ combination manners. Results In our in vitro studies, we report that ibuprofen diminishes cancer cell stemness properties that include reducing the ALDH + subpopulation, side population and sphere formation in three cancer types. In our in vivo studies, we report that ibuprofen decreases tumour growth, metastasis and prolongs survival. In addition, our results showed that ibuprofen inhibits inflammation-related stemness gene expression (especially ICAM3) identified by a high-throughput siRNA platform. In regard to the underlying molecular mechanism of action, we report that ibuprofen reduces HDACs and histone demethylase (KDM6A/B) expression that mediates histone acetylation and methylation, and suppresses gene expression via a COX2-dependent way. In regard to therapeutic strategies, we report that ibuprofen combined HDAC/HDM inhibitors prevents cancer progression in vivo. Conclusions The aforementioned findings suggest a molecular model that explains how ibuprofen diminishes cancer cell stemness properties. These may provide novel targets for therapeutic strategies involving ibuprofen in the prevention of cancer progression. BACKGROUND The cancer stem cell (CSC) is the chief culprit in tumour initiation and malignancy whereby the maintenance of cancer cell stemness largely depends on the surrounding environment or also called the 'niche'. [1][2][3] Although the highly tumorigenic CSCs play a role in tumour initiation/metastasis, and might therefore be a good clinical therapy target, CSCs unfortunately demonstrate a relative resistance to conventional chemotherapy and radiotherapy. In addition, tumour-associated inflammation factors within the tumour niche play a pivotal role in the maintenance of cancer cell stemness and the resultant tumour initiation/malignancy. 4,5 Our previous work presented a medium-throughput siRNA screen platform to identify inflammation genes that regulate cancer cell stemness, and obtained several novel candidates. 6 Agents that target these genes may inhibit both inflammation and cancer cell stemness at the same time. Ibuprofen (a non-steroidal anti-inflammatory drug) is commonly used for treating pain, fever and inflammation. Recent observational and epidemiological studies have shown that regular, prolonged use of ibuprofen reduces the risk for several cancers (e.g., colorectal, breast, cervical, gastric, lung cancers and head and neck cancer). [7][8][9][10][11][12][13][14] Although the benefits of ibuprofen for cancer patients have been appreciated, the mechanism remains unclear. Previous studies attribute the anticancer potential of NSAID like aspirin to the inhibition of cyclooxygenase-2 (COX2), which is upregulated in various cancer cells. 15,16 Of note, an increasing body of evidence suggests that aspirin may exhibit anticancer effects in a COX-independent manner. 
However, the role of COX expression in ibuprofen-mediated cancer inhibition remains unclear. Histone modification is a reversible process mediated by epigenetic enzymes. 17,18 Histone methylation and acetylation are two important chemical modifications that act in transcriptional activation/inactivation, chromosome packaging and DNA damage/repair. 19,20 Histone demethylases (HDMs) and histone deacetylases (HDACs) are the key enzymes that remove methyl and acetyl groups, respectively, to regulate gene transcription. In this regard, NSAIDs such as aspirin affect HDAC expression and suppress the progression of some cancers. [21][22][23] However, the role of ibuprofen in histone modification and the specific mechanisms involved remain unclear. Thus, we studied the role of ibuprofen in histone methylation and acetylation, and the attendant effects on cancer cell stemness and progression. Here, by investigating the role of ibuprofen in cancer cell stemness and progression, we found that ibuprofen restrains cancer cell stemness properties, including reducing the ALDH+ subpopulation, side population and sphere formation, in three cancer types in vitro. Furthermore, ibuprofen inhibits tumour growth and metastasis and prolongs survival in vivo. In addition, ibuprofen was shown to inhibit inflammation-related stemness genes, especially ICAM3, which we identified with a high-throughput siRNA platform. Exploration of the underlying mechanism demonstrated that ibuprofen reduced HDAC and histone demethylase (KDM6A/B) expression, thereby mediating histone 3 methylation and acetylation and suppressing gene expression in a COX2-dependent manner. As a therapeutic strategy, ibuprofen combined with HDAC/HDM inhibitors restrained cancer progression in vivo. Our research reveals promising mechanisms and strategies for the use of ibuprofen in the prevention of cancer progression. Cytotoxicity assay Ibuprofen was purchased from Sigma (cat. I4883) and dissolved in DMSO. MDA-MB-231, A549 and HepG2 cells were cultured in 96-well plates and treated with various concentrations of ibuprofen for 24 h. Cell viability was tested with a CCK8 kit (Dojindo, China) following the manufacturer's instructions. Aldefluor assay The Aldefluor assay kit (STEMCELL Technologies, Vancouver, Canada) was used to measure ALDH enzymatic activity in the three cancer cell lines (MDA-MB-231, A549 and HepG2). In brief, 2.5 × 10⁵ cells were suspended in Aldefluor assay buffer containing ALDH1 substrate and incubated for 60 min at 37 °C. Cells treated with the specific ALDH inhibitor DEAB served as the negative control. Stained cells were analysed on a BD FACSCalibur flow cytometer (BD Biosciences, San Jose, CA). Data analysis was performed using FlowJo software (Tree Star, Inc., Ashland, OR). Side-population assay MDA-MB-231 (hereafter 231), A549 and HepG2 cells treated with ibuprofen were harvested and resuspended in pre-warmed staining buffer (PBS with 2% FBS) at a density of 1.0 × 10⁶ cells/ml. Hoechst 33342 dye was added at a final concentration of 7 µg/ml (231), 8 µg/ml (A549) or 10 µg/ml (HepG2) in the presence or absence of 10 µM fumitremorgin C (FTC). The following steps were described previously. 6,24 QPCR assay Total RNA was extracted with TRIzol reagent (Cat. #15596-018, Invitrogen Inc., Carlsbad, CA) and then reverse-transcribed into cDNA. Real-time PCR was performed in 20-µl reaction volumes using the TransStart Green qPCR SuperMix Kit (TransGen Biotech, Beijing, PR China).
The 2^(−ΔΔCt) method was used to determine relative mRNA fold changes. Statistical results were averaged from three independent experiments performed in triplicate. The specific primer sequences are summarised in Supplementary Table 1. Sphere-formation assay The sphere-formation assay steps were described previously. 6 Animal study Female Balb/c mice at 6-8 weeks were separated randomly into several groups (n ≥ 5). In total, 5 × 10⁴ 4T1-luci cells were inoculated s.c. into each mouse at the right axilla. For the lung metastasis assay, starting 7 days after injection, mice were treated with ibuprofen 20 mg/kg or ibuprofen 40 mg/kg every 3 days; DMSO was used as the control. For the chemoresistance assay, starting 7 days after injection, mice were first treated with cisplatin (2.5 mg/kg, dissolved in 0.9% NaCl) and then treated with ibuprofen 10 mg/kg or ibuprofen 20 mg/kg every 3 days until the mice died. DMSO was used as the control. Female nude mice at 6-8 weeks were separated randomly into several groups (n ≥ 5). In total, 2 × 10⁶ ALDH+ or ALDH− cells were inoculated s.c. into each mouse at the right axilla. For the lung metastasis assay, starting 12 days after injection, mice were treated with ibuprofen 20 mg/kg or ibuprofen 40 mg/kg every 3 days; DMSO was used as the control. For the chemoresistance assay, starting 12 days after injection, mice were first treated with cisplatin (2.5 mg/kg) and then treated with ibuprofen 10 mg/kg or ibuprofen 20 mg/kg every 3 days until day 24. The mice were sacrificed under isoflurane anaesthesia. DMSO was used as the control. NOD/SCID mice at 6-8 weeks were separated randomly into several groups (n ≥ 5). In total, 3 × 10⁶ A549-luci cells were inoculated s.c. into each mouse. For the inhibitor treatment assay, 21 or 23 days after tumour cell injection, mice were first treated with ibuprofen 10 mg/kg, the HDAC inhibitor trichostatin A (TSA, 0.5 mg/kg) or the KDM6A/B inhibitor GSK J1 (100 mg/kg), and then treated every 3 days until the mice died. DMSO was used as the control. Tumour volume (mm³) was measured with callipers and calculated using the standard formula: length × width² / 2. The individual measuring the mice was unaware of the identity of the group measured. Animal use complied with Nankai University and Jining Medical University Animal Welfare Guidelines. 25 Western blotting The western blot steps were described previously. 6,26 The specific primary antibodies used in this assay are listed in Supplementary Table 2. Immunofluorescence The immunofluorescence assay was described previously. 6,25 TUNEL staining Paraffin-embedded tissue slides were prepared from the tumour xenografts, and the DeadEnd™ Fluorometric TUNEL System kit (Promega) was applied for TUNEL staining. The procedure was performed according to the manufacturer's instructions. 4′,6-Diamidino-2-phenylindole (DAPI) was used to stain the nuclei, and the tissue slides were examined by Olympus BX51 epifluorescence microscopy under a 40× objective (FV1000-IX81, Olympus Microsystems, Shanghai, China). Chromatin immunoprecipitation assay The assay was performed with an EZ-Zyme Chromatin Prep Kit (Millipore), according to the manufacturer's protocol. Antibodies against histone 3 modification markers were used to precipitate DNA cross-linked with the respective markers, and normal rabbit IgG was used in parallel as a control. Enriched DNA was then used as a template to assess the binding intensity of histone 3 modification markers to putative binding sites in the ICAM3 promoter.
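For clarity, the two simple calculations described in this section, relative expression by the 2^(−ΔΔCt) method and calliper-based tumour volume, are sketched below; the Ct values are invented for illustration only.

```python
# Minimal sketch of the two calculations described above: relative expression by the
# 2^(-ΔΔCt) method and calliper-based tumour volume. The Ct values are made up for
# illustration only.

def fold_change_ddct(ct_target_treated, ct_ref_treated, ct_target_control, ct_ref_control):
    """Relative expression (treated vs control) of a target gene, normalised to a reference gene."""
    d_ct_treated = ct_target_treated - ct_ref_treated    # ΔCt of the treated sample
    d_ct_control = ct_target_control - ct_ref_control    # ΔCt of the control sample
    dd_ct = d_ct_treated - d_ct_control                  # ΔΔCt
    return 2.0 ** (-dd_ct)

def tumour_volume_mm3(length_mm, width_mm):
    """Tumour volume from calliper measurements: length x width^2 / 2."""
    return length_mm * width_mm ** 2 / 2.0

print(fold_change_ddct(26.0, 18.0, 24.5, 18.0))   # ~0.35, i.e. expression reduced after treatment
print(tumour_volume_mm3(10.0, 6.0))               # 180.0 mm^3
```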
Primers used in this assay are listed in Supplementary Table S2. Immunohistochemistry Immunohistochemistry was performed on tumour tissue sections from the mice. Primary antibodies were raised against the target proteins at a 1:100 dilution overnight. The expression levels of the proteins were evaluated according to the percentage of positive cells in each tumour tissue section. The images were recorded by Olympus BX51 Epi-fluorescent microscopy under a 20× or 40× objective (Olympus Co, Tokyo, Japan). 27 HDAC activity assay The HDAC activity in cancer cells was evaluated using an Epigenase HDAC Activity/Inhibition Direct Assay Kit (Epigentek, Farmingdale, NY) following the manufacturer's instruction. The nuclear extract was prepared by using NE-PER™ Nuclear and Cytoplasmic Protein Extraction Reagents (ThermoFisher Scientific) and quantitated. The relative HDAC activity was calculated as the ratio of the HDAC activity of the ibuprofen group compared with that of the control (DMSO) group. Statistical analysis All data were analysed using GraphPad Prism5 software (GraphPad Software, San Diego, CA, USA). Values were expressed as means ± SEM. P values were calculated using a two-tailed Student's t test (two groups) or one-way ANOVA (more than two groups), unless otherwise noted. A value of P < 0.05 was used as the criterion for statistical significance. *Indicates significant difference with P < 0.05, **indicates significant difference with P < 0.01, ***indicates significant difference with P < 0.001. 6,28 Ibuprofen diminishes cancer cell stemness properties in vitro In order to establish the proper working concentrations of ibuprofen in various cancer cells, we determined the IC 50 of ibuprofen in A549 lung cancer cells, MDA-MB-231 breast cancer cells and HepG2 liver cancer cells using a cytotoxicity assay. Our results showed a 3.0 mM IC 50 in A549 lung cancer cells, a 1.8 mM IC 50 in MDA-MB-231 breast cancer cells and a 1.2 mM IC 50 in HepG2 liver cancer cells ( Supplementary Fig. S1). Based on the IC 50 , we chose working concentrations of 0, 0.5 and 1 mM ibuprofen for three types of cancer cells in our studies. In order to determine the in vitro effects of ibuprofen on cancer cell stemness, we investigated ALDH + sub-population changes in A549 lung cancer cells, MDA-MB-231 breast cancer cells and HepG2 liver cancer cells using the ALDH staining assay. Our results indicated that the ALDH + subpopulation decreases in the ibuprofen-treated groups versus controls (Fig. 1a, b). In order to determine the effects of ibuprofen on cancer cell stemness, we next investigated the changes in the side population in the three cancer cell lines using the side-population assay. Our results indicated that the side population decreases in the ibuprofentreated groups versus controls (Fig. 1c, d). In order to determine the effects of ibuprofen on cancer cell stemness, we next investigated the changes in cell-sphere formation in the three cancer cell lines using the sphere-formation assay. Our results showed that sphere formation decreases in the ibuprofen-treated groups versus controls (Fig. 1e, f). To further determine the effects of ibuprofen on cancer stem cells and normal cancer cells, we sorted the ALDH + cells (CSC) and ALDH− cells (normal cancer cell) from 231 and A549 cells and treated ibuprofen, respectively ( Supplementary Fig. S2A), and checked the stemness markers' expression by western blot. 
The results showed that ibuprofen reduced the stemness marker (ALDH1A1, SOX2, OCT4 and NANOG) expression in both ALDH + and ALDH− cells ( Supplementary Fig. S2B). Collectively, the above-mentioned findings suggest that ibuprofen diminishes cancer cell stemness properties with the CSC non-specific target way in vitro. Ibuprofen diminishes cancer cell metastasis and stemness properties in vivo In order to determine the effects of ibuprofen on cancer cell metastasis and stemness in vivo, we implanted 4T1-luciferase cells into the fourth fat pad of female Balb/c mice. Seven days after implantation, we IP-injected the mice with 20 mg/kg ibuprofen, 40 mg/kg ibuprofen or DMSO (control group) two times per week (Fig. 2a). Our results showed that tumour volume decreases in the ibuprofen-treated groups versus the control (Fig. 2b). However, we found that the body weight did not change in the ibuprofentreated groups versus the control (Fig. 2c). In addition, we found that the survival time increases in the ibuprofen-treated groups versus control (Fig. 2d). With respect to the effect of ibuprofen on cancer cell metastasis, we found that lung metastasis decreases in ibuprofen-treated groups versus the control ( Fig. 2e-g). With respect to the effect of ibuprofen on cancer cell stemness properties, we found that the immunocytochemical staining of SOX2 and OCT4 stemness markers decreases in the ibuprofentreated groups versus DMSO controls (Fig. 2h, i). To further verify the effects of ibuprofen on cancer stem cells and normal cancer cells in vivo, we sorted the ALDH + cells (CSC) and ALDH− cells (normal cancer cell) from 231 cells and injected the nude mice subcutaneously ( Supplementary Fig. S2C). About 12 days after implantation, we IP-injected the mice with 20 mg/kg ibuprofen, 40 mg/kg ibuprofen or DMSO (control group) every 3 days (Supplementary Fig. S2D). The results showed that tumour growth and volume decreased in the ibuprofen-treated groups versus the control in both ALDH + and ALDH− cell groups ( Supplementary Fig. S2E, S2H). Together, the above-mentioned findings suggest that ibuprofen diminishes cancer cell metastasis and stemness properties with the CSC non-specific target way in vivo. Ibuprofen reduces cancer cell chemoresistance in vivo As chemoresistance was a very important feature of cancer cell stemness in the clinic, cancer cells with stemness property were resistant to a chemoreagent like cisplatin therapy. In order to determine the effects of ibuprofen on cancer cell chemoresistance in vivo, we implanted 4T1-luciferase cells into the fourth fat pad of female Balb/c mice. About 8 days after implantation, we IP-injected the mice with 2 mg/kg cisplatin + 10 mg/kg ibuprofen, 2 mg/kg cisplatin + 20 mg/kg ibuprofen or 2 mg/kg cisplatin + DMSO (control group) every 3 days (Fig. 2j). Our results showed that tumour volume and tumour growth speed decrease in the cisplatin/ibuprofen-treated versus the cisplatin/ DMSO control ( Fig. 2k & n). However, we found that the body weight did not change in the cisplatin/ibuprofen-treated versus the cisplatin/DMSO control (Fig. 2l). In addition, we found that the survival time increases in the cisplatin/ibuprofentreated groups versus the cisplatin/DMSO control (Fig. 2m). Consistent with the in vitro cell apoptosis assay, we also found that the cell apoptosis (TUNEL + cells) was increased in the cisplatin/ibuprofen-treated versus the cisplatin/DMSO control (Fig. 2q). 
With respect to the effect of ibuprofen on cancer cell stemness properties, we found that the immunocytochemical staining of SOX2 and OCT4 stemness markers decreases in the cisplatin/ibuprofen-treated groups versus cisplatin/DMSO control (Fig. 2o, p). To further identify the effects of ibuprofen on cancer stem cells and normal cancer cells' chemoresistance in vivo, we sorted the ALDH + cells (CSC) and ALDH− cells (normal cancer cell) from 231 cells, and injected the nude mice subcutaneously (Supplementary Fig. S2C). About 12 days after implantation, we IPinjected the mice with 2 mg/kg cisplatin + 10 mg/kg ibuprofen, 2 mg/kg cisplatin + 20 mg/kg ibuprofen or 2 mg/kg cisplatin + DMSO (control group) every 3 days ( Supplementary Fig. S2F). The results showed that tumour volume and tumour growth speed decrease in the cisplatin/ibuprofen-treated versus the cisplatin/ DMSO control in both ALDH + and ALDH− cell groups (Supplementary Fig. S2G, S2H). Thus, the above-mentioned findings suggest that ibuprofen reduces cancer cell chemoresistance with the CSC non-specific target way in vivo. Ibuprofen inhibits the expression of inflammation-related stemness genes in vitro and in vivo Our previously published report established a medium-throughput siRNA screening platform that identifies inflammation genes that regulate cancer cell stemness. Specifically, we identified several novel candidate genes that decrease OCT4 expression and the ALDH + subpopulation, both of which characterise stemness (Fig. 3a). In order to determine whether ibuprofen decreases the expression of these novel candidate genes to further diminish cancer cell stemness, we investigated the expression of novel candidate genes and stemness markers (SOX2 and OCT4) in A549 lung cancer cells, MDA-MB-231 breast cancer cells and HepG2 liver cancer cells using western blot. Our results showed that ICAM3, CCL16, PDE3A, PRTN3, SOX2 and OCT4 protein expression decreases in the ibuprofen-treated groups versus controls (Fig. 3b). We also found that ICAM3, CCL16, PDE3A, PRTN3, TRAF6, BCAR1, IL-1α, IL-1β, NFκB1, IκBκB, SOX2 and OCT4 mRNA expression decreases in the ibuprofen-treated group versus control (Fig. 3c). Moreover, ICAM3, CCL16, PDE3A, PRTN3, TRAF6, BCAR1, IL-1α, IL-1β, NFκB1, SOX2 and OCT4 decreases the protein expression as indicated by immunofluorescence staining in the ibuprofen-treated MDA-MB-231 breast cancer cells versus the control (Fig. 3d). To further investigate the role of ibuprofen on novel candidate genes in cancer stem cells and normal cancer cells, we checked these genes' expression in ALDH + and ALDH− cells treated with ibuprofen by western blot. The results showed that these genes' expression decreased in both ALDH + and ALDH− cells (Supplementary Fig. S3A). Ibuprofen mediates histone 3 methylation and acetylation to ICAM3 expression in vitro and in vivo In order to determine the mechanism underlying the action of ibuprofen, we explored the regulatory effect of ibuprofen on histone 3 modification markers in A549 lung cancer cells, MDA-MB-231 breast cancer cells and HepG2 liver cancer cells using western blot. Our results indicated that the expression of H3 trimethylation markers (i.e., H3K4-3Me, H3K9-3Me, H3K27-3Me, H3K36-3Me and H3K79-3Me) increases in the ibuprofen-treated groups versus control (Fig. 4a). We also found that the expression of histone demethylases (i.e., KDM6A and KDM6B) decreases in the ibuprofen-treated groups versus control (Fig. 4a). 
In addition, we found that the expression of H3 acetylation markers (i.e., H3K18-Ac and H3K27-Ac) increases in the ibuprofen-treated groups versus control (Fig. 4a). Since HDACs play a key role in regulating H3 acetylation, we determined the expression levels of HDACs using western blot. We found that the expression of HDAC 1-5 decreases in the ibuprofen-treated groups versus control (Fig. 4a). Accordingly, HDAC activity decreases in the ibuprofen-treated groups versus control (Fig. 4d). To further ensure the regulatory effect of ibuprofen on histone 3 modifications in cancer stem cells and normal cancer cells, we detected that in ALDH + and ALDH− cells treated with ibuprofen by western blot. The results showed that the H3 modification markers were increased, the histone demethylases (KDM6A and KDM6B) and HDAC 1-5 decreased in both ALDH + and ALDH− cells ( Supplementary Fig. S3B). In order to verify the above findings, we studied the protein expression of H3K4-3Me, H3K9-3Me, H3K27-3Me, H3K36-3Me, H3K79-3Me and H3 in A549 lung cancer cells, MDA-MB-231 breast cancer cells and HepG2 liver cancer cells using immunofluorescence. Our results showed that the protein expression of the H3 modification markers within the nucleus increases in the ibuprofen-treated groups versus control (Fig. 4b). In order to identify the role of H3 modification in regulating selected inflammation-related stemness genes, we measured the amount of ICAM3 DNA fragments in H3 modification marker pulldown DNAs in A549 lung cancer cells, MDA-MB-231 breast cancer cells and HepG2 liver cancer cells using the CHIP-qPCR assay. We selected ICAM3 since our previous studies demonstrated that ICAM mediates cancer cell inflammation and stemness. Our results demonstrated that the amount of ICAM3 DNA fragments in the various H3 modification marker pull-down DNAs decreases in all three cancer cell lines (Fig. 4c). The abovementioned findings suggest that ibuprofen reduces histone demethylase (i.e., KDM6A and KDM6B) and HDAC expression that mediates histone 3 methylation and acetylation, and thereby inhibits gene expression. In order to confirm the above in vitro results, we next examined H3 methylation and acetylation marker expression in tumours from ibuprofen-treated mice versus control using immunocytochemistry. Our results demonstrated that the H3 methylation and acetylation marker immunostaining within the nucleus increases in the ibuprofen-treated group versus control (Fig. 4e, f). We also found that the amount of ICAM3 DNA fragments in the various H3 modification marker pull-down DNAs decreases in the ibuprofentreated group versus control, indicating that ICAM3 expression is blocked (Fig. 4g). These findings suggest that ibuprofen mediates H3 methylation and acetylation, and thereby regulates ICAM3 expression in vivo. In addition, we found that the survival time increases in the ibuprofen-treated group and the ibuprofen + inhibitor-treated groups versus DMSO control (Fig. 6e). These results suggest that ibuprofen combined with HDAC/HDM (KDM6A/B) inhibitors diminishes cancer progression in vivo and may serve as a therapeutic strategy. Proposed model of ibuprofen in mediating cancer cell stemness and cancer progression Based on our findings, we propose the following model (Fig. 6f). Ibuprofen inhibits histone demethylase (HDM) and HDAC expression that then mediates histone 3 methylation (H3K4-3Me, H3K9-3Me, H3K27-3Me, H3K36-3Me and H3K79-3Me) and acetylation (H3K18-Ac and H3K27-Ac), respectively. 
These H3 modifications then inhibit the expression of various inflammation-related stemness genes previously identified by high-throughput siRNA screening (IL-1α, IL-1β, ICAM3, CCL16, TRAF6, PDE3A, PRTN3, NFκB1, IκBκB and BCAR1). Using the ICAM3 gene as a representative of the inflammation-related stemness genes, ICAM3 expression is inhibited by the ibuprofen-mediated H3 modifications. Importantly, the above function of ibuprofen depended mainly on COX2 expression. Thus, ibuprofen may diminish cancer cell stemness properties and cancer progression in vitro and in vivo by inhibiting the expression of various inflammation-related stemness genes in a COX2-dependent manner. DISCUSSION The anticancer potential of ibuprofen (a non-steroidal anti-inflammatory drug) has created broad interest in exploring the clinical benefits of ibuprofen in cancer therapy. [29][30][31] Previous findings by many investigators have established that ibuprofen induces apoptosis in cancer cells, and inhibits proliferation and metastasis of cancer cells. 11,12,32,33 In addition, ibuprofen inhibits cancer stemness in gastric cancer, although the mechanism of action remains unclear. 11 In this study, we investigated the role of ibuprofen in cancer stemness in breast cancer, lung cancer and liver cancer. We found that ibuprofen diminishes cancer cell stemness properties, including reducing the ALDH+ subpopulation, side population and sphere formation, in all three cancer types in vitro. Also, ibuprofen inhibits tumour growth, metastasis and chemoresistance, and prolongs survival in vivo. Our in vitro and in vivo studies reveal that the inhibitory role of ibuprofen occurs on multiple fronts in all three cancer types. The well-characterised mechanism of action of ibuprofen involves the inhibition of COX enzymes. 34,35 In regard to ibuprofen in cancer therapy, an early report focused on the inhibition of the COX-dependent pathway that leads to reduced inflammation and hence the anticancer properties of ibuprofen. 36 Beyond the inhibition of inflammation, it is not clear whether other pathways or molecular mechanisms play a role in the anticancer properties of ibuprofen. In our study, we performed a more detailed analysis of the action of ibuprofen on histone modification. Our findings indicate that ibuprofen reduces HDAC and histone demethylase (KDM6A/B) expression, which mediates histone acetylation and histone 3 methylation and thereby suppresses inflammation-related stemness gene expression. Unexpectedly, the above process depends mainly on COX2 expression. However, we did not investigate the regulation of HDACs or KDM6A/B by COX2 in the present study. Collectively, these findings suggest a novel molecular mechanism that explains the anticancer properties of ibuprofen. Cancer stem cells (CSCs) are a small population of cancer cells that possess the ability to self-renew, differentiate and modulate cancer growth, recurrence, metastasis and chemoresistance. 3,6,37,38 The maintenance of cancer cell stemness largely depends on the surrounding inflammatory microenvironment. 39 Our previous work established a medium-throughput siRNA screening platform to identify inflammation genes that regulate cancer cell stemness, and identified several novel candidates (e.g., ICAM3). ICAM3 mediates cancer cell stemness as well as cancer-related inflammation via Src/PI3K/AKT signalling. 6 In our study, we clearly demonstrated that ibuprofen inhibits the expression of inflammation-related stemness genes (especially ICAM3).
Moreover, ibuprofen mediates histone modification that causes an inhibition of inflammationrelated stemness gene transcription that further suppresses cancer stemness. Our findings identify novel targets of ibuprofen that may explain the anticancer properties of ibuprofen, and may lead to new therapeutic strategies. In conclusion, our results demonstrated that ibuprofen diminishes cancer cell stemness properties and cancer progression in vitro and in vivo. Moreover, ibuprofen inhibits inflammation-related stemness gene expression, especially ICAM3 that we screened by high-throughput siRNA platform. Our investigation of the underlying molecular mechanism demonstrated that ibuprofen reduces HDACs and histone demethylase (KDM6A/B) expression. This reduction mediates histone acetylation and histone 3 methylation, and thereby inhibits inflammation-related stemness gene expression. In regard to therapeutic strategies, ibuprofen combined with HDAC/HDM inhibitors diminished cancer progression in vivo. Therefore, our findings reveal a novel molecular mechanism that sheds further light on the anticancer properties of ibuprofen, and suggest therapeutic strategies for the prevention of cancer progression. AUTHOR CONTRIBUTIONS S.W.Z., Z.X.Y. and D.R.L. designed and performed the experiments. Z.X.Y., D.R.L., G.W.J. and W.J. analysed the data. Y.W.C. and B.Y.H. provided the materials or methods for the experiment, and helped S.W.Z. to prepare the papers. L.N. and L.J.J. helped to group the figures and repaired the papers. ADDITIONAL INFORMATION Ethics approval and consent to participate All animal experiments were performed in accordance with Nankai University and Jining Medical University Animal Welfare Guidelines. Consent to publish Not applicable. Data availability The data are available for all study authors. The data sets used and analysed during the current study are available from the corresponding author on reasonable request. Note This work is published under the standard license to publish agreement. After 12 months the work will become freely available and the license terms will switch to a Creative Commons Attribution 4.0 International (CC BY 4.0). Publisher's note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
2020-06-12T14:27:48.199Z
2020-06-12T00:00:00.000
{ "year": 2020, "sha1": "fae7f4f9785a7e44198f7198d89505196b134973", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1038/s41416-020-0906-7", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "1f3b16145c769c1fcf0ce4db6aec4aff3de48b25", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Medicine", "Chemistry" ] }